
Leading a Journey to Better Quality


Summary

Maryam Umar talks about the steps she took to define the term 'bad quality' and how to discover it earlier in the software delivery process rather than as feedback from the customer. She discusses how she launched an initiative and created “quality champions”.

Bio

Maryam Umar works in London as Head of QA at Thought Machine, a fintech firm. She started her career thirteen years ago as a QA test engineer in the finance and mobile industries.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Umar: I'd like to talk a little bit about Thought Machine, which is where I work. We're a mid-sized startup, just over 300 people now and going to hit 500 this year. The company has been working on creating a core banking solution. If you think about how you bank, you see your transactions and different products like mortgages, loans, and so on. All these systems are really old, and the banks are really struggling with how to update them. We're creating a solution for them. It's an out-of-the-box product where you can actually create these different products, such as mortgages. You can actually visualize the workflows: how the contracts are set up, how the ledgers are balanced, and so on.

I joined the company just over 6 months ago. The company is 5-and-a-half years old now. Some of our major clients include Lloyds, Atom, Standard Chartered, and so on. When you're in the startup world, in the beginning, it's all about develop-develop, deliver-deliver. It's all about getting the clients initially and getting to understand if people actually want to use your product. What I'll talk about is, what problem is it that we're actually trying to solve? How do you get started on improving quality? I'll talk a lot about metrics and give you some examples, and how you can maybe evolve those metrics as you go along.

I've been in quality assurance for about 13 years now. I actually studied software testing. I did my thesis on it as part of my post-grad degree, and it's great that people actually teach this as a module. The things people were always asking me were: when you come in, can you come and fix our quality? Can you please understand why our clients are unhappy? Why are our releases late? Why are we finding so many incidents? Why aren't we finding them in-house? You're like, "These are a lot of problems. Where do I start?" When you come to a company, you look at all these strange incidents happening.

I actually did a video for the intro of this show with our team, where we talked a lot about all the weird issues we were finding as a system was being integrated in a very old release. The one thing we stop thinking about is that it's not only about quality in engineering; you also have teams which are really stressed. They are time bound on delivering the solutions and they really don't have time to think about how to innovate, especially in the startup world, which is where a lot of burnout also happens.

Bringing Quality to the Company

The three things I keep saying when you want to try and bring better quality to a company are: one, it cannot happen with your QA team only. A lot of places I've been in say, "Anybody who has a title of QA, tester, SDET, or so on, these are the people who will solve any quality assurance issues." That's not right. It cannot be done like that. Quality has to be baked into the entire engineering process. Secondly, the notion of quality assurance needs to come from the top down. Your management has to have buy-in on this. If they are not bought in, you will not get the time and resources to actually improve quality across your lifecycle. They need to echo the same message that the testing team is trying to instill in the teams. Lastly, when you've done all of this, you need to understand where you currently are on your quality journey: where is it that we are, and how are we going to try and improve that?

Quality BINGO

A challenge I've seen in most companies is that people want to improve quality but they're not excited about it. They're just not interested. They're like, "We need to write test automation. We know what you're talking about." They're at the end of the funnel, and really not interested. I've actually come up with this game that I've tried to use to create some excitement about quality. What I'll do is I'll read out what I think quality is. If you hear a term I say which is on your bingo card, please cross it out. Then I'll tell you what we'll do next.

Quality to me is about working with the product teams for whole product thinking. It's about using three amigos sessions. It's about ensuring the ACs we write are clear and concise, and remove ambiguity. It's about working as a team. It's about thinking about how the processes we use can help remove waste. Testing is a part of quality assurance activities. Testing includes the thought process behind choosing test frameworks for unit and integration tests. Before we do this, we need to think about test scenarios. How will our customer interact with the product? Do we have a test strategy to do this? Are we truly implementing the test pyramid or slowly running into the ice cream cone problem? Have we discussed what will be covered at the unit, component, and functional test levels? Or are we just hoping to pick things up during exploratory testing? How are we measuring quality? Are we delivering releases on time? Do these releases include the features and bug fixes our clients expect? How many bugs were found? Can we find a way to prevent bugs? Should we adopt a no-defect policy in our Sprints? What test metrics do we have? Is code coverage a useful metric? How long do our builds take? How often are our build tools available?

A lot of these things that I've mentioned comprise what you need to do for quality. Has anybody got seven, eight, or nine words crossed out? A few of you. When I ran this exercise in an engineering or company all-hands, one, everybody was listening, and they were trying to understand what quality is about. My goal was to create quality champions in the company. That would help me with my earlier point, that it does not happen with the QA team only. The people who had the most words crossed out were the people who were going to be my quality champions. What was interesting was, I hoped none of the testers would, by some random luck, get selected as one of these. The four people that ended up with the most words crossed out were the CEO, a frontend architect, a grad, and a build engineer. I was like, these people will give me really good feedback. The CEO will give me feedback on how our clients feel when they use our product. The frontend architect will make sure that, as part of building the new architecture and some of the new tools we were working on, we can set up test frameworks with the testability of that new architecture in mind. It was great to have a grad in there, because that's how we need to train people when they come into our industry: quality needs to be baked in. As they would see the process evolving, they would help. The build engineer, of course, because if we don't have good build infrastructure, the feedback loop is really slow. I used all these people to help me on various metrics, which was great. You can run this exercise on your own. If anybody's interested in running this, you can ping me later and I'll share with you how you can generate these bingo cards.
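If it helps to picture the mechanics, here is a minimal sketch of how such bingo cards could be generated. The term list, grid size, and card count are my own illustrative choices, not the ones used in the talk.

```python
import random

# Illustrative quality terms only; substitute the terms from your own
# "what quality means to us" script.
QUALITY_TERMS = [
    "three amigos", "acceptance criteria", "test pyramid", "exploratory testing",
    "code coverage", "unit tests", "integration tests", "test strategy",
    "release notes", "bug prevention", "build time", "ice cream cone",
    "whole product thinking", "no-defect policy", "test scenarios", "waste",
]

def make_bingo_card(terms, rows=3, cols=3, seed=None):
    """Return a rows x cols grid of unique terms sampled at random."""
    rng = random.Random(seed)
    picked = rng.sample(terms, rows * cols)
    return [picked[r * cols:(r + 1) * cols] for r in range(rows)]

if __name__ == "__main__":
    for i in range(4):  # one card per attendee, for example
        card = make_bingo_card(QUALITY_TERMS, seed=i)
        print(f"Card {i + 1}:")
        for row in card:
            print(" | ".join(f"{cell:22}" for cell in row))
        print()
```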

Once you've run this excitement-creating activity in your company (luckily, I haven't had to do it at Thought Machine), this is where the title becomes important, because the company thinks of quality as a whole rather than something which resides in engineering only. Once you've created excitement about quality, you need to come up with an action plan. What do you do next? I derive my inspiration from the book "Leading Quality" by Ronald Cummings-John and Owais Peer. A lot of the other work I mention today is from Lisa Crispin's book, "Agile Testing", and some other resources.

Visualize the Problem

What's in your action plan? Number one, you want to create a vision statement. What is our goal? What do we want to achieve as part of engaging with teams to develop quality? You want to identify the stakeholders. These are the people who will actually help promote quality within the organization and help you achieve that goal. Then you want to identify the areas: what are the trouble-making spaces which have issues in them, and where do you start? In the end, you want to define metrics, which will help you identify where you are today and then improve from there.

This is the vision statement we've come up with at Thought Machine: "Assuring quality enables teams to drive for customer satisfaction at a sustainable pace." Notice that I don't say quality assurance here. Quality assurance is more of an engineering practice, whereas assuring quality is about behavior. You want to make sure that behavior is part of engineering. The items you see in blue are the three areas we marked as needing improvement. Enabling teams refers to having quality in engineering. Customer satisfaction talks about quality in delivery. Sustainable pace is quality of process. You cannot do without all three of these when you want to improve quality.

Who Are Your Stakeholders?

Once you've defined your vision statement and publicized it in your company, what do you do next? You want to define who the people are who are actually interested in improving this, and why. These are usually your CEOs, CTOs, your tech leads, your actual testing team, release management, incident management. Or, these are the people who've rooted for your role, or are saying we need cross-functional teams. Really, you want to understand why they think that quality is suffering in their company.

Then you want to go away and ask them some questions to understand what the problems are. Some of the questions I asked were: how do you know that quality is bad? Did you get some bad feedback from the client? Or were they just angry, or did they not get the features they expected? How many incidents did we find? What's code coverage like, which is another contentious topic? Do you have the right processes in place? Do we take way too long to do code reviews, or is our sign-off unclear? Each of these is something which can help you improve your gross margin of delivery and reduce waste.

Software Delivery Performance

Have you read the State of DevOps Report? This image is from the State of DevOps Report. Initially, when I was faced with this problem of, I need to come up with how to form a baseline of where we are with quality, I referred to this report. They come up with these four factors as part of their research: deployment frequency, lead time for changes, time to restore service, and change failure rate. These are the traits which they've observed in their research for high-performing teams. These things are what you need to try and visualize and keep iterating on until you improve. I highly recommend reading this report and understanding what the research is about. They come up with a new version of the report every year. They do a survey again. It's really interesting to see how things change.
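As a rough illustration of what tracking these four factors can look like, here is a minimal sketch that computes them from a list of deployment records. The record fields and the fixed 30-day window are assumptions for the example, not something prescribed by the report.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import median
from typing import Optional

@dataclass
class Deployment:
    committed_at: datetime                   # when the change was committed
    deployed_at: datetime                    # when it reached production
    caused_failure: bool = False             # did this change degrade the service?
    restored_at: Optional[datetime] = None   # when service was restored, if it failed

def dora_metrics(deployments, window_days=30):
    """Compute the four software delivery performance measures over a window."""
    n = len(deployments)
    deployment_frequency = n / window_days  # deploys per day
    lead_times = [d.deployed_at - d.committed_at for d in deployments]
    lead_time_for_changes = median(lead_times) if lead_times else timedelta(0)
    failures = [d for d in deployments if d.caused_failure]
    change_failure_rate = len(failures) / n if n else 0.0
    restore_times = [d.restored_at - d.deployed_at for d in failures if d.restored_at]
    time_to_restore = median(restore_times) if restore_times else timedelta(0)
    return {
        "deployment_frequency_per_day": deployment_frequency,
        "lead_time_for_changes": lead_time_for_changes,
        "change_failure_rate": change_failure_rate,
        "time_to_restore_service": time_to_restore,
    }
```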

Areas to Improve

Once you've identified your vision, then you know your stakeholders. You want to find out what the areas are which need improvement. This is input you will get based on the questions you ask your stakeholders. The things I've seen most are hiring (finding people who want to do testing), releases, test automation, product quality, and process quality. I will focus mostly on releases, product quality, and process quality. Just to touch on the other two: hiring, we all know it's really hard to hire good testers or QA people. I hold myself accountable as well; CVs of quality people are of very low quality. They really need to work on explaining what they do, why they choose certain frameworks, and what problems they were trying to solve, rather than, I wrote BDD scenarios and that's why I'm looking for this job. Test automation: the problem there is there are lots and lots of open source and paid-for frameworks. Everybody has a favorite. When there are too many, it just gets difficult to manage.

Releases

We'll talk about releases first. At Thought Machine and other places I've seen, this can be a real problem when you are continuously delivering. We actually didn't have any release visibility until a little over a year ago. The first thing the team did was create a visibility landscape for releases. You also want to see how many issues were found pre-release versus post-release. How good is your regression cycle when you're doing releases? Do you even have release notes? What's the content of the release notes? Do they talk about known issues? That's very important. You should be very transparent with your clients and share which issues you can still see in the product you're delivering and when they can expect fixes for them. You also want to see how easy it is to create the deployables and how quickly you can deploy. In the case of Thought Machine, we're a product company. We want to make sure that we can actually generate the deployables very quickly. If it takes two days to create a deployable, obviously you're adding more to your delivery time, and that will slow everything down if issues are found.

This is an example of some of the content of our visibility landscape page. Just remember, everything does not have to be very complicated from the very beginning. This is a very simple Confluence page, our intranet page, where you can see which versions of the releases are actually supported for our clients right now. We know, as part of our regression testing, which versions we have to support in our testing cycles: how many we need to spin up, and how many we need to check are backwards compatible. It's a really simple exercise, but you will find that when there are incidents, the first thing people sometimes forget to write down is: what is the affected version of your software? Where is it deployed? What environment is it deployed in? Can I quickly recreate the incident? That's how a lot of chaos starts happening. Having this written down from the very beginning really helps.

The second thing we have is a really simple release calendar. We do monthly releases right now. For us as a company, because we work with banks and banks work on quarterly cycles, it really didn't make sense to try to hit continuous delivery immediately. It's an ongoing process. What we do is weekly releases to our staging environments. We do trunk-based development, which means dev is unstable most of the time. It's ok for builds to be red there. To sign off on new features, we actually wait for them to be deployed to staging. Staging deploys only happen every week. Then there's a cutoff date, and then we have a week of regression cycles. As a few people pointed out to me, this does not look like it's going to head towards continuous delivery. What are you going to do then? Of course, when you're starting this up, you start with a lot of exploratory and manual testing. Then you slowly start automating. You build it in as part of your engineering process. As new people join, you tell them nothing's going to get signed off or code reviewed until the tests are attached. You want people reviewing them. You want to make sure they have the correct information in them. As you start doing it, the goal is that we do on-demand deploys to staging: whenever a build is green on dev, it auto-deploys to staging and automatically runs the smoke tests that we want to run. That's how we want to progress going forward.

The last one: because we have multiple clients and multiple environments, we can actually see what version is deployed for which client in which environment, which is also very useful, along with the dates. When you click through on this, you can actually see the Git commit that was pushed, the commit ID, and so on. It's really easy to see all the changes in there. This is very useful when you have incidents or any defects, to see: this client has version 1.6.0, the other one also had 1.6.0, so why am I seeing this incident in one environment and not the other? What's different? It could be the test data or any other data.

Process Quality

The next thing I'll talk about is process quality. There are loads of processes that we've put in place. Some of the processes which I think need to be reviewed as a team are the defect management processes. You want to make sure they're not difficult and tedious, and that defects flow through the system. You want to look at how you're signing off features. You want to review your definition of done and your definition of ready. These are things which should be consistent across your engineering teams. Then you also want to look at how you're doing code reviews. What are the standard things you all look for? You want to look at your test automation practices. What does a good test look like? You can have examples of unit tests, integration tests, end-to-end tests. You can have a definition of acceptance criteria, and of how you estimate.

What we are actually working on is what we call engineering hard culture and engineering soft culture. Engineering soft culture is more about the types of people we are: what are the traits of software engineers at Thought Machine? Engineering hard culture will have a lot of these things that we can provide to everybody new coming in, saying, these are the things we're mandating in some way. The other things, the teams decide themselves how they're going to do.

This is my favorite part, and also a difficult part. Here we'll talk about metrics, which are more in line with how you know where your product's quality is. I derive my inspiration from "Accelerate" and the State of DevOps Report. I highly recommend, as Pat also said, that you read "Accelerate." It's a really good book. Some of the data in there is very useful: what things to measure, how to measure them, and what things don't work.

Product Quality Metrics

These are a few of the metrics that I'm working on putting in. With Jez's and Nicole's permission, I'm creating a State of Quality at Thought Machine report, which I'm going to publish every quarter, and which will have data from all of these metrics. MTTG is mean time to green. I actually challenged this initially in the team, saying, why are the builds always red? We should stop committing on red builds; it's bad practice. I was challenged back with: how will you scale this when you have 5,000 engineers? Can you imagine the number of people waiting for the build to go green and the amount of pressure on the people trying to make the build green? What you really need to do is see how long it takes a red build to go green. How quickly does the team respond? If you think about it, it will also help you identify how quickly you will respond to incidents.
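A minimal sketch of how mean time to green could be computed from a chronological build history; the (timestamp, passed) tuple format is an assumption for illustration, not the format of any particular CI system.

```python
from datetime import datetime, timedelta

def mean_time_to_green(builds):
    """builds: chronological list of (finished_at: datetime, passed: bool).

    For each stretch of red builds, measure the time from the first red
    build until the next green build, then average those durations.
    """
    red_since = None
    recovery_times = []
    for finished_at, passed in builds:
        if not passed and red_since is None:
            red_since = finished_at                          # the build just went red
        elif passed and red_since is not None:
            recovery_times.append(finished_at - red_since)   # back to green
            red_since = None
    if not recovery_times:
        return timedelta(0)
    return sum(recovery_times, timedelta(0)) / len(recovery_times)

# Example: red at 09:00, green again at 09:42 -> MTTG of 42 minutes.
history = [
    (datetime(2020, 3, 2, 8, 30), True),
    (datetime(2020, 3, 2, 9, 0), False),
    (datetime(2020, 3, 2, 9, 20), False),
    (datetime(2020, 3, 2, 9, 42), True),
]
print(mean_time_to_green(history))  # 0:42:00
```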

Test coverage. Whenever I've mentioned test coverage, I've seen developers' eyes roll: here's another manager asking us to do 100% coverage, which is not the reality. The reality is that we want to have efficient coverage. You want to make sure that the packages which are used the most are the ones which have the most unit tests. QA kickback. This is a metric you can use to measure how many times your tickets go from QA back to development. If you measure that, it can indicate quite a few things. One thing it can indicate is that the acceptance criteria were not met; that's why it failed testing. Or the acceptance criteria weren't there, so what did the developer write? That's why it failed testing. Or the build which had the commit for this particular ticket actually failed, and they moved it to QA anyway. That's wrong as well; they should have actually fixed it, because it means the software is not up and running. You can actually extract this number from JIRA quite easily using a JQL filter and visualize it. It's a metric that the teams can use in their retrospectives to discuss: why did we go back and forth so much? Metrics are really for the teams, and that's something we should think about.
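For the JIRA extraction, a sketch along these lines is one way to count kickbacks per ticket using the jira Python client and each issue's changelog. The server URL, credentials, project key, and the status names "In QA" and "In Progress" are placeholders you would replace with your own workflow's values.

```python
from collections import Counter
from jira import JIRA  # pip install jira

# Placeholder connection details; substitute your own server and credentials.
client = JIRA(server="https://your-company.atlassian.net",
              basic_auth=("you@example.com", "api-token"))

# Pull recently resolved tickets for one project, including their status history.
issues = client.search_issues(
    'project = PROJ AND resolved >= -30d',
    expand="changelog", maxResults=200)

kickbacks = Counter()
for issue in issues:
    for history in issue.changelog.histories:
        for item in history.items:
            # Count every transition that moves a ticket out of QA back to development.
            if item.field == "status" and item.fromString == "In QA" \
                    and item.toString == "In Progress":
                kickbacks[issue.key] += 1

for key, count in kickbacks.most_common(10):
    print(f"{key}: bounced back from QA {count} time(s)")
```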

Something else we looked at is PR commit rate per Sprint. It's very normal, and frustrating for people who are actually testers, that most of the commits happen towards the end of the sprint. People assume that you are going to sign off their tickets in the last two days of your Sprint. It's amazing how much this still happens. Our CEO was like, can you please measure this? You want to have a consistent flow of commits happening during the Sprint, and not a lot of commits towards the end of the Sprint, which means that we're actually doing mini-waterfalls and not being properly Agile. Percentage of flaky tests. This is something I like as well. When we see a test failing for more than five builds, we run a flake detector. What the flake detector does is run the same test about 5000 times on that piece of code. If it keeps failing every time, we mark this test as flaky, and it automatically creates a flaky test ticket on the team's JIRA board. The interrupt person in that team will go and address it. You want to make sure that as you're adding more code, you're not actually adding more flaky tests, because flaky tests are as bad as having no tests. That's how you lose your confidence in test automation as well.
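To give a flavour of the flake detector idea, here is a minimal sketch that reruns a single test repeatedly and reports its failure rate. The pytest runner, the test id, and the run count (scaled down from the 5,000 mentioned in the talk) are assumptions; what you do with the resulting signal, such as filing a ticket for the team's interrupt person, is left to your own tooling.

```python
import subprocess

def rerun_test(test_target, runs=200):
    """Rerun one test target many times and return how often it fails.

    test_target is whatever your runner accepts, e.g. a pytest node id.
    The failure rate is the signal; the policy for marking a test flaky
    and filing a JIRA ticket sits outside this function.
    """
    failures = 0
    for _ in range(runs):
        result = subprocess.run(
            ["pytest", "-q", test_target],
            capture_output=True)
        if result.returncode != 0:
            failures += 1
    return failures / runs

if __name__ == "__main__":
    # Hypothetical test id used purely for illustration.
    rate = rerun_test("tests/test_ledger.py::test_balance")
    print(f"failure rate: {rate:.1%}")
```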

These ones are mostly about defects. I'll talk about defects found via automation versus via exploratory testing, per feature. I have a very interesting story around this. One of the teams I used to work with had our own test framework, which we were using to write integration tests. The complaint we started getting from our product owners was that it was taking too long for a feature to get signed off: more than six or seven weeks, whereas the feature was not that complex according to our estimation. What was going wrong there? I went away and said, let's pick three stories at random of varying complexity. I looked at when the tickets were created for writing tests for them, and then, give or take, looked at the pull requests for them. On average, the integration tests for each feature were taking three to four weeks to be written. This was outside of the actual development of that feature, which is quite long. In my heart of hearts, I was like, "I can't tell the team to not write integration tests. I want the team to say that." How do you do that? I went away and got the stats for how long it was taking to write integration tests. The next thing I pulled out was the unit test coverage for that part of the code. I presented this data to the team. I said to them, this is how long it's taking us on average to write automation for these features. They're like, "No, this is taking too long. We cannot rely on this. We need to scrap this. It's the wrong framework." I was like, "We don't really have the time to invest in writing a new framework. What do we do?" "We'll rely on the unit tests." Here's your unit test coverage, which was an average of 20%, which is really bad. They're like, "No, we can't rely on this."

The next piece of data I showed them was: these are the number of defects you're finding via automation, and these are the number of defects you're finding in exploratory testing. What do we want to do next? The team actually said themselves that we need to invest in improving the integration testing framework. What the team decided was that the developers would write the first happy path tests, and while they did that, they would actually improve the framework as well. We actually saw improvement in the time we were spending. Having said that, I don't think there was a massive change in the number of defects found via automation versus exploratory testing, which also proved to the team that we cannot rely wholeheartedly on automated testing. We always need exploratory testing, and there is space for that in engineering as well.

Team feedback. It's really important to understand how the team feels when they are developing in Sprints or in Kanban mode, how happy they are, and how they feel when they're working across teams. A lot of places use the Spotify health check model as part of their retrospectives, which gives a good gauge of whether teams are happy working among themselves, whether cross-team collaboration is easy, whether deployment takes too long, and whether it takes too long for them to understand what the defects are. Please look into that. That's very helpful.

State of Quality Report Graphs

Here I'm sharing a few graphs that we've come up with as part of our State of Quality report. This was the first day, sometime last year, when we hit 1000 commits in one day. This data was very important for a startup, because it's a huge achievement. Of course, it's a lot higher now. The average time it took to go from queued to completion was nine minutes, and it's already improving. If you start mapping this data from the beginning, then if it starts taking too long, you'll immediately know; you can set up an alert around it, and the infrastructure team can help improve it, whether that means more workers, or so on.

Time from Check-in To Deployed In a Test Environment

For the next few things we mapped: we actually had a big reliability war inside the company as an initiative, where we looked at improving our infrastructure's core build processes and how we could improve them. The one on top actually shows you how long it took to deploy to a test environment. Within a couple of months, just because of that initiative, it went from 40 minutes down to 5 minutes, which is a huge improvement. We're continuously monitoring that.

Test Greenness

The next one is test greenness. We run pre-merge and post-merge tests on all our environments. Pre-merge tests are unit or integration tests, and post-merge tests are your end-to-end API tests or end-to-end tests. What's interesting here is that, from dev to staging, dev is not as green as staging, and pre-prod is hardly green. Is that worrying? Pre-prod is not so worrying, because we are a product company, which means that our clients install our product on their own infrastructure. When they do their deployments, that's where we want our tests to be green. They run a lot of configurations which we do not host. We do not host their environments; they host them themselves. That's important. Between dev and staging, this is my pet peeve with the teams: you should be more green on dev than on staging, because the more green you are there, the more quickly we can go to staging. This is also a good way of seeing what can be improved and which areas we need to focus on.

Regression Testing Progress

This one relates to the one week of regression cycle you saw; this is hot off the press. This metric actually shows you, for each of the teams, how many of these tests are automated versus manual. If somebody has time, they can see that the ops dashboard team needs help: we need to convert most of their manual tests to automation. Or the Shell reliability team needs help here. The other teams are pretty much there. As soon as we are quite high up, we can actually deploy to staging quite quickly. What's interesting is that that one week of regression testing is already down to one and a half days, within a span of three months. It's continuously improving. We could only understand this once we visualized this information. None of this data is meant to penalize the teams. It's only to help people understand where we can help other teams the most.

SATs vs. Defects per Team

This was a case of when I tried to do metrics, and they went bad. That also happens. My idea was around our SATs, or Service Acceptance Tests. I wanted to see where we have the maximum number of service acceptance tests and where we have defects, per team. If you have a large number of defects in one area and very few SATs, we were like, "We need to do targeted improvement of quality there." Or, if you have too many tests and too many defects, it means you've written the wrong tests. That's also a problem. When I actually mapped the data, it didn't really tell me anything. We have 2000 tests for the accounts team, and 25 defects. That doesn't look like too much. Then I was like, "This is making no sense. I don't know where to start." Even 25 defects doesn't look bad if you have so many tests. I was like, this is not data that's useful to me. I'm not going to use these metrics. That's where the evolving of metrics comes in.

What I did was this. Imagine you're in dev, then staging, and then in production or pre-prod mode. You want to make sure that engineering is where you find the most defects, that the defects found in staging after your cutoff date are fewer than the ones found in engineering, and that the ones at the end, your production or pre-prod incidents, are the fewest. The one on top is defects found in engineering. This is by team right now; we're migrating to JIRA as we speak, and we're actually converting this to highlight defects per service. The one in the middle shows you, for each release, how many defects were found in staging. Release 1.6.0 had 28. Release 1.7.0 had 13; great, we improved. But 1.8.0 had 29, so something went wrong there. What was it? We actually shortened our development cycle. A lot of people were on holiday and trying to come back in. There were lots of things which moved around in terms of requirements and clients. All of that extra communication clearly contributed to why we were finding so many defects. The last one is incidents found per client. For each release, we want to see how many incidents we found; for each client, how many defects we found, and how many were of severity critical, high, or low. You can actually drill down into this. I have created these graphs using Google Data Studio. Some of the backend is in Sheets and some of it is from JIRA.
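A rough sketch of the aggregation behind graphs like these, assuming each defect record carries a release, the environment where it was found, and a severity. The field names, the sample records, and the print layout are illustrative only; they are not how the Data Studio dashboards are actually wired up.

```python
from collections import Counter

# Illustrative defect records; in practice these would come from JIRA or a sheet.
defects = [
    {"release": "1.6.0", "found_in": "staging", "severity": "high"},
    {"release": "1.7.0", "found_in": "engineering", "severity": "low"},
    {"release": "1.8.0", "found_in": "staging", "severity": "critical"},
    {"release": "1.8.0", "found_in": "pre-prod", "severity": "high"},
]

# Count defects per release per environment, and per release per severity.
by_release_env = Counter((d["release"], d["found_in"]) for d in defects)
by_severity = Counter((d["release"], d["severity"]) for d in defects)

for (release, env), count in sorted(by_release_env.items()):
    print(f"release {release}: {count} defect(s) found in {env}")
for (release, sev), count in sorted(by_severity.items()):
    print(f"release {release}: {count} {sev} defect(s)")
```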

Psychological Safety

When you talk about metrics, this is really important: where there is fear, you will get wrong figures. Teams can gamify metrics. Once, I remember, we said something like, the team which has the most defects sitting in Not Started in JIRA is the one with the worst quality. You know what they did? On the day before we used to extract the metrics, they would click on In Progress. That's easy to do; you can just do a bulk update. They gamified it. We never knew that they actually had a huge backlog of defects and that we needed to help them plan better. Psychological safety is really important when it comes to metrics. Trust me, putting these numbers in front of tech leads, not even the teams yet, has been really hard. Of course, our code is like our baby. How can you say that my team's doing badly? How can you say this team is doing well? That's why I've tried to make sure that when I map these defects, they are based on the services, and not by team. If it is by team, the data only goes to their Scrum Master or program manager, and they are the ones who communicate with the team themselves. It doesn't go out on a wider engineering scale.

How to Make Metrics Safe

How do you make metrics safe? You have to get your teams involved in creating the vision. When I created that vision and came up with this list of metrics, all the tech leads had input into it. I actually created a big document, and they added comments in it and said, this will be useful, or, have you thought about this, have you thought about that? They feel like they are participating in it. You want to make sure that they are happy with what you're measuring. You have to make sure that none of it makes them feel unsafe, or that you're penalizing their team. Sometimes it can also indicate that we maybe have too many juniors in the team, and that's why their code reviews take longer than other teams', so we need to put a rotation scheme in place. Or, this domain never has features which are well defined, so maybe we need to work with that product owner better. Or, maybe the clients they're working with are quite confused, and that's why this data is not coming in. It's all about the other factors which affect how we work. Another thing which is very important: the QA team or testing team is not there to write tests. It's there to make quality visible. Visualization is our job; writing tests is everybody's job. Please take that away with you and please share it with your teams. The teams should set their own targets, especially when it comes to code coverage. I will never advise teams to hit 100% code coverage. I will always say to them, make sure you have code coverage which is sufficient for you; just make sure you are writing tests. Then, regular retrospectives. There's retrospective overkill: we have so many retrospectives, and actions which carry over quarter to quarter and never get done, which is why people have stopped going to them.

Conclusion

Quality is subjective. What is high quality to me may not be high quality to you. The way you implement it for one company may not work for another company. Just make sure you visualize the problem. Definitely create a vision statement. If you want to create a first-class core banking solution, you want to make sure that the quality of that core banking solution is top notch, which is why the vision I have ties in quite nicely. You must use metrics to define what bad quality means; otherwise, those phrases are just qualitative and not quantitative. Some things will remain qualitative. Most things you can make quantitative and see how you can improve. Always remember that what you are going to work on will evolve over time. The vision statement will evolve over time. Some of the metrics I did actually made no sense, so we evolved them over time. Some of them will be of use; some of them will not be of use. Be open to change, and that's how you will be able to get good quality in your systems.

Questions and Answers

Participant 1: How do you introduce test coverage in a legacy application that had zero for 20 years?

Umar: That had zero. What are the things you've tried so far?

Participant 1: I have 5800 [inaudible 00:39:21] stored procedures. They have zero unit tests, zero integration tests. Eighty-six percent of the logic is in the stored procedures. We do unit tests on the Java, but the Java is calling the stored procedure and mocking the data. For the actual code inside, when we need to make changes, how do we proceed?

Umar: You never know what it is doing.

Participant 1: Yes. It goes into QA and everything is manual testing, because you cannot automate anything in QA.

Umar: Is there room for improvement of the process? Are the people open to it? You can look at doing things like TDD, because you don't know what the right functionality is in the first place, which is why you're unsure of making a change and you're scared of making the change. Maybe introduce things like TDD, where you write the test first to exhibit how the system's behaving now, and then write more things and build on that.

A lot has to do with buy-in. When it comes to introducing test coverage, maybe start with something which is not as intense as test coverage. Maybe start with another metric like defects, which is easier to get. Sometimes you have to go into an "I told you so" state by using this data: we changed this, and so many issues came up as a result compared to the previous release, and this is why I say we should add unit tests. Sometimes a lot of people try to do it the other way, the ice cream cone way, where they write end-to-end tests first. Then they find out the feedback is really slow, so they start writing more integration tests and rebalancing things. Then you can add more unit tests and make the feedback faster. Just some ideas.

Participant 2: I think the DORA metrics are something that we're seeing a lot in the last few years. A lot of people are saying it's the silver bullet, that we always have to look at DORA metrics. How do you communicate that, and map it to the business value of an organization?

Umar: The DORA metrics, I actually got a lot of help from Jez himself on chalking these out. He said some of these are from DORA and some of these are not. The business value of metrics is visualizing where the problem is. It's really important to make that visible. For example, the problem I have right now is that the releases are on time, but we're getting a lot of defects from our clients. The CEO is like, why is this happening? It's because we're not visualizing where those incidents are being found and what part of the code it is. You want to see which services were more broken, or what their interdependencies are. You want to visualize that.

Another thing is that you can also take the architecture of your system and then say, this is where this metric will help us measure this part of the system, and this one will help with the other part. You can actually visualize really easily: this high-impact part has no testing, or we don't have sufficient coverage, or we're finding a lot of defects in that area, and this is where we need to invest. This is how you can actually make a selling point as well.

 


 

Recorded at:

Jul 14, 2020
