
How to Sustain Quality and Velocity in a JavaScript or TypeScript Project?

Key Takeaways

  • JavaScript and TypeScript developers should not tolerate the accumulation of warnings and typing errors reported by static code analysis tools. These can be used as hints to fix bugs efficiently.
  • Enable developers to be accountable for preventing regressions by letting them write automated tests that provide useful feedback in case of failure.
  • Progressively improve the design and code quality of older parts of the codebase by specifying (and enforcing) a dedicated perimeter with higher code quality rules.
  • Deploy more frequently in production and support developers when inevitable incidents happen. Encourage developers to write a post-mortem and implement long-term fixes after every incident.
  • Reduce waste by making product and development teams collaborate at every step of the production process: problem prioritization, solution design, planning, review, etc.

JavaScript (even with TypeScript type checking) is sometimes criticized for its unpredictable traits and lack of conventions. This is no surprise for those who know that JavaScript was designed as a scripting language for web browsers.

Nevertheless, it has become the language of choice for developing full-stack web applications and a popular option for cross-platform mobile applications. So, what can developers do to minimize resource waste and sustain job satisfaction as their JavaScript/TypeScript codebases age and complexity grows painfully?

This article will build upon my 10+ years of experience writing JavaScript code and 5+ years rescuing JS/TS projects to show you:

  • How to assess quality and risks in a JS/TS codebase.
  • How to prioritize parts that need remediation.
  • Non-disruptive ways to make a JS/TS codebase progressively healthier.

Clean your workbench

Every warning, typing error or flaky test in the way of shipping the next features makes developers waste time, focus and energy.

Code warnings are especially nasty because developers can get used to ignoring them, "as long as everything works as intended." So they can quickly accumulate, making it hard to use them as hints when we face a bug, incident, or unexpected behavior of our system.

Typing errors are a good example. When our users follow the "happy path", these errors sometimes don’t make sense, because the software seems to work as expected. So we may be tempted to override them, using @ts-ignore, any, or type assertions. But doing so means that, one day, a user will probably take a different path, and face a runtime error.
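As a hypothetical illustration (the names are made up), here is how such an override silences the type checker without making the underlying risk go away:

type User = { name?: string };

function greet(user: User): string {
  // @ts-ignore -- "works as intended" on the happy path, where name is set
  return 'Hello ' + user.name.toUpperCase();
}

greet({}); // TypeError at runtime: cannot read properties of undefined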

This means that developers have a new bug to investigate, reproduce, and fix because of a shortcut they allowed themselves to take many months earlier. And finding that shortcut will take a lot of time if your code is riddled with warnings and/or overrides.


Whenever the production database crashes with an "out of memory" error, this warning may help developers find the reason why it did.

Warnings and typing errors are hints to find the causes of bugs and incidents. The more we accumulate (or override) them, the more time developers will waste investigating down the line, especially on code they wrote a long time earlier.

So what can we do?

  1. Make sure developers see warnings and typing errors as soon as possible, while developing, without effort, and ideally integrated into their IDE.
  2. Do not let warnings and typing errors accumulate. Fix them as soon as possible.
  3. Increase the signal-over-noise ratio. If the team agrees that one of the rules causing warnings or type errors is not useful, disable it once and for all.
  4. If you really need to override a rule (e.g. using @ts-ignore, any, or a type assertion) on a specific part of the code, add a comment to document why you need that override, as in the sketch after this list.
  5. Don’t add try-catch blocks to catch programming errors (e.g. unexpected undefined values from your business logic) at runtime. Use them to handle expected errors from external systems (e.g. input/output exceptions, validation, environment problems, etc.). Programming errors should rather be caught during development, using static code analysis and unit tests.
  6. Don’t let code with warnings and typing errors go to production. Use your continuous integration pipeline to enforce that rule.
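Here is a minimal sketch (with hypothetical names: loadConfig, Config) of points 4 and 5 in practice: expected errors from an external system are handled at runtime, while the one type-checker override is documented where it happens.

import { readFile } from 'node:fs/promises';

type Config = { retries: number };
const DEFAULT_CONFIG: Config = { retries: 3 };

// Expected errors from an external system (here: the filesystem) are
// handled at runtime; programming errors are left to tsc and unit tests.
export async function loadConfig(path: string): Promise<Config> {
  try {
    // Documented override: JSON.parse() returns `any`, so we assert the
    // shape here and rely on validation in integration tests.
    return JSON.parse(await readFile(path, 'utf8')) as Config;
  } catch {
    return DEFAULT_CONFIG; // a missing or malformed file is an expected case
  }
}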


The type checker thinks that an expected property is missing. Ignoring this error means accepting the risk of persisting inconsistent data, which could take days to investigate and fix, months later.

What tools can we use to achieve that?

There are many static code analysis tools at our disposal. The most popular are:

  • ESLint to detect syntax errors and anti-patterns in the code;
  • TypeScript (with strict rules enabled), in .ts files or JSDoc annotations, to detect typing errors;
  • Additionally, online services like SonarCloud, SourceGraph, Codacy, or similar can be helpful to track the evolution of several code quality metrics in a shared codebase.

Warnings can also come from other tools: dependency installers (e.g. npm, yarn), bundlers (e.g. webpack), code processors (babel, scss) and execution environments (CI runner). Don’t overlook them!

If following these recommendations makes your code highly verbose and/or complicated (e.g. defensive code), you may need to redesign it.

 "scripts": {
    "lint": "eslint .",
    "lint:fix": "eslint . --fix",
    "lint:errors": "eslint . --quiet",
    "lint:typescript": "tsc --noEmit --skipLibCheck",
    "lint:jsdoc-typing": "tsc --noEmit --allowJs `find ./ -name '*.js' -name '*.d.ts'`"
  },

Make it easy and fast for developers to detect problematic code, thanks to static code analyzers and npm scripts.

How to proceed?

Installing and setting up static code analysis tools is a good first step, but it’s not enough.

Sustainable success relies on making sure that the development team:

  • acknowledges the importance of deploying code in which programming errors are not tolerated, and trusts static code analysis tools to help them achieve that;
  • has a good understanding of how TypeScript works (cf. the TypeScript Handbook);
  • regularly fixes warnings and typing errors, more often than adding them;
  • never stops doing so.

Here are several tactics that may contribute:

  • Motivate developers by rewarding code contributions that increase code quality. One way is to use a tool that plugs into the continuous integration pipeline to track the evolution of code quality for each change pushed by developers, e.g. SonarCloud and/or Codacy.
  • Give one developer the responsibility of making sure that code quality never drops.
  • Give another developer the responsibility of making sure that dependencies are regularly updated, so the team can benefit from their logic and security fixes.

Why give each role to one dedicated person?

When nobody’s name is attached to a responsibility, the collective responsibility often ends up being overridden by other "priorities" (e.g. shipping one more feature this week, at the cost of ignoring a warning).

Feel free to rotate roles regularly, to make sure that everybody gets involved and remains motivated.

Cover business-critical logic with (the right kind of) tests

Now that we have a team committed to keeping their codebase clean, we can be confident that our users will rarely face a programming error.

But what about errors in business logic?

For instance, what if a freshly-added feature breaks another one? What if developers misunderstood how the feature was expected to behave in the first place? And what if such a mistake ends up causing a significant loss of revenue?

Like programming errors, business logic problems may be detected by our users in production, but we’d rather detect them earlier. Hence the importance of testing our software regularly. With automated and/or manual tests.

Business-wise, tests have two roles:

  • Complying with functional requirements: the implementation of each feature fulfills the needs for which it was developed.
  • Detecting regressions: all existing features keep working as expected, after any change made to the code.

Make sure that most business-critical features are covered by functional tests (also called "acceptance tests") and that most critical technical components are covered by unit or integration tests. Additionally, ensure Continuous Integration provides actionable feedback to developers whenever any of these tests fail.

For some developers, it’s tempting to delegate testing to another person (e.g. the Product Owner, or a QA team). It may make sense to do so once, after the development of each new feature, to make sure that the implementation complies with functional requirements, and to iterate on it collaboratively.

But delegating the detection of regressions is a bad idea, for several reasons:

  • It increases the delay between merging code and deploying it.
  • It increases the delay between finding a regression and fixing it.
  • Given that the functional scope will grow, the time needed to detect regressions will grow indefinitely. If the person in charge of these tests does not automate them, they will probably end up skipping more and more of them. So, after a while, there is a growing risk of regressions creeping in undetected.

Dealing with regressions is a painful and potentially expensive burden, especially if different roles have to collaborate on them (e.g. product owner + developers). Given that automating regression tests saves a lot of time in the long run, and that developers have the skills to write automated tests, it’s in developers’ interest to own the responsibility of detecting regressions in the first place, without having to involve other roles.

What if the functional scope to cover is huge?

Start with the most business-critical features. You can find them by asking yourself: "What’s the worst thing that could happen in production, in terms of revenue and/or resolution cost?"

For instance, an e-commerce website may answer with the following features:

  • "Ordering products by credit card" brings a revenue of ~$1000 per minute.
  • "Adding a product to the catalog" costs us ~$500 per hour if sales people must ask the CTO to add them manually to the database.
  • "Print a barcode to return an order" costs us ~$500 per day if orders need to be handled manually by our customer support team.

Given these business-critical use cases, it surely makes sense to start by writing automated end-to-end tests for them.
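For instance, a first end-to-end test for the checkout feature could look like the following sketch, written with Playwright. (The URL, selectors, and card details are made up for illustration.)

import { test, expect } from '@playwright/test';

test('a customer can order a product by credit card', async ({ page }) => {
  await page.goto('https://staging.example-shop.test/products/42');
  await page.click('text=Add to cart');
  await page.click('text=Checkout');
  await page.fill('[name="cardNumber"]', '4242 4242 4242 4242');
  await page.fill('[name="expiry"]', '12/34');
  await page.fill('[name="cvc"]', '123');
  await page.click('text=Pay now');
  // The assertion describes the business outcome, so a failure is actionable.
  await expect(page.locator('text=Thank you for your order')).toBeVisible();
});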

When should tests be run?

Every time code is updated or added to the codebase, before it is deployed in production.

Relying on git hooks to run tests on every commit may be sufficient, provided that it works reliably and that the tests’ duration does not incentivize developers to write fewer of them.

Whether you're using git hooks or not, make sure tests are run somewhere (preferably in a Continuous Integration environment) every time production-ready code is pushed.


Code checks and automated tests are run in a Continuous Integration environment, for every commit.

What kind of tests should we write?

The variables to optimize for are:

  • The size of the functional and technical scope covered by tests.
  • The time it takes to get feedback from tests.
  • The time it takes to fix problems reported by failing tests.
  • The time lost because of false positives (i.e. tests that fail for random reasons).

If your team has little experience writing automated tests and/or testable code, start with a few end-to-end tests. Then progressively add tests for finer-scoped units of code. Doing so should incentivize developers to write code that is easy to test, e.g. by segregating responsibilities, reducing coupling, and/or writing business logic as pure functions. Adopting an architecture that follows the dependency inversion pattern is a good way to achieve that (see Hexagonal Architecture or Clean Architecture).
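For example, here is a minimal sketch (the business rules are hypothetical) of logic written as a pure function, which can be unit-tested without any mock:

type CartItem = { unitPrice: number; quantity: number };

// Pure function: no I/O, no shared state, fully deterministic.
export function computeCartTotal(items: CartItem[], discountRate = 0): number {
  const subtotal = items.reduce((sum, item) => sum + item.unitPrice * item.quantity, 0);
  return subtotal * (1 - discountRate);
}

// The corresponding unit test gives fast, deterministic feedback:
// expect(computeCartTotal([{ unitPrice: 10, quantity: 2 }], 0.1)).toBe(18);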

Should we mock 3rd-party APIs?

Automated tests (as described in this article) are intended to detect regressions in your team’s functional scope, not in 3rd parties’. Based on that statement, it makes sense to mock 3rd-party APIs in your tests.

That being said:

  • Mocks should always match the current behavior of the API. This means that developers will need to continuously watch APIs for changes and update their mocks accordingly.
  • You may still want to be warned whenever the actual API does not behave as expected.

Detecting problems in your code and detecting problems in 3rd-party APIs don’t follow the same lifecycle:

  • Your scope should be tested every time your code changes.
  • 3rd-party scope should be tested every time their code changes (i.e. it does not make sense to test 3rd-party dependencies every time you commit changes to your own code).

You still want to know when third-party providers stop working as expected. However, those failures don’t need to be detected the moment they happen; monitoring them on a regular schedule, rather than each time your developers push changes to their own code, is preferable.

Therefore, set up two dedicated pipelines:

  • Your own CI pipeline tests your own scope, whenever your code changes.
  • A separate CI pipeline regularly checks that 3rd-party scope(s) work as expected.

To write tests that will be most useful and robust in the long term, I recommend following the F.I.R.S.T. principles, and making sure that developers don’t misuse mocks.
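Here is a hedged sketch (the interface and names are hypothetical) of a 3rd-party payment gateway hidden behind an interface, so tests exercise our logic rather than the provider’s API:

interface PaymentGateway {
  charge(amountInCents: number, cardToken: string): Promise<{ ok: boolean }>;
}

export async function placeOrder(gateway: PaymentGateway, amountInCents: number): Promise<string> {
  const result = await gateway.charge(amountInCents, 'tok_test');
  if (!result.ok) throw new Error('payment declined');
  return 'order-confirmed';
}

// In tests, a hand-rolled mock that matches the provider's documented behavior:
const fakeGateway: PaymentGateway = {
  charge: async () => ({ ok: true }),
};
// expect(await placeOrder(fakeGateway, 1999)).toBe('order-confirmed');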

Sanctuarize new/modernized parts of the codebase

Assuming that your codebase has been and/or will be developed for several years, it will probably lose cohesion, in terms of style and quality, as it ages. Worse: some parts may become complicated to maintain, because of technical debt, lack of tests, or accumulation of accidental complexity.

In that situation, it may be complicated to enforce a consistent level of code quality expectations across the whole codebase, as suggested above. And that’s OK.

What you don’t want to do is lower expectations down to the lowest common denominator. Instead, you can split your codebase into perimeters, and set an adapted level of expectations for each perimeter.

For instance, consider a development team about to implement a new feature on their e-commerce website. They want this feature to be more robust and easier to maintain than the rest of the codebase. To achieve that, they configure their static code analysis tools (e.g. ESLint and TypeScript) with stricter rules than for the rest of the codebase, using overrides that target the directory created specifically for that feature. By doing so, the team can ramp up the quality of newly produced code, without rushing to modernize (yet) the "legacy" part of the codebase.

"rules": {
    "prettier/prettier": "error",
    "deprecation/deprecation": "warn"
  },
  "overrides": [
    {
      // Tolerate warnings on non critical issues from legacy JavaScript files
      "files": ["*.js"],
      "rules": {
        "prefer-const": "warn",
        "no-inner-declarations": ["warn", "functions"],
        "@typescript-eslint/ban-ts-comment": "warn",
        "@typescript-eslint/no-var-requires": "off"
      }
    },
    {
      // Enforce stricter rules on domain / business logic
      "files": ["app/domain/**/*.js", "app/domain/**/*.ts"],
      "extends": ["async", "async/node", "async/typescript"],
      "rules": {
        "prefer-const": "error",
        "no-async-promise-executor": "error",
        "no-await-in-loop": "error",
        "no-promise-executor-return": "error",
        "max-nested-callbacks": "error",
        "no-return-await": "error",
        "prefer-promise-reject-errors": "error",
        "node/handle-callback-err": "error",
        "node/no-callback-literal": "error",
        "node/no-sync": "error",
        "@typescript-eslint/await-thenable": "error",
        "@typescript-eslint/no-floating-promises": "error",
        "@typescript-eslint/no-misused-promises": "error",
        "@typescript-eslint/promise-function-async": "error"
      }
    }
  ]

Different ESLint rules for different perimeters, by configuring overrides.

Similarly, if you want to modernize your entire codebase, proceed progressively. Create a dedicated directory with stricter rules, and progressively move legacy code to that directory, while fixing warnings and type errors of that code.
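With TypeScript, one way to do this (a sketch, assuming the root tsconfig.json sits two directory levels up) is to give the stricter directory its own tsconfig.json that extends the root configuration:

{
  // app/domain/tsconfig.json -- the "sanctuary" extends the root config
  // but enables stricter type checking for everything under app/domain.
  "extends": "../../tsconfig.json",
  "compilerOptions": {
    "strict": true,
    "noUncheckedIndexedAccess": true
  },
  "include": ["./**/*"]
}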

Where to start?

One way to proceed is to progressively migrate an old part of your functional scope into a better design. For instance, it would make sense to pick a feature for which it is difficult to write automated tests, and decide to migrate its implementation into a hexagonal architecture, where business/domain logic is separated from incoming commands (a.k.a. the "API") and side effects (a.k.a. the "SPI"). Lead that migration by writing the automated tests you want to have, and placing the new implementation in a dedicated directory with stricter static code analysis rules.

import { makeFeatures } from './domain/features';
import { userCollection } from './infrastructure/mongodb/UserCollection';
import { ImageStorage } from './infrastructure/ImageStorage.js';

// Inject concrete infrastructure adapters into the domain logic,
// following the dependency inversion principle.
/** @type {import('./domain/api/Features').Features} */
const features = makeFeatures({
  userRepository: userCollection,
  imageRepository: new ImageStorage(),
});

routes.post('/avatar', (request, response) => {
  features
    .setAvatar(request.session.userId, request.files[0])
    .then(
      () => response.json({ ok: true }),
      (error) => response.json({ ok: false })
    );
});

The setAvatar feature was redesigned to be easy to test in isolation, thanks to dependency inversion. Here’s how we migrated another feature: playlist deletion.

If you decide to follow that path, here is some advice:

  • If your team is not experienced in redesigning legacy features, start with a small and easy one. Otherwise, pick one that will be most relied upon by features to be implemented in the coming weeks or months.
  • Before coding anything, clarify the scope, business events and paths to support. E.g. by organizing an event storming with experts of the domain (or bounded context) you intend to redesign.
  • Visualize the current architecture of the scope to migrate, e.g. using a dependency analysis tool like Arkit, dependency-cruiser, or similar, and write down the problems you don’t want to replicate in the target architecture, so you don’t repeat the same mistakes. Such a tool can even enforce the target architecture, as shown in the sketch after this list.
  • When in doubt, collaborate on an adequate design, using software design tools like sequence diagrams, state machine diagrams, or ADRs.
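For instance, here is a hedged sketch of a dependency-cruiser rule (the paths are hypothetical) that fails the build if domain code starts depending on infrastructure code again:

// .dependency-cruiser.js
/** @type {import('dependency-cruiser').IConfiguration} */
module.exports = {
  forbidden: [
    {
      name: 'domain-must-not-depend-on-infrastructure',
      severity: 'error',
      from: { path: '^app/domain' },
      to: { path: '^app/infrastructure' },
    },
  ],
};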

After migrating each bounded context, you will end up with a codebase in which 100% of the code is checked against stricter rules.

Deploy every day, but don’t make the same mistake twice

Despite the use of static code analysis tools to check for defects and of automated tests to detect regressions, your users will sometimes find issues in production. There’s no way to prevent that. But there is a way to reduce the probability of such issues, and to reduce the time your team takes to fix them:

  • Deploy every day (given that you are confident that the risk of failure is low).
  • Don’t make the same mistake twice.

Why should we deploy every day?

The lazy answer: because the DORA research project identified that the best-performing teams deploy every day, or multiple times per day.

The practical answers:

  • Because it makes it faster for developers to find the root cause of a new bug appearing in production: the more frequent the deployments, the fewer commits there are between two consecutive deployments.
  • Because, for the same reason, it’s less expensive to roll back to the previous version (in terms of the number of rolled-back improvements) if the latest one doesn’t work as expected.
  • Because it encourages your team to split up their work into smaller, safer increments, which is also a practice followed by the best-performing teams, according to DORA.

What about not making the same mistake twice?

It’s ok to discover unexpected behaviors in production. In some cases, it’s even a good thing.

When unexpected behaviors are expensive for the business and/or the development team (e.g. an outage that makes your website unusable for several hours), developers should take measures to prevent similar incidents from happening again.

How to detect problems in production?

There are several ways to detect problems in production:

  • Ideal case: A developer discovers a problem and fixes it right away.
  • Regular case: An employee discovers a problem and reports it to the development team.
  • Worse case: A user reports a problem to the development team.
  • Worst case: A user finds a problem but doesn’t report it.

Either way, developers will need information about what the problem is, how it manifests itself concretely (e.g. an error message), how to reproduce it (i.e. environment + procedure), and what the user’s initial intention and expectations were.

But how to get that data in the worst cases? That’s where error monitoring tools (e.g. Sentry) shine. Injected into our product running in production, they act as a probe that detects runtime errors and synthesizes them into a list of known errors, until each of them is fixed by a developer. Additionally, they collect data about the context of each error (i.e. user agent, versions of the software being used, operating system, exact timestamps, etc.) to help developers reproduce it.

Unfortunately, like static code analyzers, these tools won’t fix problems for you. So, as with warnings and typing errors, make sure that every error is taken care of as soon as possible. The more your team lets them accumulate, the less motivating and efficient using such tools becomes.

Also, when using this kind of monitoring tool, make sure that personal and/or confidential data does not leak out of your system.
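For instance, here is a hedged sketch using @sentry/node: its beforeSend hook lets you scrub events before they leave your system. (Which fields to scrub depends on your data; the ones below are common examples.)

import * as Sentry from '@sentry/node';

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  beforeSend(event) {
    // Strip anything that could identify the user before the event is sent.
    delete event.user;
    if (event.request) {
      delete event.request.cookies;
    }
    return event;
  },
});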
Tactically, there are many ways to proceed. You can put one developer in charge of fixing production errors, as their highest priority. This role can rotate regularly (e.g. every day), so that each developer is incentivized to write more robust code. Or new errors can be individually dispatched to volunteering developers, at every daily meeting.

How to reduce the risk of relapses?

No need to panic. Whenever an incident hits production, here’s a procedure to follow:

  1. Keep traces of what happened before, during, and after the incident, to help you write a post-mortem (note: adequate monitoring and log collection should be put in place before incidents happen).
  2. Communicate – internally and externally – about the incident.
  3. Stabilize production, e.g. by rolling back to a previous release that works.
  4. Write and deploy a corrective release that fixes the problem.
  5. Find and address root causes + commit to preventive measures.

The key to not making the same mistake twice is the very last part of the procedure.

It’s also the one that is most often overlooked. Most of the time, that’s because nobody feels personally responsible for doing it. Oftentimes, it’s because the Product Owner (or product team) pressures developers into prioritizing the development of planned features over securing existing code and/or adjusting the development process. Sometimes, developers themselves decide to ship more features instead of preventing relapses.


Notes while investigating the root cause of an incident.

How to find the root cause of an incident?

The "5 WHYs" technique can be useful. For instance:

  • WHY did production crash? – Because a user who was not logged in visited page B.
  • WHY was the user able to reach page B? – Because there was a link on the homepage.
  • WHY did the user not see a login page when trying to access page B? – Because the login status is not yet known by the back-end when the page is rendering.
  • WHY is the login status not yet known when the page is rendering? – Because our session management backend is slow, and waiting for status would decrease our web performance metrics a lot.
  • WHY is the session management backend slow? – Because it uses our unoptimized legacy database.

In this example, the root cause is that the whole website relies on a legacy session management backend that makes navigation unpredictable, sometimes causing crashes in production. So, unless the team fixes that legacy session management backend, similar crashes are likely to happen again soon in production. Should the team fix the legacy session management backend right now? Probably not. But they should commit to a remediation plan that leads to that goal.

In practice, how to achieve daily deployments with a low failure rate?

Give one developer the responsibility of making sure that unexpected behaviors in production (i.e. runtime errors, bugs, incidents…) are detected as soon as possible, fixed as soon as possible, and that actions are taken to prevent each kind of problem from happening again in the future.

By doing so, this developer will feel empowered to do their work in good conditions, e.g. by setting up adequate monitoring and logging in production, making sure that useful post-mortem reports are written, and ensuring that preventive measures are acted upon.

When a good level of confidence is reached, progressively increase the frequency of deployments.

Align the product development team on the right incentives

At that point, developers are equipped to write quality software and detect defects as soon as possible, preferably while they are designing or implementing, rather than in production. They detect and fix production errors quickly and don’t make the same mistake twice. The confidence in their code and development process allows them to ship improvements to production every day. And, while they bring expected improvements to the functional scope of their software, they progressively improve the design and quality of the oldest parts of their codebase, so it remains healthy, robust, and easy to maintain in the long term.

Unfortunately, that balance can quickly fall apart. For instance:

  • If developers lose motivation to keep the design standards and/or code quality high for the long term.
  • If some of the developers fail to follow the team’s quality guidelines, causing systematic rework, frustration, and delays.
  • If developers rush to fix the implementation of features that don’t work as expected because they misunderstood the functional requirements, at the expense of their long-term technical responsibilities.
  • If someone (e.g. a manager, product owner, or other) pressures developers to release more features per week, or to commit to tight deadlines.
  • If developers are incentivized and/or rewarded on performance metrics that are not aligned with the long-term quality and robustness of their codebase, e.g. promotions or bonuses based on the number of features shipped per week.

Preventing or fixing these kinds of situations can be tricky, as it requires well-functioning leadership and/or soft skills.

A common mistake is to cultivate a mindset in which developers are expected to mostly implement features that were prioritized, planned, and designed for them.

It’s problematic because:

  • It puts developers in a posture of expecting a precise and unambiguous specification for every change they are asked to make to the software, potentially at the cost of a healthy two-way collaboration with the people in charge of writing these specifications. This is especially true for developers who enjoy working on their own all day.
  • It puts developers in a position where it’s complicated to justify time spent on development activities that don’t directly contribute to the functional roadmap: updating dependencies, improving code quality, training on better design and coding techniques.
  • It can make it tempting to track developers’ performance (or "productivity") with metrics (e.g. velocity on the development of user stories) that discourage investment in sustainable development practices: code quality, prevention of regressions, error management, etc.

Here are a few recommendations on how to avoid the pitfalls mentioned above:

  • When elaborating solutions to business problems, include at least one developer in the design process. This will improve their accountability for implementing a good solution to a well-understood problem. And sometimes, thanks to their understanding of how things are currently modeled and implemented, developers will propose alternative solutions that can save a lot of development time while still fulfilling the requirements.
  • Make sure that the prioritization and planning of functional and technical projects are negotiated openly and benevolently by product and technical representatives. For instance, if developers need to redesign a part of the codebase, it’s in their interest to convince others of that importance by explaining what concrete improvements it will bring for the development of the next features, and what are the risks and costs of delaying that project. The same advice applies to product managers regarding the prioritization and planning of upcoming improvements to be developed: explain to convince and engage the development team. Doing so should increase the trust, collaboration and engagement of all employees that are involved in designing and implementing features.
  • In terms of management, make sure that developers are not incentivized to "just ship as many features per week as possible." Find progression tracks that align each developer’s career objectives with the team’s short-term and long-term expectations. The goal here is to prevent the situation where a developer can rightfully justify working only on short-term improvements.
  • Finally, make sure that developers are provided with resources and guidance to continually grow their skills: hard and soft. Provide training and/or coaching resources to them. Encourage them to work together on tasks, by pair and/or mob programming. Encourage them to collaborate well with other/non-developer roles: domain experts, product owners, product designers, customer support team, end users, etc.

Conclusion

The JavaScript language and its ever-changing ecosystem of packages and practices can quickly make codebases hard to maintain. The resulting loss of development velocity and/or code quality can be prevented without rewriting everything from scratch or pausing the development of new features, as we have discussed in this article.

For more practical advice on how to apply these recommendations in TypeScript and JavaScript projects, I recommend Yoni Goldberg’s list of best practices. It was written for Node.js (backend) projects, but many of the practices apply to frontend codebases too.

I would like to thank my colleagues Fabien Giannesini and Josian Chevalier for their feedback while iterating on this article.
