This is the second post in our series on technical debt. In the first post, we explained what technical debt is and why it’s important. Now we’re exploring technical debt calculation.

It’s been several weeks since my “Technical Debt 101” blog post. As promised, it was the first in a series about this silent danger that is eating away at our software.

I hope you enjoyed the kitchen metaphor and can now clearly identify the different “flavors” of technical debt. After reading the first post, you should also understand how quality is affected by poor, “dirty” code and when the right time is to deal with technical debt.

In this second post, I will explain how to calculate technical debt and express it in a way that’s easily understood by both technical and non-technical team members.


What Should the Numbers Tell Us?

I’ve given many talks at development conferences. All of my presentations include at least one slide with numeric data based on research, an analysis I conducted, or another reliable source.

These metrics are simple, easy to understand, and objective. I include them to show the audience that realistic, accurate numbers can drive me to the right decisions. On the other hand, numbers that are hard to understand or wrongly computed can lead to choices that cost time or even money.

Technical debt epitomizes this philosophy. It’s a critical factor in code quality measurement (and management). Companies simply can’t afford miscalculations or reports that are hard to interpret.

Moreover, technical debt should be easily communicated to non-technical team members, such as upper management, business people, product owners, and even customer representatives.

It might seem strange to share an “internal” code quality metric with the customer. But let’s remind ourselves that it’s the customers who invest in our software, and they deserve to know where exactly they’re putting their money.

What is the best way to express technical debt without using technical jargon so that everyone is able to understand it?

Developers use a variety of terms for code quality metrics, such as cyclomatic complexity, branch coverage, or unwanted dependency cycles.

All of them look like Greek to business and product people.* If we asked them what value they get from such metrics, their answer would probably be “none.”

*Since I’m Greek, my sarcasm about my mother-tongue is allowed. 🙂

In the following paragraphs, I’ll discuss how to calculate the most important metrics affecting technical debt and will present the debt as a single reference number.

Before we move forward, however, let me add one more parameter. In my previous post, we discussed the different types of technical debt (design, defect, testing, and documentation debt). We also explained that there are various root causes that increase technical debt.

Every root cause can be represented by one or more metrics, but ideally we want to express technical debt as a single number that will describe the overall quality of a software system. Moreover, we want to find a metric that makes sense to both technical and non-technical folks.

Before we do that, however, let’s step back for a minute and think about financial debt.


Comparison to Financial Debt

If I go to a bank to ask for a loan, I will talk with the bank representative about the amount of money that I need and how much I can afford to pay back every month. I will also sign a loan agreement that clearly states the above details and the number of months I have to repay the loan.

Wouldn’t it make sense to express technical debt in a similar way? How about telling the business that in order to eliminate all or “X%” of the technical debt, the development team needs to spend “Y” man-days working on it?

Wouldn’t that make perfect sense? Isn’t this a number that is easily understood by all stakeholders? Yes, of course!

Developers perceive this as the number of days they have to spend writing unit tests (to increase coverage), removing duplicate code, refactoring (to decrease complexity), or doing any other work that helps them eliminate the technical debt.

Meanwhile, this calculation says to non-technical team members: “If you want to improve the quality of your software — by paying back the technical debt — you need to ask the development team to spend Y man-days on it.”

We can take it one step further: If this number is multiplied by the average rate of the development team, then we can easily compute the cost of this effort in any given currency.
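For example, if paying back the debt is estimated at 40 man-days and the team’s blended rate is, say, $500 per man-day (a purely hypothetical figure), the cost of paying it back is 40 × 500 = $20,000.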

This approach is great because management is able to forecast how much money (and time) it will cost to put the software back on track for the desired level of quality.


What’s Happening Behind the Scenes?

At this point, you’re probably wondering how all of these technical metrics can be boiled down to a single number.

The first step is to measure everything that contributes to the technical debt: code duplication, code complexity, test coverage, dependency cycles and coupling, lack of documentation, and programming rules violations.

Let me briefly explain how to calculate each of these technical debt components.

Code Duplication

Duplicate or copy-pasted code is usually expressed as the ratio of duplicated lines to the total lines of code, on a scale from 0% (no duplications — a perfect score) to 100% (everything has been copied and pasted at least once — although I doubt you’d ever find such a project).

There are many algorithms for detecting duplicate code. The most popular approach seems to be searching for identical series of tokens, allowing for occasional variation (a few tokens added or removed). A token is the smallest element of a programming language, such as an identifier, a keyword, a separator, an operator, a literal, or even a comment.

A duplication detection mechanism splits the lines of code into tokens, sets the minimum number of identical consecutive tokens that counts as duplicated code, and then searches the codebase for repeated series of tokens of at least that length.

By applying this process to the complete codebase, it’s easy to count the duplicated lines of code and divide them by the total number of lines to get the percentage referenced above.
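To make the mechanics concrete, here is a minimal, illustrative sketch of token-window duplicate detection in Java. The Token record, the duplicationRatio method, and the minWindow parameter are names invented for this example; real detectors and their tokenizers are considerably more sophisticated.

import java.util.*;

// Illustrative sketch only: detect duplicated lines by hashing fixed-size
// windows of tokens and marking the lines behind any window that repeats.
class DuplicationSketch
{
    // A token and the (1-based) source line it appears on.
    record Token( String text, int line ) {}

    static double duplicationRatio( List<Token> tokens, int totalLines, int minWindow )
    {
        Map<String, Integer> firstSeen = new HashMap<>();   // window key -> first start index
        Set<Integer> duplicatedLines = new TreeSet<>();

        for( int start = 0; start + minWindow <= tokens.size(); start++ ) {
            // Build a key from the next minWindow token texts.
            StringBuilder key = new StringBuilder();
            for( int i = start; i < start + minWindow; i++ ) {
                key.append( tokens.get( i ).text() ).append( '\u0000' );
            }
            Integer previous = firstSeen.putIfAbsent( key.toString(), start );
            if( previous != null ) {
                // Both occurrences of the repeated window count as duplicated lines.
                for( int i = 0; i < minWindow; i++ ) {
                    duplicatedLines.add( tokens.get( previous + i ).line() );
                    duplicatedLines.add( tokens.get( start + i ).line() );
                }
            }
        }
        return totalLines == 0 ? 0.0 : 100.0 * duplicatedLines.size() / totalLines;
    }
}

With minWindow set to, say, 100 tokens, any 100-token sequence that appears more than once marks all of its lines as duplicated, and the ratio is simply the duplicated lines divided by the total lines.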

Code Complexity

Code complexity can be expressed in many ways. The most widely known metric is cyclomatic complexity, which, put simply, is the total number of independent paths through a code unit (method, function, class, etc.).

Every unit of code starts with a complexity of one. Every branch in the code (if, case, and switch statements) adds one to the complexity. Every exit path or exception (in languages that support them) also adds one to the overall complexity.

Although the accepted complexity per programming unit might vary from language to language or system to system, the following rules and ranges have been proven to work in most cases:

  • Complexity from 1 to 10: Normal
  • Complexity from 11 to 20: Moderate
  • Complexity from 21 to 50: Risky
  • Complexity over 50: Unstable/Unmaintainable

To demonstrate this, let’s quickly calculate the cyclomatic complexity of the following method:

public String getAgeType( int age )
{
    if( age < 0 )
        throw new IllegalArgumentException( "Invalid age" );
    if( age > 18 ) {
        return "Adult";
    } else {
        return "Sorry you are not allowed to see the movie";
    }
}

The method starts with a base complexity of one; the two if statements, the two return statements, and the thrown exception each add one. So, the total cyclomatic complexity is six.

Test Coverage

The next component of technical debt we will examine is test coverage, which consists of two metrics: line coverage and branch coverage.  

Line coverage reports the total number of lines that have been hit at least once by a test case. Branch coverage reports the number of conditional branches that have been hit by test cases.

Take the following code snippet as an example.

If there is only one test that exercises the method by passing the value 15 to the “age” parameter, then the line coverage will be 43% (3/7 lines covered) and the branch coverage will be 33% (1/3 branches covered). (The exact percentages depend on which lines and branch outcomes your coverage tool counts as coverable, but the idea is the same.)

public String getAgeType( int age )
{
    System.out.println( "Confirming age…" );
    if( age < 0 )
        throw new IllegalArgumentException( "Invalid age" );
    if( age > 18 ) {
        return "Adult";
    } else {
        return "Sorry you are not allowed to see the movie";
    }
}

Finally, we need to compute the overall test coverage, which is a combination of line coverage and branch coverage.
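The post doesn’t prescribe a specific formula for this, but one simple convention (an assumption for illustration; some analysis tools pool things differently or weight the two metrics) is to divide all covered items, lines plus branch outcomes, by all coverable items:

// Combine line and branch coverage into one figure by pooling covered items
// (lines hit plus branch outcomes hit) over all coverable items. This is one
// common convention, not a universal standard.
static double overallCoverage( int coveredLines, int executableLines,
                               int coveredBranches, int totalBranches )
{
    int covered = coveredLines + coveredBranches;
    int total = executableLines + totalBranches;
    return total == 0 ? 0.0 : 100.0 * covered / total;
}

Plugging in the numbers from the snippet above, the overall coverage would be (3 + 1) / (7 + 3) = 40%.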

Dependency Cycles and Coupling

Dependency cycles and coupling describe the level of architectural quality and design of the system. An unwanted cycle can be reported between files, packages (namespaces), or even modules.

The easiest way to explain this is by the following example: Assume that File A is dependent on File B, File B is dependent on File C, and File C is dependent back on File A.

This is a dependency cycle in its simplest form, and in general, those dependencies are unwanted. The more they appear in a system, the harder the system is to understand and maintain.
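Expressed in Java, purely for illustration and with hypothetical class names, the cycle looks like this:

class FileA { FileB b; }   // A depends on B
class FileB { FileC c; }   // B depends on C
class FileC { FileA a; }   // C depends back on A, closing the cycle

None of the three can be built, understood, or reused in isolation without dragging the other two along.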

Coupling describes how many other files a particular file depends on and how many files depend on it. The more dependencies a file has, the more likely it is to break when one of those files changes.

Documentation and Rules

The lack of documentation can sometimes be omitted from the technical debt calculation.

But in cases where we need to ensure that the code is properly documented, a good approach is to measure the undocumented public API and report it as a ratio (the number of undocumented public methods vs. the total number of public methods).
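For example, if 20 of a codebase’s 80 public methods (hypothetical figures) carry no documentation at all, the undocumented public API is 20 / 80 = 25%.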

Defects and rules violations can be handled using the same method. The usual pattern followed here is to check the codebase against a pre-defined and categorized set of quality rules and report which of them have been violated.

Each rule should be tagged with a category like performance, efficiency, maintainability, portability, security, etc. The trick here is to express everything — even duplications, complexity, etc. — as a rule.

For example, if we want to ensure that every file has at least 50% code coverage, then we should create such a rule with this threshold. Or, if we don’t accept methods with a complexity of 11 or more, then we should create a rule that will report when this occurs.

For every rule, we need to determine the time needed by the average developer to fix the problem. This is usually expressed in hours but can also be expressed in days.

The final step is to sum the total number of hours or days needed to fix all of the violated rules, which together represent the technical debt of the entire codebase. Voila — we have our technical debt metric expressed in man-days!
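As a minimal sketch of that final aggregation step (Violation, remediationHours, and the 8-hour working day are assumptions made for illustration; real tools ship their own rule models and remediation functions), the calculation boils down to something like this:

import java.util.List;

// Sum the estimated fix time of every rule violation and convert the total
// into man-days and, optionally, into money.
class DebtAggregator
{
    // A violated rule and the estimated hours an average developer needs to fix it.
    record Violation( String ruleId, double remediationHours ) {}

    // Total technical debt in man-days, assuming an 8-hour working day.
    static double technicalDebtInManDays( List<Violation> violations )
    {
        double totalHours = violations.stream()
                                      .mapToDouble( Violation::remediationHours )
                                      .sum();
        return totalHours / 8.0;
    }

    // Multiplying by a (hypothetical) blended daily rate gives the cost figure
    // discussed earlier in the post.
    static double technicalDebtCost( List<Violation> violations, double dailyRate )
    {
        return technicalDebtInManDays( violations ) * dailyRate;
    }
}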


Abstract Numbers vs Ratio

We’ve now demonstrated that, even if technical debt is calculated based on several technical metrics, there’s common ground for both sides of the team.

Technical and non-technical team members can share the same way of expressing technical debt in man-days or currency, and keep track of progress without needing to explain all the technical details.

Of course, the developers will need to dive into the technical deep-end in order to work on reducing the debt, but this can be kept safely inside the dev team.

However, when it comes to the question “How good or bad is the code quality of the product?”, the number above is not enough.

Let me elaborate on that. Assume that the development team tells the product owner that, in order to hit the goal of decreasing technical debt by 90%, they need around 250 man-days (yes, a whole man-year!). Is that good or bad? Reasonable or not?

What does this mean in terms of the code quality? Neither the developers nor the product owner would likely be able to decide simply by seeing this number.

The main reason is that we don’t know the size of the project. Are we talking about a greenfield project with 30k lines of code, started six months ago, or a 15-year-old legacy, monolithic system with 1.5 million lines of code?

Clearly, in the first case, one man-year to eliminate the technical debt is a huge number, so the answer would be that the quality is bad. The latter case, on the other hand, suggests the product is in very good shape; 250 man-days doesn’t look bad at all for such a big codebase.

It took 15 years to accumulate that technical debt, whereas the first product accumulated the same amount in only six months. The problem here is reporting man-days without giving any information about the project size.

A quick suggestion to overcome this issue would be to also report the total number of lines of code to upper management and ask them to do some extra math to determine the overall quality level.

The problem is that this actually violates my earlier statement: metrics should be “simple, easy to understand, and objective.”

Having to do additional calculations increases complexity and doesn’t guarantee that everyone will reach the same conclusion about quality. Besides, there might be cases where we need to compare the technical debt of two or more products. So again, raw lines of code are not enough.

One last thing: if man-days or currency is our only technical debt metric, then what is the worst possible value? The best value is obviously zero, but what would be the worst? Honestly, I have no answer to that, and I have seen some really, really large numbers reported on some projects.

Given this, the solution is to represent technical debt as a ratio instead of an abstract number (man-days, money spent, etc.).

This has several advantages over the previous approaches:

  • It has both a best score (0%) and a worst score (100%).
  • It’s comparable to other projects because it contains the total lines of code.
  • It’s still a single, easy to understand, and objective number.
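The post leaves the exact definition of the ratio open, but a common convention (the SQALE-style approach used by several code quality tools, stated here as an assumption rather than the only option) is to divide the estimated remediation effort by an estimated cost to develop the codebase from scratch, derived from the total lines of code:

// Technical debt ratio = remediation effort / estimated development effort,
// where the development effort is derived from total lines of code. The
// hours-per-line figure is a calibration constant you choose (0.5 is an
// assumed example value, not a universal default).
static double technicalDebtRatio( double remediationHours,
                                  long totalLinesOfCode,
                                  double hoursToDevelopOneLine )
{
    double estimatedDevelopmentHours = totalLinesOfCode * hoursToDevelopOneLine;
    return estimatedDevelopmentHours == 0
           ? 0.0
           : 100.0 * remediationHours / estimatedDevelopmentHours;
}

Under that convention, a 100k-line codebase with 2,000 hours of remediation work and an assumed 0.5 hours to develop each line would have a debt ratio of 2,000 / 50,000 = 4%.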

A product with a technical debt ratio of 5% is clearly in better quality shape than a product with a ratio of 25%. But when is the right time to start worrying about the code quality?

Although there are different types of projects and every programming language has its own characteristics, a safe rule of thumb is that codebases with a technical debt ratio above 10% should be considered candidates for poor quality. In such cases, the management team should work with the development team to decide how they want to start eliminating the debt.

(Happily we will discuss different strategies and action plans to work on reducing technical debt in a future post. Stay tuned!)

Before we move on to the available options of technical debt calculation, I’d like to clarify that both numbers we’ve discussed (man-days and the ratio) are equally important and complement one another. Both should be seriously considered before making any decision about the product quality or steps to eliminate the technical debt.

Technical Debt Calculation

At this point, we’ve nearly run the whole marathon. The finish line is in sight!


How exactly do we calculate the technical debt in man-days, currency spent, and ratio?

Manual computation is out of the question even for very small projects.

One could spend a whole day running static code analysis: computing duplications, complexity, and design issues; evaluating security and performance; measuring code coverage and test quality; and much more, just to come up with the component metrics. Then every single metric must be given a weight in order to compute the one and only technical debt number.

Additionally, this should not be an ad-hoc process but one in which both the development side and product/management/customer side are able to view the latest reports on a daily basis.

I guess I probably don’t need to say anything else to convince you that it can’t (or shouldn’t) be done manually. 🙂

So, the next decision is whether we want to implement an in-house system or use an external tool that will do the hard work for us.

I would never recommend the first approach unless you really want to invest in that area, building a product specifically for that purpose and then earning revenue by selling it or offering services based on it.

Luckily, there are a plethora of tools out there that can save time and help us calculate the technical debt on a daily basis.

In my next post, I will share the most popular ones, explaining their key features and how you can use them to analyze a small codebase from GitHub.

In Summary

In this post, we discussed the value of reporting technical debt in a way that can be easily interpreted and understood by both technical and non-technical team members. We also identified the challenge of expressing technical debt as a single number that incorporates project size metrics.

Both man-days and ratio metrics should be used by management and development teams in determining their strategy for reducing technical debt because each of them answers a different but equally important question.

For man-days, the question is “what is the effort required?,” whereas for the ratio it’s “how good or bad is the code quality?”

Finally, we explained how difficult it is to manually calculate all of the components of technical debt on a daily basis, especially in large codebases with many contributors. We concluded that the only reasonable approach is to use one of the existing code quality tools that I will present in my next post.

 

