Example Of Test Driven Development

This series has extensively discussed Test Driven Development (TDD) and provided numerous examples along the way. Now, let’s take a step back to revisit the core principles of TDD, emphasizing why it can be an effective development strategy and reviewing its fundamentals.

Test Driven Development is grounded in the idea that tests should serve as self-documenting elements. They should be the optimal starting point for a new developer to grasp the codebase, absorbing the business expectations through well-structured and assertive tests. The typical workflow in adhering to TDD involves the following pattern:

  • Red (Write a failing test. Code that fails to compile also qualifies as a failing test.)
  • Green (Ensure your tests pass.)
  • Refactor (Clean up the code after the initial implementation.)

Correct application of this pattern inherently upholds an additional principle: the tests should execute swiftly. Because the cycle encourages rapid movement from writing a failing test, to making it pass, to incorporating the next piece of a feature's acceptance criteria, fast tests are crucial for effective iteration. Critics of TDD may argue that the strict “write a failing test before producing production-level code” policy hinders developers. Advocates counter that, beyond streamlining the creation of new code by focusing exclusively on making a new test pass, TDD excels at rapidly iterating on existing functionality: the code coverage generated along the way allows you to modify the code with confidence. Let's delve into how TDD guides the journey from the initiation of a feature request to a fully functional feature.

The Feature Request

Your company/client comes to you with a new feature request. Right away, you're cautious: this is clearly entirely new functionality, with no overlap with existing code, or at least none that you can foresee initially. Finance & Sales have teamed up to rework the Stages on existing Opportunities so that their Probabilities are assigned using a secretive new forecasting model. To test the effectiveness of the new model, they want to perform a split-test without informing the sales reps that some of their Opportunities are going to be held out of the forecast. In addition to holding a small percentage of Opportunities in the forecasting control group, they need the “old” Probability scores to be mapped to a new custom field on the Opportunity; there'll be a one-time data mapping necessary to this new field, and then the Probabilities assigned to the Opportunity stages will be updated to reflect the new model.

This is meant to sound familiar. What follows probably isn’t — but that’s the nature of feature requests. Because they’re specific to the client/business, I’m instead going to focus on how to solve a problem, rather than going with something siloed to a specific industry. The feature request looks something like this:

With the new Opportunity probabilities, some of them will be updated using a workflow rule to assign the probability to an anti-prime number. When you see an Opportunity get updated with one of these sentinel Probability scores, you’ll need to unassign the existing Opportunity owner and reassign to a system user, as well as map the prior Probability to the new custom field.

Building An Anti-Prime Generator

First of all, what's an “anti-prime” number? An anti-prime (also known as a highly composite number) is a number with more divisors than any number before it. Since we're operating on a percentage scale for Probability, we'll chiefly be concerned with the anti-primes from 0 to 100. Let's begin!
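Before writing any Apex, it helps to see the definition in action. Here is a quick divisor-counting sketch in plain Java (used here only because it is easy to run outside a Salesforce org; the class and method names are illustrative, not part of the eventual Apex class):

```java
public class DivisorCountDemo {
    // Count how many integers from 1..n divide n evenly.
    static int divisorCount(int n) {
        int count = 0;
        for (int d = 1; d <= n; d++) {
            if (n % d == 0) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // 12 has six divisors (1, 2, 3, 4, 6, 12); no smaller number has
        // more than four, so 12 is an anti-prime. 10 only has four
        // (1, 2, 5, 10), a count already reached by 6, so 10 is not.
        System.out.println(divisorCount(12)); // 6
        System.out.println(divisorCount(10)); // 4
    }
}
```

A number joins the anti-prime sequence only when its divisor count strictly beats every predecessor's, which is exactly the invariant we will test for below.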

TDD states that lack of code, or lack of code that compiles, counts as a failing test. The first thing we’ll need to do is create the object that we’d like to house this business logic in, and define a well-named method that returns true/false:

public class AntiPrime {
  public static Boolean isAntiPrime(Integer num) {
    return false;
  }
}

That gives us the wings we need to confidently start down the road towards testing this feature:

@IsTest
private class AntiPrimeTests {
  @IsTest
  static void it_should_detect_one_as_an_antiprime() {
    System.assertEquals(true, AntiPrime.isAntiPrime(1));
  }
}

Now we have a failing test to work with, and we can begin implementing this feature. The naive implementation makes no assumptions:

public class AntiPrime {
  public static Boolean isAntiPrime(Integer num) {
    return num == 1;
  }
}

The initial test passes, but we know there are several other anti-prime numbers below 100. 1 is the first anti-prime because it has exactly one divisor (1/1 equals 1). This implies that the next number in the sequence, to beat 1, must have at least two divisors. It's time to draft another failing test, and then we can explore refactoring possibilities.

// in AntiPrimeTests
@IsTest
static void it_should_detect_two_as_an_antiprime() {
  System.assertEquals(true, AntiPrime.isAntiPrime(2));
}

Now we are back to the “Red” part of our TDD workflow, and we need to re-assess how we’re going to get to green. Clearly, the simplest case is again the best way:

public class AntiPrime {
  public static Boolean isAntiPrime(Integer num) {
    return num == 1 || num == 2;
  }
}

Now both our tests pass, but we're left with the sneaking suspicion that it's time to refactor: we're now using two “magic” numbers, 1 and 2, to represent the anti-primes, but we actually want to derive them programmatically. Time to go back to the drawing board:

public class AntiPrime {
  public static Integer primesBeforeDefault = 100;

  public static Boolean isAntiPrime(Integer num) {
    return antiPrimesBefore.contains(num);
  }

  /* If you try to use the simpler singleton pattern here,
  e.g. antiPrimesBefore = getAntiPrimes(), it's fine for
  calls to isAntiPrime, but the set will be double-initialized
  when testing against getAntiPrimes(); you also won't be
  able to reset primesBeforeDefault. */
  private static Set<Integer> antiPrimesBefore {
    get {
      if(antiPrimesBefore == null) {
        antiPrimesBefore = getAntiPrimes();
      }
      return antiPrimesBefore;
    }
    private set;
  }

  private static Set<Integer> getAntiPrimes() {
    Integer potentialAntiPrime = 1;
    Integer divisorCount = 0;
    Set<Integer> antiPrimes = new Set<Integer>();
    while(potentialAntiPrime <= primesBeforeDefault) {
      Integer localDivisorCount = 0;
      for(Integer potentialDivisor = 1;
        potentialDivisor <= potentialAntiPrime;
        potentialDivisor++) {
        if(Math.mod(
          potentialAntiPrime,
          potentialDivisor
        ) == 0) {
          localDivisorCount++;
        }
      }
      if(localDivisorCount > divisorCount) {
        divisorCount++;
        antiPrimes.add(potentialAntiPrime);
      }
      potentialAntiPrime++;
    }
    return antiPrimes;
  }
}


Now, there’s only one “magic” number—the pseudo-constant primesBeforeDefault. Its introduction has achieved three objectives:

  • Improved the flow of logic for generating anti-primes.
  • Introduced a new edge condition that requires testing—calling AntiPrime with a number greater than the lazily loaded anti-primes.
  • Established a method to test for numbers above 100 using a static integer.
These edge conditions get their own tests:

// in AntiPrimeTests
@IsTest
static void it_should_throw_exception_if_number_larger_than_anti_primes_generated_is_passed() {
  AntiPrime.primesBeforeDefault = 100;
  Exception e;
  try {
    AntiPrime.isAntiPrime(200);
  } catch(Exception ex) {
    e = ex;
  }

  System.assertNotEquals(null, e);
}

@IsTest
static void it_should_work_with_numbers_greater_than_100() {
  AntiPrime.primesBeforeDefault = 120;
  System.assertEquals(true, AntiPrime.isAntiPrime(120));
}

In the AntiPrime class:

public static Boolean isAntiPrime(Integer num) {
  if(num > primesBeforeDefault) {
    throw new AntiPrimeException('Primes weren\'t generated to: ' + num);
  }
  return antiPrimesBefore.contains(num);
}

// ....
public class AntiPrimeException extends Exception {}
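The guard-plus-lazy-initialization pattern distills to very little code. Here is a runnable sketch of the same idea in plain Java (not Apex; `GuardDemo`, `upperBound`, and the hardcoded set are illustrative stand-ins, and `IllegalStateException` plays the role of `AntiPrimeException`):

```java
import java.util.Set;

public class GuardDemo {
    static Integer upperBound = 100;
    private static Set<Integer> cache;

    // Lazily build the set once, then answer membership queries;
    // reject queries beyond the bound the set was generated for.
    static boolean isAntiPrime(int num) {
        if (num > upperBound) {
            throw new IllegalStateException("Primes weren't generated to: " + num);
        }
        if (cache == null) {
            cache = Set.of(1, 2, 4, 6, 12, 24, 36, 48, 60); // anti-primes <= 100
        }
        return cache.contains(num);
    }
}
```

The key design point is that the guard fires before the cache is consulted, so a caller can never get a silently wrong `false` for a number outside the generated range.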

Now, let’s proceed with the tests to verify the accurate generation of all expected anti-primes. To inspect the current output, we’ll elevate the visibility of the private static method getAntiPrimes:

// in AntiPrime
@TestVisible
private static Set<Integer> getAntiPrimes() {
//..
}

// in AntiPrimeTests
@IsTest
static void it_should_properly_generate_anti_primes_below_sentinel_value() {
  // make no assumptions!
  AntiPrime.primesBeforeDefault = 100;
  System.assertEquals(
    new Set<Integer>{ 1, 2, 4, 6, 12, 24, 36, 48, 60 },
    AntiPrime.getAntiPrimes()
  );
}

And the test fails. Upon reviewing the output, it appears that I inadvertently introduced a bug during the refactor. Did you catch it? The issue lies in how divisorCount is updated for numbers like 60 and 72, both of which have 12 divisors. My mistake was incrementing divisorCount by one whenever localDivisorCount exceeded it; instead, it should be set equal to localDivisorCount. Without that adjustment, divisorCount lags behind the true running maximum, so both 60 and 72 qualify: the prior divisor count is only 10 when 60 is encountered, and only 11 when 72 is:

// in AntiPrime
@TestVisible
private static Set<Integer> getAntiPrimes() {
// ...
  if(localDivisorCount > divisorCount) {
    divisorCount = localDivisorCount;
    antiPrimes.add(potentialAntiPrime);
  }
  potentialAntiPrime++;
// ...
}
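To see exactly what the one-line fix changes, here is a before-and-after comparison in plain Java (runnable outside a Salesforce org; the `buggy` flag is scaffolding added purely for illustration and has no counterpart in the Apex):

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class AntiPrimeFixDemo {
    // Generate anti-primes up to 'limit'. When 'buggy' is true, mimic the
    // refactoring mistake: increment the running maximum by one instead of
    // setting it to the new divisor count.
    static Set<Integer> getAntiPrimes(int limit, boolean buggy) {
        Set<Integer> antiPrimes = new LinkedHashSet<>();
        int divisorCount = 0;
        for (int candidate = 1; candidate <= limit; candidate++) {
            int localDivisorCount = 0;
            for (int d = 1; d <= candidate; d++) {
                if (candidate % d == 0) {
                    localDivisorCount++;
                }
            }
            if (localDivisorCount > divisorCount) {
                divisorCount = buggy ? divisorCount + 1 : localDivisorCount;
                antiPrimes.add(candidate);
            }
        }
        return antiPrimes;
    }

    public static void main(String[] args) {
        // The buggy comparison lags behind the true maximum, so ties like
        // 72 (12 divisors, same as 60) sneak in, along with 18 and 30.
        System.out.println(getAntiPrimes(100, true));
        System.out.println(getAntiPrimes(100, false));
        // [1, 2, 4, 6, 12, 24, 36, 48, 60]
    }
}
```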

Now all the tests pass. At this point, since the anti-primes below 100 are deterministically known, there's a valid argument for deleting the first two tests, which assert specific values.

Alternatively, one could argue for modifying the second test to validate the last anti-prime below 100, ensuring that 60 is correctly detected. I would opt for the latter approach, since the test for getAntiPrimes adequately covers the other cases.

A Side Note on Generating Anti Prime Numbers

While it's true that solving the anti-prime formula might be more straightforward in languages with more expressive and fluent array features, I urge you to consider readability and performance when evaluating the presented solutions. Many of the submitted answers treat 1 (and occasionally 2) as a special case, whereas my focus was on demonstrating how to treat all numbers equally (though one could argue that none of the solutions, including mine, particularly treat 0 equally).

Code style can be a contentious topic, and I don’t intend to present my implementation as the preferred solution. In reality, in a Java-like language, there is no way to avoid two iterations when building the solution. However, your taste and preferences regarding for or while loops may differ entirely (and understandably) from mine. I use while loops infrequently, but if you’ve read my Writing Performant Apex Tests post, you’ll know that they often outperform plain-jane for loops.

That being said, the only improvement I believe would enhance the readability of the above solution is if Apex supported range-based array initializations, which would make the inner iteration in getAntiPrimes more expressive by simplifying the for loop. Writing code, even code that needs to be extremely performant, always requires striking a suitable balance between readability and performance.
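For a taste of what that would look like, here is the inner divisor-counting iteration expressed with Java's range-based streams (Java rather than Apex, since Apex has no equivalent; `RangeDemo` and `divisorCount` are illustrative names):

```java
import java.util.stream.IntStream;

public class RangeDemo {
    // What the inner divisor-counting loop could look like if the language
    // offered range-based iteration, as Java's streams do but Apex does not.
    static long divisorCount(int n) {
        return IntStream.rangeClosed(1, n)
            .filter(d -> n % d == 0)
            .count();
    }
}
```

Whether this reads better than an explicit loop is a matter of taste, which is rather the point of this side note.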

As a counterpoint, consider the F# example in the provided link: while it works, what if it didn't?!

Completing The Feature Request


The remaining part of the feature request aligns more closely with our existing code and is therefore omitted. It’s evident that we will need to call AntiPrime from within our Opportunity Handler’s before-update method, assign the old probability to the hidden custom field, and reassign the owner to our placeholder owner if the new probability is an anti-prime.

A completed pull request for this feature will include:

  • New custom field metadata
  • Permission set and/or profile-related changes for this field
  • AntiPrime and its corresponding tests
  • Updates to the OpportunityHandler and the relevant tests
  • The workflow rule, if such items are version-controlled (which is hopefully the case) in your or your client’s org

Test Driven Development Is Your Friend

Hopefully you can see how the “red, green, refactor” mindset can help you quickly iterate on new and existing features. The safety net of your tests provides feedback on your system's design as it grows over time. Writing tests also helps you focus on the single smallest step you can take to “get to green.” Though it's true that some big refactors force you to rework tests, in general I find that even with large-scale (30+ files) refactors, I rarely have to update tests in a well-designed code base. Rather, the existing tests themselves help me verify that everything has been re-implemented correctly.

This is also because TDD fits in well with a core Object-Oriented Programming tenet, the “Open Closed Principle,” which states:

Objects should be open for extension but closed for modification

When your building blocks are small and expressive, they can effectively contribute to solving larger domain problems without requiring modifications. Likewise, when your tests are concise, you are motivated and incentivized to maintain small methods, minimal public interfaces, and clean designs. In the case of true “helper” methods, such as an anti-prime generator, static methods assist in keeping your code footprint small by reducing the number of objects that need initialization and tracking.

Consider an object like OpportunityOwnerReassigner, which could encapsulate the decision to reassign an owner based on the Opportunity's probability being an anti-prime. While this specific feature keys reassignment off the Opportunity's Probability field, future requests might broaden the scope to include more fields, or to designate a specific owner to receive reassignments. That could well be the focus of a future request, and it would be a prime example of extending an existing object's responsibilities in response to new requirements.