Test-driven development (TDD) promises clean, reliable code, but in practice it can sometimes feel like an overcomplicated maze of mocks, stubs, and dependency injections. A common culprit? Injecting utility functions like `time.Now` or `json.Unmarshal` into your struct initializers, which not only clutters your tests but also ties them to unnecessary complexity. In this article, we'll kick off a series dedicated to refining TDD strategies and architecture in Golang. Starting with a focus on refactoring both code and tests, we'll show you how to reduce dependency-injection complexity while keeping tests deterministic and the code maintainable. Let's simplify the madness and restore the elegance TDD was meant to offer.
Preliminaries
Let’s dive into an example using a modified version of some code I wrote before, packed with some of the bad practices we’re aiming to fix. This real-world-like scenario will set the stage for understanding why these patterns fall short and how we can elevate them to cleaner, more maintainable solutions.
```go
package repository

import (
	"context"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
)

type (
	dbClient interface {
		Query(ctx context.Context,
			params *dynamodb.QueryInput,
			optFns ...func(*dynamodb.Options),
		) (*dynamodb.QueryOutput, error)
	}

	uuider             func() string
	nower              func() time.Time
	mapMarshaler       func(in any) (map[string]types.AttributeValue, error)
	unmarshalMap       func(m map[string]types.AttributeValue, out any) error
	unmarshalListOfMap func(l []map[string]types.AttributeValue, out any) error

	UserRepository struct {
		table         *string
		dbClient      dbClient
		uuid          uuider
		now           nower
		mapMarshaler  mapMarshaler
		unmarshalMap  unmarshalMap
		unmarshalList unmarshalListOfMap
	}
)

func NewUserRepository(
	table string,
	dbClient dbClient,
	uuid uuider,
	now nower,
	marshaler mapMarshaler,
	unmarshalMap unmarshalMap,
	unmarshalList unmarshalListOfMap,
) *UserRepository {
	return &UserRepository{
		table:         aws.String(table),
		dbClient:      dbClient,
		uuid:          uuid,
		now:           now,
		mapMarshaler:  marshaler,
		unmarshalMap:  unmarshalMap,
		unmarshalList: unmarshalList,
	}
}
```
The code above boasts an impressive test coverage of 94.7%. Sounds like a win, right? Unfortunately, it isn't. That number masks a deeper issue: I had written code purely to chase test-coverage metrics. The unintended consequence was that this flawed practice took root within the team, perpetuating a cycle of inefficiency and brittle code.
This approach quickly becomes unmanageable. As shown, the initializer is overloaded with unnecessary dependency injections, components that have no reason to be injected in the first place. But why is it done this way?
Non-Determinism in Tests
Non-deterministic tests are a liability. To truly unlock the power of TDD, you need deterministic tests, ones that reliably help to detect bugs and act as a robust regression safety net. When a test that once passed suddenly fails, it’s a clear signal that something has gone wrong. This consistency builds trust in your tests and empowers you to make bold, sweeping changes with confidence, knowing that your safety net is secured [1].
The original intent behind injecting a function like `time.Now` was to satisfy the `func() time.Time` signature, allowing a mock to enforce determinism by returning a static date.
Test Double
Let’s pause for a moment to clarify some essential terminology that will help us better understand what we’re working with. To do this, I’ll borrow some words from Martin Fowler [2]:
- Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what’s programmed in for the test.
- Spies are stubs that also record some information based on how they were called. One form of this might be an email service that records how many messages it was sent.
- Mocks are pre-programmed with expectations which form a specification of the calls they are expected to receive. They can throw an exception if they receive a call they don’t expect and are checked during verification to ensure they got all the calls they were expecting.
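In Go, these roles often reduce to small function values or structs. Here is a minimal illustrative sketch of a stub and a spy (all names are hypothetical):

```go
package main

import (
	"fmt"
	"time"
)

// stubClock is a stub: it returns a canned answer on every call.
func stubClock() time.Time {
	return time.Date(2024, 12, 18, 0, 0, 0, 0, time.UTC)
}

// spySender is a spy: a stub that also records how it was called,
// like Fowler's email service that counts the messages it was sent.
type spySender struct {
	sent []string
}

func (s *spySender) Send(msg string) {
	s.sent = append(s.sent, msg) // record the call for later verification
}

func main() {
	fmt.Println(stubClock().Year()) // 2024: the canned answer

	spy := &spySender{}
	spy.Send("welcome")
	spy.Send("reminder")
	fmt.Println(len(spy.sent)) // 2: the spy remembered both calls
}
```

Note that neither of these requires a mocking framework: both are a handful of lines of plain Go.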
To Kill a Mockingbird
Using mocking tools in software testing can often lead to unintended consequences that undermine the effectiveness of the tests [3]:
- Risk of Incomplete Tests: Mocking tools can give rise to “pseudo-tests” that appear to validate code but fail to thoroughly test its functionality. For example, mocking a function to accept any input and return a fixed value might only confirm the absence of crashes, while leaving input-specific behavior untested.
- Complexity of Mocking Syntax: Mocking often involves intricate syntax that can alienate developers unfamiliar with the framework. This can result in a culture of copying and pasting code without full understanding, increasing the likelihood of errors.
- Encouraging Poor Code Structure: By enabling tests on poorly designed and unstructured code, mocking tools might inadvertently discourage necessary refactoring and architectural improvements.
- Fragility of Behavior-Based Testing: Testing internal behaviors (e.g., like spying) instead of focusing on outcomes often leads to fragile tests. These tests break easily when the code changes, even if the changes don’t affect the actual results.
This isn’t to say that mocking tools are inherently bad; they can be incredibly useful in specific scenarios, such as working with legacy code. However, over-reliance on mocking often results in superficial testing, brittle code, and fragile tests that obstruct long-term maintenance and improvement. Mocking should be a tool in your arsenal, not the foundation of your testing strategy.
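To make the contrast with behavior-based testing concrete, here is a hedged sketch of an outcome-based check (the `normalize` function is invented purely for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// normalize is a small unit under test: it trims and lowercases input.
func normalize(s string) string {
	return strings.ToLower(strings.TrimSpace(s))
}

func main() {
	// Outcome-based testing asserts only on the observable result.
	// A behavior-based test might instead spy on whether TrimSpace was
	// called before ToLower; that test would break if normalize were
	// refactored into a single pass, even though the result is identical.
	fmt.Println(normalize("  Hello ")) // hello
}
```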
Putting It All Together
We’ve covered a lot of ground, exploring the pitfalls of bad practices and laying the foundation for improvement. As a reminder, this is just the beginning of a series dedicated to transforming code and tests to embrace better practices and a cleaner mindset.
Go Simpler
(Pun intended.) Let's revisit the `uuider` example from earlier. In most cases, there's little need to generate a UUID string in a way that requires injecting or mocking a specific output. For scenarios like these, calling the function directly is often the simpler, cleaner solution. This approach not only reduces unnecessary complexity but also streamlines your implementation, making the code easier to read and maintain.
```go
id := uuid.NewString()
```
Let’s Ensure Determinism
Now, let's tackle the case of `time.Now` (or any other inherently non-deterministic function, like randomizers). The very nature of `time.Now` means it generates a different value on every call. However, as we've discussed, ensuring consistent outputs is crucial for deterministic tests. Mocking isn't an option here, so let's explore a simpler way to achieve determinism using a stub.
```go
type UserRepository struct {
	now func() time.Time
}

func NewUserRepository() *UserRepository {
	return &UserRepository{
		now: time.Now, // Default to the actual time.Now function
	}
}

func (uc *UserRepository) Execute() {
	// Use uc.now() wherever you need the current time
}
```
In the updated example, you'll notice the initializer has become significantly simpler: we no longer need to pass the function externally. Instead, the real function is assigned automatically when the initializer is invoked. By testing within the same package, we can override this behavior when necessary for tests, while maintaining the guarantee that no one outside the package can alter the implementation. This approach strikes a balance between flexibility for testing and protecting the integrity of the code. Let's review the test.
```go
func TestExecute(t *testing.T) {
	fixedTime := time.Date(2024, 12, 18, 0, 0, 0, 0, time.UTC)
	uc := NewUserRepository()

	// Override with a deterministic time
	uc.now = func() time.Time { return fixedTime }

	// Run tests with uc.Execute(), which will use the fixed time
}
```
Here's a playground snippet to illustrate the concept behind the proposed solution.¹
Conclusion
This journey through refactoring and improving our TDD approach has surfaced several key points that bring us closer to a better testing and coding mindset:
- Deterministic Tests Are Key: Non-deterministic tests, as discussed in the context of `time.Now` or randomizers, undermine the reliability of our testing framework. By ensuring determinism through stubs rather than mocks, we eliminate unnecessary complexity and make our tests more robust.
- Simplify Initializers: Refactoring the `uuider` example demonstrated the power of keeping initializers simple. By directly using built-in functions like UUID generators, we avoided the need for external injection.
- Mocking Tools Have Limits: While mocking tools can be useful, especially for legacy code, they often encourage incomplete tests, increase complexity, and promote poor code structure. Instead, focusing on outcome-based testing and leveraging simpler alternatives like stubs leads to more reliable and maintainable tests.
- Focus on Good Practices, Not Metrics: High test coverage, like the 94.7% in the earlier example, can be misleading if achieved through superficial tests. The focus should be on meaningful tests that genuinely validate functionality, not on chasing numbers.
- Clarity in Terminology: Understanding foundational concepts, such as Martin Fowler’s “Test Double”, ensures we can use the right tools for the right reasons, avoiding misuse or overuse of techniques like spying or mocking.
These insights mark the beginning of a larger conversation about improving TDD strategies and architectural patterns in Golang. By embracing simplicity, determinism, and meaningful testing, we can write code that is not just testable but also maintainable and flexible for future growth.
This is just the first step in a series aimed at evolving your testing practices. Let’s keep refining and building toward a smarter, more effective TDD methodology!
Bibliography
1. M. Fowler, "Eradicating Non-Determinism in Tests," 2011. [Online]. Available: https://martinfowler.com/articles/nonDeterminism.html [Accessed: Dec. 18, 2024].
2. M. Fowler, "Test Double," 2006. [Online]. Available: https://martinfowler.com/bliki/TestDouble.html [Accessed: Dec. 18, 2024].
3. G. Hill, "On (Not) Using Mocking Frameworks," 2021. [Online]. Available: https://www.geepawhill.org/2021/07/13/on-not-using-mocking-frameworks/ [Accessed: Dec. 18, 2024].
¹ Beware: in the Go Playground, time begins at 2009-11-10 23:00:00 UTC.