
Saturday, January 24, 2004

An Introduction to Mock Objects as a Testing Strategy

Prologue

Mock Objects are actors who play roles in your test scenarios. They reside with your test code, and provide a clear and simple way to unit-test certain hard-to-test conditions. Mixed with a test-driven approach to software design, they can also point the way to surprisingly flexible code. Unfortunately, Mock Objects used incorrectly can add clutter and confusion. But each test really boils down to three players: The test code, the code being tested, and the Mock Object. Picture those three players as separate but important entities, and you’ll begin to see your next test as a short, simple act in an ongoing play.

Act I

Suppose we want to test an object called “Student.” We aren’t going to test Student to see what it knows: We want to know if Student knows how to do some required research. In other words, we’re going to write a simple test to determine if Student collaborates correctly.

We want to ask Student the question secondPresidentOfTheUnitedStates(). We're not really concerned whether or not Student knows the answer, but we want to verify that Student knows when to seek help from another object, Historian.

The real Historian is, of course, a magical source of knowledge. One of the amazing things that the true Historian can do is answer listOfUSPresidents(). But we have decided that Historian is too expensive (and slow) to use whenever we want to run tests, especially since we're not trying to test Historian (who has its own extensive suite of unit-tests, to be sure). So we build a MockHistorian.

MockHistorian starts out as a mere shell of an object. We tell MockHistorian exactly what to reply when asked listOfUSPresidents(). For our test, we need only two Presidents. So, we tell MockHistorian to respond with a List of two Presidents, "Washington" and "Jefferson".
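
In Java, the opening scene might look something like this (the Historian interface and the setPresidents() setter are assumptions for the sake of the sketch; only listOfUSPresidents() comes from our story):

    // Historian.java -- the role that both the real Historian
    // and the mock know how to play.
    import java.util.List;

    public interface Historian {
        List listOfUSPresidents();
    }

    // MockHistorian.java -- a mere shell of an object that replies
    // with whatever we script it to say.
    import java.util.List;

    public class MockHistorian implements Historian {
        private List presidents;

        // Scripting the actor: the canned reply for listOfUSPresidents().
        public void setPresidents(List presidents) {
            this.presidents = presidents;
        }

        public List listOfUSPresidents() {
            return presidents;
        }
    }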

Then, in our unit-test, we introduce Student to MockHistorian, explaining that MockHistorian is the Historian to use. Yes, we lie to the object under scrutiny if necessary, but what we’re actually doing is setting up the Student object’s “state”. After all, the Student’s state is, at any given time, the whole of all Student’s member variables. This includes a reference to a Historian object. That is, in a nutshell, what Mock Objects do: They represent fake state within the tested object.

After setting the stage for our test, we ask Student the secondPresidentOfTheUnitedStates() question. Student will need to reply with President "Jefferson", otherwise Student fails the test.
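
The whole first act, as a JUnit test (assuming Student answers with the President's name as a String, and exposes a hypothetical setHistorian() setter):

    import java.util.Arrays;
    import junit.framework.TestCase;

    public class StudentTest extends TestCase {

        public void testStudentAsksTheHistorian() {
            // Set the stage: a mock scripted with exactly two Presidents.
            MockHistorian mockHistorian = new MockHistorian();
            mockHistorian.setPresidents(
                    Arrays.asList(new String[] { "Washington", "Jefferson" }));

            // Introduce Student to MockHistorian as "the" Historian to use.
            Student student = new Student();
            student.setHistorian(mockHistorian);

            // Student must reply with the second President on the list,
            // otherwise Student fails the test.
            assertEquals("Jefferson", student.secondPresidentOfTheUnitedStates());
        }
    }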

Note that the test does not depend on whether or not Jefferson was the 2nd president of the United States. (He wasn’t. "Dammit, Jim, I'm a programmer, not a historian...") This is a unit-test of Student, not a test of Historian (or programmer). Note, also, that we’re not actually testing the Student's knowledge of history, but rather the Student's ability to collaborate with Historian. "Is the Student capable of querying the Historian?" That's all we're going to test in this one teeny-tiny unit-test. It is a boring but essential behavior. When Student is capable of doing that, we'll come up with some harder questions.

Act II

What if, after getting the first test to pass, we ask Student numberOfLivingUSPresidents()? Yes, we're going to have to improve MockHistorian as well as Student. Both objects will evolve. MockHistorian should always remain incredibly simple, but must also continue to implement the same public interface as Historian. MockHistorian is an actor in the service of the unit-tests, not a throwaway object. Eventually, MockHistorian may become very good at answering all questions that Historian answers, even if all the answers are prescribed.
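
A sketch of that growth, assuming the Historian itself gains a matching query (the int return type and the setter name are assumptions here):

    // Historian.java -- the interface grows along with the questions.
    import java.util.List;

    public interface Historian {
        List listOfUSPresidents();
        int numberOfLivingUSPresidents();
    }

    // In MockHistorian: just another canned answer, nothing cleverer.
    private int livingPresidents;

    public void setLivingPresidents(int livingPresidents) {
        this.livingPresidents = livingPresidents;
    }

    public int numberOfLivingUSPresidents() {
        return livingPresidents;
    }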

Boring, right? Quite a dull part for the acting talents of the MockHistorian. You tell MockHistorian what to say, you ask Student for some information and assert that Student answers with the expected results. Is the test done? Perhaps not. MockHistorian can also act as a spy. Once the transaction between Student and Historian is complete, you can query MockHistorian for certain details. Did Student call a particular Historian method once, or did Student get confused and ask for the same information numerous times?
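
With a hand-rolled mock, spying can be as simple as a counter (the counter and accessor names below are invented for this sketch):

    // In MockHistorian: record each performance for later review.
    private int listOfUSPresidentsCalls = 0;

    public List listOfUSPresidents() {
        listOfUSPresidentsCalls++;
        return presidents;
    }

    public int listOfUSPresidentsCallCount() {
        return listOfUSPresidentsCalls;
    }

    // Back in the unit-test, once the transaction is complete:
    assertEquals("Student should consult the Historian exactly once",
            1, mockHistorian.listOfUSPresidentsCallCount());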

Your MockHistorian can also act out a failure condition. What should Student do if Historian is unable to check its resources? Rather than setting up a test with a real Historian (and trying to remember to pull the Ethernet cable out of the test machine at the right moment), you can tell MockHistorian to fake it. Have MockHistorian throw a LibraryOfCongressIsClosedTodayException, and check Student’s reaction.
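
One way to script that failure (the closeTheLibrary() switch is invented here, and making the exception unchecked is an assumption; what Student should do instead of crashing is a design decision):

    // LibraryOfCongressIsClosedTodayException.java -- unchecked,
    // for the sake of the sketch.
    public class LibraryOfCongressIsClosedTodayException
            extends RuntimeException {
    }

    // In MockHistorian: a switch that closes the library.
    private boolean libraryClosed = false;

    public void closeTheLibrary() {
        libraryClosed = true;
    }

    public List listOfUSPresidents() {
        if (libraryClosed) {
            throw new LibraryOfCongressIsClosedTodayException();
        }
        return presidents;
    }

    // Back in StudentTest (a junit.framework.TestCase). Whether Student
    // retries, apologizes, or answers "I don't know" is up to the design;
    // here we only insist that Student copes on its own.
    public void testStudentSurvivesAClosedLibrary() {
        MockHistorian mockHistorian = new MockHistorian();
        mockHistorian.closeTheLibrary();

        Student student = new Student();
        student.setHistorian(mockHistorian);

        try {
            student.secondPresidentOfTheUnitedStates();
        } catch (LibraryOfCongressIsClosedTodayException e) {
            fail("Student let the Historian's failure escape");
        }
    }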

Act III

When working with mocks, you may get the feeling that you're not testing anything. Good! You're on the right track. You’re usually unit-testing a single scenario for a single method on a single object. If you try to cover too much territory with one test, you may not be able to tell why it fails. Eventually it will become easier to write a dozen very small tests rather than one huge test for the same set of related behaviors. These tests document various bits of required behavior, and will help pinpoint problems as they arise.

Keep each individual test as simple as possible, while still exercising a useful Student behavior. Each test shouldn't test much at all. Try to determine what you really need to ask the tested object in order to drive the simplest implementation. Get the tested object to collaborate when possible, rather than doing all the work. If you discover that Historian doesn't answer something that Student needs to know, ask yourself which object should hold which pieces of the puzzle. Write tests for both Student and Historian, if necessary. Such tests keep us from giving one object too much responsibility.

A common pitfall is to write a test for Student using MockStudent. Testing Student by testing MockStudent is like asking a TV Doctor to perform brain surgery: The actors have fooled even you, the director. If you have the urge to write a test for a Mock Object, you are picking up cues that your object has taken on too much responsibility. Perhaps it also tends to rely on numerous private methods for various conditional behavior. There may be two objects, two separate groups of data and behavior, disguised as one. Find the new object, and let the original collaborate with it.

Epilogue

As you can tell from this simple example, Mock Objects lead you toward highly collaborative solutions (and away from Singletons and bloated objects). These design choices are not necessarily better or worse than other designs, in the classical sense, but they tend to be more testable. And tested, testable code is certainly very high-quality code.

Saturday, September 06, 2003

The Grapevine Metaphor

Building a Bridge

Software development is a young industry. The complexity of today's software challenges has far exceeded the maturity of the programming discipline. For many years, we've tried to rein in the chaos by using traditional engineering practices.

The usual metaphor is bridge-building: You plan. You plan some more. You plan for every possible contingency. You build the foundation. You build the bridge. You test it for structural soundness. You paint it. You throw a big party, and you let cars drive over it. Stray From This Path, and Disaster Will Surely Follow.

What we need to recognize, first, is that even bridges aren't always built that way.

The Golden Gate Bridge is an example. During construction, engineers ran into numerous difficulties surrounding the south tower. The sandy Bay floor, rough currents, and even ships lost in the fog presented potential dangers to the viability of the construction work and the safety of the crew. What did the engineers do?

They paused. They thought. Then, they improvised. They designed, built, and lowered a sealed "fender" around the area. Then they drained it, poured the foundation, and went on with their construction. (There were certainly dangers remaining, but they acknowledged those, and finished the work.)

Recently, the Richmond Bridge underwent a retrofit (and caused some traffic delays). Earthquake-safety improvements, of course. Why didn't they build it for earthquake safety in the first place? They did, I'm sure. But we have learned new techniques since then, and we want to apply them without having to rebuild the entire bridge.

I cannot even use the bridge metaphor to describe Extreme Programming. If I did, I'd claim that we could apply some of the paint first (e.g., GUI look-and-feel). That we could--and would--try driving a few cars over it (i.e., acceptance tests), knowing that the first set would fall into the bay. That we could build a temporary foundation, and swap it for a new foundation when necessary (e.g., flat files vs. RDBMS). You would back away from me, slowly, one eye on the door. You certainly wouldn't want to drive over my bridge. The problem is not that XP is a crazy approach to software development, but that we've been using the wrong metaphor to describe software development.

Growing Grapes

Software development is quite different from bridge-building. All software, despite our best efforts to plan out every detail, grows organically, like a grapevine. You can let it get out of hand and take over your whole yard, or you can prune it and shape it until it grows sturdy and drought-resistant.

The industry is beginning to learn how to contain the chaos. Not by trying to plan out the exact direction and length of each root and branch before we even plant the seed, but through vigilant care and pruning.

XP is a set of necessary tools, carefully assembled and tested by numerous experts.

The entire team of engineers, managers, and customers communicate frequently. We know what the software should do, and we know what progress has been made.

Very short iterations keep the team, and the software, on track. During each iteration, we build those features that the customer has designated as the highest-priority. Intuitively, you may see that this is an excellent approach for a project in its maintenance stage. But this rapid feedback is also a powerful tool for building software from scratch. The customer can alter course at any time. Imagine how powerful that is in any dynamic, competitive field.

We have automated unit tests, and we write them first. Why? Unit tests exercise an object's interface, and writing them first is an excellent way to define and document the intended behavior of that interface. We know that we're done with part of the implementation when our new test passes. Then we write another test. When we've completed a task, we run all the unit tests for all the objects in the project. They all have to pass; otherwise, we've broken something else.
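
For example, a first test might look like this (Account is invented for this illustration; the point is that the test is written before Account exists and names the behavior we intend to build):

    import junit.framework.TestCase;

    // Written first: this test won't even compile until we create
    // Account, which is exactly the feedback we want.
    public class AccountTest extends TestCase {

        public void testDepositIncreasesBalance() {
            Account account = new Account();
            account.deposit(100);
            assertEquals(100, account.getBalance());
        }
    }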

If we do break a test, we are responsible for fixing it immediately. We have the ability to change any line of code in the project. Chaotic? Not at all. We still have all those tests to keep us on track. We also have a simple coding standard to follow: Similar naming patterns and one indentation style allow us to quickly understand code written by other teammates. We reserve our creativity for problem-solving, not trivial differences in preferred brace positions.

The test-first approach is difficult to appreciate for the first two to four weeks of a new project. Programmers without test-first experience usually dread writing tests. It feels like excessive overhead. But tests are an investment. We spread the cost out over the life of the project, rather than tossing testing out entirely in order to meet a delivery date. And we begin to reap rewards after those first few difficult iterations. I am always amazed by how quickly the growing network of automated tests can spot a new bug, particularly when a change is made to an object that "shouldn't affect anything else." Without tests, that bug could lie hidden for months, perhaps to embarrass us when the software reaches an end-user (e.g., the people trying to use your web site). Tests usually pinpoint a bug that would normally take hours to find with a debugger.

As a programmer, I'm addicted to the test-first approach. I can write code rapidly, confident that our all-encompassing suite of tests will catch most mistakes that get past the compiler and the programming partner sitting next to me.

We have refactoring skills, the sharpest of software pruning tools. A viticulturist will often cut off some young grapes, allowing the vine to concentrate its productive energy into the remaining grapes. Similarly, the skilled programmer can reduce duplication as a new task is implemented, thus simplifying the design. In turn, simple design makes the next task easier to implement. As the project progresses, the software becomes easier to extend.
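
A miniature example of such pruning (the Report classes are invented; the duplication, not the domain, is the point):

    // Before pruning: the same banner logic has grown in two places.
    class ReportBefore {
        String header(String title) { return "== " + title.toUpperCase() + " =="; }
        String footer(String note)  { return "== " + note.toUpperCase() + " =="; }
    }

    // After pruning: the duplication is cut away into a single shared
    // method, so the next change to the format happens in one place.
    class ReportAfter {
        String header(String title) { return banner(title); }
        String footer(String note)  { return banner(note); }

        private String banner(String text) {
            return "== " + text.toUpperCase() + " ==";
        }
    }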

Wait. Stop! Did you catch that? As the software ages, it becomes more flexible.

In fact, we are always working to improve the code, or at least to keep it from getting worse. As it ages, it may truly become better, like a gnarly old Zinfandel grapevine.