Tuesday, August 31, 2010

How to refactor and test a SAAJServlet

Mind you, not a simple SAAJServlet, but an old-fashioned legacy transaction script without the scent of a test. Anyway... let's pick up the usual books (Working Effectively with Legacy Code, Refactoring, Refactoring to Patterns - just a starting point, of course) to keep at hand as a reference and let's start refactoring.

First, a word of caution: don't just copy from the books. Study them, try to apply the refactorings in a small and controlled environment, maybe in little katas, and grasp the principles behind them; otherwise you'll do more harm than good.

Back to business. We basically have to test the onMessage method, that gets a SOAPMessage and returns another one. Without even looking at the code, one would expect an algorithm like the following one:
  • parse the message into a parameter
  • pass it to a service that executes whatever must be executed
  • build a new message based on the input parameter and on the outcome of the previous step
Obviously one should also consider error management, but let's keep it simple. Let's take a look at the actual code:

public SOAPMessage onMessage(SOAPMessage message) {
    SOAPMessage result = null;
    Dialogue dialogue = null;
    try {
        SOAPBody body = message.getSOAPBody();

        SOAPFactory sFactory = SOAPFactory.newInstance();
        Name bodyName = sFactory.createName("myLocalName", "myPrefix", "myUri");
        Iterator it = body.getChildElements(bodyName);
        if (it.hasNext()) {
            SOAPBodyElement be = (SOAPBodyElement) it.next();
            dialogue = parseBodyElement(be);
            Operation operation = operationFactory.getOperation(dialogue);
            if (operation != null) {
                try {
                    // the statement here was lost in the listing; judging from
                    // the rest of the post, the operation is executed on the dialogue
                    operation.execute(dialogue);
                } catch (Throwable th) {
                    getServletContext().log(th.getMessage(), th);
                    dialogue.setMessage("Service unavailable.");
                }
            } else {
                dialogue.setMessage("Malformed message.");
            }
        }
    } catch (Throwable th) {
        getServletContext().log(th.getMessage(), th);
        dialogue.setMessage("Service unavailable.");
    } finally {
        try {
            MessageResponseGenerator mrg = new MessageResponseGenerator();
            result = mrg.generateResponseMessage(dialogue);
        } catch (Throwable th) {
            result = null;
        }
    }
    return result;
}
Now, though PMD does not complain too much about the structure of the method - just a few dataflow anomaly warnings, null assignments and caught Throwables - I smell (more than) a rat.

Starting from the beginning, the first thing that strikes me is the mix of different levels of abstraction in the code: you have both a low-level implementation that extracts a SOAPBodyElement to be parsed and a call to an Operation that encapsulates the requested service (the Command pattern), and this is not good. There are also too many nested ifs and try-catch blocks for my liking.

There is a separate object that creates the response message, and this is good. So why on earth are we missing an object, or at least a method, that parses the message in the first place?
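To make the target shape concrete, here is a minimal sketch of what extracting the parsing step could look like, with the SOAP types replaced by plain strings for brevity and every name hypothetical (this is only an illustration, not the project's code):

```java
// Illustrative only: a hypothetical MessageParser mirrors the existing
// MessageResponseGenerator, so onMessage reads at a single level of
// abstraction, following the three steps listed at the top of the post.
class Dialogue {
    private String message;
    String getMessage() { return message; }
    void setMessage(String message) { this.message = message; }
}

class MessageParser {
    // stands in for extracting and parsing the SOAPBodyElement
    Dialogue parse(String rawMessage) {
        Dialogue dialogue = new Dialogue();
        dialogue.setMessage(rawMessage.trim());
        return dialogue;
    }
}

class ResponseGenerator {
    // stands in for MessageResponseGenerator
    String generate(Dialogue dialogue) {
        return "response: " + dialogue.getMessage();
    }
}

class MessageHandler {
    private final MessageParser parser = new MessageParser();
    private final ResponseGenerator generator = new ResponseGenerator();

    String onMessage(String rawMessage) {
        Dialogue dialogue = parser.parse(rawMessage);   // 1. parse the message
        execute(dialogue);                              // 2. run the service
        return generator.generate(dialogue);            // 3. build the response
    }

    private void execute(Dialogue dialogue) {
        // the Operation lookup and execution would go here
    }
}
```

Each collaborator can then be tested in isolation, which is exactly what the monolithic method prevents.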

Why do we ask a factory for an Operation passing in a Dialogue, and then have to pass the same parameter back to the operation the factory returns? Couldn't the operation hold a reference to the dialogue? Would that be better? We shall see later.
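For completeness, the alternative I'm wondering about could be sketched like this; hypothetical names, and just one of the possible designs:

```java
// Sketch of the alternative: the factory injects the Dialogue into the
// Operation it builds, so callers no longer pass the same parameter twice.
// All names are hypothetical.
class Dialogue {
    private String message;
    String getMessage() { return message; }
    void setMessage(String message) { this.message = message; }
}

interface Operation {
    void execute(); // no Dialogue parameter: the operation already holds it
}

class GreetingOperation implements Operation {
    private final Dialogue dialogue;
    GreetingOperation(Dialogue dialogue) { this.dialogue = dialogue; }
    public void execute() { dialogue.setMessage("greeting handled"); }
}

class OperationFactory {
    Operation getOperation(Dialogue dialogue) {
        // the real factory would inspect the dialogue to pick the operation
        return new GreetingOperation(dialogue);
    }
}
```

The trade-off is that the operation becomes stateful, which matters if operations are cached or shared between requests.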

And, obviously, all the references are hard-coded...

So far, we have just conducted a small code review. But what's the point in the first place? We have to introduce a new Operation object (maybe extending an existing one, and please note the alliteration). Nothing strange so far, but the resulting object for this operation should contain some more fields than the existing one. Still nothing terrible, but as you might have noticed, the Dialogue object currently works as a collecting parameter throughout the whole method, while we would need two different objects (or modify the initial parsing method so it returns an instance of the right Dialogue subclass, or introduce a composition in the Dialogue class, or explore a thousand other possibilities).

By the way, it is mandatory that we do not change the interface of the service, which means that the incoming and outgoing messages for the existing operations cannot change.

The fact is that before we start refactoring we have to guarantee that. Of course we could start refactoring right away, but, as you might have noticed if this is not the first post of mine you happen to read, I find it "just a little bit" risky.

So the answer to the first question, i.e. how to refactor a SAAJ servlet, is, at least in my mind, simple: write enough tests to describe the current behaviour before you do anything else. I know: easier said than done.

That's tricky: how can you do that? The class we have to test is a servlet, so we have four options:
  • use manual testing: we already have this in place, but each single change would require at least a dozen tests, making the whole process too slow (and not reliably repeatable, as it is based on people's good will)
  • mock all the objects provided by the servlet container that the servlet needs: quite a lot of work, and I'm not going down that way (also because I'm listening to Stairway to Heaven and I don't want to spoil the happy sensation)
  • deploy the application to a servlet container and use in-container testing, possibly with a framework like Cactus: this introduces unwanted complexity and I'll avoid it too, even if it would be tempting, as the servlet is deployed in a proprietary framework in which you can only access the database if you actually start the webapp (don't ask, you don't want to know). Why avoid it? Because I think it would not force me to separate responsibilities as much as the next approach does.
  • use a library such as HttpUnit, which contains ServletUnit. Since this is a simulated container, I am sure I would never have access to the database in the current situation, so I guess I'll have to work my way towards a cleaner design.
Back to the servlet: there are cases in which the method returns a message without going all the way down to the database, so as a first step I'll exercise these and keep manual testing for end-to-end situations.

The very first test consists of producing a message with a wrong format: I should get a SOAPMessage with the code property set to KO and an error message of "Malformed message". Or should I?
public void testOnMessageWithMalformedMessage() throws IOException, SAXException {

    // the malformed payload is loaded from a classpath resource
    // (the resource path was lost in the listing)
    InputStream resourceAsStream = this.getClass().getResourceAsStream(...);

    ServletRunner sr = new ServletRunner();
    sr.registerServlet("myServlet", MyServlet.class.getName());

    ServletUnitClient sc = sr.newClient();
    // the request URL, body and content type were lost in the listing
    WebRequest request = new PostMethodWebRequest(...);

    WebResponse response = sc.getResponse(request);
    assertNotNull("No response received", response);
    assertEquals("OK", response.getResponseMessage());
    assertTrue(response.getText().contains("Malformed message."));
}
These steps are detailed in the ServletUnit documentation. Before we run the test, remember we're not going to change anything yet.

Too optimistic? Actually I was. Running the tests gives me the following error:
Error on HTTP request: 500 javax.servlet.ServletException: SAAJ Post failed null.
Looks like the MessageResponseGenerator cannot handle null parameters very well. I admit I knew I had been too optimistic about the error message (how could I trust that the generator used the same message as the servlet, the one I was expecting?), but at least I hoped for a valid SOAPMessage... But after all, isn't the NullPointerException the most widespread error in Java code?

It's a long way to go... remember that before we change the current behaviour we have to describe it, so the way is even longer. Yet, the first step has been made.

Monday, August 30, 2010

Quarreling with classloaders

I admit that most of the quarreling, if not all of it, came from my not too deep knowledge of the subject. Anyway, at least I've learned something more.

Now that I know how to change the endpoint defined in a WSDL I was trying to put this knowledge to good use. We have a project that encapsulates the access to various web services, and we release it as a jar file that is normally used in web applications, in particular the one I'm mostly working on these days. To easily let the administrator of the web application decide whether to use a monitoring application like wsmonitor, I prepared a services.properties file:
Being test-infected I wrote a simple explorative test:

public void testLoadProperties() throws IOException {
    Properties properties = new Properties();
    InputStream systemResourceAsStream = ClassLoader.getSystemResourceAsStream("path/to/my/services.properties");
    properties.load(systemResourceAsStream);
    // the assertions on the loaded values were lost in the listing
}
Green bar, so I put the code in the service class. Well, actually before that I wrote another small (failing) test:

public void testPropertiesAreCorrectlyInitialized() {
    // the body of this test was lost in the listing
}
After the green bar I switched to the webapp and started it, but the isProxyEnabled method of the service, which simply checks the value of the isProxyEnabled property, always returned false. If you are wondering... yes, there are the corresponding tests, which are not reported for the sake of simplicity.

After some research, and mostly thanks to Dave Lander's contribution to this discussion, I found the problem and changed my loading instructions:

InputStream is = MyService.class.getClassLoader().getResourceAsStream("path/to/my/services.properties");
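As a side note, the same idiom can be wrapped in a small null-safe helper; this is my own sketch, not code from the project, and the class and method names are hypothetical:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Hypothetical helper: resolves the resource through the class's own loader
// (not the system one), which also works when the jar lives inside a webapp,
// and fails fast instead of passing a null stream to Properties.load.
final class ServiceConfig {
    private ServiceConfig() { }

    static Properties load(String resourcePath) throws IOException {
        InputStream is = ServiceConfig.class.getClassLoader().getResourceAsStream(resourcePath);
        if (is == null) {
            throw new IOException("Resource not found on classpath: " + resourcePath);
        }
        try {
            return load(is);
        } finally {
            is.close();
        }
    }

    static Properties load(InputStream is) throws IOException {
        Properties properties = new Properties();
        properties.load(is);
        return properties;
    }
}
```

The explicit null check turns the silent "always false" behaviour into an immediate, descriptive failure.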

Funny how simple things are when you know them, aren't they?

Sunday, August 29, 2010

You should be so lucky

Ahhh, if only most managers knew... the world would turn differently, but the day of reckoning will come soon, dear tie-wearer of bovine ignorance... but I'm digressing

Quoted from another interesting post by Gabriele Lana (aren't they all?).

I stole that presentation

That's not as bad as it might sound, as the author explicitly asks you to do so. Moreover, as its title is "Steal this presentation!"...

The presentation gives many pieces of advice on how to prepare the slides for your presentations (sorry for the repetition). One of them is that visual is better than text, which is true for presentations but not for blog posts, so you'll have to endure my comments on Jesse's suggestions (or switch to another website, of course). And I'll use bullet points as well :-)
  1. Have a killer opening slide. Too true, this is what brought me to the presentation in the first place. As Frank-N-Furter reminds us, we should not judge a book by its cover, but it is also true that first impressions do count. Should you have to pick one presentation from a list of ten on the same subject, which one would you go for? Do I have to answer?
  2. Use a trendy color mix. Keep a consistent look, and avoid using too many colors. They only make everything more confusing.
  3. Use stunning visuals. Your brain remembers them better than words. Images help convey a story, and the audience will remember the story better if you choose the right image. Try to have an interesting story, or even the best image won't be of much help. And remember to credit the authors.
  4. Get your text. Text is normally bad, so stick to a short sentence or two, keeping clear which is the most important one. Very important: once you have decided to put text on your slides, make sure it is easily readable. I'm afraid this last point is one where Jesse himself could improve, as you can see from some of the first and last few slides.
  5. Use CRAP. No, not the kind you're probably using in your presentations at present. CRAP stands for Contrast, Repetition, Alignment and Proximity.
  6. Use video. I don't like the idea very much, but I'm not a professional presenter so I don't think my opinion counts for much. Should you go for it, preloading your videos is a very good idea to avoid awkward silences, which as you perfectly know are one of the signs that a presentation is going terribly wrong.
  7. Share your work. Very important if you want to spread your ideas. And if you followed all the rules, nobody who hasn't listened to your talk will be able to do much with your slides alone, so don't be afraid of stealing. Sharing is making Jesse famous, and I'm (very slightly) contributing to the phenomenon. The same could happen to you!
  8. Recap. Always. Repetita iuvant (repetition helps).
To recap, a very nice (and useful, too!) presentation. Shall we win the war against war-and-peace-long slides full of bullet points?

Wednesday, August 25, 2010

The importance of unit tests II

Yesterday I had some fun spending some hours introducing Ajax in a legacy application (I call it legacy because it has a very small amount of automated tests). During the afternoon I was quite pleased with myself, partly due to the satisfaction associated with the work done, partly due to the "Steppenwolf" novel, partly due to the Beethoven sonatas I was listening to, partly due to my full stomach. Being a very wise and intellectual person, I am in favour of the latter.

All this abruptly ended when I got an unexpected error from a web service called from a part of the codebase I had modified in the morning, which seemed to go wrong when I submitted a foreign address.

First I checked the test that exercised the web service client, where I verified that all the parameters were correctly passed as expected. Then I wrote some other tests trying to exercise the particular feature, narrowing the scope of my private investigations. By the way, this reminds me that I haven't listened to the Dire Straits in a long time, which is a shame.

Everything seemed fine, so I spent some hours with the developers of the service trying to figure out what was going wrong with the service (if anything). After some head banging it emerged that the problem was that we were focusing on a particular parameter, while the error had sneaked into another one. I hope they will not make me pay for the dents in the desk.

That would have been clear from the beginning if we had checked all the parameters in the XML stream instead of focusing only on the ones we thought were important for the specific call (there is only one port in the service, even if there should be many, and the behaviour is determined by which of the several zillion parameters are set and what their values are; I don't think this is very brilliant, but it cannot be changed, so complaining is useless... or at least it brings no business value).

So, everything traced back to a wrong value, due to a "simple" setter method.

Now, you normally never write a test for a setter method. This is perfectly acceptable, as you should write tests for "everything that could possibly go wrong", and a normal setter is not included in the list.

Pity this setter was not a plain one, but contained an if:
public void setMyProperty(final String value) {
    if (value != null || StringUtils.isEmpty(value.trim())) {
        this.myProperty = "FIXED";
    } else {
        this.myProperty = value;
    }
}
This looks like a blunder... Why that not-null check? Is it there to be able to trim the value, or should it actually check that the value is NOT null? If the former is true, why didn't the author simply use the isBlank method instead? Maybe because she was thinking of saving the trimmed property (which in that case she forgot to do), or because she didn't know the method existed?
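Assuming the intent really was "a blank value falls back to the fixed default" - my reading, not a confirmed requirement - the method could be rewritten along these lines (MyBean is a placeholder for the real class):

```java
// A sketch of the apparent intent: blank or null values fall back to the
// fixed default, everything else is stored trimmed. The fallback and
// trimming semantics are my assumption, not a confirmed requirement.
class MyBean {
    private String myProperty;

    void setMyProperty(final String value) {
        // null-safe blank check, equivalent to StringUtils.isBlank(value)
        if (value == null || value.trim().isEmpty()) {
            this.myProperty = "FIXED";
        } else {
            this.myProperty = value.trim();
        }
    }

    String getMyProperty() { return myProperty; }
}
```

Note that the check reads naturally now: the null guard comes first, so the trim can never throw a NullPointerException.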

Strange as this might sound, all this is not really important, as I have access to the product owner and I could get all the answers I needed.

The point is: that method required a test, and it was nowhere to be found. You might argue that it's simple enough to avoid writing one, but all the time I wasted says something different. Still, you might add that it was my fault because I didn't check the setter in the first place, but this is only another arrow in my quiver: this is exactly the reason for which we need automated tests, as people do make mistakes and forget to check simple methods.

I'd like to think that if I were pair programming with the author of this code I'd never let her skip writing the failing tests first, at least not without a fight.

Note that I'm not pointing a blaming finger, I could have written that code myself (and sometimes I did, and I'm happy to say that I always regret it when I realize it).

I know I am fighting - and so far losing - a running battle, but tests are necessary, even if someone tells you they represent a cost. Actually they are not even a cost, they are an investment, they are one of the most important risk management tools software developers have.

It only took me some fifteen minutes to write four different tests that assessed the desired behaviour and rewrite the method from scratch. Those fifteen minutes would have spared three people some wasted hours. This also demonstrates how dramatically the cost of bug fixing increases with time.

Yet, for some unknown reasons and against all evidence, too many managers AND developers refuse to believe in practices like TDD or pair programming. Is it to maintain the illusion of control? Is it fear of change? Is it lack of trust? Is this evidence not so evident? Let's try some math:
  • cost of writing the setter method: 5 minutes
  • perceived cost (usually coincides with the former): 5 minutes
  • cost of writing tests and the setter method: 15 minutes
  • perceived waste: cost of writing tests minus cost of writing the setter method = 10 minutes
  • cost of writing tests and the setter method, pair programming: 30 minutes
  • perceived as almost blasphemous waste: cost of writing tests pair programming minus cost of writing the setter method = 25 minutes
And this is where the analysis normally ends. On the other hand...
  • cost of finding the bug: 6 hours
  • actual cost: 6 hours plus 15 minutes PLUS 5 minutes = 6 hours and 20 minutes
  • actual waste: 6 hours PLUS 5 minutes
Also note that the time needed to write the code in the first place was completely wasted.

Still think that writing tests is too expensive? Well, the curious thing is that at this point everyone seems to agree on the reason behind the added cost: the code was sloppy. As a corollary, the blame is on the developer. Well, THIS IS NOT THE PROBLEM. The problem is that the way the code was written was sloppy. And this does not depend entirely on the developer.

Think about it.

Thursday, August 19, 2010

7th Italian Agile Day

Unlike last year, let me be an early bird and remind you of the 7th Italian Agile Day, which will be held in Genova on November 19, 2010. And, unlike last year, I really hope I'll manage to be there!

The Italian Agile Day is a one-day free conference dedicated to the use of agile methodologies to develop and manage software projects. Its intended audience is composed of developers, project leaders, IT managers, testers, architects and coaches who either have experiences to share or are just starting to get interested in these subjects.

Its declared aim is to share practical knowledge and experiences from the field, and to achieve the active involvement of all the participants.

Free access upon registration, limited seats. For the fourth time running, the event will be self-financing.