Thursday, December 30, 2010

Don't work more: work better

Sustainable pace is one of the practices of XP; having provided the link, and assuming you know what it is, I won't dwell on the subject any further. Why am I talking about it then?

Because, thanks to Paolo, I am reading The Little Big Things by Tom Peters, and I stumbled upon the following sentences (re-translated into English from the Italian version), which according to the author apply in hard times as well as in good ones:
  • Get to work earlier than usual.
  • Leave later than usual.
  • Work more.
  • Volunteer to do more.
Now. This sounds quite different from sustainable pace. It also sounds different from the slack periods that the very same author reports as very important just a few pages earlier.

Why should we work more? I can understand that arriving earlier and leaving later leaves you some time all by yourself in the office, which is when you typically perform better because you get some uninterrupted quality time. But you cannot count on such a short amount of time to complete everything you're supposed to do during a whole day.

And how about your canonical eight hours? Are they all wasted as "the big slack time between early morning and late evening"? Nope, you're supposed to work. Does this come as a surprise? That would be very strange, because it is precisely what you are being paid for. So why add the extra hours? You're not being more productive, you're just spending more time at the desk. No, really, Peters goes as far as saying "work more for less". No way. At least, not the way things are now.

Let's get back to sustainable pace. Suppose you're a workaholic: you spend at least 12 hours a day at the office, trying to compensate with brute force for your laziness. You are often so tired that your judgment is hiding somewhere under your shoes, and you can't tell whether you still have to bang your head on the wall over little and useless details or you can move on to something else. Or, better yet, call it a day. This is not commitment, this is a physical dependency. How long do you think you can sustain it before burning out? Not very long. You're doomed. And you've wasted your time, not to mention your employer's money.

Moreover, this is also very bad for morale: not only yours, as you obtain very little despite your enormous efforts, but also that of the people around you. People will start feeling guilty because they work less than you do, so they'll feel compelled to stay just because you do, even if they have nothing to do but keep their chairs warm. Isn't that absurd? Yet, here in Italy you are too often judged by the time you spend at your desk. Even if you don't produce anything worth your time.

I'm sure Mr. Peters didn't mean "waste your day spending more time at the office", but unfortunately that is exactly what is going to happen.

Get yourself a life! Strive for excellence, and work better: if you do, you will achieve the same results in less time. Absurd? If anything, it's an understatement: you will actually get much better results in much less time.

A simplified example for programmers? Concentrate and try to make your code easily readable, not only by you but also by other people (which also means you, since in a couple of months you will not even remember writing that piece of code). Write automated tests. Refactor. Eliminate duplication. Do I hear "it will take longer"? Yes, it will take longer. But that price is paid only once, up front. As I read today on Twitter:

Programming is like sex: one mistake and you’re providing support for a lifetime. (Michael Sinz)

Do your maths...

Monday, December 20, 2010

Meet Flatworm

I'm OK with flat files, but why should you use a fixed-length record when every row is 1320 characters long, most of which are blank? Isn't it an enormous waste of resources?

Anyway, should you need to deal with flat files (it seems like no programmer can keep them at bay... does this ring a bell?), after some investigation I stumbled upon Flatworm, an interesting library that lets you read from a flat file, be it fixed-length or delimited by a separator character, and instantiate the appropriate beans. It also supports repeating segments and multi-line records. Nicely enough, it also works the other way around, which is what I was primarily interested in.

All you have to do is provide a descriptor in XML format, sit back and relax. Let's see how it works.

Let's suppose we need to produce a fixed-length file with the following format:
XXvalueOne  valueTwo  
i.e. a fixed record identifier and two fields of 10 chars each.

First of all you have to provide the descriptor:
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE file-format SYSTEM "http://www.blackbear.com/dtds/flatworm-data-description_1_0.dtd">
<file-format>
  <converter name="char" class="com.blackbear.flatworm.converters.CoreConverters" method="convertChar" return-type="java.lang.String"/>
  <record name="whatever-record">
    <record-ident>
      <field-ident field-start="0" field-length="2">
        <match-string>XX</match-string>
      </field-ident>
    </record-ident>
    <record-definition>
      <bean name="whatever" class="my.package.Whatever"/>
      <line>
        <record-element length="2"/>
        <record-element length="10" beanref="whatever.propOne" type="char">
          <conversion-option name="justify" value="left"/>
        </record-element>
        <record-element length="10" beanref="whatever.propTwo" type="char">
          <conversion-option name="justify" value="left"/>
        </record-element>
      </line>
    </record-definition>
  </record>
</file-format>
Then you can write a simple class that, given an iterator of whatever you have to export, creates a file and populates it:
public class SimpleExporter {

    FileCreator fileCreator;
    Iterator<Whatever> iterator;

    public SimpleExporter(
            Iterator<Whatever> iterator,
            final String configFile,
            final String outputFile)
            throws FlatwormCreatorException {
        this.iterator = iterator;
        InputStream config = Thread.currentThread().
                getContextClassLoader().
                getResourceAsStream(configFile);
        fileCreator = new FileCreator(config, outputFile);
    }

    public void execute() {
        try {
            fileCreator.setRecordSeperator("\r\n");
            fileCreator.open();
            while (iterator.hasNext()) {
                Whatever whatever = iterator.next();
                fileCreator.setBean("whatever", whatever);
                fileCreator.write("whatever-record");
            }
            fileCreator.close();
        } catch (IOException ex) {
            Logger.getLogger(SimpleExporter.class.getName()).log(Level.SEVERE, null, ex);
        } catch (FlatwormCreatorException ex) {
            Logger.getLogger(SimpleExporter.class.getName()).log(Level.SEVERE, null, ex);
        }
    }
}
Exceptions should be managed row by row, but just stay with me for the example, OK?

For the sake of this example the Whatever class is just a POJO, so it's not worth reporting it here. So what have I done? I just created a FileCreator object passing it an InputStream to the descriptor and the path for the output file. That was not hard, was it?

If your bean has inner properties you can simply use a dot notation:
<record-element length="10" beanref="whatever.outerProperty.innerProperty" type="char">
Playing around I had to learn a few little tricks: maybe there's a better way, but they work :-) For example, I had to write several fields which are not present in my beans. For this I simply added a "filler" property of type String and used it in all such cases, adding a comment in the descriptor to specify what I was substituting.
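Just to give the idea, here is a minimal sketch of the trick (the 5-character width and the names are made up for the example):
// placeholder property for fields that must be written but have no business meaning
private String filler = "";

public String getFiller() {
    return filler;
}

public void setFiller(String filler) {
    this.filler = filler;
}
and, in the descriptor:
<!-- unused legacy field, written from the filler property -->
<record-element length="5" beanref="whatever.filler" type="char"/>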

Another problem emerged when the properties in my bean were null; to fix this once and for all I simply extended the CoreConverters class adding null-safe operations:
@Override
public String convertChar(String str, Map<String, ConversionOption> options) {
    return super.convertChar(str == null ? "" : str, options);
}
That said, I found the library really useful as it saved me a lot of time.

Thursday, December 16, 2010

Did you really mean "variable"?

As seen in a request for support:
Where does the system get the $ExternalParty$ variable? is it variable or fixed?

The sentence has been translated, but in no way altered in meaning.

Wednesday, December 15, 2010

How to stretch a flat file to a fixed length with awk

I never fail to wonder at the power of unix commands. Today's story is related to the need to manage a supposed-to-be fixed-length flat file in which some records end many characters before the desired length is reached.

In such cases awk does a hell of a job with just a one-liner:
awk '{printf "%-116s\n",$0 }' RS="\r\n" input.txt > output.txt
The %-116s format specifier left-justifies each record and pads it with spaces up to 116 characters, while RS="\r\n" sets the record separator to CRLF. That's it :-)

Tuesday, December 14, 2010

Gmail Notify plugin

Let me start with a disclaimer: this post is not against the tool, but against what I see as a toxic habit.

This NetBeans plugin enables notification in the IDE when new Gmail messages become available. I am sure many people will find it very useful, yet I am trying to draw my personal battle line. We are overwhelmed by distractions, and adding another one would only worsen the situation.

Do I really need to check that e-mail immediately, or can I safely ignore it for another 15 minutes? If you use the pomodoro technique you organize your work in short 25-minute stints, so the average delay is absolutely trifling. The same holds true if you use similar time management techniques.

Interruptions break your flow, and getting back to what you were doing becomes increasingly hard. That's why you should strive to manage interruptions, not to be ruled by them. A notifier is an interruption, and as such should be dealt with.

I normally check my e-mail(s) between pomodoros, and that's not a secret for anyone: I am not at the service desk, so my "flow" periods are not 5-minute bursts: there are days in which I don't even check the trouble ticketing system. I admit this could sound excessive, but there are many other brilliant people in the responsibility chain before me, and that should loosen the stiffness in your back a little. I am confident that if a pressing situation arises I'll be reached by a telephone call, and if the situation requires it I'll handle the interruption, catching it and trying to solve the problem as soon as possible. I'd like to say "as soon as it is needed", but you might have noticed that saying no is not so easy. It is even less so when we're saying no to ourselves.

Yet, if we want to put the most precious resource we have to good use, we have to learn how to do it.

Wednesday, December 1, 2010

Verbal Communication

Humans used to have such a marvelous oral tradition; myths and history were passed orally from one generation to the next. Until an Athenian ruler had Homer's Iliad written down so that it would not be forgotten, stories like Homer's were told, not read. Our memories must have been a lot better back then and must have started to fade sometime in the 1970s because by then we could no longer remember even short statements like "The system shall prompt the user for a login name and password". So, we started writing them down.

And that's where we started to go wrong.

This is an excerpt from Mike Cohn's "User Stories Applied", which I encourage you to read.

What's wrong with written requirements? They actually have advantages as well (traceability is an example), but they are based on a flawed assumption: that they can capture every detail of what must be developed. This is practically impossible, except for very trivial systems. Nevertheless, we do want to write something down, at least to be sure not to forget important things. So what do we have to write down?

User stories are very useful for many reasons, but one of them is that they favour high-bandwidth face-to-face communication; stories are, in fact, reminders for conversation. This calls for (I'd like to say forces, but that would be too optimistic) the customer and the team to interact frequently, thus leading to a product that is just what the customer wants instead of - at best - what is captured on a ton of paper that nobody reads.

This might not be true for every domain; some specific software would probably require a very complete and detailed document, but this does not apply to anything I've had to develop so far (I wouldn't be able to say anything about software for pacemakers). Yet, I think any written documentation is not complete unless it also describes how to test a feature. That is definitely what I'd like to have, rather than a series of "the system shall...". Oh forgive me, it is "the System shall...", capital S, we don't want to underestimate the beauty and the power of our product (which by the way does not yet exist if not in our dreams of glory).

Is time spent on requirements completely wasted, then? I think it is not. Requirements do not come out of thin air, so at least all the conversations held in requirements workshops can help the developers (but most of all the customers) to clarify what the business rules and information flows really are and what the system will really need to do in order to support them.

And this only emerges through verbal communication.

Tuesday, November 30, 2010

Unlearn your MBA

Be an entrepreneur and invest about one hour of your time in watching David Hansson give a talk at Stanford.



The talk is very interesting: David repeatedly insists on the importance of delivering value in a profitable way. He's also a good speaker, so time will fly by.

Wednesday, November 24, 2010

Systems Thinking

A bad system will beat a good person every time

W. Edwards Deming

German wisdom

Do like the sundial:
Only count the bright hours

Tuesday, November 23, 2010

7th Italian Agile Day Mind Maps

Too entranced to take notes at the 7th Italian Agile Day, weren't you? don't worry, you can find the mind maps developed during some of the sessions here.

How to create a new object in NetBeans

Another useful Java code template is newo, which expands to the following:
Object obj = new Object();
As behind the curtains the expression is

${TYPE default="Object"} ${OBJ newVarName default="obj"} = new ${TYPE}(${cursor});
all you have to do is change the type; note how the IDE simultaneously changes the name of the variable, setting its first letter to lowercase. E.g. if you type newo, hit TAB and enter MyObject, the result will be
MyObject myObject = new MyObject();
Neat and clean :-)

Saturday, November 20, 2010

7th Italian Agile Day: a review

The 7th edition of the Italian Agile Day was held yesterday in Genova; it is so far the best I've ever attended, so the first thing I'd like to do is thank everyone for being there.

As always, I studied the program for about a week before I made up my mind on which sessions I would follow. As always, I didn't follow the plan. Hey, inspect and adapt, right?

After the usual introduction by Marco Abis, who with many others made all this possible, Paolo Perrotta gave the keynote speech, which was very interesting as it revealed a totally unexpected piece of information: software projects have problems, and writing software is difficult. Strange how I never realized that in all these years :-)

Paolo also talked about the waterfall method, which - curiously enough - was not so "waterfallish" in the intentions of its author; all the problems originated from the (in)famous picture of the phases, which originally referred to the phases of a single iteration. Moreover, there was also a clear note stating that this model could not possibly work. Somehow this detail was lost.

Anyway, great speech, and all my compliments to Paolo that keeps improving his skills as a speaker.

Then I went for "Note to managers: IT is different, get over it" by Andrea Provaglio, which was very interesting and gave me many starting points for further study. I was a little flustered when, toward the end of the speech, he said that all changes start from the management, whose corollary is more or less that you cannot start anything from the ground up. I revived a little when in a private conversation he was kind enough to tell me that all you have to do is persuade the management that a change is due. Sounds easy, but it is not. That's why you have to persuade the management that a change is due by showing them why they should be interested in that change, in such a way that they can understand what you're talking about because you use their language, and what their advantages would be. At this point I was quite happy to see that all the time I spend reading Neurolinguistic Programming manuals is not wasted, as Andrea suggested just what all the manuals do... there must be a reason if everybody agrees on that!

I didn't change rooms for the next talk by Fabio Armani, "Scrumban, a methodology fusion". Unfortunately, what should have been nice background music became too loud to keep, so he had to turn it off. Another annoyance was the projector (or maybe the computer) that kept running out of sync. Technical problems aside - they are always lurking when you have an audience, no matter how many times everything went right just before showtime - Fabio explained how he trains his teams and how they perform. Man, I want to work like that!

In the afternoon, after a very long queue for lunch, the first session was "The secret art of Agile Javascripting" by Luca Grulla. I think that if I had to manage a project with more than 50% of the codebase written in Javascript I could just as well go mad. I mean, madder than I already am, of course. But, again, that is because I don't know the language as well as I'd like to (and, alas, as well as I should). Anyway, I'll try to follow his advice: separate what must be separated. And, yes, the DOM is an integration point as well.

Next, I ran to follow Alberto Brandolini for "Due uomini e una lavagna", which was absolutely different from what I was expecting, yet it was another great presentation, one that conveyed all his passion and sound expertise. This was the occasion to meet him in the flesh after we had exchanged a bunch of posts and comments: I knew he was a prolific writer, but I didn't know he had such a high count of words per second :-)

I was quite undecided about the last session (just like the others), then I went for "TDD per le viste", a "pomodoroed" session by Matteo Vaccari and Carlo Bottiglieri. I already knew what to expect from Matteo, having seen a slightly modified version of his slides, but Carlo's part of the presentation was quite shocking. The reason for this lies in his (paraphrased) sentence "I was not told that TDD was just for the domain, so I started applying it to everything. TDD is a support for taking informed decisions, and it must be used as such". It sounds extreme, but probably it is not; we only have to get used to it.

And, just like that... it's gone!

The conference was really rich in content, fast paced, and very well organized. Soon the recordings of all the sessions should be available online here. The fundraising also went very well, with more than €5.500 donated by the community for the community (and while donating I also brought home a wonderful shirt with "will refactor for food" written on it!).

What else... looking forward to the 2011 edition!

Another rainy day for Italy

Today Italy lost 32-14 to Australia in a not very exciting match, even though both teams showed something interesting. It is quite surprising how Italy was not able to turn their scrum superiority into more points, and it is also a pity that Rocky Elsom was allowed to "steal" a try in the last action of the match - and by stealing I mean that he was ready and able while the Italians were a little sleepy, not that it wasn't a legal try; I'll leave those rants to "soccer experts", thank you very much.

And, talking about soccer, let me point out that hell will freeze over before you spot an Italian soccer player singing our national anthem, while in rugby matches not only do all the players sing it, just like the crowd, but all the children join in too.

Tuesday, November 16, 2010

Ignorance strikes back

I know that sometimes I can be hard to put up with. Actually, "almost always" probably pictures it better. But I really can't help it, even if I know I'm fighting a losing battle. Sometimes it feels like getting blood out of a stone.

Now, to quote Wikipedia, SCSI is a set of standards for physically connecting and transferring data between computers and peripheral devices. Wikipedia also specifies that it sounds like "scuzzy". But that's only phonetic, as you always write "SCSI". Or do you?


Forgive me for the bad quality of the picture, but this is more or less a stolen image; anyway, to protect the (not so) innocent, no references to the place where it has been taken will be given.

Let's just say that the person in charge of materials also systematically writes badget instead of budget, IP adress instead of address, and that's not all folks.

Am I being mean? Probably I am. But I must admit I can hardly put up with wannabes on their high horses who don't walk the walk. Let's face it, I am obviously not omniscient and make many errors as well: actually that's the only way you can learn. But if you want to improve you have to work hard, and I do. As I read somewhere, you might not get all you work for, but you will not get what you don't work for.

If you ever see me doing something like that please correct me; I'll also be happy to be exposed to laughter if that can help me to learn something.

Saturday, November 13, 2010

Argentina defeats Italy

Maybe after today's match someone will spot the difference between a wannabe, however good, and the real thing.

If you don't know what I'm talking about just take a look at what happened in a world cup semi-final...

Friday, November 12, 2010

Kinesics differences

Now that's interesting:

The observation of bilingual speakers indicate that these [behaviors and signals] are semi-independent systems which must be examined in context. The significant aspect of this is that as a bilingual speaker changes languages he also changes kinesic systems.

I'll try to pay more attention to see whether I can spot this. Actually it should come as no surprise, as we've seen a thousand times that when you're trying to express yourself in a language you don't master, your gesturing increases dramatically.

Naked Objects Templates in NetBeans

Within the Naked Objects distribution you can find some Eclipse code templates that can speed up your work. Of course NetBeans has code templates too, but I didn't know (and still don't) how to transform a lowercase property name into a camel-case get or set method. After envying the Eclipse users for a while I decided to give it a try myself. I was quite disappointed, as it turns out not even the Eclipse templates allow you to do that. This pushed me to implement the templates myself, and obviously to share my results.

The very first template is probably nop, which stands for Naked Objects Property. This basically creates a property with its setter and its getter (annotated with @MemberOrder). The problem is that I like to keep fields separate from setters and getters, so a template was not the best solution (at least not for me). The nop template was then replaced by the ready-made ALT+INS, Add Property...


This opens a form that lets you specify the property name, type, and tweak the rest of the generated code a little bit.

And so much for the first template, as all I still need to do is to manually add the @MemberOrder annotation on the getter (but the autocompletion feature is of great help, as a simple @Mem followed by CTRL+space is normally enough).
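For reference, the end result for a simple property looks more or less like this (just a sketch: the property name and the sequence value are made up):
private String name;

@MemberOrder(sequence = "1")
public String getName() {
    return name;
}

public void setName(String name) {
    this.name = name;
}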

To create a new code template you have to click on the Tools menu, select the Options item, and when the form opens click on the Code templates tab:


A click on the New button, some clickety clackety and a click on the OK button and there you have it: a brand new template all for you. The templates I've created so far:
  • noft (Naked Objects Factory Transient)
/** Creates a new transient ${Type} */
public ${Type} new${Type}(){
    ${Type} ${instance} = newTransientInstance(${Type}.class);
    return ${instance};
}
  • noidicon (Naked Objects Identification Icon)
/** The name of the icon that represents the object */
public String iconName(){
    return "${iconName}";
}
  • noidtitle (Naked Objects Identification Title)
/** A description of the object */
public String title(){
    TitleBuffer buf = new TitleBuffer();
    // TODO add properties
    ${cursor}
    return buf.toString();
}
Note the use of the reserved ${cursor} property that positions the cursor where you are supposed to add your custom code.
  • nopval (Naked Objects Property Validation)
/** Validation for property ${property} */
public String validate${Property}(${Type} ${value}){
    if ((${value} == null)) {
        return null;
    }
    // TODO implement imperative rule
    ${cursor}
    return null;
}
Last but not least
  • nopval (Naked Objects Property Validation)
/** Find all ${TypePlural} */
@Exploration
public List<${Type}> findAll${TypePlural}(){
    return allInstances(${Type}.class);
}

I'm sure I can do better than this, e.g. there should be a way to automatically fix imports without the need for me to hit SHIFT+CTRL+I - which, anyway, is not all that much work; besides, I'm more than used to combining it with ALT+SHIFT+F to format code and CTRL+S to save whenever I type something.

As with all code templates, use the Tab key to move from one variable to the next and hit Enter when you're done.

Maybe more on this will follow.

Wednesday, November 10, 2010

Is Open Session in View an AntiPattern?

A friend of mine pointed me to a willingly provocative article that describes Open Session in View as an AntiPattern. After sharing some thoughts with him, I've decided to post them here.

I agree on the N+1 queries problem, but unless (and until) you have devastating impacts on performance I would not worry about it, applying a YAGNI approach. On the other hand, if a view requires a lot of fetches maybe DTOs come in useful (and I would suggest Dozer). This also simplifies the objects exposed, thus hiding from the "higher" layers of the application all the complexity that exists within the domain.

Personally, one of the reasons why I found Struts 1 frustrating - yet much better than the proprietary framework we were using at the time - was the need for form beans, which led to tons of duplication (no Dozer yet, a long time ago in a galaxy far far away we only had some Commons Beanutils), which is normally bad. OK, forget the "normally" part.

As a general rule I think that exposing domain objects can be accepted if and only if they are true domain objects, not beans with a bunch of setters and getters and transaction scripts in disguise, otherwise you could bypass much of the application logic and end up with an unmanageable mess.

And, talking about layers leaking, I would also like to quote a couple of sentences from the Domain-Driven Design Using Naked Objects book which I suggest as a very interesting and useful reading:

It takes real skill to ensure the correct separation of concerns between these layers, if indeed you can get an agreement to what these concerns really are. Even with the best intentions, it's all too easy for custom-written layers to blur the boundaries and put (for example) validation in the user interface layer when it should belong to the domain layer. At the other extreme, it's quite possible for custom layers to distort or completely subvert the underlying domain model.


The leaking of the persistence layer is not good per se, but if it does not become a problem you can live with it. As always, it depends on circumstances: why should I spend an incredible amount of resources (which means time and money) to obtain perfect isolation if I don't need it?

In general I think Open Session in View is a good solution for managing database connections, but it would be even better to complement this strategy with join fetch queries for the most important relations. After all, when writing a service or a facade we know which objects will be needed: this does not mean that the domain or the controller should depend on the presentation layer (they must not!), but pretending that we don't know anything about it is just a waste of time. So, for example, if our facade has a result factory this could be a nice place to eagerly load all the objects we know the presentation layer will need; if the view still needs more objects the open session can provide them on the fly: in this way we can optimize the most important parts and live with the rest.
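As a rough sketch (entity and property names are made up), such a facade method could eagerly fetch the relations the view is known to need, leaving everything else to the open session:
public Order findOrderForDetailPage(Long orderId) {
    // eagerly load the lines the detail page will surely render;
    // anything else can still be lazily loaded thanks to the open session
    return (Order) HibernateUtil.getSession()
            .createQuery("select o from Order o join fetch o.lines where o.id = :id")
            .setParameter("id", orderId)
            .uniqueResult();
}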

Wednesday, November 3, 2010

Dozer and enums

While trying to map a persistent heavyweight class to a cleaner (and much smaller) POJO I also wanted to replace some String properties of the persistent class with enums. The tool of choice is Dozer, about which I already wrote.

As I was able to create the POJO from scratch, I chose only the properties I needed and used the same names as the persistent class, so there was no need to write a custom mapping at all.

To check that everything was working fine I wrote a unit test - what else would you expect? :-) - as simple as that:
@Test
public void testPojoToPersistenceEntity() {
    System.out.println("testPojoToPersistenceEntity");
    MyPojo pojo = new MyPojo(
            propOne,
            propTwo,
            propThree,
            propFour);
    MyPersistentClass persistent = mapper.map(pojo, MyPersistentClass.class);
    assertNotNull(persistent);
    assertEquals(propOne, persistent.getPropOne());
    assertEquals(propTwoTxt, persistent.getPropTwo());
    assertEquals(propThreeTxt, persistent.getPropThree());
    assertEquals(propFour, persistent.getPropFour());
}
I ran the test and got a green bar for free. You may notice a couple of different variables in the comparisons: those hold the names of the enums, and I was quite pleased with the fact that Dozer had silently mapped them just as I expected.

Happy as a clam I wrote another test for the other way round:
@Test
public void testPersistenceEntityToExistingPojo() {
    System.out.println("testPersistenceEntityToExistingPojo");
    MyPersistentClass persistent = new MyPersistentClass(
            propOne,
            propTwoTxt,
            propThreeTxt,
            propFour);
    persistent.setPropFive(propFive);
    persistent.setPropSix(propSix);
    MyPojo pojo = new MyPojo(
            propOne,
            propTwo,
            propThree,
            propFour);
    assertEquals(0d, pojo.getPropFive(), 0.01d);
    assertEquals(0d, pojo.getPropSix(), 0.01d);
    mapper.map(persistent, pojo);
    assertEquals(propFive, pojo.getPropFive(), 0.01d);
    assertEquals(propSix, pojo.getPropSix(), 0.01d);
}
Still smiling I hit the CTRL+F6 combination and... WTF? red bar?
org.dozer.MappingException: Illegal object type for the method 'setPropTwo'.
Expected types:
my.package.MyEnum
Actual types:
java.lang.String
Hmmm... weird... but not too weird after all: I had simply been too optimistic.

After a little investigation, and given that I didn't want to write a custom mapper for each and every enumeration I might need, considering the current usage of the POJO I tried to map a one-way relationship. The problem is that when you mark a field as one-way Dozer only maps from <class-a> to <class-b>, while I needed exactly the opposite behaviour. Changing the order of the classes was not an option, as our convention is to map all domain classes as <class-a>, so I had to resort to the <field-exclude> syntax:
<mapping>
  <class-a>my.model.package.MyPersistentClass</class-a>
  <class-b>my.package.MyPojo</class-b>
  <field-exclude type="one-way">
    <a>propTwo</a>
    <b>propTwo</b>
  </field-exclude>
  <field-exclude type="one-way">
    <a>propThree</a>
    <b>propThree</b>
  </field-exclude>
</mapping>
Not exactly what I had in mind, because it doesn't let me create a new instance of a POJO, but for now it will suffice.

Any hints?

IAD 2010 sold out

I hope you registered as soon as you could, because for the umpteenth year (and running) the Italian Agile Day is sold out. Hope to see you there!

Tuesday, November 2, 2010

The power of e-mails

Have you ever noticed how requests that normally end up in the virtual recycle bin when made over the phone are immediately answered when sent in an e-mail of which your bosses receive a courtesy copy?

In a slightly softer version you could limit the cc to your own boss only, but if she doesn't take the appropriate actions this seems to fail as well. What does work is that people don't like their bosses to think they're not doing a good job, thereby spreading the impression that their whole department is not doing a good job, so they feel compelled to take action (should a good boss be a good "politician" as well? that's a topic for another post).

I personally don't like this kind of stuff, because I think one should not work just to keep the fan far away from the soup (to be polite) but to create value, both for himself and for the company he's working for.

Sad but true, when you're back to reality and all else fails...

Wednesday, October 27, 2010

InfoQ: Sharpening the Tools

This is a very nice presentation, very well delivered by Dan North. It offers several insights and hints, but I'd like to quote a couple of things.

At a certain point Dan says he doesn't like learning lunches too much: the fact that you don't commit completely to learning but do it while you're eating somehow implies that education is not important, whereas it should be part of the job you're (hopefully) being paid for. It is sadly true, yet we have to work with what we've got: it will come as no surprise that I watch all these presentations during my lunch break, but I found that, given the circumstances, it's much more effective than trying to reserve a full hour during the rest of the day.

The most important advice is found in the closing slide:
  • always assume you're out of date
  • you owe it to yourself to keep current
So... how well do you know what you don't know?

Thursday, October 21, 2010

Registrations are open!

It is now possible to register for the 7th Italian Agile Day, which will be held in Genova on November 19. Hurry up!

The proposed program is very interesting as usual, and as usual I'd like to follow many talks at the same time... I wonder what I would do if I weren't ubiquitous...

Wednesday, October 20, 2010

InfoQ: Danger! Software Craftsmen at Work

I never liked presentations in which the presenter reads too much, or at least gives that impression; this is one of the reasons why I found this presentation on InfoQ way too long. Furthermore, I think David Harvey takes things a little too far.

Yet, there is a sentence that I'd like to quote:

The customer is always right, but sometimes they need a little help... and we need to learn about the customer's world.


So there is a real danger here, and it lies within us. Mark the second part of the sentence, because that's the really important one: customers pay us because we provide value for their business, not because we write beautiful code. Customers don't care at all about the quality of our work as such; they care about their business. To businessmen only business is important, not code.

A word of caution: that doesn't mean we can be sloppy, for at least a couple of reasons:
  • our personal pride in what we do (after all, we claim to be craftsmen, don't we?)
    • I know someone will not care about this point. I do.
  • customers will sooner or later notice that each and every simple piece of functionality they ask for costs more and more as time passes and features are added, and will eventually throw us out of their business.
    • This would probably be no good, because we would end up holding up signs...

Sunday, October 17, 2010

Varargs for creating lists

I confess I am not so used to varargs yet, but sometimes they can be very useful, e.g. when you want to create a list and punch in some objects in one shot:
List<String> stooges = Arrays.asList("Larry", "Moe", "Curly");
Without the Arrays class you would have to use the old boring construct:
List<String> stooges = new ArrayList<String>();
stooges.add("Larry");
stooges.add("Moe");
stooges.add("Curly");
The former is much better! Just keep in mind that Arrays.asList returns a fixed-size list backed by the array, so you cannot add or remove elements afterwards.
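If you need a modifiable list, a tiny varargs helper of your own does the trick; here is a minimal sketch (the method name is mine):
public static <T> List<T> listOf(T... items) {
    // copy into a plain ArrayList so the caller can add and remove elements
    return new ArrayList<T>(Arrays.asList(items));
}

// usage
List<String> stooges = listOf("Larry", "Moe", "Curly");
stooges.add("Shemp");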

Waiting for JMock 2.6.0

JMock is a library that supports TDD of Java code with mock objects. To mock an object the current syntax (version 2.5.1) is the following:
final MyObject myObject = context.mock(MyObject.class);
This is plain vanilla code, but things are getting even better in 2.6.0 thanks to the @Mock annotation:
@Mock MyClass myClass;
I think it will be difficult to shrink it more than this... looking forward to the release!
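For context, a complete JMock 2.5-style test looks more or less like this (the Publisher/Subscriber names come from the JMock tutorial; the rest is illustrative):
public class PublisherTest {

    private final Mockery context = new Mockery();

    @Test
    public void notifiesTheSubscriber() {
        final Subscriber subscriber = context.mock(Subscriber.class);
        Publisher publisher = new Publisher(subscriber);

        // declare what we expect to happen on the mock
        context.checking(new Expectations() {{
            oneOf(subscriber).receive("news");
        }});

        publisher.publish("news");
        context.assertIsSatisfied();
    }
}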

Friday, October 15, 2010

Meet Dozer

Integration between systems is one of the most typical reasons for failure in complex projects. Yet, we find ourselves doing it day after day, so after some strategic mapping you have to get your hands dirty and map your wonderful objects to that incredible garbage other vendors still insist on calling code. Or was it the other way round?

Let's assume a simple interaction via web services exposed on, say, an i-Series platform. I don't know whether all RPGLE programmers work in the same way, but almost all services of that kind I've seen so far have a single port with a thousand parameters for a hundred different purposes, and it is up to you to try and understand which of them must be used on which occasion. If that were not enough (and for me it is), there seems to be a contest for the most cryptic parameter names, from which vowels seem to be banished and for which six characters are considered more than enough (when not a waste of space).

One of the many things you learn with DDD is that code should be as close as possible to your mental representation of the domain: well, LDLSDC does not make me think of a list of documents, but maybe it's my fault.

That said, before I use those objects in my code I'll have to convert them. The typical mapper has a signature similar to the following one:
public MyObject convert(YourObject yours, MyObject mine)
or
public MyObject convert(YourObject yours, Class<MyObject> type)
The implementation is normally something like this:
myObject.setPropertyOne(yourObject.getPropertyOne());
myObject.setPropertyTwo(yourObject.getPropertyTwoButHasDifferentName());
(repeat ad lib for each property to be mapped)

This is very frustrating and most of all error prone. This is where several libraries step forward, most of them based on reflection, each of them having pluses and minuses. So far, the most interesting one I've found is Dozer, a bean to bean mapper that recursively copies data from one object to another (of a different type, of course! where would the fun be otherwise?). By the way, integrating different systems is just one of many reasons you have for mapping between different objects.

Using Dozer is just as easy as downloading the jar file, adding it to the classpath (don't forget the dependencies!) and coding right away:
Mapper mapper = new DozerBeanMapper();
Destination destination = mapper.map(source, Destination.class);
(almost literally copied from the official tutorial). That's it. All fields with the same name are automagically mapped, with Dozer taking care of the necessary conversions (at least the more common ones). Should you need to customize mappings, you can add some XML files that define how Dozer should behave:
<?xml version="1.0" encoding="UTF-8"?>
<mappings xmlns="http://dozer.sourceforge.net"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://dozer.sourceforge.net
          http://dozer.sourceforge.net/schema/beanmapping.xsd">

  <configuration>
    <stop-on-errors>true</stop-on-errors>
    <date-format>MM/dd/yyyy HH:mm</date-format>
    <wildcard>true</wildcard>
  </configuration>

  <mapping>
    <class-a>yourpackage.yourSourceClassName</class-a>
    <class-b>yourpackage.yourDestinationClassName</class-b>
    <field>
      <A>yourSourceFieldName</A>
      <B>yourDestinationFieldName</B>
    </field>
  </mapping>

  other custom class mappings would go here.......

</mappings>
And this is copied from the official manual.

You can also introduce a custom converter that you can reuse throughout your application:
<converter type="org.dozer.converters.TestCustomConverter" >
  <class-a>org.dozer.vo.CustomDoubleObject</class-a>
  <class-b>java.lang.Double</class-b>
</converter>
and its corresponding class:
public class TestCustomConverter implements CustomConverter {

    public Object convert(Object destination, Object source, Class destClass, Class sourceClass) {
        if (source == null) {
            return null;
        }
        CustomDoubleObject dest = null;
        if (source instanceof Double) {
            // check to see if the object already exists
            if (destination == null) {
                dest = new CustomDoubleObject();
            } else {
                dest = (CustomDoubleObject) destination;
            }
            dest.setTheDouble(((Double) source).doubleValue());
            return dest;
        } else if (source instanceof CustomDoubleObject) {
            double sourceObj = ((CustomDoubleObject) source).getTheDouble();
            return new Double(sourceObj);
        } else {
            throw new MappingException("Converter TestCustomConverter used incorrectly. Arguments passed in were:" + destination + " and " + source);
        }
    }
}
And now a personal reminder: if you have to map an array of objects, but you know the types that you will find in specified positions AND want to avoid class cast exceptions (and I guess you do), you can use this syntax:
<field>
  <a>myObjects[0]</a>
  <b>other.propOne</b>
  <a-hint>java.lang.Long</a-hint>
</field>
<field>
  <a>myObjects[1]</a>
  <b>other.propTwo</b>
  <a-hint>java.lang.Short</a-hint>
</field>
Next thing I'm gonna try is the mapping for POJOs and JAXB objects...

Spiderman busted!

This is the revenge of J. Jonah Jameson...


The picture is one of the most beautiful ones shot by Los Angeles Times Photographers in 2009; find the others here.

Tuesday, October 12, 2010

InfoQ: Agile Team Meets a Fixed Price Contract

I found another interesting article on fixed price contracts managed in an agile way. It also adds some considerations on budget spent, which you don't normally find in articles of the same kind.

Friday, October 8, 2010

Improved JUnit support in NetBeans 7.0

The 7.0 release of NetBeans will extend the support for the JUnit framework, adding the following functionality (I'm told some of it can already be found in Eclipse):
  • The 4.8.2 release of the JUnit library has been integrated.
  • You can now run or debug a single test case (method) in a suite (class) from the editor context menu.
  • It's now possible to rerun only failed tests.
  • The filtering of the test results view was improved. It's now possible to select which result states (passed, failed, error) will be hidden in the results view.
  • The tabbed output was implemented for the test results view.
All the relevant information can be found on the official NetBeans site.

Thursday, October 7, 2010

Don't forget the content type

In one of our legacy projects all web services were exposed through SAAJ servlets; having to expose a new one we opted for a newer JAX-WS style, also thanks to everything that NetBeans gives you right out of the box.

So we created a new Web Service (you can check this tutorial to see how it is done) and wrote all the tests and code we needed. Everything went fine, until we tried to test the whole project (which should be mandatory before you commit).

At this point some of our old tests failed, all reporting the same error:
SAAJ0532: Absent Content-Type
The offending code builds a message that we use to test the parser:
private SOAPMessage buildMessage(String filename) throws IOException, SOAPException {
    InputStream resourceAsStream = this.getClass().getClassLoader().getResourceAsStream(basepath + filename);
    MessageFactory messageFactory = MessageFactory.newInstance();
    SOAPMessage message = messageFactory.createMessage(new MimeHeaders(), resourceAsStream);
    return message;
}
At first it caught me off balance, because our additions and changes went nowhere near the code related to the failure. So it had to be something related to the environment: indeed, NetBeans added a dependency on the JAX-WS 2.1 library, which probably conflicts with the SOAP jars we use.

As some manual tests seemed to confirm that the application was working normally, we deferred the investigation (but not for much longer!) and simply fixed the test:
private SOAPMessage buildMessage(String filename) throws IOException, SOAPException {
    InputStream resourceAsStream = this.getClass().getClassLoader().getResourceAsStream(basepath + filename);
    MessageFactory messageFactory = MessageFactory.newInstance();
    MimeHeaders headers = new MimeHeaders();
    headers.addHeader("Content-Type", "text/xml");
    SOAPMessage message = messageFactory.createMessage(headers, resourceAsStream);
    return message;
}
And now an afterthought. The test failed, yet the application did not. Does this mean our test was wrong? Actually it only means that the previous library was more forgiving, because the test has proved useful for a long time (also spotting a regression). It is better to have a not-so-perfect test than none at all.

Wednesday, October 6, 2010

InfoQ: Technical Debt a Perspective for Managers

Another interesting article on technical debt; the most important point that emerges is that management must buy into and support the need to get rid of it.

The price you pay for technical debt is not paid on the barrelhead: it is a hidden cost, and it grows non-linearly with time. Think about it.

Monday, September 27, 2010

Nemo propheta in patria

I wonder why, when they have a subject matter expert close at hand at no cost, people almost always go looking for (very expensive) perfect strangers to (try and) solve their problems.

On repositories

I was writing code for a (actually not so) massive data export in a legacy application and I needed to retrieve all objects of a certain type modified after a given date, so I started writing a repository.

The simplest test that came into my mind was something like this:
@Test
public void queryingWithDateInTheFutureReturnsEmptyList(){
    DateTime tomorrow = new DateTime().plusDays(1);
    List<MyObject> result = instance.findModifiedAfter(tomorrow);
    assertTrue(result.isEmpty());
}
Some clickety-clack (SHIFT+CTRL+I, ALT+SHIFT+F, CTRL+s, CTRL+F6 for the most curious) and NetBeans gives me a red bar. That's good as there's no code yet :-)

The first implementation was really too easy to write, but since very few people believe I work this way and all the rest think I'm wasting time, I'll publish it anyway, saving my opinions for another occasion:

public List<MyObject> findModifiedAfter(DateTime startDate){
    return Collections.emptyList();
}
So far, so good. Now, in the framework used in this project all persistent objects inherit from MyPersistenceObject (names have been changed to protect the innocent), which is a sort of Active Record with a hint of Row Data Gateway. Persistent objects may also be managed by a MyPersistenceObjectController object, which is a sort of DAO that also manages transactions. Should you wonder, the class is not a controller at all, but the original developers thought the name would fit. Before we go on, let me say that the framework works pretty well under many circumstances.

Not having a dedicated database that gets populated for each test and cleaned afterwards (did I mention that this is a legacy application?), the next test tried to retrieve some data:

@Test
public void queryingSinceLastMonthReturnsAtLeast5kObjects(){
    DateTime lastMonth = new DateTime().minusMonths(1);
    List<MyObject> result = instance.findModifiedAfter(lastMonth);
    assertTrue(result.size() > 5000);
}

I know it's not great, and that it breaks if someone truncates the corresponding table, but we must start somewhere, right? Reading the test also suggests that the method should probably be named findModifiedSince, but that is hardly the point now - even if it shows another useful side effect of writing tests.

A quick look at the existing code easily gave me a first implementation based on the "glorious" copy-paste-fix pattern:

public List<MyObject> findModifiedSince(final DateTime date) {
    if (date == null || date.isAfterNow()) {
        return Collections.emptyList();
    }

    try {
        Object[] values = new Object[]{date.getMillis()};
        String[] orderBy = null;
        Criteria criteria = new Criteria("tms", "MyObjectImpl");
        SimpleCondition simple = new SimpleCondition("tms", SimpleCondition.GE);
        criteria.add(Criteria.NOP, simple);
        Vector objects = new MyObjectImpl().retrieveByAlternateKey(
                criteria,
                values,
                orderBy);
        List<MyObject> result = new ArrayList<MyObject>();
        result.addAll(objects);
        return result;
    } catch (Exception ex) {
        return Collections.emptyList();
    }
}
Clickety-clack, CTRL+F6...

...
...
(yawn)
...
...
(wtf?)
...
...

Green bar. After an insane amount of time. A little logging informed me that the test took 16 seconds to run, excluding the time needed to start and stop the persistence container.

A little profiling confirmed that memory usage also grew abnormally. And all this for little more than 6000 objects...

Now, to quote Eric Evans, a repository is

...an object that can provide the illusion of an in-memory collection of all objects of that type.

Well, this framework cannot provide that illusion, at least not without freezing everything else. I must admit that lately I have been rather spoiled, as using Hibernate I could simply write the very same method like this:


public List<MyObject> findModifiedSince(DateTime date) {
    return HibernateUtil.getSession().
            createCriteria(MyObject.class).
            add(Restrictions.ge("tms", date)).
            setResultTransformer(Criteria.DISTINCT_ROOT_ENTITY).
            list();
}
and get my list, which is actually a list of proxies, in about no time. Now Evans says that repositories

...return fully instantiated objects or collections of objects whose attribute values meet the criteria, thereby encapsulating the actual storage and query technology.

Proxies are not exactly fully instantiated objects, but as they pretend to be I'm prepared to live with that :-)

At this point all I could do was to revert to the old give-me-the-ids-and-I'll-get-the-objects-myself :-(

Friday, September 24, 2010

More on classloaders

As an update to my post on classloaders, PMD informs me that in J2EE I should use
Thread.currentThread().getContextClassLoader()
My bars are green, a manual end-to-end test makes my testers happy, so this will be our preferred syntax from now on.
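For instance (a trivial sketch, the resource name is made up), loading a configuration file from the classpath becomes:
InputStream config = Thread.currentThread()
        .getContextClassLoader()
        .getResourceAsStream("config.xml");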

Tuesday, September 21, 2010

NetBeans dependent project in Hudson

While configuring a Hudson job for the only project of ours which still didn't have one, I ran into some problems, as it is a NetBeans project that depends on other NetBeans projects. That is not a problem per se, as everything has run smooth as silk for years on each and every developer's machine.

When run on the CI server, the build reported this error:

BUILD FAILED
C:\HudsonWorkspace\.hudson\jobs\myProject\workspace\myProjectFolder\NBProject\nbproject\build-impl.xml:558: Warning: Could not find file C:\HudsonWorkspace\.hudson\jobs\myProject\workspace\myOtherProject\dist\myOtherProject.jar to copy.

Unwilling to give up the "depend on project" feature on NetBeans (my colleagues would skin me alive), I investigated a little. Once you know where to look, everything seems quite easy, and as a matter of fact it is.

The dependency is defined, as you would expect, in the project.properties file, where you can find something like this:

project.myOtherProject=../../myOtherProject
reference.myOtherProject.jar=${project.myOtherProject}/dist/myOtherProject.jar

In practice, the first line defines where the other project is, while the second defines where to find the corresponding jar file.

All you have to do is then override these properties in the job configuration in Hudson, either using the path relative to the build.xml file or using the absolute path of the referenced project on the Hudson server.
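For example, the properties passed to the job's Ant build step might look something like this (paths are purely illustrative):
project.myOtherProject=C:/HudsonWorkspace/.hudson/jobs/myOtherProject/workspace/myOtherProject
reference.myOtherProject.jar=${project.myOtherProject}/dist/myOtherProject.jar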

Now you have no more excuses... :-)

Setting Ant properties in Hudson

As an update to an old post of mine, I now use a different approach for setting properties for the jobs defined on our Hudson instance. Instead of relying on the private.properties file defined for our NetBeans projects (which means all our projects) I simply copy those properties into the corresponding section of the job configuration page:


This also makes it easier to copy these properties between projects, besides clearly pointing out which "hidden" variables we're relying upon.

Thursday, September 16, 2010

The FizzBuzz kata

The first time I came across this kata was thanks to a post in which Matteo was looking for design problems. I reckon everyone has played FizzBuzz at school, so I already knew what it was about; I had in mind to try it sooner or later, and it looks like somehow I finally did.

Once I wrote my own solution, I was a little curious to compare it with others, so while googling around I landed on this page which states that

Michael Feathers and Emily Bache performed it at agile2008 when competing in "Programming with the stars" in python, in 4 minutes.

That made me feel a little inadequate, since my solution took a little longer to emerge, and I guess it's not because I used Java. But, after all, I'm neither Michael Feathers nor Emily Bache, so I guess it's all right.

The solution proposed there follows the classic filter paradigm: you register some filters in a list (after all you want the program to say "FizzBuzz" and not "BuzzFizz"), each of which processes the output of the previous one. The nice thing is that you can inject filters as you like, which is good: the main class's only responsibility is to iterate through the filters, whatever they might do.

My solution is somewhat different, as it is based on the decorator pattern, yet it is similar because you can still inject the decorator, which is built by wrapping each rule in the outer one, and the main class's only responsibility is to ask its decorator to decorate the input. If no decorators are provided, a no-op default one is used: this is a small complication compared to the previous solution. Another complication is that the decorator actually knows it's wrapping another decorator (no-op decorator excluded), while a filter has no knowledge whatsoever of the other objects that might operate on the input.
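Something along these lines, just to give the idea (a rough sketch of mine, not the actual kata code):
public interface Decorator {
    String decorate(int number, String current);
}

// default used when no decorators are injected
public class NoOpDecorator implements Decorator {
    public String decorate(int number, String current) {
        return current;
    }
}

// wraps another decorator and appends its word when the number is divisible by its divisor
public class DivisibilityDecorator implements Decorator {
    private final Decorator inner;
    private final int divisor;
    private final String word;

    public DivisibilityDecorator(Decorator inner, int divisor, String word) {
        this.inner = inner;
        this.divisor = divisor;
        this.word = word;
    }

    public String decorate(int number, String current) {
        String result = inner.decorate(number, current);
        return number % divisor == 0 ? result + word : result;
    }
}

public class FizzBuzz {
    private final Decorator decorator;

    public FizzBuzz(Decorator decorator) {
        this.decorator = decorator;
    }

    public String say(int number) {
        String decorated = decorator.decorate(number, "");
        return decorated.isEmpty() ? String.valueOf(number) : decorated;
    }
}
Wrapping the Buzz rule around the Fizz rule, i.e. new FizzBuzz(new DivisibilityDecorator(new DivisibilityDecorator(new NoOpDecorator(), 3, "Fizz"), 5, "Buzz")), makes say(15) return "FizzBuzz".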

Anyway, the most important thing is that my design satisfies Matteo's (evolving) requirements and supports the Anti-If Campaign :-)

Of course, TDD was used :-)

Don't touch my code

There is no such thing as "my code". If there is, there should not be. This is the basis of collective code ownership: everybody can touch anything. Of course, we MUST make sure that our changes do not alter the existing behaviour (unless we're fixing bugs, or altering behaviour is exactly what we're after, of course).

Sentences like "if you touch my code I'll have to spend a lot of my time to correct your errors" are based on the (hopefully wrong) assumption that your fellow developers commit carelessly modified software. Collective code ownership also means collective responsibility, that should go hand in hand with the "leave your campfire better than you found it" habit.

If I touch "your" code (with which I actually mean "the code of which you were the first author") I magically become responsible for it. But be warned: this is a responsability I share with everybody else in the team. Actually I don't even have to touch "your" code to be responsible for it, as I already am. That's it. Do I see something unclear? I'll try to clarify it. Do I see obsolete comments? I delete them. Do I see comments? I'll probably delete them too, provided that the code speaks enough. If it doesn't, I'll try to make it speak. And delete the comments :-)

Thursday, September 9, 2010

You'll never excel if you stick with Excell

Please, pay attention when you write something. I am sincerely tired of seeing the word "Excell" (note the double "l") used instead of Excel. I can perfectly accept it when a customer uses it, as most customers are not (why should they be, after all?) IT experts; on the other hand, I really can't stand it when a (should-be) IT expert falls into the same trap.
  • It is Excel, not Excell.
  • It is Outlook, not Outlock, Outloock, Out Look or other combinations.
  • Ajax is not a programming language.
  • Tomcat is not an application server. And, by the way, Apache sounds like "a-patchy"
I could go on forever, but I'm sure you get the point. At best you're considered sloppy because you didn't review what you wrote (which is already bad enough); at worst, it suggests you don't know what you're talking about at all. So why should I trust you? How can I believe that you can possibly solve my problems? Are you just a waffler? Please do your homework.

Tuesday, August 31, 2010

How to refactor and test a SAAJServlet

Mind you, not a simple SAAJServlet, but an old-fashioned legacy transaction script without so much as the scent of a test. Anyway... let's pick up the usual books (Working Effectively with Legacy Code, Refactoring, Refactoring to Patterns - just a starting point, of course) to keep at hand as a reference and let's start refactoring.

First, a word of caution: don't just copy from the books. Study them, try to apply the refactorings in a small and controlled environment, maybe in little katas, and get the basics behind them, otherwise you'll do more harm than good.

Back to business. We basically have to test the onMessage method, which gets a SOAPMessage and returns another one. Without even looking at the code, one would expect an algorithm like the following (a rough sketch in code follows the list):
  • parse the message into a parameter
  • pass it to a service that executes whatever must be executed
  • build a new message based on the input parameter and on the outcome of the previous step
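In code, the shape I would expect is something like this (hypothetical names, not the actual servlet):

@Override
public SOAPMessage onMessage(SOAPMessage message) {
    Dialogue dialogue = parse(message);            // 1. parse the message into a parameter
    Outcome outcome = service.execute(dialogue);   // 2. execute whatever must be executed
    return buildResponse(dialogue, outcome);       // 3. build the response message
}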
Obviously one should also consider error management, but let's keep it simple. Let's take a look at the actual code:

@Override
public SOAPMessage onMessage(SOAPMessage message) {
    SOAPMessage result = null;
    Dialogue dialogue = null;
    try {
        SOAPBody body = message.getSOAPBody();

        SOAPFactory sFactory = SOAPFactory.newInstance();
        Name bodyName = sFactory.createName("myLocalName", "myPrefix", "myUri");
        Iterator it = body.getChildElements(bodyName);
        if (it.hasNext()) {
            SOAPBodyElement be = (SOAPBodyElement) it.next();
            dialogue = parseBodyElement(be);
            Operation operation = operationFactory.getOperation(dialogue);
            if (operation != null) {
                try {
                    operation.execute(dialogue);
                } catch (Throwable th) {
                    getServletContext().log(th.getMessage(), th);
                    dialogue.setCode(Dialogue.KO);
                    dialogue.setMessage("Service unavailable.");
                }
            }
        } else {
            dialogue.setCode(Dialogue.KO);
            dialogue.setMessage("Malformed message.");
        }
    } catch (Throwable th) {
        getServletContext().log(th.getMessage(), th);
        dialogue.setCode(Dialogue.KO);
        dialogue.setMessage("Service unavailable.");
    } finally {
        try {
            MessageResponseGenerator mrg = new MessageResponseGenerator();
            result = mrg.generateResponseMessage(dialogue);
        } catch (Throwable th) {
            result = null;
            th.printStackTrace();
        }
    }
    return result;
}
Now, though PMD does not complain too much about the structure of the method - just a few dataflow anomaly warnings, null assignments and caught Throwables - I smell (more than) a rat.

Starting from the beginning, the first thing that strikes me is the mix of abstraction levels in the code: you have both a low-level implementation that extracts a SOAPBodyElement to be parsed and a call to an Operation that encapsulates the requested service (the Command pattern), and this is not good. There are also too many nested ifs and try-catch blocks for my liking.

There is a separate object that creates the response message, and this is good. So why on earth are we missing an object, or at least a method, that parses the message in the first place?

Why do we ask a factory for an Operation, passing in a Dialogue, and then have to pass the same parameter back to the operation we get from the factory? Couldn't the operation hold a reference to the dialogue? Would it be better? We shall see later.
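Just to make the question concrete, this is the difference I have in mind (a sketch, of course):

// current shape: the dialogue travels twice
Operation operation = operationFactory.getOperation(dialogue);
operation.execute(dialogue);

// possible alternative: the factory hands the dialogue to the operation once
Operation operation = operationFactory.getOperation(dialogue);
operation.execute();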

And, obviously, all the references are hard-coded...

So far, we have just conducted a small code review. But what's the point in the first place? We have to introduce a new Operation object (maybe extending an existing one, and please note the alliteration). Nothing strange so far, but the resulting object for this operation should contain some more fields than the existing one. Still nothing terrible, but as you might have noticed, the Dialogue object currently works as a collecting parameter throughout the method, while we would need two different objects (or modify the initial parsing method so that it returns an instance of the right Dialogue subclass, or introduce composition in the Dialogue class, or explore a thousand other possibilities).

By the way, it is mandatory that we do not change the interface of the service, which means that the incoming and outgoing messages for the existing operations cannot change.

The fact is that before we start refactoring we have to guarantee exactly that. Of course we could start refactoring right away but, as you might have noticed if this is not the first post of mine you happen to read, I find that "just a little bit" risky.

So the answer to the first question, i.e. how to refactor a SAAJ servlet, is, at least in my mind, simple: write enough tests to describe the current behaviour before you do anything else. I know: easier said than done.

That's tricky: how can you do that? The class we have to test is a servlet, so we have four options:
  • use manual testing: we already have this in place, but each single change would require at least a dozen tests, making the whole process too slow (and not reliably repeatable, as it is based on people's good will)
  • mock all the objects provided by the servlet container that the servlet needs: quite a lot of work, and I'm not going down that way (even because I'm listening to Stairway to Heaven and I don't want to spoil the happy sensation)
  • deploy the application to a servlet container and use in-container testing, possibly with a framework like Cactus: this introduces unwanted complexity, so I'll avoid it too, even if it would be tempting, since the servlet is deployed in a proprietary framework in which you can only access the database if you actually start the webapp (don't ask, you don't want to know). Why avoid it, then? Because I think it would not force me to separate responsibilities as much as the next approach.
  • use a library such as HttpUnit, which contains ServletUnit. Since this is a simulated container, I am sure I would never have access to the database in the current situation, so I guess I'll have to work my way towards a cleaner design.
Back to the servlet: there are cases in which the method returns a message without going all the way down to the database, so as a first step I'll exercise these and keep manual testing for end-to-end situations.

The very first test consists in producing a message with a wrong format: I should get a SOAPMessage with the code property set to KO and an error message of "Malformed message". Or should I?
@Test
public void testOnMessageWithMalformedMessage() throws IOException, SAXException {
    System.out.println("testOnMessageWithMalformedMessage");

    InputStream resourceAsStream = this.getClass()
            .getClassLoader()
            .getResourceAsStream("path/to/my/message.xml");

    ServletRunner sr = new ServletRunner();
    sr.registerServlet("myServlet", MyServlet.class.getName());

    ServletUnitClient sc = sr.newClient();
    WebRequest request = new PostMethodWebRequest(
            "http://localhost:8084/myServlet",
            resourceAsStream,
            "text/xml");

    WebResponse response = sc.getResponse(request);
    assertNotNull("No response received", response);
    assertEquals("OK", response.getResponseMessage());
    assertTrue(response.getText().contains("KO"));
    assertTrue(response.getText().contains("Malformed message."));
}
These steps are detailed in the ServletUnit documentation. Before we run the test, remember we're not going to change anything yet.

Too optimistic? Actually, I was. Running the test gives me the following error:
Error on HTTP request: 500 javax.servlet.ServletException: SAAJ Post failed null.
Looks like the MessageResponseGenerator cannot handle null parameters very well. I admit I knew I had been too optimistic about the error message (how could I trust that the generator used the same message the servlet did, the one I was expecting?), but at least I hoped for a valid SOAPMessage... But after all, isn't the NullPointerException the most widespread error in Java code?

It's a long way to go... remember that before we change the current behaviour we have to describe it, so the way is even longer. Yet, the first step has been made.

Monday, August 30, 2010

Quarreling with classloaders

I admit that most of the quarreling, if not all of it, came from my not too deep knowledge of the subject. Anyway, at least I've learned something more.

Now that I know how to change the endpoint defined in a WSDL, I was trying to put this knowledge to good use. We have a project that encapsulates the access to various web services, and we release it as a jar file that is normally used in web applications, in particular the one I'm mostly working on these days. To let the administrator of the web application easily decide whether to use a monitoring application like wsmonitor, I prepared a services.properties file:
proxyEnabled=true
proxyAddress=http://...
Being test-infected I wrote a simple explorative test:

@Test
public void testLoadProperties() throws IOException {
    System.out.println("testLoadProperties");
    Properties properties = new Properties();
    InputStream systemResourceAsStream = ClassLoader.getSystemResourceAsStream("path/to/my/services.properties");
    properties.load(systemResourceAsStream);
    assertNotNull(properties.getProperty("proxyEnabled"));
}
Green bar, so I put the code in the service class. Well, actually, before that I wrote another small (failing) test:

@Test
public void testPropertiesAreCorrectlyInitialized() {
    System.out.println("testPropertiesAreCorrectlyInitialized");
    assertNotNull(MyService.properties.getProperty("proxyEnabled"));
}
After the green bar I switched to the webapp and started it, but the isProxyEnabled method of the service, which simply tests the value of the proxyEnabled property, always returned false. If you are wondering... yes, there are corresponding tests, which are not reported for the sake of simplicity.

After some research, and mostly thanks to Dave Lander's contribution to this discussion, I found the problem and changed my loading instructions:

InputStream is = MyService.class.getClassLoader().getResourceAsStream("path/to/my/services.properties");
properties.load(is);
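For completeness, here is a sketch of how the loading might sit in the service class (simplified, with hypothetical error handling; the real class obviously contains more than this):

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class MyService {

    static final Properties properties = new Properties();

    static {
        // use the classloader that loaded this class, not the system one
        InputStream is = MyService.class.getClassLoader()
                .getResourceAsStream("path/to/my/services.properties");
        try {
            properties.load(is);
        } catch (IOException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public boolean isProxyEnabled() {
        return Boolean.parseBoolean(properties.getProperty("proxyEnabled"));
    }
}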


The reason is that the system classloader used by getSystemResourceAsStream does not see the resources packaged inside the webapp, while the classloader that loaded MyService does (in the unit test the two happen to coincide, which is why the first version passed). Funny how simple things are when you know them, aren't they?

Sunday, August 29, 2010

You should be so lucky

Ahhhh, if only most managers knew... the world would turn differently, but the day of reckoning will soon come, dear tie-wearer of bovine ignorance... but I'm digressing

Quoted (and translated from Italian) from another interesting post by Gabriele Lana (aren't they all?).

I stole that presentation

That's not as bad as it might sound, as the author explicitly asks you to do so. After all, its title is "Steal this presentation!"...

The presentation gives many pieces of advice on how to prepare the slides for your presentations (sorry for the repetition). One of them is that visuals are better than text, which is true for presentations but not for blog posts, so you'll have to endure my comments on Jesse's suggestions (or switch to another website, of course). And I'll use bullet points as well :-)
  1. Have a killer opening slide. Too true: this is what brought me to the presentation in the first place. As Frank-N-Furter reminds us, we should not judge a book by its cover, but it is also true that first impressions do count. If you had to pick one presentation out of ten on the same subject, which one would you go for? Do I have to answer?
  2. Use a trendy color mix. Keep a consistent look, and avoid using too many colors. They only make everything more confusing.
  3. Use stunning visuals. Your brain remembers them better than words. Images help convey a story, and the audience will remember the story better if you choose the right image. Try to have an interesting story, or even the best image won't be of much help. And remember to credit the authors.
  4. Get your text. Text is normally bad, so stick to a short sentence or two, making it clear which is the most important one. Very important: once you have decided to put text on your slides, make sure it is easily readable. I'm afraid this last point is something Jesse could improve on, as you can see from some of the first and last few slides.
  5. Use CRAP. No, not the kind you're probably using in your presentations right now. CRAP stands for Contrast, Repetition, Alignment and Proximity.
  6. Use video. I don't like the idea very much, but I'm not a professional presenter so I don't think my opinion counts for much. Should you go for it, preloading your videos is a very good idea to avoid awkward silences, which as you perfectly know are one of the signs that a presentation is going terribly wrong.
  7. Share your work. Very important if you want to spread your ideas. And if you followed all the rules, nobody who hasn't listened to your talk will be able to do much with your slides alone, so don't be afraid of having them stolen. Sharing is making Jesse famous, and I'm (very slightly) contributing to the phenomenon. The same could happen to you!
  8. Recap. Always. Repetita iuvant.
To recap, a very nice (and useful, too!) presentation. Shall we win the war against war-and-peace-long slides full of bullet points?

Wednesday, August 25, 2010

The importance of unit tests II

Yesterday I had some fun spending a few hours introducing Ajax into a legacy application (I call it legacy because it has a very small amount of automated tests). During the afternoon I was quite pleased with myself, partly due to the satisfaction associated with the work done, partly due to the "Steppenwolf" novel, partly due to the Beethoven sonatas I was listening to, partly due to my full stomach. Being a very wise and intellectual person, I am in favour of the latter.

All this abruptly ended when I got an unexpected error from a web service called from a part of the codebase I had modified in the morning; the call seemed to go wrong when I submitted a foreign address.

First I checked the test that exercised the web service client, where I verified that all the parameters were passed as expected. Then I wrote some other tests trying to exercise that particular feature, narrowing the scope of my private investigations. By the way, this reminds me that I have not listened to the Dire Straits in a long time, which is a shame.

Everything seemed fine, so I spent some hours with the developers of the service trying to figure out what was going wrong with it (if anything). After some head banging, it emerged that the problem was that we were focusing on one particular parameter, while the error had sneaked into another one. I hope they will not make me pay for the dents in the desk.

That would have been clear from the beginning if we had checked all the parameters in the XML stream instead of focusing only on the ones we thought were important for the specific call (the service exposes a single port, even though there should be many, and the behaviour is determined by which of its several zillion parameters are set and what their values are; I don't think this is very brilliant, but it cannot be changed, so complaining is useless... or at least it brings no business value).

So, everything came down to a wrong value, produced by a "simple" setter method.

Now, you normally never write a test for a setter method. This is perfectly acceptable, as you should write tests for "everything that could possibly go wrong", and a plain setter is not included in the list.

Pity this setter was not a plain one, but contained an if:
public void setMyProperty(final String value) {
    if (value != null || StringUtils.isEmpty(value.trim())) {
        this.myProperty = "FIXED";
    } else {
        this.myProperty = value;
    }
}
This looks like a blunder... Why that not-null check? Is it there to allow trimming the value, or should it actually be a check that the value is NOT null? If the former, why didn't the author simply use the isBlank method instead? Maybe because she was thinking of saving the trimmed property (which, in that case, she forgot to do), or because she didn't know the method existed?

Strange though this might sound, all this is not really important, as I have access to the product owner and I could get all the answers I needed.

The point is: that method required a test, and it was nowhere to be found. You might argue that it's simple enough to get away without one, but all the time I wasted says otherwise. You might add that it was my fault because I didn't check the setter in the first place, but this is just another arrow in my quiver: this is exactly the reason we need automated tests, as people do make mistakes and forget to check simple methods.

I'd like to think that if I had been pair programming with the author of this code I would never have let her skip writing the failing tests first, at least not without a fight.

Note that I'm not pointing a blaming finger, I could have written that code myself (and sometimes I did, and I'm happy to say that I always regret it when I realize it).

I know I am fighting - and so far losing - a running battle, but tests are necessary, even if someone tells you they represent a cost. Actually, they are not even a cost: they are an investment, one of the most important risk management tools software developers have.

It only took me some fifteen minutes to write four different tests that assessed the desired behaviour and to rewrite the method from scratch. Those fifteen minutes would have spared three people several wasted hours. This also demonstrates how the cost of bug fixing dramatically increases with time.
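Just to give an idea, the tests and the rewritten setter might look more or less like this; I'm assuming the intended behaviour was "null or blank values fall back to FIXED, everything else is stored trimmed", which is my reconstruction, not necessarily the product owner's exact answer:

@Test
public void nullValueFallsBackToFixed() {
    bean.setMyProperty(null);
    assertEquals("FIXED", bean.getMyProperty());
}

@Test
public void blankValueFallsBackToFixed() {
    bean.setMyProperty("   ");
    assertEquals("FIXED", bean.getMyProperty());
}

@Test
public void regularValueIsStoredTrimmed() {
    bean.setMyProperty(" some value ");
    assertEquals("some value", bean.getMyProperty());
}

public void setMyProperty(final String value) {
    this.myProperty = StringUtils.isBlank(value) ? "FIXED" : value.trim();
}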

Yet, for some unknown reason and against all evidence, too many managers AND developers refuse to believe in practices like TDD or pair programming. Is it to maintain the illusion of control? Is it fear of change? Is it lack of trust? Is this evidence not so evident? Let's try some math:
  • cost of writing the setter method: 5 minutes
  • perceived cost (usually coincides with the former): 5 minutes
  • cost of writing tests and the setter method: 15 minutes
  • perceived waste: cost of writing tests minus cost of writing the setter method = 10 minutes
  • cost of writing tests and the setter method, pair programming: 30 minutes
  • perceived as almost blasphemous waste: cost of writing tests pair programming minus cost of writing the setter method = 25 minutes
And this is where the analysis normally ends. On the other hand...
  • cost of finding the bug: 6 hours
  • actual cost: 6 hours plus 15 minutes PLUS 5 minutes = 6 hours and 20 minutes
  • actual waste: 6 hours PLUS 5 minutes
Also note that the time needed to write the code in the first place was completely wasted.

Still think that writing tests is too expensive? Well, the curious thing is that at this point everyone seems to agree on the reason behind the added cost: the code was sloppy. As a corollary, the blame is on the developer. Well, THIS IS NOT THE PROBLEM. The problem is that the way the code was written was sloppy. And this does not depend entirely on the developer.

Think about it.

Thursday, August 19, 2010

7th Italian Agile Day

Unlike last year, let me be an early bird and remind you of the 7th Italian Agile Day, which will be held in Genova on November 19, 2010. And, unlike last year, I really hope I'll manage to be there!


The Italian Agile Day is a one-day free conference dedicated to the use of agile methodologies to develop and manage software projects. Its intended audience is composed of developers, project leaders, IT managers, testers, architects and coaches who either have experiences to share or are just starting to get interested in these subjects.

Its declared aim is to share practical knowledge and experiences from the field, and to achieve the active involvement of all the participants.

Free access upon registration, limited seats. For the fourth time running, the event will be self-financing.

Saturday, July 31, 2010

The Muppets sing Bohemian Rhapsody

I confess I was looking for the original video, but once I found this...



it is hard to tell which one is better... my children definitely love it, and so do I!

Thursday, July 29, 2010

A cool piece of advice

When you go away on holidays, ALWAYS empty your fridge.

If you didn't follow the previous one and you happen to be one of the lucky ones, here's a bonus piece of advice: a thick layer of Vicks VapoRub on the upper lip helps a lot. And keep plenty of bicarbonate and vinegar at hand.

Wednesday, July 7, 2010

The datediff function

Next week we shall roll out an application which has given us some very interesting insights; it will be the core of the renewed application portfolio for one of our customers, and after about one year of hard work we all think everyone is going to be very satisfied.

The problem is that the database we are going to migrate contains very dirty data: for example, it contains events that last thousands of days instead of the typical maximum, which is assumed to be five. So we're driving a hard bargain to get the customer to fix all (or at least most of) the anomalies.

A practical way to point out the errors to the customer is the datediff function:

SELECT event_id, start_date, end_date
FROM events
WHERE datediff(d, start_date, end_date) + 1 > 5

The +1 is needed because datediff yields 0 when the end date equals the start date, as happens for events that only last one day; besides, I think it is clearer to leave the lower limit (5, in this case) clearly visible. Of course,

WHERE datediff(d, start_date, end_date) > 4

or

WHERE datediff(d, start_date, end_date) >= 5

or even

WHERE datediff(d, start_date, end_date) > 5 - 1

would also work, but IMHO they are not as clear as the first one.