Some Thoughts on Agile

Big risks, if not the biggest risks, in a project are:

  • Unclear requirements/shifting scope – because you don’t know what to aim for, or you aim at a moving target
  • Large head count – because communication becomes an issue when things don’t fit into a single head
  • Absent/unwilling customers – because the solution will be out of touch with what is needed (not to mention that “customer” often enough is not equal to “user”)

Many projects fail for these reasons. It would be very helpful to not have to work under these conditions. It would be helpful, too, to only have really good people on the team.

Unfortunately, many projects have to operate on this basis. Requirements are sketchy, but management demands a project plan in the first week of the project. Some projects are large because there is a ton of stuff to tackle in limited time. The people who hand you the requirements are often not the people who requested the system, or the ones who are going to use it. And, especially in big companies, developers cover the whole breadth of the skill spectrum.

Now, if you look at the opposite of these detrimental factors, you get a good portion of what Agile is about:

  • User stories that describe what the customer wants. Maybe not in great detail, maybe not completely at the start, but this is what iterations are for.
  • Small teams, maybe a dozen people
  • A customer who is always present and cooperates, giving feedback all the time (and who has the authority to do so)
  • Highly communicative, capable developers who share a common, Agile mindset

Well, this is why I like to work in an Agile environment, too :-) I have led successful Agile projects under the conditions described above. I have suffered like anyone else in big projects where the sheer amount of required cross-team communication stifles every move.

How to put this nicely… Any project that fits the second list of factors will have a much higher chance of success than a project plagued by the first list, no matter what approach is being used.

I have the impression that, at least to a degree, Agile increases project success by excluding major risk factors (please read on).

If that exclusion is by definition (“let’s only look at projects that fit the definition”), that is a cheap way to success. If a marathon were only 5 kilometers long instead of 42, it would be much easier to run.

If that exclusion is by changing the approach (“let’s do it differently”), that is what Agile is about. The tough part is that places that attempt huge projects in a waterfall fashion are not exactly fertile ground for Agile approaches.

What gives?

  • Agile defines a setup that is more likely to succeed, because it tries to get rid of certain major risks
  • When a large project with unclear scope and an absent customer gives you a headache, Agile is likely not a solution you can simply apply to the situation – you would need to change the situation itself
  • Should you see statistics showing that Agile projects have a higher success rate than conventional approaches, be aware that this may be because the projects done in an Agile way were inherently less risky from the beginning, and that the big nasty ones were not even attempted that way.

So, try something new by changing the way you start and execute a project, not by trying to sprinkle a new method on a doomed setup.

Where Do (Software) Architects Come From?

Where do software architects come from? There is a simple, obvious answer, but that is not what I mean :-) I am referring to company environments. I can’t help noticing that more often than not, architects come from one of the following sources:

  • “Architect by appointment” (not by skills). Somebody got promoted to be the architect (implying the role carries a higher value than that of a programmer), or somebody was simply singled out
  • Not-so-great programmers who are thought to be good enough for “drawing boxes and lines on an abstract level”
  • Programmers who are so far removed from the trade that they no longer have insight into what their options are and what impact their decisions have
  • Non-programmers who, well, can’t code and therefore must do something that does not require coding

That is weird, assuming you choose your architect to help, improve, and complement the implementation project on a level that is different from that of a programmer.

But maybe those who choose architects don’t know what to expect and what to look for. There is evidence for that: companies often don’t educate their people to be architects, or even have clear expectations (that they communicate).

I can only offer my own standpoint: An architect is a person dedicated to looking at and being responsible for the quality aspects of a software project.

What kind of qualities? I would stretch that pretty far: Can the structure of the system be easily understood? Does it fit the purpose of the project? Does it address the problem and the constraints of the environment in a fitting way? Does it reduce the impact of changes to the requirements or infrastructure on the system?

While programmers must address quality issues, too, they are primarily occupied by translating functional specifications into technical reality (which is why code generation will not happen before arbitrary natural language can be understood by a computer), and wrestling with the intricacies of highly complex APIs, other people’s code, and their own.

The architect, in my opinion, must be a seasoned senior developer who knows what works, knows what doesn’t, and understands and can explain why that is so. S/he also needs the urgently necessary people skills to promote good architecture, to convince management that additional effort will pay off, and to convince programmers that the marching direction is the right one.

You can’t study to be a “software architect”, though there are certifications. There are some great books that can certainly help. I believe a certain attitude to programming, long experience in many different projects, being awake at the IDE, and musing about why things went the way they did are a good foundation.

Android configuration options (HTC Desire)

Where can I find that configuration option? I created a mind map, which shows all the options offered by my HTC Desire. Available as PDF as well.

Android configuration options

How to let your smart phone access the wireless Internet connection of your laptop via WLAN

This has not much to do with Software Entropy, but still – it’s useful information!

You have a smartphone and want to use your laptop’s wireless Internet connection via WLAN? Well, things would be easy if Android phones could connect to the ad hoc WLAN network your laptop can provide, but, alas, they can’t. Android wants an access point (AP), and for this you need a WLAN router. The odd thing is that the WLAN router will use your laptop for Internet access – normally the laptop would use the WLAN router and its DSL connection.

How to do this? You need

  • a smart phone
  • a laptop
  • a wireless Internet USB stick
  • a WLAN router

We start with the laptop (mine runs Vista; the recipe should work with XP and 7 as well). Configure your wireless Internet connection to be shared. This turns your laptop into a router that (A) forwards packets from the LAN socket to the wireless connection and (B) dynamically assigns IP settings to connected devices via DHCP – your WLAN router will appreciate that later. You’ll need to assign a fixed IP to the LAN connection for this to work; sounds good to me (I have heard that XP automatically chooses this address).

Now it’s the router’s turn. Configure your WLAN box with a fixed LAN address – how about Please note that this is a different network (192.168.1 instead of 192.168.0). Instruct the router to get its Internet settings dynamically via DHCP. Where from? Your laptop, once you connect the router’s downlink port (very important!) to your laptop’s LAN port. Make the router dish out dynamic IP settings via DHCP so your phone gets served some. Set encryption, SSID, and password to your liking.

Finally, the phone. Configure your handset to connect to the WLAN router. Voilà!

How does it work? You connect your laptop to the wireless Internet. The wireless USB stick gets its IP configuration via DHCP over the air from your mobile carrier. The WLAN router does the same with the laptop. The smart phone does the same with the router.

That’s it… Enjoy!

Musings on MDA

The noise about MDA generally has died down, and I think that has to do with expectations having met reality. MDA can do really good things, when expectations are set right. If not, disappointment is pretty much guaranteed.

For example, I supervised a diploma thesis on transforming EJB 2.x persistence into EJB 3.x persistence. This worked very well because the input data was already modeled in a machine-readable form (namely, EJB 2.x-style XML descriptors), that is, the human part of the work was largely done.

Regarding MDA, promises are often made – especially by vendors of MDA products – that inevitably create disappointment, because improvements by an order of magnitude rarely materialize, for various reasons inherent in the way software development works.

First off, “coding” (I hate this term) is only a small part of the whole process of creating software. Even if you could reduce code production effort to 10% of the conventional approach: if code production is 20% of the overall cost, you have achieved an overall reduction of just 18% (which is still not bad, but not as spectacular as 90%).
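The arithmetic above as a quick sanity check (the percentages are the assumed figures from the text, not measurements):

```python
# Back-of-the-envelope check of the savings argument above.
# Assumed figures from the text: coding is 20% of the total project
# cost, and the MDA tool cuts coding effort to 10% of the
# conventional effort.
coding_share = 0.20   # coding's share of the overall cost
effort_left = 0.10    # fraction of coding effort that remains

overall_saving = coding_share * (1 - effort_left)
print(f"{overall_saving:.0%}")  # 18% -- far less spectacular than 90%
```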

Secondly, all automatic approaches require that some human has already done the thinking part. You’ll never get a class model generated from requirements written in prose (but you can create the DB from a class model), and you’ll never get code generated from the informal description of an algorithm (but you can create method stubs and calls from a sequence diagram).
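To make the point concrete, here is a toy sketch of the kind of transformation that does work – deriving a database table from an already machine-readable class model. The class, the type mapping, and the naming scheme are all made up for illustration; real MDA tools are far more elaborate, but the principle is the same: the model carries the human thinking, the generation is mechanical.

```python
# Toy model-to-DDL generator: the "model" is a Python dataclass
# standing in for a UML class; the transformation is purely mechanical.
from dataclasses import dataclass, fields

# Hypothetical mapping from model types to SQL column types.
TYPE_MAP = {int: "INTEGER", str: "VARCHAR(255)", float: "DOUBLE"}

@dataclass
class Customer:       # stands in for an entity in the class model
    id: int
    name: str
    discount: float

def ddl_for(model) -> str:
    """Derive a CREATE TABLE statement from the class model."""
    cols = ", ".join(f"{f.name} {TYPE_MAP[f.type]}" for f in fields(model))
    return f"CREATE TABLE {model.__name__.lower()} ({cols});"

print(ddl_for(Customer))
# CREATE TABLE customer (id INTEGER, name VARCHAR(255), discount DOUBLE);
```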

One (big) problem that has kept me at a distance from MDA approaches so far is that round-trip engineering often works rather poorly. If you always generate forward, there is no problem; if you need changes in the generated code reflected back into the model (refactoring comes to mind; try that with your MDA tool), you’re often out of luck.

Review “Frequently Forgotten Fundamental Facts about Software Engineering”

“Frequently Forgotten Fundamental Facts about Software Engineering” by Robert L. Glass summarizes in two pages at least 80 percent of the reasons why software projects fail, in an easy-to-understand – and easy-to-believe – manner.

Some conclusions:

  • “Good programmers are up to 30 times better than mediocre programmers”. “Most software tool and technique improvements account for about 5 to 30% increase in productivity and quality”.
    Then why is so much emphasis put on tools, and not on people?
  • “Efficiency is more often a matter of good design than of good coding”.
    Big Up-Front Design might be harmful, but watching the architecture “emerge” as you code is just as wrong.
  • “One of the two most common causes of runaway projects is unstable requirements”. The other is “optimistic estimation”.
    When “managing” means demanding lower estimates and more features, reality often refuses to budge.
  • “Estimation occurs at the wrong time” (that is, when the problem hasn’t been understood yet)
    But it is totally common to demand estimates after a cursory three-day glance at a management summary, and then to treat these numbers as if cast in concrete.

It takes only 10 minutes to read the paper, and if you have been working as a developer everything will seem awfully familiar – it isn’t only you who sees that the emperor wears no clothes.

Polishing mud

[This article is a little stab at the "state of the art" as it is often encountered in software development projects]

I enjoy reading books and articles about software engineering. Particularly I value the works of Tom DeMarco and Gerald M. Weinberg. There is clearly tons of advice available on how to improve team work, and how to create better software.

But I have yet to find a project that adopts more than 20% of what is advocated in the publications I read.

I mean – really. You can measure cyclomatic complexity according to McCabe and apply Agile methods or RUP, but as long as the code contains methods named “validateOrder()” that delete stuff in the database, the primary problems lie somewhere else. Where to start?

You wouldn’t polish your car before you hosed down the mud from your last SUV trip through inner Iceland. You wouldn’t use fine-grit sandpaper when you still have to take a millimeter or two off a table you built. You would start with the easy things that have a big effect first!

So, begin with the 20% of the effort that gives you 80% of the result:

  • Document design decisions to provide an overall picture for everybody (don’t waste your time with JavaDoc saying that “getFoo()”, well, “gets foo”)
  • Give classes, methods, parameters, and members meaningful names (that will avoid a lot of explaining via JavaDoc)
  • Keep methods and classes reasonably short (which furthers understanding, and will quite likely improve cyclomatic complexity, too)
  • Ensure that humans can easily understand what’s going on in the code (no, it is not sufficient the compiler is happy)
  • Remember: Programming is something that humans do (a lot of time is wasted when people have to reverse engineer unnecessarily difficult code)

These measures are easy to implement (especially since all IDEs support refactoring), and help a lot.
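The naming point from the list above, illustrated with the “validateOrder()” example – a hypothetical before/after sketch (the method names and the fake database are invented for illustration):

```python
# A tiny fake database so the example is self-contained.
class FakeDb:
    def __init__(self):
        self.items = ["fresh", "stale"]

    def delete_stale(self):
        self.items = [i for i in self.items if i != "stale"]

# Misleading: the name promises a check, but the body mutates state --
# exactly the kind of surprise the article complains about.
def validate_order(db):
    db.delete_stale()   # hidden side effect behind a "validation" name

# Honest: the name states what actually happens, so every call site
# is self-explanatory -- no JavaDoc essay required.
def remove_stale_items(db):
    db.delete_stale()

db = FakeDb()
remove_stale_items(db)
print(db.items)         # ['fresh']
```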

Book Review: “Beginning Java EE 6 Platform with GlassFish 3: From Novice to Professional”

“Beginning Java EE 6 Platform with GlassFish 3: From Novice to Professional” by Antonio Goncalves (Apress)

Though Java EE has never been an easy subject – 28 JSRs need to be covered – this book is a great introduction and tutorial to Java EE 6. I would rate it the best book on Java EE I’ve read so far, and I’ve read a few (often with considerable pain).

The book is suitable

  • for beginners, because the book is dotted with many easy-to-follow but usable examples that build on each other,
  • and for experts (for example, I have EE 1.3/1.4 experience), because the author points out the changes from previous Java EE versions in summaries and in each chapter

The chapters can be read separately for reference (enough redundancy has been provided), or as a well-choreographed read from beginning to end. The book strikes a good balance between overview and detail: the author spends enough time on complex subjects to enable the reader to understand what is important (for example, object-relational mapping is covered in several chapters) without getting lost in fine details that don’t help comprehension much and that “can be looked up later”.

Keep in mind, though, that for actual Java EE 6 implementations you will probably need more technical information. That is only natural and owed to the complexity of some JSRs: it is simply impossible to cover – for example – JavaServer Faces in a single chapter when there are entire books on the subject, and for good reason. But you’ll get pretty far with this book.

I have to admit I had never heard of author Antonio Goncalves before, but I sure hope he writes more books! His writing style is concise, clear, and easy to understand; what a relief. That really is a gift, and it makes reading up on EE 6 a breeze.

If you need a book on Java EE 6, buy this one first.

[Taken from my review at Amazon]

How Much Should a Developer Cost?

I recently went through the process of finding myself a new project and once more realized that at least 75% of the decision making for hiring a developer seems to be based on price. “You want – how much? Hey, I can easily find people who do it for 10 or 20% less!”

I am sure they can, for the whole “10 or 20% less”, if they just compare keywords like “Java”, “Oracle”, and “Maven”. Just as you can buy a car for $50,000, $20,000, or $5,000. While people are well aware that the price they pay has a substantial influence on the car they get, that fact seems to be ignored when it comes to hiring people.

The general assumption seems to be that once the overall skillset fits, all developers are more or less the same. Well, that assumption is not supported by fact. How much does productivity really vary, you might ask? 10%, or maybe even 30%?

In their book “Peopleware” (which I strongly recommend), Tom DeMarco and Timothy Lister report on annual public productivity surveys they conducted with over 300 organizations worldwide. Here is what they found out:

  • The best people outperform the worst by a factor of 10 (that’s a whole order of magnitude)
  • The best people are about 2.5 times better than the median person
  • The better-than-median half of the people has at least twice the performance of the other half

And the numbers apply to pretty much any metric examined, from time to finish to number of defects.

Source: “Peopleware”, Tom DeMarco and Timothy Lister (Dorset House)

Wow! I am pretty sure that even a variance in salary of “only” 30% for the same job is rare, that a factor of two is unheard of, and that an order of magnitude is a feverish dream. And yet the performance of the people you hire will probably vary by that much.

I originally had planned to continue with a long comment on what that means for the industry, for people who hire developers, and of course for the developers themselves. I am going to cut this short by simply suggesting a few things to employers and developers.

To employers

Keep the facts described above in mind when you hire people. If you pay attention to quality, even a larger difference in price is easily offset by substantially larger productivity – the research cited above indicates that this is the case.

Think about how to interview for quality aspects. Comparing keywords or checking for certifications will not be enough, and may even be misleading. What does it mean if somebody says “I know Java”? What is their approach to programming? How would they define “good code”? How do they ensure they stay a good programmer? What books do they read? What do customers say about their work?

Regarding certifications: certifications are mostly about being able to recall factual knowledge, and say nothing about practical experience and craftsmanship. In my personal opinion, certifications mostly ensure that you have looked into all corners of a specification, that you have seen the whole thing.

Taking Java as an example: it is good to know that you have Collections, Sets, Lists, and Maps in various implementations at your disposal. On the other hand, why would I need to memorize which package class Foo resides in – it is completely sufficient that my IDE knows, or that I know where to look. That kind of knowledge doesn’t give anyone an edge over others, while practical experience in a number of different projects really does.

To developers

Highlight not only the technologies you have mastered, but also the quality of your work and your personal traits. Explain your approach to implementing a solution, what you value in good software, the books and publications you read to stay up to date and to develop your skills.

Provide some customer references that highlight your craftsmanship: that you write well-structured, easy-to-understand, working code that comes with high-level documentation; that you are creative, innovative, professional, self-motivated, a team player, and easy to get along with.

On the Use of Tools

[Warning: This is going to be somewhat of a rant] My professional experience now spans 18 years (as of 2010) in companies of various sizes and industries.

Regarding the use of tools, I have found:

  • In companies with well educated employees and good practices, the use of tools mostly increases efficiency and, to a lesser degree, quality
  • In companies with not-so-well educated employees and suboptimal practices, the use of tools doesn’t change a thing (other than now a tool is used to produce the same bad results)

And that is not really a surprise.

Example: the creation of use cases. You get good use cases when the creator has thought about:

  • What is the goal of the whole thing, its purpose?
  • Who is going to read it (in the sense of “what is their role”)?
  • What information does that person need in order to do their job?

That is the education part. It is very helpful, too, when the company provides a standard form for use cases and explains in detail what should go into which section, for what reason and purpose. That could be called process or practice.

This all may sound trivial, but check for yourself: most use cases look like “somebody” had to fill out “some form”, just to get rid of all that empty space.

It gets especially colorful when a number of people have been writing use cases without clear guidance. Each person has their own ideas about what information goes into which section of the form.

Additionally, the level of detail and context provided varies wildly. While some will clue you in that “an order is going to be updated in the database”, the next one will jump straight into the matter and tell you to “change fields foo and bar in table baz”.

Both descriptions might lead to the same implementation, but the latter approach leaves you to reverse engineer the original intention, and robs you of the opportunity to choose a different, possibly better approach to achieve the same result.

  1. To write a good use case you don’t need a tool. Educating your employees will be your best investment, and will provide the largest benefit.
  2. The use of a tool will not make a badly written use case better. See point 1. Quality won’t improve just by using a tool. What you can and should expect from a tool: to make your processes more efficient, for example by improving the organization of the information you have (better overview, faster access, new views)
  3. It’s easy to write a bad use case with a tool. Again, see points 1 and 2.

You may replace “use case” with “code”, “DB design”, whatever.