Tag Archives: management

How Much Should a Developer Cost?

I recently went through the process of finding myself a new project and once more realized that at least 75% of the decision making for hiring a developer seems to be based on price. “You want – how much? Hey, I can easily find people who do it for 10 or 20% less!”

I am sure they can, for the whole “10 or 20% less”, if they just compare keywords like “Java”, “Oracle”, and “Maven”. Just as you can buy a car for $50,000, $20,000, or $5,000. While people are well aware that the price they pay has a substantial influence on the car they get, that fact seems to be ignored when it comes to hiring people.

The general assumption seems to be that once the overall skillset fits, all developers are more or less the same. Well, that assumption is not supported by fact. How much does productivity really vary, you might ask? 10%, or maybe even 30%?

In their book “Peopleware” (which I strongly recommend), Tom DeMarco and Timothy Lister report on annual public productivity surveys they conducted with over 300 organizations worldwide. Here is what they found out:

  • The best people outperform the worst by a factor of 10 (that’s a whole order of magnitude)
  • The best people are about 2.5 times better than the median person
  • The better-than-median half of the people has at least twice the performance of the other half

And the numbers apply to pretty much any metric examined, from time to finish to number of defects.

Source: “Peopleware”, Tom DeMarco and Timothy Lister (Dorset House)

Wow! I am pretty sure that even a variance in salary of “only” 30% for the same job is rare, a factor of two is unheard of, and an order of magnitude is a feverish dream. And yet the performance of people you hire will probably vary by that much.

I originally had planned to continue with a long comment on what that means for the industry, for people who hire developers, and of course for the developers themselves. I am going to cut this short by simply suggesting a few things to employers and developers.

To employers

Keep the described facts in mind when you hire people. If you pay attention to quality, even a larger difference in price is easily offset by substantially larger productivity – the research cited above shows just how large that productivity spread is.
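To make that concrete, here is a minimal back-of-the-envelope sketch in Java. The daily rates and productivity factors are hypothetical (and deliberately far below the 10x spread reported above); the point is only that cost per unit of work, not cost per day, is what matters.

```java
// Illustrative sketch with hypothetical numbers - not data from any survey.
public class CostPerFeature {
    public static void main(String[] args) {
        double cheapRate = 500.0;        // hypothetical daily rate of the "cheap" developer
        double goodRate = 650.0;         // 30% more expensive
        double cheapOutputPerDay = 1.0;  // features per day, hypothetical
        double goodOutputPerDay = 2.0;   // "only" twice as productive

        System.out.printf("Cheap developer: %.0f per feature%n", cheapRate / cheapOutputPerDay);
        System.out.printf("Good developer:  %.0f per feature%n", goodRate / goodOutputPerDay);
        // Prints 500 vs. 325 - the 30% higher rate is more than offset.
    }
}
```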

Think about how to interview for quality aspects. Comparing keywords or checking for certifications will not be enough, and may even be misleading. What does it mean if somebody says “I know Java”? What is their approach to programming? How would they define “good code”? How do they ensure they are a good programmer? What books do they read? What does the customer say about their work?

Regarding certifications: Certifications are mostly about being able to recall factual knowledge, and say nothing about practical experience and craftsmanship. In my personal opinion, certifications mostly ensure that you have looked into all corners of a specification – that you have seen the whole thing.

Taking Java as an example: It is good to know that you have Collections, Sets, Lists, and Maps in various implementations at your disposal. On the other hand, why would I need to know whether class Foo resides in package java.foo or javax.foo – it is completely sufficient that my IDE knows, or that I know where to look. That kind of knowledge doesn’t give anyone an edge over others, while practical experience in a number of different projects really does.
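A small illustration of the difference (the word-count example below is made up): knowing that a Map is the right structure for counting is the valuable part; where HashMap lives is something the IDE fills in for you.

```java
import java.util.HashMap; // the IDE adds this import - nobody needs to memorize the package
import java.util.Map;

public class WordCount {
    public static void main(String[] args) {
        // The useful knowledge: a Map gives constant-time lookups, ideal for counting.
        Map<String, Integer> counts = new HashMap<>();
        for (String word : new String[] { "foo", "bar", "foo" }) {
            counts.merge(word, 1, Integer::sum);
        }
        System.out.println(counts); // {bar=1, foo=2}
    }
}
```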

To developers

Highlight not only the technologies you have mastered, but also the quality of your work and your personal traits. Explain your approach to implementing a solution, what you value in good software, the books and publications you read to stay up to date and to develop your skills.

Provide some customer references that highlight your craftsmanship: that you write well-structured, easy-to-understand, working code that comes with high-level documentation, and that you are creative, innovative, professional, self-motivated, a team player, easy to get along with.

On the Use of Tools

[Warning: This is going to be somewhat of a rant] My professional experience now spans 18 years (as of 2010) in companies of various sizes and industries.

Regarding the use of tools, I have found:

  • In companies with well-educated employees and good practices, the use of tools mostly increases efficiency and, to a lesser degree, quality
  • In companies with not-so-well-educated employees and suboptimal practices, the use of tools doesn’t change a thing (other than that a tool is now used to produce the same bad results)

And that is not really a surprise.

Example: the creation of use cases. You get good use cases when the creator has thought about:

  • What is the goal of the whole thing, the purpose?
  • Who is going to read it (in the sense of “what is their role”)?
  • What information does that person need in order to do their job?

That is the education part. It is very helpful, too, when the company provides a standard form for use cases and explains in detail what should go into which section, and for what reason or purpose. That could be called process or practice.

This all may sound trivial, but check for yourself: most use cases look like “somebody” had to fill out “some form”, just to get rid of all that empty space.

It gets especially colorful when a number of people have been writing use cases without clear guidance. Each person has their own ideas about what information goes into which section of the form.

Additionally, the level of detail and context provided varies wildly. While some will clue you in that “an order is going to be updated in the database”, the next one will jump straight into the matter and tell you to “change fields foo and bar in table baz”.

Both descriptions might lead to the same implementation, but the latter approach leaves you to reverse engineer the original intention, and robs you of the opportunity to choose a different, possibly better approach to achieve the same result.
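As a sketch of that difference (all names below – OrderService, markAsShipped, and the foo/bar/baz columns – are hypothetical): the intent-level description leaves the implementation free, while the table-level description has already made the decision for you.

```java
// Intent-level use case text: "the order is marked as shipped in the database".
// The reader knows the goal and can pick the best implementation.
interface OrderService {
    void markAsShipped(long orderId);
}

// Implementation-level use case text: "change fields foo and bar in table baz".
// The original intent has to be reverse-engineered from the column names.
class JdbcOrderService implements OrderService {
    @Override
    public void markAsShipped(long orderId) {
        // e.g. "UPDATE baz SET foo = 'SHIPPED', bar = CURRENT_DATE WHERE order_id = ?"
    }
}
```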

  1. To write a good use case you don’t need a tool. Educating your employees will be your best investment, and will provide the largest benefit.
  2. The use of a tool will not make a badly written use case better. See point 1. Quality won’t improve just by using a tool. What you can and should expect from a tool: to make your processes more efficient, for example by improving the organization of the information you have (better overview, faster access, new views).
  3. It’s easy to write a bad use case with a tool. Again, see points 1 and 2.

You may replace “use case” with “code”, “DB design”, whatever.

Agile Projects in Design Troubles

More and more reports indicate that larger agile projects are running into design problems. In order to deliver functionality fast in small increments, code gets written without too much thinking about overall design.

The code sinks deeper and deeper into “technical debt” (I like that term), resulting in slower and slower implementation of new features, and increasing costs per feature.

It seems to me that the term “architecture” is pooh-poohed by proponents of agile methods, and frequently derided as “big upfront design” that is never finished.

Even big names in software engineering only very carefully suggest (so as not to look like a software engineering relic in a modern, agile world) that thinking about overall design before you start is not generally a bad idea.

I do agree with the statement that “big upfront design” often happens, delays implementation, and is never finished. But that does not mean you should have no upfront design at all, or that you should not write any code before the design is completely finished.

When you start implementing software without planning and agreeing on a general layout of your application (aka architecture), you keep adding pieces that eventually will form a structure, one way or another.

That structure could be called an architecture – some call it an “emergent architecture” – but isn’t that just a nice technical term for something that rather “somehow happened” as opposed to “has been systematically constructed”?

Having a structure makes it much easier to guess where functionality has been implemented (for example, to fix a bug or add a feature), just like sorting a list makes it easier to find an item.

Sorting is an additional effort that doesn’t even change the items. But it adds information to the data that is useful. When you know the overall picture, it is fairly easy to determine where a puzzle piece needs to go.
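The same analogy in code, as a minimal sketch: sorting is extra effort that changes none of the items, but the added structure makes the lookup dramatically cheaper – just like an agreed architecture tells you where to look for a piece of functionality.

```java
import java.util.Arrays;

public class SortedLookup {
    public static void main(String[] args) {
        int[] items = { 42, 7, 19, 3, 88, 51 };

        // Sorting is additional effort and does not change any item...
        Arrays.sort(items);

        // ...but the added structure lets us find an element in O(log n)
        // instead of scanning the whole array.
        int index = Arrays.binarySearch(items, 51);
        System.out.println("Found 51 at index " + index);
    }
}
```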

The compiler is happy with anything that compiles; architecture is mostly there to help humans to understand how the system works and how it can be extended.

So: Architecture is there to help you to get the job done, not another bump in the road that needs to be flattened.

Do spend some time on thinking about architecture when you know enough about the requirements to do so.

Make sure that newly added code fits that architecture. Adapt the architecture if it doesn’t fit new requirements. Treat architecture like a feature of the system.

The Cost of Quality: Two Hands, Two Bodies

Paying attention to software quality does pay off. Anybody who ever had to understand, repair, or extend existing software knows that lack of documentation (preferably above the code level), architecture, and coding guidelines can turn maintaining even small amounts of code into a lengthy, risky, and thus costly adventure.

Then why is software quality the first thing tossed overboard, long before scope or – often arbitrary – deadlines?

Well, software works even with very low quality, if left untouched.

Having worked with a few large companies I found that the people benefiting from higher software quality are often not the same people that develop the software originally. Here is one concrete, real-world example.

Department A is responsible for initial software development. Department B is responsible for maintaining the software. B would clearly benefit from good documentation, architecture, and so on once developers other than the original team are making changes to the software. But it is A that has to pay for additional quality that has little or no value to A.

What happened? Shortcuts taken by A mushroomed into exponentially rising costs for even the smallest changes by B, much to the dismay of the customer (department B, that is). Unfortunately there was nobody at the customer overseeing the combined costs of A and B in the context of the project they worked on.

That this won’t work is obvious – A has no incentive to spend extra money, and B has no influence on original development. Sounds contrived, unrealistic, and exaggerated? I wish it was…

So: If you find yourself in a similar situation try to find a sponsor that sees both sides, and inform him or her that investing in software quality pays off when the whole process is taken into account. Or at least inform somebody responsible for departments A and B that major and easily avoidable money wasting is in progress…

Why Racing Probably Will Slow You Down

Many programmers will have gone through the following cycle.

The project time line has been tight from the beginning. Finally the specification gels into a vaguely discernible form, though pretty late. No time any more for architecture, design, and generally engaging the brain before typing. The deadline must be made! Everybody rushes to the goal, best practices and common sense are trampled into the ground, and hackers feel finally freed from those bureaucrats that call themselves architects.

Inevitably the project is late anyways, and the code is a huge mess that will be cleaned up when there is time. That is: never. Time is money; where should the money come from?

The customer? “Hey, the stuff finally works, why would we pay for beautifying the code? And, by the way, don’t you guys advertise high coding standards?”

Your own company? “Well, the budget is blown, and the customer won’t pay for it! And, by the way, aren’t you guys using best practices?”

A good solution would have taken maybe 20 or 30% longer than the dictated estimate. In the end the project consumes the same amount of time. Additionally you get a much poorer result, angry customers, frustrated programmers, and the kind of code that makes you want to switch projects before the next release. Cutting corners when your experts beg for some time to think before coding will buy you nothing in the long run.

Software Industry != Software-Industrialization

In my opinion somebody has been seriously confusing the production of material goods with the development of software.

Let’s look at the example of Lego bricks. A lot of effort goes into setting up injection molding machines and a distribution network. Once the complex machinery is in place you produce millions of identical items by employing the same steps over and over, with very little human intervention.

This is not how software is produced. Duplication and distribution are the smallest problem. In the case of software created for a specific project and purpose, the product might be installed only once. Pretty much all of the effort goes into the specification, design, and development of a single unique product.

Where is the industrialization in that? What is industrialization, anyways? Making a single item is expensive. Constructing a machine for creating a single item is even more expensive. But using that machine to make a lot of items reduces the cost per item enormously.
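A minimal sketch of that arithmetic, with made-up numbers: the machine is a huge one-time investment, but amortized over enough items it wins by a wide margin.

```java
// Hypothetical costs, purely for illustration of the amortization effect.
public class CostPerItem {
    public static void main(String[] args) {
        double handMadePerItem = 100.0;    // crafting one item by hand
        double machineCost = 1_000_000.0;  // one-time cost of building the machine
        double machinePerItem = 0.10;      // marginal cost per item once the machine runs

        for (int n : new int[] { 1, 1_000, 1_000_000 }) {
            double costPerItem = (machineCost + n * machinePerItem) / n;
            System.out.printf("%,9d items: %,12.2f per machine-made item (hand-made: %.2f)%n",
                    n, costPerItem, handMadePerItem);
        }
    }
}
```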

So what is software industrialization then? The best analogy I can think of is the use of tools like IDEs, debuggers, or maybe even MDA. And using standardized methods, tools, and processes. But the advantages of applying all this only go so far; creating software is done mostly by working with the head, not with tools.

This means that developing software is not an industrial process performed in an automated factory, but rather tool-assisted handcrafting in a workshop, gladly employing sophisticated methods. This is not a bad thing; better methods and tools will help a lot to improve the quality of software, and to speed up the development process.

Breaking Up Architecture and Development

From practical experience, the emphasis in “breaking up architecture and development” is on “breaking” – it does not work.

A customer I have been working for decided to split software development across two different organizational branches: project management, specification, and architecture on one side and software development and testing on the other. The transition to this organizational structure was made in a “jump into cold water” fashion.

The idea was that the high level and therefore expensive work would be done in Germany and the low level cheap work would be offshored to a “development factory” (“industrialization” was another frequently heard term, but that is another story).

This did not pan out as planned, for a number of obvious and maybe not so obvious reasons:

  • Architects in that company are often either programmers totally out of touch with current developments, clearly less than decent programmers, or not programmers at all. IMNSHO an architect must have years of programming experience and must be a really good programmer with clear ideas on what kind of programming style promotes high software quality.
  • Estimates were done independently by the architecture and the development team. Since both teams had different ideas about how the software would be designed, and because these ideas never got synchronized, the overall estimate often was the sum of planned work for substantially different approaches.
  • The architects had no real influence on the programmers. If the development team did not like a concept, they simply priced it out of range (“if we do it like this (solution that we don’t like), it will cost 30% more”). Project management did not have the necessary insight to call shenanigans or to understand and act on long-term consequences.
  • When architects and programmers are separate persons a huge amount of information must be transferred in the form of papers, talks, and meetings. It is much more natural and efficient to let the senior developers, possibly with some guidance, develop the architecture, and then let them move on to programming as the project continues.
  • Developers in India are not just “coders”, even if that management idea of “low-level programmers” existed. They can and want to do architectural work, too. Intelligent programmers will resist micro-management on the code level.
  • The separation felt and worked like contracting development out to an external, untouchable company. Estimates had to be accepted as if carved in stone; risk surcharges, profit margins, and baseline costs were added, and there was no direct talk with the programmers. Code reviews weren’t even on the agenda.
  • And – the whole thing just does not work out financially. The plan was to specify everything down to a pretty low level and then let the cheap “code drones” take over. The problem: You spend 50% of the time or more on expensive experts working out a detailed specification and then try to get your money back by executing the last 50% or less with cheap labor. And then you find that instead of ten programmers you need fourteen plus two group supervisors plus one off-site manager to get the same job done, and your customer is not willing to deal with documents written in English. So much for comparing hourly rates by dividing one by the other and drawing the wrong conclusions (see the back-of-the-envelope sketch after this list).
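A rough back-of-the-envelope sketch of that last point, with hypothetical rates and project size; the head counts (ten local programmers versus fourteen offshore plus two supervisors plus one off-site manager) and the 50% specification share come from the description above, everything else is made up. Communication overhead, translation, and rework are not even included.

```java
// Hypothetical numbers - a sketch, not a real calculation.
public class OffshoreComparison {
    public static void main(String[] args) {
        double projectPersonDays = 2_000;  // hypothetical project size
        double localRate = 800;            // per person-day, hypothetical
        double offshoreRate = 300;         // per person-day, hypothetical
        double specShare = 0.5;            // half the effort stays with the expensive experts

        // Everything done locally.
        double allLocal = projectPersonDays * localRate;

        // Split model: expensive detailed specification first, then the remaining work
        // offshore with 17 people (14 + 2 supervisors + 1 manager) instead of 10.
        double specCost = projectPersonDays * specShare * localRate;
        double offshoreEffort = projectPersonDays * (1 - specShare) * (17.0 / 10.0);
        double splitTotal = specCost + offshoreEffort * offshoreRate;

        System.out.printf("Naive savings (rate ratio only): %.0f%%%n",
                (1 - offshoreRate / localRate) * 100);
        System.out.printf("Actual savings in this sketch:   %.0f%%%n",
                (1 - splitTotal / allLocal) * 100);
    }
}
```

Whether even that remaining margin survives the overhead of documents, meetings, and rework is exactly what the project described above called into question.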

As you might have guessed, the split between architecture and development has been reversed after a year of pain and suffering, and surely a lot of money and enthusiasm lost.

Why Doesn’t Management Fight Entropy?

Software development projects pretty much always run longer than estimated, and the quality often is poor. Books about software development have been pointing out exactly the same problems over and over again since the 70s, and those books are still as valid as ever. Many people I know wholeheartedly agree that the described problems are a common pattern in the IT business.

So how come the situation never changes? I had parked this posting for a while when I came across an IEEE article about lying in the IT business (“Lying on Software Projects”, IEEE Software November/ December 2008) and found there was an interesting overlap.

Frequency of lying:

  • Cost or schedule estimation: 66%
  • Status reporting: 65%
  • Political maneuvering: 58%
  • Hype: 32%

Who is lying, and why (Cost / Status / Politics / Hype):

  • Management (53% / 49% / 44% / 31%)
  • Project Lead (48% / 54% / 34% / 32%)
  • Developer (45% / 30% / 19% / 29%)
  • Marketing (40% / 20% / 26% / 36%)
  • Customer (11% / 12% / 13% / 16%)

From these numbers I gather that it is most often management, from the project manager upwards, that is lying, and that most lying involves cost/schedule and project status. Is anybody who knows the IT world from the inside surprised?

(A) There is always pressure on estimates and schedules to beat the competition and to please management (knowing that the difference between what the customer is willing to pay and what the developers are willing to commit themselves to is income). (B) Later in the project there is pressure to hide cost or schedule overruns, a subject closely linked to status reports.

What has that to do with fighting software entropy? There are three basic screws you may turn to adjust project parameters: scope, cost/ time (the number of people aka “resources” is part of this aspect), and quality.

Scope rarely can be discussed. For sure not at the beginning, because nobody wants to try to win a bid by telling the customer “OK, we can do it, but only if you drop this, this, and that requirement”. Later in the project cutting scope still is awkward because you can’t do it without discussing it with the customer, or the customer noticing if you skipped the discussion part.

Often there are hard constraints on cost and time at the beginning: You have to (or figure you have to) beat somebody’s price, and there is a hard date when the project must be finished. Often enough cost and/ or time slips later in the project, but this is always very visible to management and customer, and nobody likes that.

That all makes quality an easy target. The problem already starts with the definition and perception of software quality. For some, the absence of (noticeable) bugs is pretty much the only criterion for quality. The next person cares about maintenance and extension, and therefore about programming style and architecture. But the absence of that kind of quality is pretty abstract for management, customers, and regrettably many developers as well.

So what happens? In order to cut cost and development time, way too often proper software design is tossed overboard first (and I saw that happen while developers were cheering). Enhancing software quality seems to be an extra, some sort of beautification that can be lived without. And it can be left out, but not without very real consequences in the future (which is what this blog is all about).

Lack of awareness. Many project managers don’t know all that much about the presence, absence, or effects of architecture and programming style. How many projects do you know where the project lead or somebody appointed for the task regularly checks the source code for software quality? And if you know some projects: were these checks about formatting and naming conventions, or about architecture, coding philosophy, and programming style?

If management is not aware of software quality they can’t pursue it, plan for it, educate people on it, foster it, advertise it to their own managers, sell it to the customer. Software quality is worth something, and not everybody has it! Others (competitors?) may claim they do, but ask them what software quality is in their eyes, and educate yourself about what it should be, and what the advantages are.

Low software quality does not poke you in the eye like bugs noticed by a customer. But there are a number of symptoms that are easy to spot even without ever looking at the code:

  • It becomes more difficult from release to release to get features implemented (the cost and time per function increases strongly).
  • Reasonable features are being rejected by the developers because they know what trouble the implementation would cause.
  • Developers tell you it makes no sense to take on more developers because they “don’t know the internals” (of course we all know that programming does not scale the same way as filling sand bags does, but it should not be impossible to take on new programmers or replace some).
  • The cost for even small changes becomes so large that even when all details can be explained it simply doesn’t feel right, and it is embarrassing to tell the customer.
  • Programmers strongly avoid or plain refuse doing any work on certain parts of the application (knowing any change to that cesspool of code will break things in an unforeseeable way).
  • Programmers are leaving the project in droves.

While this may sound like the developers are responsible, they quite often are not: it can’t be expected that sufficient time gets spent on architecture and design when the only goal is “to have something deliverable by date x”. On the contrary, I have been working on projects where the programmers “ganged up” and “secretly” refactored code, sometimes even on their own time, because they felt they never would be granted the time to change something that turned out to be awkward, an obstacle, or a constant pain in the neck.

Programmers must know about and code according to good programming standards, and management must allocate the resources for this. This includes selecting and/or educating programmers with the right skills, and factoring software quality into estimation, planning, implementation, and marketing.

Conflicting goals. I worked for a company where one unit was responsible for initial software development and another for operations, maintenance, and extension. I know that this is not exactly an exotic situation. The problem was that there was nobody watching the project who was high enough in the hierarchy to oversee both initial development and continued maintenance.

Good architecture costs some extra effort, no doubt about that. Good architecture and the improved software quality that comes with it often doesn’t get you that much of an advantage in the first release of a piece of software. The payoff comes later, when changes and additions are easier and the code is more understandable for new members of the team (just to name a few points).

In our case we just could not get the customer to pay for some additional architectural effort. They even agreed that it would be better to put in some more effort for a better architecture, but since they would not receive the benefits they simply could not justify the expenditure. Of course there would have been overall savings, but no, the decisions were made by two different units with two different goals. And, of course, we got blamed again and again for high development costs when the software was extended over more than a dozen releases.

[This article very likely is going to be extended, and/ or broken up in smaller units]

Is There a Job Called “Coding”?

“Management has announced that project management and application design will be done locally while coding will be done offshore.”

This announcement conveys a number of messages.

The tone implies that coding essentially is some type of basic work. The thinking has all been done; now the results merely have to be written out in full and typed into a terminal, just like entering data from a fax or letter. In reality, unless application design is done to a very fine level of detail, writing source code requires a lot of intellectual work to convert natural-language specifications into a running program.

The announcement implies, too, that coding is some kind of low-level work. The architect has designed the building. Now the building company takes over, and while the construction of foundation, walls, and roof requires specific skills that an architect might not have, it is quite clear there is a difference in the attributed value of the work, both intellectually and financially. Writing source code, however, requires very specific knowledge of ever-changing APIs and frameworks. Apart from that, software architecture and implementation are never as independent as planning and building a house. While the latter is a pretty tried-and-true standard affair, the former means solving a new problem with new means almost every time.

Since the company employs business engineers who capture requirements and produce a specification, architects who design the internal structure, and developers who write the software, “coding” must be what developers do, and coding is going abroad. Not too happy a perspective for “coders”, whose value has been downgraded while being informed they will be made redundant. The offshore “coders” probably don’t see themselves as data entry personnel, either. For example, I found Indian programmers to be at least as capable (and ambitious) as their local counterparts.

The offshoring of “coding” work encourages management to relabel developers as architects, which might be what they are – or not. Often, the following questions are never answered: What does an architect typically do? What qualifications must an architect have? Does the company provide formal training, or are architects simply appointed? This non-definition of what an architect is leads to the employment of architects who have never done any serious programming – experience that, in my professional opinion, is a sine qua non.

Management does not quite understand what their software developers do when they talk about “architecture” (“planning the system down to the details”) and “coding” (“typing in source code”), especially in the context of offshoring. Of course development work can be split up like that. But then architecture would comprise 80% of the effort, and not much could be saved by getting 20% of the work done at low cost. Additionally, problems and insights during implementation often prompt an adaptation or change of the architecture, requiring a more iterative approach than the waterfall model of “design here, code overseas” provides.

Going Offshore – Things to Keep in Mind

One of my clients decided to offshore development work. The idea is that high level, well paid work – maintaining direct contact with the customer, capturing requirements, specifying and designing the application – gets done locally while the coding gets done offshore for cheap.

This didn’t quite work out as planned, for reasons that were not too difficult to anticipate. The scenario described above touches on a number of issues that I would like to address in a series of blogs.

The anticipated subjects:

  • “High level work” – What separates high level from low level work? Why is high level (= high cost) work not offshored as well?
  • Separation of architecture and development – Does this provide benefits? Improve quality? See “Breaking Up Architecture and Development”.
  • Going offshore to reduce development cost – How much cheaper is offshore labor really? What additional costs are incurred? See “Comparing Work by Cost Only”.
  • “Coding” – Is there a job called “coding”? What does it consist of? What happens to the onshore “coders”? See “Is There a Job Called Coding?”.
  • Ethical aspects – What are the implications of producing offshore in low wage areas of the world and selling the product in high price markets? What does it mean for the current workforce?
  • Prerequisites – What kind of projects are suitable for offshoring? What processes should be in place before going offshore is attempted? See “Prerequisites for Offshoring”.
  • Economical aspects – Are there other, better ways to reduce development cost? What does “better” mean in this context? See “Comparing Work by Cost Only”.

What does that have to do with Software Entropy? Software quality gets influenced by many factors, and the longer I work in the field the more I recognize that most problems in software development aren’t technical. Management decisions have a huge impact, and going offshore is one of them.