OK, I know what you’re thinking: I’ve either gone mad or I’ve lost the ability to count. 81 is better than 80, and so are 82, 83, 84 and everything after them. 80% therefore cannot be better than 100%. Or can it?

Actually, I believe it is. If you’re willing to read on and possibly discover that I’m not just a crazy woman, I’d like to explain.

We’re taught from school that we must always give 100%. Later in life, when we undergo work performance reviews, our output is compared against targets and our bonuses depend on hitting 100% of the objective. That’s great if you’re a mathematician, but the reality is that we’re human. For a start, one person’s 100% is very different from another’s: Usain Bolt giving 100% in a 100-metre race will achieve very different results from you or me.

Anyway, I digress a little. You see, working in Technology Management Strategic Consulting, I’ve learnt two great truths:

1) Great people make mistakes.

2) Yes, to really screw things up does indeed take a computer!

So, mistakes are inevitable in the technology industry; the only variables are how often they happen and how big they are. We can’t engineer mistakes out, no matter how much the process gurus try to convince us otherwise. Sorry to my old clients: I was once an IT process guru myself, and I used to insist that we could remove all errors. I was so, so wrong.

You see, I’ve noticed that the faster people are running, the more frequent the mistakes: they become more likely to occur and their impact grows. The need to give 100% inevitably means that something will be missed in the rush. We quite simply can’t get 100% of things right 100% of the time. It’s not possible, no matter what others tell you. It’s back to points one and two above.

Mistakes are inevitable, and our reliance on computers opens the door to further error. We’re led to believe that computers don’t make mistakes. That’s right, but it’s also wrong. If I told a computer that the formula to add two numbers together was (x − y), then every time it added two numbers together it would, without fail, deduct y from x. Every time.
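To make the point concrete, here is a minimal sketch in Python (purely illustrative; the function name `add` and the numbers are my own) of a computer faithfully carrying out a wrong instruction:

```python
def add(x, y):
    # The instruction we meant to give was x + y;
    # this is the instruction we actually gave:
    return x - y

print(add(2, 3))   # we expect 5; the computer dutifully returns -1
print(add(10, 4))  # we expect 14; the computer dutifully returns 6
```

The machine never deviates; it simply repeats our mistake with perfect consistency.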

A computer is just an expensive, stupid box that does nothing we don’t tell it to (I see the AI geeks at the back squirming). If we’re rushing to get things done, never stopping, never pausing to check, never allowing for our own fallibility, we will give a computer the wrong instruction. A software program, after all, is just a set of very complex instructions.

If we’re tired from running from Point A to Point B all the time, then we are in all likelihood going to increase the frequency, probability and impact of mistakes. A mistake needs fixing: once detected, it has to be checked and resolved. We’ve applied corporate language to this; nowadays we don’t call it defect or bug fixing, we call it refactoring. It sounds so much nicer and more constructive.

Smart software companies and technology houses get this. They understand that fixing something later in the process is more expensive than taking the time upfront. Accepting that we cannot run at 100% all of the time is part of that.

In part, the new Agile framework has caused some of the issues. I’m not against Agile (we needed to fix the old five-phase waterfall model), but we talk a lot about velocity. A team’s velocity reflects how much work it completes in a given period, and therefore how quickly delivery dates can be met. That absolutely makes sense. If I work on this, what’s my team’s velocity and therefore how long would it take?
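As a rough illustration (a sketch of the usual Scrum arithmetic; the story points and backlog size are invented), velocity is typically the average number of story points completed per sprint, and it feeds a simple delivery forecast:

```python
# Hypothetical story-point totals from the last few sprints (invented numbers)
completed = [21, 24, 18, 23]

velocity = sum(completed) / len(completed)    # average points per sprint
remaining_points = 110                        # invented backlog size
sprints_needed = remaining_points / velocity  # naive delivery forecast

print(f"Velocity: {velocity:.1f} points per sprint")
print(f"Estimated sprints to finish: {sprints_needed:.1f}")
```

The forecast is only ever as good as the velocity figure behind it, which is exactly why trouble starts when that single number becomes the thing to be pushed upward.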

I’m not for one moment suggesting that we abandon all metrics and KPIs; they absolutely have a role to play. One of those roles should be measuring just how fast we are running, to ensure that we are not all charging around at 100% speed and making mistakes.

But the old-fashioned mentality is still to exceed expectations no matter what. The focus rapidly shifts from the quality of the output to how many user stories can be delivered in a set time period, and therefore to what a manager can do to push that velocity up. Then the inevitable happens: quality falls, costs go up, and dissatisfaction grows.

This issue doesn’t only apply to software engineers, though; it also applies to our hardware, network and support team members. Trying to complete too much in a day inevitably leads to mistakes, as the small things get missed in the desire to get more done. The obsession with hitting targets, whether set for them or self-imposed, leads to a rush, and we know where rushing leads when we consider points one and two again.

I’ve observed that teams deliver the best results when deliberate measures are applied to slow things down a little: individuals are not measured purely on the speed of delivery, and first-time accuracy is valued over tick lists.

The net result of this slow-down is that the need to refactor drops and, in turn, the true cost of delivery improves. The engineers are happier and more relaxed, they have time to learn and grow, and end-user/customer satisfaction grows.

If I see anyone running around, in technology or any other trade, constantly pushing to get more done and done quicker, I worry. I know the resulting impact: mistakes increase, quality arrives more slowly, and the service or product ultimately costs more.

When measuring someone else’s output (it is a subjective measure), I look to see whether they’re trying to run too fast. If the culture of the team or organisation is about measuring velocity first, I know there will be problems, and I know to look at refactoring rates, costs, burn-out and end-user/customer satisfaction.

I’ve learnt that 80% really can be better than 100%.