What Metrics Should We Have?

I am sometimes asked “What metrics can we put in place to know that our people are doing a good job?”

They never like my answer. I just want three metrics to start:

1. Are our end users happy?
2. Are our customers (the people paying the bill) happy?
3. Does our team have joy?

They look at me like I am insane.

“That is not nearly enough,” they reply emphatically. “We measure hundreds of things. You can’t manage it if you can’t measure it!”

“I agree,” I reply.

“Stop managing and start leading.”

Moore’s Law

Note: I am posting this on the off chance you haven’t read something like it before.

Gordon Moore1 observed in 1965 that the surface area of a transistor was being reduced by roughly 50% each year. The press called this observation Moore’s Law. It is especially significant to us because the transistor is the fundamental building block of the integrated circuit, which is in turn the foundation of computation technology.

He predicted in 1975 that, for the foreseeable future, computer chip density would double every two years and with it computer power. At the same time, Moore observed that the cost to manufacture the computer chips was remaining relatively constant. If you bought your first new microcomputer in 1975, according to Moore’s Law you have observed the following: 

1977 – New computers are 2 times faster than mine in 1975
1979 – New computers are 4 times faster than mine in 1975
1981 – New computers are 8 times faster than mine in 1975
1983 – New computers are 16 times faster than mine in 1975
1985 – New computers are 32 times faster than mine in 1975
1987 – New computers are 64 times faster than mine in 1975
1989 – New computers are 128 times faster than mine in 1975
1991 – New computers are 256 times faster than mine in 1975
1993 – New computers are 512 times faster than mine in 1975
1995 – New computers are 1024 times faster than mine in 1975
1997 – New computers are 2048 times faster than mine in 1975
1999 – New computers are 4096 times faster than mine in 1975
2001 – New computers are 8192 times faster than mine in 1975
2003 – New computers are 16,384 times faster than mine in 1975
2005 – New computers are 32,768 times faster than mine in 1975
2007 – New computers are 65,536 times faster than mine in 1975
2009 – New computers are 131,072 times faster than mine in 1975
2011 – New computers are 262,144 times faster than mine in 1975
2013 – New computers are 524,288 times faster than mine in 1975
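
The arithmetic behind the table is just repeated doubling, once every two years. A few lines of Python (my sketch of the calculation, not part of Moore’s observation) reproduce it:

    # Moore's Law as repeated doubling: one doubling every two years,
    # starting from a 1975 baseline machine.
    BASE_YEAR = 1975

    for year in range(BASE_YEAR + 2, 2015, 2):
        doublings = (year - BASE_YEAR) // 2
        speedup = 2 ** doublings
        print(f"{year} - new computers are {speedup:,} times faster than mine in {BASE_YEAR}")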

Do you have enough computational power to solve your business problems yet? 

Practically speaking, Moore’s Law has been in operation throughout the entire careers of most people in the software development industry. Our computer hardware is threatened with obsolescence every few years.

You can also think of Moore’s Law another way: in 1975 you would have paid $67,000 for computational ability you can buy today for a few cents. In the world of computers, the golden triangle of faster, smaller, and cheaper actually holds.

Moore’s Law impacts us in ways we often don’t realize at first. For example, answer the following question: How many computers do you own? Before you jump to a quick answer, think for a moment. How many computers are in your car? In your home? In your kitchen alone (the microwave, stove, refrigerator, and toaster oven all likely have computers inside)? How many computers are you carrying on you right now?

Moore’s Law affects a lot more than just the “personal computer.” It has created demand for customized computers and software for just about every electronic device imaginable: more devices, more computers, more software, more complexity in the technology that runs our society.

Are you developing software to run on your customers’ current computers? How many changes of operating systems and hardware should your software be designed to survive? What will it look like on the new iPad, on a larger cell phone, in Google Glass?

To survive in this environment you need to make, and continually revisit, strategic business decisions: which new technology to adopt, how much, and when. Yesterday’s answers may not be at all appropriate today, for the simple reason that you can no longer even acquire yesterday’s technology. This is a game you cannot get out of; some of us have tried.

1 Dr. Gordon Moore co-founded Intel Corporation in 1968 and served as CEO from 1979 to 1987.

Moore, Metcalfe, and Disruptive Technology

Context matters. The context for developing software products is a hyper-accelerated society. Technology advances at an ever-accelerating rate, and business is a direct beneficiary, and sometimes casualty. The world has seen more technological innovations in the past 50 years than in all the years of previous human history combined. And there is no sign that the rate of change is letting up. In my lifetime, I’ve watched our society embrace and adopt the following significant new technologies at an increasingly accelerated pace:

  • Tablet Computers: 3.5 years to adoption by ¼ of U.S. population
  • The Internet Browser: 7 years to adoption by ¼ of U.S. population
  • Cellular Telephones: 14 years to adoption by ¼ of U.S. population
  • Personal Computers: 16 years to adoption by ¼ of U.S. population
Do you see the trend? It is important to understand the context of the society we find ourselves in: an accelerated society. The following three principles help capture this context:

  • Moore’s Law
  • Metcalfe’s Useful Equation
  • Disruptive vs. Sustaining Technology

I will post on them separately, or if you are impatient you can Google them yourself. If you are going to do agile development, these are worth studying and understanding.

Security Crisis

The Department of Homeland Security, US-CERT, and other organizations continue to raise concerns about the significant vulnerabilities that exist in U.S. Information Technology (IT) infrastructure (e.g., computers, operating systems, phones, software, servers, databases, and networks).

Our economy has become significantly dependent on IT infrastructure to conduct almost all business, and this dependence continues to expand. Unfortunately, there is reason to believe that a highly coordinated and sophisticated attempt to disrupt the operations of this infrastructure could succeed. Our networks and computers are vulnerable to attacks in which even unsophisticated high school students can inflict more economic damage than a Florida hurricane.

Attacks come in many forms. Hacking is where an attacker gains direct control over a computer, usually by thwarting its log-in and firewall mechanisms. Viruses are self-replicating code fragments that infect computers automatically. Trojan horses are social tricks, where the user is fooled into executing hostile code that appears to be something else.

The current response to these security threats is a reactive one, typified by updating virus definitions and downloading application and operating system patches. The weakness of a reactive response is that it typically occurs only after an attack has succeeded. It takes time to identify new attacks, it takes time to update the virus filters and close the security holes in operating systems and other software, and it takes time to distribute these updates to all of the computers on the network. During all of this time, significant damage is being done to the economy.

The reactive response does provide increased security, but at great risk. It requires that the filters be continually updated, and because each new attack requires a customized response, even hundreds of people cannot keep up with all of the attacks. In their attempt to combat attacks, these reactive programs grow in complexity and size, causing a significant reduction in machine performance; they also tend to introduce new defects and vulnerabilities, and they negatively impact worker productivity.

We have been fortunate that no enemy has released a truly morphing virus, one that continually changes its form and method of attack; such a virus resists all standard filtering attempts. We have been fortunate that no enemy has tried more subtle attacks, such as changing just a few of the numbers in every spreadsheet on a machine and then deleting itself. These types of attacks could bring the information economy to a halt as companies spend trillions of dollars trying to sort good data from compromised data.

So, what conditions have led to our current crisis of vulnerability? Interestingly enough, it is our historic strengths in mass production and uniformity that cause these vulnerabilities. Currently, all of our machines are fundamentally the same: if you can successfully infect one machine, you can successfully infect most of them. Hundreds of millions of machines in government, business, and private homes all have the same software installed, in the exact same version; they look exactly alike. If you break one machine, you have broken a hundred million machines.

Note that these vulnerabilities are almost impossible to eliminate through better programming alone.

We need a new solution: one that takes the hundreds of millions of existing machines that are all exactly alike and makes them all dramatically different, automatically. We need a proactive strategy that protects against infection before a new attack is even conceived. A proactive solution does not wait to see how computers are compromised and then add a new filter to stop that attack. Instead, the system leverages the best of encryption, advanced pattern detection, and proprietary polymorphic behavior (i.e., continually changing forms) to ensure that a virus, hacker, or Trojan has no place to go.
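
To make the idea of per-machine diversity concrete, here is a toy sketch using only the standard Python library. The command table and opcode scheme are purely hypothetical illustrations of the principle, not any real product or the specific mechanism the post envisions:

    import hashlib
    import hmac
    import os

    # Each installation generates its own secret at install time.
    MACHINE_SECRET = os.urandom(32)

    COMMANDS = ["read", "write", "delete", "execute"]

    def opcode_for(command: str) -> int:
        """Map a command name to an opcode unique to this machine."""
        digest = hmac.new(MACHINE_SECRET, command.encode(), hashlib.sha256).digest()
        return int.from_bytes(digest[:2], "big")

    # The same command maps to a different opcode on every machine, so a
    # payload that hard-codes one machine's opcodes fails everywhere else.
    for cmd in COMMANDS:
        print(f"{cmd} -> {opcode_for(cmd):#06x}")

Because the secret never leaves the machine, an attack built against one machine’s layout is wrong on every other machine. That is the heart of the diversity argument.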

A proactive solution continues to work even if a machine is successfully attacked: the machine automatically identifies the attack, actively attempts to remove it, and restores operations.

In a proactive security world, every machine expresses itself in a different form. If any individual machine is compromised, it is highly unlikely that the same technique can be used to attack any other machine. Such a system acts more like a human body, responding automatically to infection by changing its defensive forms until the hostile code can no longer survive. Our machines must actively fight infection.

This will only be accomplished by taking a fundamentally different approach to the problem. We must leverage the processors already in the computer to full advantage as security processors. Security must become a fundamental property of the machine, not an afterthought downloaded from a virus software vendor. No virus updates and search patterns to download. No large teams attempting to respond to the latest attack. No single points of failure.

In a way, the machine becomes more self-aware, using its own resources and processors to build, monitor, and continually change defenses that are totally unique to that machine. In this way, almost all of the security holes and vulnerabilities that currently exist in our IT infrastructure can be closed.

Of course, these techniques must run quickly and compactly and not require a lot of additional resources. They must scale both up and down the spectrum, so the same style of security can be applied to all computational devices: mainframes, servers, desktops, laptops, hand-held devices, and cell phones.

Our economy has become too dependent on IT infrastructure to allow security to be handled haphazardly.

It is time for drastic change.

Mathematics, not Programming

The coming software productivity revolution will be based on the rigorous application of mathematics, not on clever programming tricks, “enterprise” Java, or the language-du-jour’s “integrated” development tools.

MIT’s Technology Review reported that IT teams spend up to 80% of their budgets removing defects they themselves introduced into the code.1 Imagine the possible savings if a software product could be produced defect-free the very first time. The only way to achieve this is to have a mathematically rigorous process for creating software: a mathematically rigorous process for turning business needs into executable systems.

Although loath to admit it, most software developers will confess that the internals of their software systems have much more in common with a Rube Goldberg cartoon than with a mathematical equation. This is unfortunate, for only the rigorous application of mathematics enables the rapid production of error-free software systems.

I’ve seen it done, repeatedly.

The day is coming, burning like a furnace, when traditional development will be chaff; that day will set it ablaze, leaving neither root nor branch.2

I look forward to that day.

-Tom

1 MIT Technology Review, “Why Is Software So Bad,” August 2003.

2 My homage to Malachi 4. Still waiting for the arrogant and evildoers to be chaff.

Don’t Offshore, Automate

Automation changes the nature of work. It improves productivity and significantly reduces defects, by reducing opportunities for human error. Automation improves quality, while decreasing costs.

This is true for manufacturing, and it is true for software as well. New software products can be produced for significantly less money, in dramatically less time, and with few or no defects, through extremely aggressive automation.

Automation is the future of software products, as surely as it has been the path of every other commodity industry. If you are not actively engaged in discovering how to make your products part of the software productivity revolution, it is definitely time to begin.

I’ve watched large corporate client after large corporate client offshore software development to reduce costs. May I make a suggestion?

Don’t offshore, automate. The answer to building software products faster, better, and less expensively is not cheap labor. The answer is eliminating most of the costly and error-prone manual labor altogether.

A study of over 30,000 software development projects reported that two-thirds experience major problems and over one-quarter fail outright. In one recent year alone, over 30,000 projects failed, wasting over $56 billion in investment.1 The rate of failure is so high, and the variation so great, that the success or failure of any given project is, to most managers, essentially random.

It is not surprising that sponsors are reluctant to support software development initiatives. It is not surprising that so many companies are eager to send software projects overseas, where at least they diminish the cost of failure.

Currently, market forces are acting on the belief that the future of software development is offshore cheap labor.2 The emphasis on the single characteristic of unit cost per programmer-hour is tragically flawed and outdated.

Cheap labor diminishes costs, but it does not improve productivity or quality.

I am making a bold pronouncement: I say it is possible to eliminate 90% of the programming labor of most projects entirely, and I have the case studies to prove it.

Although cutting unit cost per programmer-hour is a reasonable goal, the benefits gained from this approach are insignificant compared to automating most of the programming and testing tasks and eliminating most manual labor entirely.

If you were going to dig a tunnel from England to France, would you seek to hire 5,000 Indian laborers and arm them with picks and shovels? They are really cheap per day!

No. It is an insane way to dig a tunnel, and an insane way to build software. Eventually the industry will wake up, but it hasn’t yet. So if you learn the secrets of automation, you can be far ahead of your competition.

Cheap labor WAS NOT the most efficient way to build the Chunnel.

Cheap labor IS NOT the most efficient way to build software.

Automation is.

Automation makes the current trend of offshoring software development irrelevant.

This is my notice to the software industry: it is time to seriously raise your game.

1 “Standish Group Chaos Report,” 2003.

2 Wired, “The New Face of the Silicon Age,” February 2004.

The Process Trap

It is an easy trap to fall into: a project somewhere in the company struggles or even outright fails, and management sends in a team to “fix the process.”

After studying the failure, the team suggests a new document or checklist so that other teams will not have this problem in the future.

Problem solved.

Unfortunately, this solution likely exacerbates the problem. The solution was yet another checklist, yet another way to remind people there is something they need to be thinking about, yet another form to fill out so the team remembers: “do not forget this important thing.”

The Agilist understands there is another way, a better way, to help teams be successful.

First, keep the tribe together, dedicated and co-located, so important knowledge about the system is maintained in the group automatically over time. Keeping the tribe together eliminates the need for a lot of documents and checklists.

Second, pair everybody. Ensure key knowledge is spread across the entire tribe by pairing and eliminating specialists, so everybody has an opportunity to be exposed to core issues.

Third, if something is so important that you are inclined to put it on a checklist for the team to constantly review, then AUTOMATE the test for it. If you are worried you need to remember something, eliminate the worry by automating the test for it, as in the sketch below.
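
For example, suppose the checklist item is “no TODO markers may ship in production code.” A minimal sketch of automating it as a test follows; the src directory layout and the pytest runner are my assumptions, not details from the original post:

    # A checklist item ("no TODO markers ship in production code")
    # automated as a test, so nobody has to remember to check it.
    import pathlib

    def test_no_todo_markers():
        offenders = [
            str(path)
            for path in pathlib.Path("src").rglob("*.py")
            if "TODO" in path.read_text(encoding="utf-8")
        ]
        assert not offenders, f"Remove TODO markers before shipping: {offenders}"

Run on every build, the check never gets skipped and never nags anyone; the build simply fails until the checklist item is satisfied.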

UNTIL YOU HAVE DONE ALL THREE OF THE ABOVE, DON’T CREATE NEW CHECKLISTS!

There is a really important reason why. Care to guess what it is?


Hero

An extremely popular approach to business, almost iconic in the United States, is one we’ll call Hero. Hero is what most people are actually doing when they say they are being Agile. The two sometimes look alike, but they are fundamentally different.

Companies who have a Hero approach, and who think they are doing Agile, will truly struggle to be Agile.

So what is Hero?

To illustrate Hero, we use the 1971 Clint Eastwood motion picture Dirty Harry.

In the movie, a killer threatens to randomly kill a person each day unless the city of San Francisco pays him a ransom. The Chief of Police and the Mayor of San Francisco assign “Dirty” Harry Callahan to lead the investigation.

Why Dirty Harry?

Because he gets things done… he is a hero you can rely on. He doesn’t care about rights, doesn’t follow rules, and breaks laws… in order to get the job done. A more recent example of this type of character in popular fiction is the terrorist fighter Jack Bauer in the TV series 24.

The Hero Approach

The Hero is a maverick: independent, confident, and highly skilled. He is driven by his own internal moral compass and doesn’t let rules get in his way.

Americans like heroes, and our biggest-grossing films almost always feature “super heroes.” A business philosophy built around heroes says we make our own rules to get the work done. We do not constrain our teams with bureaucracy and paperwork, but leave them to themselves to discover the best way to succeed.

Laws don’t really apply to super heroes. Or to software heroes.

It is a compelling way to work, and some of the most interesting American companies and products were birthed in a heroic fashion. Very significant companies such as Apple, Google, and Facebook followed a hero launch pattern.

What does Hero look like in business?

A small group of people working together in a small room or garage dedicated to a specific task. Apple and Google started in garages, Facebook in a dorm room.

Smart, dedicated, passionate people working closely on a project they love. Send pizza and Mt. Dew into the room and hope something great comes out.

Large businesses actually ask for this type of thing all the time. “Give me a war room and get out of my way,” the executives are known to say when something they think is really important needs to be done.

Of course, the executives also steal the best people from every other project to put in their “war” room. Heroic methods require, no, demand heroic staff.

However, Hero has some downsides. Care to guess what they may be?


Uncertainty

An attribute of Agile software development I have been pondering lately is uncertainty.

We might argue that uncertainty is just one of many sources of friction, but because it is such a pervasive trait of software development, I will treat it singly.

All actions in our software development life cycle take place in an atmosphere of uncertainty. Uncertainty pervades our operations in the form of unknowns about the competition, about the environment, and even about our own business. While we try to reduce these unknowns by gathering information, we must realize that we cannot eliminate them—or even come close. The very nature of business makes certainty impossible; all actions in business will be based on incomplete, inaccurate, or even contradictory information.

Business is intrinsically unpredictable. At best, we can hope to determine possibilities and probabilities. This implies a certain standard of executive judgment:

What is possible and what is not?

What is probable and what is not?

By judging probability, we make an estimate of our competitor’s designs and act accordingly. Having said this, we realize that it is precisely those actions that seem improbable that often have the greatest impact on our outcomes.

Because we can never eliminate uncertainty, we must learn to act effectively despite it. We can do this by developing simple, flexible plans; planning for likely contingencies; developing standing operating procedures; and fostering initiative among subordinates.

One important source of uncertainty is a property known as nonlinearity. Here the term describes systems in which causes and effects are disproportionate. Minor incidents or actions can have decisive effects. Outcomes of battles can hinge on the actions of a few individuals, and as Clausewitz observed, “issues can be decided by chances and incidents so minute as to figure in histories simply as anecdotes.”

By its nature, uncertainty invariably involves the estimation and acceptance of risk. Risk is inherent in business and is involved in every project. Risk is equally common to action and inaction. Risk may be related to gain; greater potential gain often requires greater risk. The practice of concentrating business resources toward the main effort necessitates the willingness to accept prudent risk elsewhere. However, we should clearly understand that the acceptance of risk does not equate to the imprudent willingness to gamble the entire likelihood of success on a single improbable event.

Part of uncertainty is the ungovernable element of chance. Chance is a universal characteristic of business and a continuous source of friction. Chance consists of turns of events that cannot reasonably be foreseen and over which we and our competitors have no control.

The constant potential for chance to influence outcomes in our business initiatives, combined with the inability to prevent chance from impacting our plans and actions, creates psychological friction. However, we should remember that chance favors no one exclusively. Consequently, we must view chance not only as a threat but also as an opportunity, which we must be ever ready to exploit.

(Note: This is an exercise in rewriting existing text created for another purpose. Any guess as to the source material for this post?)

Modern Control Systems – Part 1

Modern Control Systems – The Secret to Understanding Agile
We make a simple promise to you: read and understand this overview of modern control theory and you will understand how agile software development works, in a profound way. You will understand it better than many people who lecture on Agile techniques at conferences.

Open Loop Systems
We begin with a simple description of open loop systems. An open loop system is a modern control system without feedback. “What is that?” you ask. Fortunately, it is easy to understand; it is as easy as making toast.

A modern control system can be described in three simple parts:

Part 1: Desired Output

You begin with an idea of a specific output you desire to create. If you are creating a piece of toast, then you begin with an image of what a perfect piece of toast looks like to you.

[Image: a perfect piece of toast]

Having the idea of the desired output is the first part in modern control theory. You need some idea of what you want so you can discover a way to create it. That leads us to:

Part 2: Process

After you have a desired output you create a process to give you that desired output. If we want to make a process to give us the perfect piece of toast it will likely contain the following steps:

1. Acquire bread, remove from package.
2. Put bread into toaster.
3. Set desired darkness.
4. Press down control lever lowering toast into toaster and engaging heating elements.
5. Wait until toast pops up.
6. Remove toast.

You implement this process to create your desired output; having the desired output in mind helped us describe the process.

[Desired Output] → [Process]

Simple, even a child can do it. In fact, making toast is one of those childhood delights.

[Image: a glowing toaster]

That leads us to:

Part 3: Actual Output

At the end of the toasting cycle we remove the perfect piece of toast and enjoy it. Our entire process looks like the following:

[Desired Output] → [Process] → [Actual Output]

In modern control systems we call this an “open loop” control. It is perhaps better described as a “no loop” control: no loops, because there is no feedback in this system. You set the controls, wait, and whatever you get is whatever you get. Hopefully the actual output is close to the desired output and you have the perfect piece of toast.

[Image: a perfect piece of toast]

Burnt Toast

Sometimes, however, the actual output looks more like this:

[Image: a badly burnt piece of toast]

What happens if you accidentally create a piece of toast that looks like this?

Well, you can try additional processing and scrape the burnt part off over the sink. But that doesn’t produce a very compelling piece of toast. The typical solution is to toss the toast into the trash and start the whole process over again, setting the darkness knob on the toaster to a slightly lighter level.

The process we outlined for making toast contains no feedback (no loops). Step 1 leads to step 2, leads to step 3, and so on:

1. Acquire bread, remove from package
2. Put bread into toaster
3. Set desired darkness control
4. Press down control lever lowering toast into toaster and engaging heating elements
5. Wait until toast pops up
6. Remove toast

In open loop systems like making toast, with no feedback during the toasting process, if your toast comes out burned you really have to just start over. If your toast comes out under-toasted you also have a difficult problem, because if you just press the lever down again it is very likely that the next time the toast pops up it will be burned!
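
In code, the open-loop toaster is a straight-line procedure: the controls are fixed up front and nothing is measured along the way. Here is a toy sketch (the numbers are invented purely for illustration):

    # Toy simulation of open-loop toasting: the darkness knob fixes the
    # heat time in advance, and nothing is observed while the bread toasts.
    def toast_open_loop(darkness_setting: int, browning_rate: float) -> float:
        """Return the final darkness of the toast (0 = plain bread, 10 = charcoal)."""
        heat_time = darkness_setting * 30   # seconds of heat, fixed up front
        return heat_time * browning_rate    # no mid-cycle feedback of any kind

    # The same knob setting produces very different toast depending on the bread:
    print(toast_open_loop(darkness_setting=4, browning_rate=0.05))  # 6.0  - just right
    print(toast_open_loop(darkness_setting=4, browning_rate=0.09))  # 10.8 - burned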

Closed Loop Systems

Nowhere in the previous system do we use information while the toast is toasting to understand how well our process is working. What might it look like if we try to gather information while our toast is toasting?

I can guarantee that you have already done it.

What do you do? You ADD A FEEDBACK LOOP to prevent your toast from burning.

How do you do this? You start peering into the top of the toaster to see how your toast is doing!

If the toast is getting too dark, you pop it out! Even children know how to do this. You get FEEDBACK by looking at the toast. You COMPARE what you OBSERVE with your idea of your desired output and you adjust the toaster CONTROLS to try to force the toaster to give you a perfect piece of toast. The steps look something like this:

1. Acquire bread, remove from package
2. Put bread into toaster
3. Set desired darkness control
4. Press down control lever lowering toast into toaster and engaging heating elements
5. Watch the toast with your eyes
6. Compare what you see with your eyes with the desired toast
7. If the toast is lighter than the desired toast go back to step 5
8. Pop-up the toast manually
9. Remove toast

Steps 5, 6, and 7 repeat over and over; they become a loop. The system described above is called a CLOSED LOOP control system. It contains a feedback loop that helps you make decisions based on what is going on inside the toaster.

In control theory these steps are labeled observe, compare, and control:

Observe: Watch the toast with your eyes.

Compare: Compare what you see with the desired toast.

Control: If the toast is lighter than the desired toast, go back to step 5; otherwise, pop up the toast manually.
It turns out that even making something as simple as toast can benefit from closed-loop controls. Imagine how something as complicated as software might benefit.
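
To close the loop on the toaster example, here is the earlier toy sketch with the observe, compare, and control steps added. The same target darkness now comes out right no matter how fast the bread browns, which is exactly the property agile feedback loops provide for software:

    # Toy closed-loop version: observe the toast each second, compare with
    # the desired darkness, and pop it up the moment it is dark enough.
    def toast_closed_loop(desired_darkness: float, browning_rate: float) -> float:
        """Toast until the observed darkness reaches the desired darkness."""
        darkness = 0.0
        while True:
            darkness += browning_rate          # one more second of toasting
            observed = darkness                # OBSERVE: peer into the toaster
            if observed >= desired_darkness:   # COMPARE: against the desired output
                return darkness                # CONTROL: pop the toast manually

    # The same target comes out right regardless of how the bread behaves:
    print(toast_closed_loop(desired_darkness=6.0, browning_rate=0.05))  # ~6.0
    print(toast_closed_loop(desired_darkness=6.0, browning_rate=0.09))  # ~6.0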