It is possible to collect a wide range of metrics on a project: metrics about the codebase, unit tests, performance tests, the build, delivery progress, and so on. It is also possible to define red, amber, green thresholds for each metric. Some go for standardized, organization-wide thresholds while others are comfortable with project-specific ones. The obvious next step is to create a dashboard that rolls up all these metrics into a single red, amber, green project status indicator. Middle managers spend a good part of their life managing inputs to these dashboards. Some "programme managers" assiduously refer to their dashboards as balanced scorecards.
Trouble is, most of the time, an overall green status indicator doesn't mean anything. All it says is that the things under measurement seem OK. But there will always be many more things not under measurement. To celebrate 'green' indicators is to ignore the unknowns. The value of a metric does not lie in its being green. Let's take unit test coverage for example. Is 100% test coverage a cause for celebration? What if most tests don't have any assertions? When measurements become targets, they encourage gaming. On the other hand, a coverage of 50% at least tells you unambiguously that half the code is not covered (or there is an error in measurement).
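To make the assertion-free test concrete, here is a minimal, hypothetical sketch (the function and test names are invented for illustration). Both tests execute every line of the function, so a coverage tool would report 100% for either; only the second actually verifies anything.

```python
def apply_discount(price, rate):
    """Return price after applying a fractional discount rate."""
    return price * (1 - rate)

def test_apply_discount_no_assertion():
    # Executes every line of apply_discount, so coverage reads 100% --
    # but with no assertion, even a broken implementation would pass.
    apply_discount(100.0, 0.2)

def test_apply_discount_with_assertion():
    # The assertion is what gives the coverage number any meaning.
    assert apply_discount(100.0, 0.2) == 80.0
```

A test runner would mark both tests green, which is exactly why the coverage metric alone can be gamed.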
One may argue that it is only a matter of measuring even more, so that the overall green becomes truly indicative of overall project health. This is somewhat unrealistic in a fast-paced knowledge work environment. Tools and technologies keep changing. Measurement tools don't keep pace. It takes significant effort to maintain the measurement infrastructure on a project.
Avoiding subjectivity in assessments is also cited as a reason for resorting to metrics. But that's like saying a person can be certified healthy, without the judgement of a physician, by a dashboard that rolls up a hundred diagnostic tests.
In my experience, metrics are much more useful when they report bad news than when they are green. In the words of the Way of Testivus, "Good tests fail". Unfortunately, the tendency to roll up metrics into dashboards promotes ignorance. We forget that we only see what is under measurement. We only act when something is not green.
Then comes the final argument: don't blame the tool for human laxity. Dashboards cannot promote ignorance any more than they can promote wisdom. Unfortunately, tools aren't value neutral.