
Re: Measurements



In the 1970s, there was a large movement toward program evaluation -- evaluating the effectiveness of social programs and interventions of various kinds.  The social scientists doing this kind of research addressed many of the same issues and developed methods that, at the least, allow approximations of the differential effects of various interventions.  I recommend reading some of the published discussions of program evaluation.  While measurement is necessary, it is also necessary to know how, what, and when to measure, and to use the correct statistics and statistical analyses as well.
 
One example of how program evaluation methods might apply involves the following scenario.  We all know that in any industry there are those who are open to assessments of their performance and those who are not.  In the oil industry, for example, Mobil was open and Exxon was not.  However, waste generated per unit of production and other measures are submitted in reports to government agencies.  Therefore, one can compare companies in the same industry to see what has been effective and what has not.
 
In addition, comparisons can be made across industries, across communities, and across states.  Plus, at times, a government agency can actually conduct a social experiment by offering limited services to some first and to others later (for obvious budget reasons) while gathering data on the performance of both groups.
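
As a purely illustrative aside, here is a minimal sketch (in Python, with invented facility figures) of the kind of arithmetic that staggered-rollout comparison supports: the change in waste per unit of production among the early recipients, minus the background change seen in those not yet served, approximates the program's effect.

    # Hypothetical figures for illustration only -- a simple
    # difference-in-differences style comparison for a phased P2 program,
    # using waste generated per unit of production as the measure.

    def waste_intensity(waste_tons, units_produced):
        """Waste generated per unit of production."""
        return waste_tons / units_produced

    # Group served in the first phase (before and after the intervention)
    early_before = waste_intensity(120.0, 10000)
    early_after = waste_intensity(90.0, 10500)

    # Group scheduled for a later phase (not yet served over the same period)
    late_before = waste_intensity(115.0, 9800)
    late_after = waste_intensity(112.0, 9900)

    # Program effect = change in the served group minus the background
    # change observed in the not-yet-served group.
    program_effect = (early_after - early_before) - (late_after - late_before)
    print("Estimated change in waste per unit attributable to the program:",
          round(program_effect, 4))

A negative result here would indicate a reduction in waste intensity beyond what the not-yet-served group experienced over the same period.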
 
Just some ideas.
 
RC
Ralph E. Cooper, Ph.D.
Mediator, Attorney & Counselor at Law
9901 IH-10 West, Suite 800
San Antonio, TX 78230
210.558.0555
----- Original Message -----
Sent: Friday, July 12, 2002 9:15 AM
Subject: Measurements

Forwarding to all on the listserv

-------- Original Message --------
Subject: Re: Question about P2 Program Effectiveness and MEASUREMENTS
Date: Thu, 11 Jul 2002 15:59:48 -0700
From: Kathy Barwick <KBarwick@dtsc.ca.gov>
To: Sue.Schauls@uni.edu


Some thoughts on measurement from my boss, Kim Wilhelm:
I was at a workshop recently (on emerging environmental issues) where some interesting examples came up to think about in the context of "measurement".  The first case is Y2K.  In the late 1990s the projection was that this would be the beginning of the "end of the world" (at least as we presently know it).  Armageddon!!  It would start with a collapse of the economy because all of the financial records would be lost or corrupted.  Without finances there would be no commerce, and with no commerce there would be food shortages, then riots, then.....?  Others projected that nuclear missile systems would launch, planes would fall out of the sky, etc.  The response was to spend several BILLION $ on prevention and preparedness.  After the fact, when Y2K came and went and absolutely nothing happened, some people -- focusing on the "outcome" -- concluded the billions spent were a huge waste of money.  Conversely, other people may conclude that a few billion to prevent Armageddon was a real bargain.  We have results, we have activities, we have costs......all the "measures" you could want.  So what is the answer?  Was the expenditure a good thing or a huge waste?  Isn't that what some people are trying to conclude (make a judgement) when they demand "measurement"?

I think this is a good example for people in P2 to consider, because the end result we are trying to achieve is similar to the Y2K investments for an alternative future.  We maybe can measure how the future turns out with our P2 efforts, and maybe even ascribe some of the changes to our activities.  But we can only speculate on how things might have been without those efforts, which makes the calculation of the "difference" -- what we actually achieved -- unreliable, or at least subject to debate.  Also, in the Y2K example, maybe it really doesn't matter whether it was a good or bad investment, because it is a done deal; the money is spent and gone.  Perhaps the real question should be, "What did we learn?"  Maybe that is what we should really be looking at from our P2 measurements: not to judge good/bad/better, but to see what we can learn so we do better in the future.

The second example that came up was 9/11.  Look at the FBI.  Before 9/11 they could measure an extremely good track record of little or no successful foreign terrorist activity in the country, ever.  They maybe could have counted the number of foreign-born suspects they nabbed, but if they had used this as a metric, they probably would have been beaten up by the ACLU.  If they had tried to extrapolate -- we removed 100 suspect foreigners, and if only 10% were terrorists, we maybe saved 3,000 lives and several billion dollars -- it might sound like a bargain by today's standards; however, no one would have believed this or taken it as a credible measure before 9/11.  No one could even have imagined intentionally flying a plane full of people into a packed office building, so how could one even have projected a "measure"?  After 9/11 the FBI is getting severely criticized for not having done more, for not having anticipated and prevented the attacks.

Again, I think this is a good example to keep in mind for P2 or regulatory programs.  It is very, very difficult to quantify and measure what we have "prevented", i.e., the events (or waste) that don't occur because of our efforts.  But if something bad occurs, it is pretty easy to point fingers and say someone messed up.  Clearly "what got measured" by the FBI before 9/11 (which at the time looked pretty good) did not "get it done", at least not when it really counted.  Conversely, who is to say how many other 9/11-like events have actually been prevented by the FBI in the last 5 years that no one, including the FBI, even knows about?

So if the mantra is "what gets measured is what gets done", then one corollary needs to be "just because you can't measure it doesn't mean you haven't done something", e.g., Y2K -- prevented Armageddon.  And corollary number 2: "just because you can and do measure things, it doesn't really mean you are getting it done", e.g., 9/11.

My conclusion is that "measurement" is a tool that is best applied and most useful for making program improvements, and that while it clearly contributes to decision-making processes, it cannot substitute for judgment.  Measurement is best used and most helpful when included in the planning process of a project: to say what it is we hope to accomplish, to provide a clearer target, and to get buy-in to specific goals and objectives up front (make them "measurable").  It is then used to check whether we did what we said we would do.  In this way measurement helps provide both focus and feedback.  Measurement is best used in a cycle: plan, do, check (a.k.a. measure), and act (make a new, revised, better plan).

My final point is that just because something has good measures, or just because it does not, is not enough to judge whether it is good or bad.  P2 maybe doesn't have the best measures, or it is very costly to get good measures, but you cannot judge its value and contributions on the numbers or lack thereof.

This is just some food for thought.  My intent is to stimulate some discussion, and hopefully to explore what "measurement" can and cannot do, and to view the debate about the difficulty of doing "good" measurement from an alternative perspective.  Who knows, maybe we will figure out a better way of doing it, or of using the information.  I would like to hear some other opinions on these examples and my conclusions.


>>> Sue Schauls  07/08/02 11:33AM >>>
And so - the cart pushed the horse - so to speak.

I think this thread is hitting home to the very heart of MEASUREMENTS IN 
P2 - an issue or at least a discussion that has persisted for several 
years. Had the collective "we" in P2 baselined the data and effectively 
implemented measurements of "low hanging fruit," we would, in fact, know 
the answer to the question by now.

And so "big brother" can now say - Yes, measurements are necessary in P2 
efforts (at least my big Brother Ed can say that now) furthermore, let's 
tie the measurement efforts to their funding.

I agree with each of the well-written replies; the next question I am 
posing is...
What is going on in national P2 measurements?

meaning....
What are the best resources available in P2 measurements? What needs to 
be developed? When will national standards be developed? And, finally, 
what mechanism is or will be used to do the job?

always on my mind,
Sue Schauls
IWRC

