Re: Site Visits
This thread, which began as recommendations on how to be more effective
with companies we are working with on-site, is now discussing
on-site versus other program activities. A recent reply seemed to say it
shouldn't be an A-or-B decision, but then seemed to go there anyway, with
measurement being the driver. What is most effective, A or B, seminars
or on-sites? No silver bullets for me.
My experience with pollution prevention assistance delivery is that
everything is connected and that pollution prevention is a continuous
improvement process. It occurs over time. The most important thing is
to get the company to do something that works. It doesn't have to be
the biggest money saver or waste reducer. It builds their confidence.
Our program, like many others, provides training seminars, telephone
assistance, newsletters, information materials, and on-site assistance.
The newsletters and training seminars provide information, but they are
also marketing and outreach activities to involve the mainstream
business community (not just early implementers) with our program. By
seeing that we have useful information and some level of competency,
businesses then call for additional information and eventually invite us
on-site. Program utilization by a large cross-section of the business
community is an important goal for us. I strongly agree with John Katz
on this, but believe that "quantitative" measurement pressures could
drive us away from it.
If we were to focus just on early implementers, spend all our
resources working on-site with them, and measure their implementation,
we would probably come up with pretty good pollution prevention
numbers. Chances are, if we didn't work with the early implementers,
they'd still implement pollution prevention pretty well themselves.
Measurement could drive a program design that showed great
effectiveness but wasn't really the cause of the improvements.
Establishing measurement requirements for reports on reduction and
cost savings might keep us out of some of the shops we most want to
reach.
Looking at all the program components is critical to pollution
prevention implementation. Our best on-sites are companies that have
attended our seminars, called us for information, invited us out, and
then continued to work with us. (You can teach them to fish, but they
still need the fly shop, bait, and tackle store.) Our worst on-sites
are companies facing enforcement action that are referred to us and
haven't been involved with our program in the past. These people need a
"regulatory lever", enforcement, to make them improve.
My message is that other components of the assistance program you
provide are important to conducting effective on-sites. We can't look
at individual program components like on-sites or early implementer
companies only when assessing what works and what doesn't. We need to
look at programs and the business community as a whole when assessing
effectiveness.
For me, the measure of continuous improvement is the number of companies
maintaining contact and working with our program over time. Trying to
measure whether my seminars or my on-sites are more effective would be
similar to assessing whether my heart or my brain is more effective. If
my measurement instrument were to determine that my brain was more
effective and I ceased supporting my heart, I'd collapse pretty
quickly. Without food (funding) my brain and heart aren't going to work
well for long.
I'd be interested to learn how other EPA programs are responding to
GPRA. For instance, is RCRA measuring the environmental benefit of EPA
Notification Requirements? How does this benefit compare to Biennial
Reports? What is the relative environmental benefit of labeling versus
manifests, or accumulation times versus land disposal restrictions?
As for our program's continuous improvement, I'm really interested in
what Pat's doing in New Mexico.
A and B, yin and yang, seminars and on-sites,