By Ramon Jackson
on Friday, September 15, 2000 - 10:15 pm:
In another thread an anonymous
questioner, who we presume is a contracting type, eventually
revealed that the pressure to incorporate the technical proposal
was being driven by the functional specialists on the selection
team.
Technical team members bring important knowledge to the process.
They may be almost ignorant about contracting issues. In fact,
they may be prejudiced as a result of being victims of
contracting gone sour. As Anonymous' post indicates, they may
then become drivers of unwise or questionable contract-related
decisions well beyond their technical expertise.
I was a technical type who late in my career was placed
unexpectedly into a high-powered contracting organization and
given the benefit of some specific training but, most valuable,
constant contact and confrontation with expert, experienced
contract professionals across a wide variety of contracting
issues. It was something of an intense trial by fire, but a
great opportunity to learn "the other side." As a result, when I
watched our technical teams in action I could see my former self
and, I believe, could help explain some of the pitfalls in what
members were sometimes trying to push.
The team members may be there only for technical expertise, but
in practical and political application they can have an impact
on contracting decisions well beyond technical analysis.
Anonymous' post is the tip of the iceberg.
What training should selection team members have? At what level
should it be prescribed (agency, department, perhaps in FAR)?
By
Eric Ottinger
on Monday, September 18, 2000 - 11:54 am:
Hate to tell ya, Ramon,
But my answer is, “Very Little.”
Evaluation Board members shouldn’t be contracting experts any
more than jurors should be legal experts.
I want a Chairman who has been through the process a couple or
three times. Otherwise, I will brief the evaluators and tell
them everything that they need to know.
There is usually one fellow on the evaluation board who passes
on some gem of contracting wisdom that he picked up from a PCO
at another agency. Then I have to explain that different
agencies and different PCOs do things differently, and that is
OK.
Lest I be misunderstood-- My first source selection was at DOE.
When I was still green as grass I was thrown in with some very
smart, experienced technical people. They taught me a lot. And I
am still grateful.
Eric
By
Kennedy How on
Monday, September 18, 2000 - 01:03 pm:
I agree with Eric that the
technical types don't really need to be Contracting experts;
that's why you have contracting experts in the first place. What
they DO need is an understanding that what THEY want may not
necessarily be something that CAN be done, and the person making
that decision is the Contracting expert.
On the flip side, the Contracting expert needs to be
understanding of what the technical experts need, and work with
them to get as close to what they want as possible into the
final result.
Above all, I don't really want everybody to be experts in
everything, because all that will do is confuse the issue.
Kennedy
By
Eric Ottinger
on Monday, September 18, 2000 - 01:57 pm:
Kennedy,
We probably don't really disagree. But I should note that one of
my basic principles is that the contracting office does NOT
determine the requirement.
Generally, going into a source selection, the technical person
knows what he/she wants but he/she isn't sure that the
competitive process is going to produce the right result. I have
to ask them to trust the process. It works if you do it right.
Various offices and PCOs do different things. Many technical
people naively believe there is a single "right" way to do the
job, and they are very uncomfortable if I tell them to take an
approach different from what they did last time. I've never had
a problem with really experienced technical people, who have
been through the process many times. But sometimes, a little bit
of knowledge is dangerous.
Eric
By
Charlie Dan on
Monday, September 18, 2000 - 05:57 pm:
In response to Ramon's
questions...
What training should selection team members have?
I believe selection team members need a general understanding of
the basic source selection process to be used on the procurement
(assuming that key business strategy decisions have already been
made). I don't see a benefit in training the selection team on a
Lowest-Price-Technically-Acceptable process, if you will be
using a more traditional Best Value process (i.e., trade-offs
between price and non-price factors). This can usually be
provided by the contracting officer.
I think you must provide fairly detailed training in Procurement
Integrity and ethics matters. It never fails: whatever you skip
over in this area is exactly the area that hits you in the face
at the worst possible time in the process. This kind of training
works best when you follow up an hour or two of formal training
with an attorney well-versed in this area, who can provide
one-on-one Q&A for each team member.
Finally, when I serve as the CO on a major source selection, I
make sure to provide some on-the-job training for each key stage
in the process, just before we enter that stage. For example, I
sit and tell "war stories" about preproposal conferences as we
plan the agenda and decide who will speak at a preproposal
conference. I give a few tips on oral presentations and then
negotiations (i.e., discussions) just before we schedule those.
Last, but certainly not least,
before we announce our selection decisions, I sit down with the
selection team and tell them what I expect and what they should
expect when it comes time for debriefings.
Looking at training from a different perspective -- the
backgrounds of the various selection team members... The best
selection teams I've worked with have included people with
strong program or technical knowledge, seasoned generalists, an
attorney familiar with procurement in a federal environment, and
an experienced contracting officer. When I am that contracting
officer, I try to involve a contract specialist from my staff
who has not yet worked on a source selection - just to give them
some insight into the process.
"At what level should it be prescribed (agency, department,
perhaps in FAR)?"
I'm not sure I understand this question. Nevertheless, here are
some thoughts. I work in DOE, and years ago we prescribed formal
training for selection teams, to take place as soon as the
formal selection board was established. That worked fairly well.
But in recent years, DOE abolished that requirement, and my ad
hoc training has worked just as well. Either way, I believe the
best training teaches the selection team how things are done in
the specific agency. More generic, more formal training on FAR
requirements is great. It's a wonderful introduction to the
source selection process, and is one of the best ways to show
the team that other agencies can do things differently and be
successful. But it's no substitute for agency-specific training.
Anyone who's been around can tell you that things are done very
differently in the various agencies, and what works well in one
agency may be dead-on-arrival in another agency (or at best, may
require heroic efforts to implement).
By
joel hoffman
on Monday, September 18, 2000 - 10:51 pm:
I develop a booklet for each
source selection to review with the source selection board, a
separate booklet for the technical advisors/reviewers and
occasionally a third booklet for price analysts. These briefings
are conducted group by group, before they see a proposal.
The booklets parallel the announced evaluation procedures in the
RFP, in detail - step by step. I include the proposal submission
requirements, the detailed evaluation criteria, discussion on
the trade-off procedures, what can occur during discussions and
an explanation concerning the basis of award. I also include a
section on procurement integrity. The evaluators can refer to
the booklets for the rest of the source selection.
Technical evaluation members are expected to be thoroughly
familiar with the RFP for their assigned tech areas, as should
the SSB on a service contract. The SSB should be generally
familiar with the SOW for a construction contract.
Happy Sails! Joel
By
Kennedy How on
Tuesday, September 19, 2000 - 01:50 pm:
Eric,
I don't think we disagree at all. My main point was that I
didn't want the Technical reviewer to get too hung up on the
procurement end of it, to the detriment of his assigned task,
which is the technical end. If he spends his time
second-guessing the procurement side, or trying to be both the
contracts expert and the tech expert, he is in effect wearing
both hats and becomes a contracting office determining the
requirements.
I like Joel's booklet idea, as well as Charlie's OJT meetings.
Kennedy
By
Eric Ottinger
on Tuesday, September 19, 2000 - 02:27 pm:
Kennedy,
I am also a great believer in Workbooks. I should have said as
much.
I give the evaluation teams wide latitude in determining how
they want to do the evaluation.
Like Charlie I brief the team at each stage of the process and
make sure that I am available at all times.
I expect the Chairman to be the team leader. I don't step into
that role unless the Chairman is totally ineffective.
There is a lot of half-baked or altogether bad information
floating around. I've had a couple of occasions where evaluators
tried to tell me how to do my job. Like I said, knowledge is
always good. A little bit of knowledge can be dangerous.
I am not always successful (even with 1102s) but I constantly
urge people not to force the correct answer or jump to the
bottom line.
Just evaluate, rate, discuss within the team, prepare questions,
brief and let the process work.
Hope this clarifies,
Eric
By
Ramon Jackson on Wednesday, September 20, 2000 - 01:56
pm:
I think you have drawn out one
of my concerns. Technical (here I use the term covering a wide
range, including "management") evaluators need to stick to
weighing the validity and appropriate application of proposal
statements falling within their expertise to advise the
selection authority on whether the proposal indeed fits the bill
or not.
I think one weakness in acquisition is the widespread
misunderstanding of its workings by most non-acquisition people.
It is akin to some sort of secret fraternal order with arcane
rituals in the minds of most line managers and workers in
agencies other than the big acquiring organizations. Too often
the rituals bring plague, drought and misery instead of happy
results back home. Some evaluators come determined to see that
this does not happen and, instead of focusing on the job at
hand, try to influence the process in other ways.
Nothing will cure the evaluator sent on a mission, but a focused
briefing on the process and the rationale for the selection at
hand may reduce fears and keep evaluators focused on their
specific role.
My personal view is that some sort of intensive briefing on the
context of the specific evaluation, including some discussion of
how contract type is being applied, should be required.
The procurement integrity briefings are often required, and home
organizations "spare" candidate evaluators for those. Some
apparently do not attend the more voluntary briefings and
requested working sessions on their role in the selection.
It appears to me you have things in hand. The story of the
evaluators apparently driving a spur-of-the-moment proposal
incorporation reminded me of cases I've heard of where things
were out of control.
By
Eric Ottinger
on Wednesday, September 20, 2000 - 05:04 pm:
Ramon,
Sure. Isn’t this one of the issues that is best dealt with
proactively in the planning stage?
What kind of contractor do you see winning this award?
Who do you expect to compete?
How can we convince potential offerors that we are serious about
doing a fair competition?
How do we structure the selection factors to get you what you
need?
If this is done right and the technical side has been fully
involved in the planning process, the technical side should be
confident that the process will give them what they need.
Eric
By
Ramon Jackson on Thursday, September 21, 2000 - 12:08
am:
Eric,
If it isn't dealt with in the planning stage it will likely bite
you -- with one nip being what we have been discussing.
Unfortunately, I have secondhand knowledge and some personal
experience with the technical (requirer) side being "too busy"
or otherwise unwilling to do its part in the planning. Too often
the idea seems to be to bundle a bunch of "requirements" and
throw them over the wall. DoD has a definite process, quite
often ignored, but one that covers most of the bases (sometimes
termed "from lust to dust"). I wish it were followed more
faithfully in spirit and function as well as form.
For the process to work, it takes effort by all parties and a
high degree of understanding and cooperation. One military
officer, who I believe was well regarded, used the terms
"communicate, cooperate and coordinate" as keys. Where the
requirer just wants its requirement met, the contracting office
just wants the rules met, and the program office just wants its
schedule and budget met, without the three working together
reasonably, we get these problems.
The widespread lack of knowledge about just what the others must
do is something I believe to be a problem. Contracting staff
often do not have a "business sense" of the agency's business.
They really don't know how this "requirement" fits into the
mission. Program offices have their own focus, sometimes to the
neglect of other real needs. Requirements people too often want
what they want, with no inclination to learn how to help get
what they can within budget and other constraints.
I don't think cross training is the answer. I do think a sound
general knowledge of the issues and operations of all parties is
necessary for effective procurement.
By
Vern Edwards
on Friday, September 22, 2000 - 10:34 am:
The source selection team
includes the evaluators, the contracting officer, and the source
selection authority. These persons should receive the following
two-part training before they go to work on the source selection
plan or the RFP:
First, the team should be taught the rules of source selection,
as laid down in FAR Part 15 and as interpreted by the GAO and
the agency's policy-makers, with emphasis on planning and
evaluation processes. This training should describe the must-dos
and the must-not-dos that the team members need to know. The
objective is not to make experts of the evaluators or the SSA,
but to make them knowledgeable enough to do their jobs. This
training can be done in one-half day.
Second, and most importantly, the team should receive training
in the business processes of evaluation and decision-making.
They must be taught the meaning of "evaluation," "evaluation
factor," "relative importance," "risk," and "tradeoff analysis."
They should be given a step-by-step description of what should
take place during evaluation and tradeoff analysis processes and
the relationship between the results of those activities and the
legal requirements for source selection documentation. They
should be taught the difference between "evaluation" and
"scoring" and the function of scoring in a decisionmaking
process.
The second part of the training is more important than the first
part, because it will teach the team how to make a sound
business decision, which is the main objective in source
selection. Two or three days or more should be devoted to this
training, depending on the backgrounds of the members of the
team.
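To make "tradeoff analysis" concrete, here is a deliberately
simple sketch in Python; every offeror name, rating, and dollar
figure is invented for illustration, and real selections involve
many more factors. The point is the form of the question, not
the numbers:

offerors = [
    {"name": "Offeror A", "technical": "Outstanding", "risk": "Low",  "price": 10_400_000},
    {"name": "Offeror B", "technical": "Good",        "risk": "Low",  "price": 9_100_000},
    {"name": "Offeror C", "technical": "Acceptable",  "risk": "High", "price": 8_700_000},
]

# Frame each offeror's evaluated merit against its price premium
# over the lowest-priced offer.
baseline = min(offerors, key=lambda o: o["price"])
for o in offerors:
    premium = o["price"] - baseline["price"]
    print(f"{o['name']}: {o['technical']}, {o['risk']} risk, "
          f"premium over {baseline['name']}: ${premium:,}")

The arithmetic only sets the table. Whether Offeror A's
"Outstanding" rating is worth a $1,700,000 premium over Offeror
C, in light of C's high risk, is a business judgment, and that
judgment is what the source selection documentation must
explain.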
Interestingly, I have found that most contracting officers have
little or no knowledge of business evaluation and decisionmaking
processes and cannot provide training to the other members of
the source selection team in that regard. When I ask COs about
"evaluation" they go into a recitation of the rules in FAR Part
15, and perhaps into a discussion of GAO decisions. But they are
lost when I ask them, "What is an evaluation factor?" They can
give me examples of evaluation factors, but they cannot give me
the logical intension (not intention) or connotation of the
term. They cannot explain the true meaning of "relative
importance," or its significance in source selection
decisionmaking. They cannot describe and explain the tradeoff
process.
This is knowledge that contracting officers should have and
share with the other members of the evaluation team. Some book
recommendations:
(1) Decision Analysis and Behavioral Research, by Detlof
von Winterfeldt and Ward Edwards (Cambridge University Press,
1986), a thorough treatment of theory and practice, funded by
the U.S. Navy;
(2) Making Hard Decisions: An Introduction to Decision
Analysis, 2nd ed., by Robert T. Clemen (Duxbury Press,
1996), a college text;
(3) Decision Analysis for Management Judgment, 2nd ed.,
by Paul Goodwin and George Wright (John Wiley & Sons, 1998), my
favorite--an excellent practical treatment with lots of worked
examples; and
(4) Smart Choices: A Practical Guide to Making Better
Decisions, by John S. Hammond, Ralph L. Keeney, and Howard
Raiffa (Harvard Business School Press, 1999), a short handbook
by three giants in the field.
This is the kind of knowledge that Dee Lee is talking about.
By
Ramon Jackson on Friday, September 22, 2000 - 06:39 pm:
Interesting. Vern has hit another
evaluation team problem that has bothered me.
When evaluating the evaluators' material, how often are we
confronted with a high rating supported by an extremely wispy
strength, or a low one with a weakness weakly described? How
many really are "I don't like it" without a shred of
documentation? Has anyone looked over some "evaluator's" work
and wondered whether they were even there, or whether they were
evaluating the proposal or a movie mentioned at a break?
Some of this is fear of "essay" questions; it is one reason for
designing past performance or evaluation forms without check
boxes. Even when forced to write, some will in effect "write a
check box." Still, I'm afraid much of it is simply lack of skill
at evaluating -- and that is a bit scary when the evaluators are
the technical experts!
By
Eric Ottinger
on Tuesday, September 26, 2000 - 06:03 pm:
Vern,
Best way I can put this—
This is good advice for people who think this is good advice.
Training is always good and there are some of us who are always
ready to tell other people what to do and how to think.
You seem to assume that the selection is managed like some
hippie commune with decision making IAW some concept of
“participatory democracy.” Otherwise, there is no reason to give
everybody your briefing on the theory of correct decision
making.
I don’t see it. Only one person on the team gets to be the SSA.
Only one person has the ultimate responsibility for making the
trade-offs and making the decision.
If the SSA doesn’t have good business sense or an understanding
of appropriate decision making techniques, you have picked the
wrong SSA. A broad view and good business sense are the
ESSENTIAL PRE-QUALIFICATIONS for an SSA.
Like I said -- Don’t force the decision and don’t jump to the
bottom line.
Evaluators need to understand the rules for discussions if they
participate in discussions. This should be done with some
reference to the Comp. Gen., but I don’t see a need to cite
specific case law.
There is always someone on the team who wants to go off in a
corner and fiddle with his/her personal point scoring system. I
usually permit this because it makes him happy, and he has less
opportunity to cause trouble if he is off in a corner fiddling.
(It is usually a "him.")
Eric
By
Vern Edwards
on Tuesday, September 26, 2000 - 08:19 pm:
Eric:
I said that evaluation teams need to be trained in two topics:
(a) the source selection rules, and (b) business process
decision making. It appears to be the latter recommendation that
bothers you, so here is what I said:
"Second, and most importantly, the team should receive training
in the business processes of evaluation and decision-making.
They must be taught the meaning of 'evaluation,' 'evaluation
factor,' 'relative importance,' 'risk,' and 'tradeoff analysis.'
They should be given a step-by-step description of what should
take place during evaluation and tradeoff analysis processes and
the relationship between the results of those activities and the
legal requirements for source selection documentation. They
should be taught the difference between 'evaluation' and
'scoring' and the function of scoring in a decisionmaking
process.
The second part of the training is more important than the first
part [about the source selection rules], because it will teach
the team how to make a sound business decision, which is the
main objective in source selection. Two or three days or more
should be devoted to this training, depending on the backgrounds
of the members of the team."
You think that because the SSA makes the decision the team
members don't need to know anything about decision-making
processes. But I think that they do, because they must plan the
source selection evaluation on the SSA's behalf. I think that
they need to know what kind of information the SSA needs in
order to make a sound decision. They must make findings and they
must document the bases for their findings. They should know how
to do that properly.
As I'm sure you know, most SSAs rely on the input that they get
from the evaluators. They usually don't evaluate the proposals
de novo themselves. The higher the rank of the SSA the
more likely that is to be the case.
I've been the CO in source selections where the SSA was anything
from a Lt.Col. to a Lt.Gen. I have never been in a position to
pass judgment on the SSA's qualifications or to object to his or
her appointment, even if I thought that he or she was
unqualified. The SSA was the SSA because someone above my pay
grade said so. I suspect that you don't get much say about
SSA appointments either, so I find your comments about picking
an SSA to be specious, at best.
If I am the CO, I want the evaluators to have a good understanding of
how to do what they have been assigned to do--evaluate competing
proposals, make sound findings based on sound principles of
evaluation, and report those findings clearly. I have learned
that most COs who want training for their evaluation teams want
training about effective evaluation and decision-making
processes and how to streamline them.
I never suggested that the evaluation team needs to know about
"decision theory," so I don't really know what you're carping at
me about. Frankly, I'm surprised that you object to my
recommendations. If you don't like my suggestion then don't take
it. You are right--my advice is good advice for people who think
that it is. I provided it in response to Ramon's inquiry. He
seemed to think it good.
In the Summer 2000 issue of Acquisition Review Quarterly,
which is published by the Defense Systems Management College,
Robert Lloyd, CPCM and NCMA Fellow, a former DOD contracting
official who is now with the Dept. of State, recommends that
agencies use my evaluation streamlining techniques as a way of
eliminating one of what he calls "contracting pathologies." See
"Government Contracting Pathologies," pp. 245-258.
Acquisition Review Quarterly is available on the Web. You
can get to it through the agency newsletter link here at Wifcon.
By
joel hoffman on Wednesday, September 27, 2000 - 01:17
am:
Vern, I am in 100% agreement with
you.
Eric, there is no need to get into another personal
confrontation over this.
The SSA doesn't need a source selection evaluation team or
advisors if he/she intends to personally review the proposals
and develop in-depth, independent evaluations.
Eric, I don't encourage and in fact have learned to prohibit
individual scoring of proposals by evaluation board members
during their initial, personal proposal review. It is a waste of
their and my time. We rate proposals by a consensus process. The
board develops a consensus narrative evaluation, listing the
strengths, weaknesses, deficiencies, etc. for each factor. In my
system, the score or adjectival rating simply "falls out", on
the basis of the narrative comments. In my opinion, you should
never allow "someone on the team who wants to go off in a corner
and fiddle with his/her personal point scoring system" to do
that. Total waste, when scoring or rating by consensus. Just my
opinion.
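To illustrate what I mean by the rating simply "falling out" of
the narrative, here is a minimal sketch in Python. The rule,
thresholds and labels are purely notional, not my actual system;
the point is the direction of the logic. The board writes the
consensus narrative first, and the rating is derived from it,
never the reverse:

def rating_from_findings(strengths, weaknesses, deficiencies):
    """Derive an adjectival rating from consensus narrative findings."""
    if deficiencies:                       # any deficiency controls
        return "Unacceptable"
    if len(weaknesses) > len(strengths):
        return "Marginal"
    if strengths and not weaknesses:
        return "Outstanding"
    return "Acceptable"

# Consensus findings for one factor (invented for illustration):
strengths = ["innovative phasing plan", "depth of key personnel"]
weaknesses = ["thin quality control staffing"]
print(rating_from_findings(strengths, weaknesses, []))  # Acceptable

The narrative comments carry the evaluation; the adjective is
just shorthand for them.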
Happy Sails! Joel
By
Vern Edwards
on Wednesday, September 27, 2000 - 09:15 am:
Joel:
Thanks.
One reason that it is important for the evaluation team members
to receive training in effective evaluation and decision
principles and procedures is that SSAs often (but not always)
want award recommendations from them. In order to make a
recommendation, the evaluators must reach a consensus decision
among themselves, even though the SSA can reject it. They should
reach their consensus on the basis of sound principles
and procedures. For example, they should know the proper way to
make a tradeoff analysis.
Current USAF guidance for the conduct of small source selections
says:
"3.3 Award Decision. The SSA takes into consideration all the
evaluation results provided by the evaluation team and makes a
decision based upon integrated assessment as to which proposal
provides the best overall value to the Government. Award is then
made based upon this decision. The award decision will be
documented in a Source Selection Decision Document (SSDD) as
described in paragraph 4.13 below. Due to the noncomplex
nature of the basic source selection process and the use of a
small team, a consensus decision is the natural outcome of the
process. In the rare instances when there is disagreement
between the team members regarding which offeror is the
successful offeror, the CO, as the SSA, is final authority for making the
selection decision."
USAF Source Selection Procedures Guide, March 2000.
Italics in original.
It is clear from this passage that the Air Force expects the
team to reach a consensus decision about which firm should
receive the contract. In order to do that effectively, the team
should receive some training in effective decision-making
processes.
By
Stan Livingstone
on Wednesday, September 27, 2000 - 10:38 am:
Joel and Vern,
Experiences at my agency parallel the points you both made -
consensus opinions without the need for individual scoring work
best. In fact, we're trying to get away completely from point
scoring and rely on the narrative as much as possible. In some
instances, we use what we term "succinct narrative" to capture
relative differences among offerors and even avoid adjectival
ratings - the narrative speaks for itself. To get to this level
of consensus, the team members need training in effective
decision making, as Vern states. This may also include
facilitation support to ensure all members have the opportunity
to air their views equally and openly (not a hippie commune type
setting - just kidding, Eric!). We found this succinct narrative
forces people to focus more on what really matters rather than
splitting hairs over whether a company gets a 4 or 5 on some
mundane issue.
By
Vern Edwards
on Wednesday, September 27, 2000 - 12:50 pm:
Scoring has long been the subject
of debate. Some people argue for numbers, some for adjectives,
some for color-coding, some for scoring by individual
evaluators, some for scoring only at the team level, some for no
scoring at all.
The thing to remember about scoring is that it is merely a data
simplification device. The function of scoring is to represent
complex findings in a simple form. A number, adjective, or color
code is simply a symbolic, summary expression of complex
findings. For example, the characterization of an offeror's past
performance as "marginal" presumably summarizes more complex
findings of fact and opinion.
Simplifying the evaluation data facilitates tradeoff analysis by
making it easier to compare offerors. There is a considerable
body of research that has demonstrated that people have a hard
time processing voluminous or complex information. Scoring is
only necessary when an agency uses a large number of
differently-weighted evaluation factors, such that the
evaluation findings are voluminous and complex, making tradeoff
analysis difficult. Scoring can also be helpful in summarizing
complex findings about individual factors, such as experience or
past performance.
However, data simplification sacrifices detail. That's why an
SSA should not base his or her decision on scores, but should
use scores just to get a general orientation to the evaluation
team's findings and to make preliminary tradeoffs that are
subject to validation on the basis of the more detailed
documentation.
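A minimal sketch of the point, with invented weights and
ratings: two offerors with very different findings profiles can
collapse to identical rolled-up scores, which is exactly the
detail the SSA loses by looking only at totals.

weights = {"technical": 0.5, "management": 0.25, "past_performance": 0.25}

offeror_a = {"technical": 9, "management": 5, "past_performance": 5}
offeror_b = {"technical": 5, "management": 9, "past_performance": 9}

def weighted_total(ratings):
    # Roll the complex findings up into a single number.
    return sum(weights[f] * r for f, r in ratings.items())

print(weighted_total(offeror_a))  # 7.0
print(weighted_total(offeror_b))  # 7.0

Identical 7.0 totals, yet A is technically strong where B is
weak. The information the decision actually turns on is in the
underlying findings, not in the totals.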
Should individual evaluators assign scores? Only if they need to
consider multiple or complex factors and to integrate their
individual findings for presentation to the other members of the
team. It may facilitate discussion. On the other hand, an
evaluator may get overly committed to his or her position that
Offeror A's past performance was only "satisfactory," making it
harder for the team to reach a consensus. This is especially
likely if there was no initial agreement on the definitions of
the scoring terms.
As far as the numbers vs. adjectives debate goes, if
simplification is what you want, then numbers permit the
greatest degree of simplification. But I won't debate the
relative merits of numbers vs. other forms of scoring. Some
people like numbers and some don't; I'll let it go at that. What
matters in the end is the quality of the evaluation and
decision-making process, not the type of scoring system that you
use. If you like numerical scoring, then use it. If you don't,
then don't.
However, unless agency policy provides otherwise, contracting
officers should allow SSAs and evaluators to use the system that
works best for them. The CO should not impose his or her
preferences in that regard on the other members of the team.
Nothing is more embarrassing than watching some poor, innumerate
CO try to tell a bunch of systems engineers why numerical
scoring is a bad idea. After all, they use it all the time when
performing complex system design tradeoff analyses.
The CO should make sure that he or she understands how the
evaluators’ system will work and the kinds of results that it
will produce, and how the SSA and the evaluators will use those
results. The CO should be able to point out logical or
structural defects in the system that could distort the
evaluators’ findings. (In order to do that, COs must possess the
kind of knowledge that Dee Lee wants them to have. The kind of
knowledge that you get in a good business school course in
market research or decision-making.) The CO should make sure
that the SSA knows not to base his or her decision on the scores
or explain or justify it in terms of the scores. The CO should
also make sure that the decision is consistent with the RFP’s
statement of the evaluation factors and their relative
importance.
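Here is one example, with invented numbers, of the kind of
structural defect I mean: a common scheme that awards price
points relative to the lowest offer. Under that scheme, adding a
third offeror who cannot possibly win can flip the ranking of
the top two.

def combined_score(tech, price, low_price):
    # 60/40 weighting; price points scaled against the lowest offer.
    return 0.6 * tech + 0.4 * (100 * low_price / price)

pool = {"A": (90, 10_000_000), "B": (80, 8_000_000)}
low = min(price for _, price in pool.values())
for name, (tech, price) in pool.items():
    print(name, round(combined_score(tech, price, low), 1))
# A scores 86.0, B scores 88.0: B is ranked first.

pool["C"] = (50, 5_000_000)   # add a clearly losing, low-priced offer
low = min(price for _, price in pool.values())
for name, (tech, price) in pool.items():
    print(name, round(combined_score(tech, price, low), 1))
# Now A (74.0) beats B (73.0). C (70.0) still loses, yet its mere
# presence reversed the ranking of A and B.

That is the sort of defect that training in evaluation and
decision-making processes equips a CO to catch before the
evaluation starts.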
By
bob antonio on
Wednesday, September 27, 2000 - 01:04 pm:
Vern:
I read the article by Lloyd. It is a worthwhile read and amusing
too. In one area, he mentions the confusion with the terms
"procurement" and "acquisition." Our government-wide
"procurement" regulation is the Federal "Acquisition"
Regulation. I think the change from procurement to acquisition
occurred in the Carter Administration.
Anyway, Lloyd said to forget "procurement" and "acquisition" and
call it "contracting." Been there; done that. Mr. Lloyd is
welcome to Where in Federal Contracting? anytime.
By
Vern Edwards
on Wednesday, September 27, 2000 - 03:34 pm:
Bob:
Yes, as I recall, the change from "procurement" to "acquisition"
came in 1977. That's when the Armed Services Procurement
Regulation (ASPR) became the Defense Acquisition
Regulation (DAR). I have a copy of the DPC (Defense
Procurement Circular) that made the change around here
somewhere.
By
Eric Ottinger
on Thursday, October 12, 2000 - 07:15 am:
It takes two to tango, Joel.
Nobody has had a personal confrontation except where he was
looking for a personal confrontation.
In several of the more rancorous threads I have taken a load of
stuff for supporting the point of view in the regulations and
other official policy documents. This is another one.
Now any one of us can agree or disagree with policy. However, to
be fair to our readers we need to keep our personal opinions
separate from formal policy.
There is a lot of bad information and confusion floating around.
Most of it results from individuals imposing their personal
preferences or theories on formal policy or reading such into
formal policy. I haven’t gone out of my way to “confront”
anyone, but it is fair to say that this kind of misleading
argument bothers me, and I am usually motivated to say so.
This thread has a weird self-referential quality to it. You are
illustrating the tendency many of us have, of forcing a
consensus by treating a minority opinion as some kind of breach
of etiquette. This isn’t the best way to get to consensus, and
that is pretty much the crux of our disagreement.
I believe in consensus. But I see it as a natural result of a
process, not an end in itself. I always allow for the
possibility of a minority opinion because I respect my
evaluators. I recruit strong-minded, highly competent people for
the evaluation team, and I expect that there will be different
points of view. I tell the team that they will move toward
consensus, and they usually do.
(Old) FAR 15.612(d)(1): “The source selection authority shall
consider any rankings and ratings, and, IF REQUESTED, any
recommendations prepared by evaluation and advisory groups.”
I have had several really convoluted arguments with people who
can read plain English but want to do something else. The (old)
policy was: “You don’t do a recommendation unless and until you
are asked.” But this is very hard for some people. They feel a
compulsion to carry the evaluation all the way through to the
bottom line decision, and they feel the job is somehow
incomplete if they don’t.
(Rewrite) FAR 15.308 Source selection decision: “The source
selection authority’s (SSA) decision shall be based on a
comparative assessment of proposals against all source selection
criteria in the solicitation. While the SSA may use reports and
analyses prepared by others, THE SOURCE SELECTION DECISION SHALL
REPRESENT THE SSA’S INDEPENDENT JUDGMENT. The source selection
decision shall be documented, and the documentation shall
include the rationale for any business judgments and tradeoffs
made or relied on by the SSA, including benefits associated with
additional costs. Although the rationale for the selection
decision must be documented, that documentation need not
quantify the tradeoffs that led to the decision.”
There may be a shade of difference between the old FAR and the
Rewrite. The Rewrite doesn’t address the question of when and
under what circumstances the evaluation team should make a
recommendation to the SSA. On the other hand it is perfectly
clear that any evaluation team that tries to paint the SSA into
a corner has overstepped its authority.
This is a democratic process but only up to a point. The
Chairman will brief to the SSA, offeror by offeror and factor by
factor. It will be left up to the SSA to “integrate” all of this
and make a decision. If the SSA should want a recommendation or
a show of hands or whatever, that is his/her prerogative not
mine.
AFFARS 5315.308 Source selection decision: “An SSDD [Source
Selection Decision Document] shall be prepared for all Air Force
source selections and must reflect the SSA’s integrated
assessment and decision. The SSDD must be the single summary
document supporting selection of the best value proposal
consistent with the stated evaluation criteria. The SSDD clearly
explains the decision and documents the reasoning used by the
SSA to reach a decision.”
The question is not whether or not there should be an
“integrated” assessment. We all have a designated role to play
in this process. We need to respect the prerogatives of each
participant. The evaluators are there to evaluate independently,
based on their professional judgment and experience. They are
not an “amen chorus” for the Chairman or the PCO. The SSA is
there to make a decision. He (or she) should never be treated
like a rubber stamp or a figurehead.
“Integrated” in this context does not mean “add up the points
and award to an offeror with the most points” and it doesn’t
mean “plug all of the inputs into the algorithm and see what the
computer spits out.” In fact, it means pretty much the opposite.
It means, look at the big picture and use your good judgment.
CGEN Theisinger und Probst Bauunternehmung GmbH, (Mar. 25, 1997)
“Source selection officials in negotiated procurements have
broad discretion in determining the manner and extent to which
they will make use of technical and cost evaluation results.
Grey Advertising, Inc., 55 Comp. Gen. 1111, 1120 (1976), 76-1
CPD 325; Mevatec Corp., B-260419, May 26, 1995, 95-2 CPD 33.
Agencies may make cost/technical tradeoffs in deciding between
competing proposals and the propriety of such tradeoffs turns
not on the difference in technical scores or ratings per se, but
on whether THE SELECTION OFFICIAL’S JUDGMENT concerning the
significance of that difference was reasonable and adequately
justified in light of the RFP evaluation scheme. See Wyle Labs.,
Inc.; Latecoere Int’l, Inc., 69 Comp. Gen. 648 (1990), 90-2 CPD
107.”
CGEN Mevatec Corporation, (May 26, 1995)
“In reviewing protests concerning the evaluation of proposals,
the General Accounting Office will examine the agency’s
evaluation to ensure that it had a reasonable basis. … Further,
source selection officials in negotiated procurements have broad
discretion in determining the manner and extent to which they
will make use of the technical and cost evaluation results. In
exercising that discretion, they are subject ONLY to the tests
of rationality and consistency with the established evaluation
factors.”
CGEN Juarez & Associates, Inc., (Feb. 08, 1996)
“Source selection officials in negotiated procurements have
broad discretion in determining the manner and extent to which
they will make use of the technical and cost evaluation results.
In exercising that discretion, they are subject only to the
tests of rationality and consistency with the established
evaluation factors. See Mevatec Corp., B-260419, May 26, 1995,
95-2 CPD 33. A SOURCE SELECTION OFFICIAL’S DEVIATION FROM THE
RECOMMENDATION OF A TECHNICAL EVALUATION PANEL PROVIDES NO BASIS
TO UPSET AN AWARD unless the source selection official’s award
decision itself lacks a reasonable basis. See Gary Bailey Eng’g
Consultants, B-233438, Mar. 10, 1989, 89-1 CPD 263.”
CGEN KRA Corporation, (Apr. 02, 1998)
“In this regard, the SSO explained that his primary focus was on
whether or not to make an award to [Offeror A], the technically
seventh-ranked offeror with a technical score of [deleted] at a
relatively low cost of $[deleted]. The SSO explained that he and
the evaluators had closely reviewed and analyzed the differences
between the Walcoff and the [Offeror A] proposals because of the
possibility that [Offeror A], whose technical evaluation was
only [deleted] points lower than Walcoff’s and whose price was
$[deleted] million less than Walcoff’s, should have been awarded
a contract. HOWEVER, IN BALANCING THE TECHNICAL AND COST
FACTORS, THE SSO WAS DEEPLY CONCERNED THAT [OFFEROR A’S]
PROPOSAL HAD RECEIVED A SCORE OF 2, OR “POOR,” ON ONE SUBFACTOR.
The SSO believed that an offeror with such a low score on any
criterion presented a real risk of unacceptable technical
performance. In contrast, Walcoff’s proposal was evaluated as a
5 or better on all evaluation factors and subfactors. Taking
this performance risk potential into consideration, the agency
concluded that award to [Offeror A] was not warranted,
notwithstanding [Offeror A’s] relatively low cost.”
CGEN Development Alternatives, Inc., (Aug. 06, 1998)
“We do not agree with DAI that the SSO was required to resolve
the differences between the two evaluators who believed that
Barents could better ensure successful transition and the one
evaluator who favored DAI; SUCH DIFFERENCES ARE NOT UNCOMMON
GIVEN THE SUBJECTIVE NATURE OF SUCH JUDGMENTS. Since we have
concluded that the evaluation was reasonable, we have no basis
to question the SSO’s reliance on the fact that the majority of
the evaluators had more confidence in Barents to complete the
project. See KRA Corp., B-278904, B-278904.5, Apr. 2, 1998, 98-1
CPD 147 at 13-14; see also SEAIR Transp. Servs., Inc., supra.”
CGEN, Basic Contracting Services, Inc., (May 18, 2000)
“The SSO reviewed the full technical evaluation record
(including significant strengths, strengths, weaknesses, and
concerns) cited for each proposal, as well as the resulting
adjectival ratings, past performance ratings and cost evaluation
results. IN HIS SELECTION DECISION, THE SSO SPECIFICALLY ADOPTED
THE EVALUATION DETERMINATIONS DETAILED IN THE PEB’S
RECOMMENDATIONS AND USED THESE FINDINGS IN HIS OWN TRADEOFF
ANALYSIS.”
When I said, “I am not always successful (even with 1102s) but I
constantly urge people not to force the correct answer or jump
to the bottom line” that was straight out of the book.
In the Air Force Guide for the Basic (i.e. small) source
selections: “Due to the noncomplex nature of the basic source
selection process and the use of a small team, a consensus
decision is the natural outcome of the process. In the rare
instances when there is disagreement among the team members
regarding which offeror is the successful offeror, the CO, as
the SSA, is final authority for making the selection decision.”
(1) This posits a simple “noncomplex” source selection. (A
“complex” source selection would be another matter.)
(2) Consensus is the “natural outcome” of the process. (I agree
100%. Don’t force it; let the process work.)
(3) There will be “rare instances” when the team members
disagree. In that case the SSA (i.e. CO) decides.
The Median source selection is a different matter: “Under the
median source selection procedures, the SSA PERFORMS an
integrated assessment AFTER the SSET evaluation briefing and
selects an offeror for award.”
The language for an Agency level source selection is the same,
only there is an SSAC as well as an SSET.
DFARS 215.303 Responsibilities.
“(b)(2) For high-dollar value and other acquisitions, as
prescribed by agency procedures, the source selection authority
(SSA) shall approve a source selection plan (SSP) before the
solicitation is issued. The SSP--
…
(C) Shall include, as a minimum--
…
(3) A description of the evaluation process, including specific
procedures and techniques to be used in evaluating proposals;
and …”
In this light, it would not be prudent to introduce any
“specific …techniques to be used in evaluating the proposals,”
just before you open the boxes. All of this should have been
thought through and written down, well before that point.
Lest I be misunderstood--
Is there a middle ground between “just briefing the factors one
by one” and “painting the SSA in a corner”? Sure.
Do I have any problem with strict consensus (i.e. unanimity)?
Like the Comp. Gen. I believe “differences are not uncommon
given the subjective nature of such judgments.” However, if we
don’t have 100% agreement on a narrative comment, we keep
working until everyone is comfortable. Partly, this is a
recognition that it is the narrative comments that really
matter. Partly, it is a recognition that if you have three weak
green votes and two votes on the high side of yellow, the time
and effort required to get everyone to agree on the rating is
generally more trouble than it is worth.
Is this a hippie commune, everybody-participates exercise?
Normally, no. Putting aside the small dollar simple source
selections contemplated by the AF Guide, the evaluators prepare
the ratings and the narrative comments. The Chairman (or the
SSAC), with the assistance of the PCO, “integrates” the
evaluation in the process of preparing the briefing to the SSA
and a draft decision memorandum. The SSA gets the final word.
Since I believe that it is best if the PCO is the guardian of
the process and directly involved in the decision only in the
most extreme circumstances (i.e. the SSA wants to make a
selection inconsistent with the published selection factors), I
have no problem with the AF policy re simple, “non-complex,”
small dollar selections. I read it as an exhortation to the
technical side to get their act together, and a caution to the
PCO to not impose his/her personal preferences unless absolutely
necessary. Often the PCO defaults to LPTA (Low Price Technically
Acceptable) because it appears safe, and the technical side
defaults to a whine and complain mode. The message to the
technical side is to make a clear case for a best value
selection. The message to the PCO is to accept this
recommendation unless it is patently unreasonable.
My personal Eric Ottinger preference is “the more the merrier.”
My goal is to maximize the sense of participation. I have always
recommended that the entire SSET participate in discussions with
the offerors. If I were SSA I would ask for a show of hands
before I made a selection. But that is my personal style or
preference, not policy. As PCO, I recommend, but I don’t insist.
Our Navy readers should pay heed to NAPS 5215.305a.(1) Proposal
evaluation.
“Cost or price evaluation. Methods of evaluation which assign a
point score to cost or price and combine it with point scores
for other evaluation factors generally should not be used. Point
scores can be helpful in summarizing subjective evaluation of
technical and other factors, but are not needed in evaluating
cost or price and tend to obscure the tradeoff between
cost/price and other factors, rather than clarifying it. …”
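A toy example, with invented numbers, of the distortion the NAPS
is warning about. Fold price into the point total and you can
manufacture a "tie" that hides the very dollars the tradeoff is
supposed to weigh:

# (name, technical points out of 100, price)
offers = [("X", 85, 10_000_000), ("Y", 80, 8_000_000)]

low_price = min(price for _, _, price in offers)
for name, tech, price in offers:
    price_points = 20 * low_price / price   # a common, and flawed, formula
    print(name, round(0.8 * tech + price_points, 1))
# Both print 84.0.

The "tie" conceals a $2,000,000 price difference and a 5-point
technical difference -- precisely the tradeoff the SSA is
supposed to weigh in dollars and judgment, not in points.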
Summing up. I have no problem training my evaluators to be
evaluators, and most of this has been well covered in this
thread. I am not going to train my evaluators to be contracting
officers.
As the DFARS clearly states, any “specific evaluation
techniques” intended for the source selection should be thought
out and written down, well before the proposals arrive. (This
would not, in my opinion, be a bar to adopting a particular
approach to address a specific situation that arises in the
course of the source selection.)
Eric |