By GeneJ on Tuesday, May 06, 2003 - 02:37 pm:
Has anyone a good reference on establishing multiple evaluation teams to look at the various subfactors (within the technical factor)?
We have a complex hardware/software evaluation ahead of us.
There seems to be more offeror interest in this effort than in
our recent past as well. We are expecting more than the many
(good) bidders we had last time. So far so good.
My problem today is that our legal advisor (read: Czar) is insisting that every technical evaluator rate every technical subfactor. Legal's position is based on the fact that the FAR and the supplements on FARSite don't reference SSEBs, TEPs, or SSETs, only the singular forms of those words (SSEB, TEP, and SSET).
I must admit that this is true (aggravating, but true). All of the material that I can find showing subfactor teams goes way back in time, to my days at Wright-Patterson and 80-person evaluation teams. I won't argue that it is relevant to the fairly slimmed-down structure we have now.
The only current support that I can find for my position is in this decision from 1999: Proposal Evaluation - Not All Evaluators Evaluate All Information Requested by Agency, North State Resources, Inc., B-282140, June 7, 1999.
We could have each evaluator look at the software maturity matrix, but some people have already made it clear that they intend to be really unhappy about this. Also, I agree that it would be a waste of time. I'm not so pigheaded after all this research that I can't support Legal's position, but I just can't see any value here. Are other people subdividing their technical evaluation teams?
GeneJ
By jerry zaffos on Tuesday, May 06, 2003 - 02:44 pm:
Can we assume that Legal believes that every technical evaluator is competent in each area covered by the subfactors? Otherwise, having someone who is not knowledgeable in a particular area evaluate the proposals in that area is not only a waste of time but dangerous, since it could lead to the wrong selection. Other than the grammar, what is Legal's logic?
By GeneJ on Tuesday, May 06, 2003 - 03:35 pm:
It's the old "if it's not permitted in the FAR, it's not permitted" versus "if it's not forbidden in the FAR, it's not forbidden" argument. In fairness, maybe another way to state it is: why give our opponents another argument to try in a protest when, for a small expense, we can be sure we are coloring inside the lines?
GeneJ
By Vern Edwards on Wednesday, May 07, 2003 - 10:25 am:
Gene:
If your legal advisor has really taken this stance as a legal
position, and you are not making this up just to amuse us, then
your legal advisor is incompetent. I suggest that you check to
make sure that he or she really graduated from law school or
even has an undergraduate degree.
There is no requirement that the same evaluators evaluate every
proposal or that the same evaluators evaluate every factor or
subfactor. There is no such requirement in the FAR or the DFARS
or any other regulation of which I am aware, nor has the GAO
ever taken such a position.
Instead of going on the defensive, tell your "legal advisor" to provide you with a regulatory or case-law citation in support of his or her position.
By formerfed on Wednesday, May 07, 2003 - 10:47 am:
One of the "eye-opening" experiences I've gotten from this forum is that apparently many lawyers out there are working with COs without understanding contracting. I think Gene has one.
I was fortunate in my entire Federal career to have really good,
knowledgeable, and seasoned attorneys. I took for granted this
was the norm. Then Vern and several others pointed out that
isn't always true.
Gene, I would be getting nervous if I were in your shoes, with that person as your lawyer, a complex evaluation ahead, and a lot of offeror interest. Do you have a means to bring in another attorney?
By Anonymous on Wednesday, May 07, 2003 - 12:22 pm:
Note that Gene said his "legal advisor (read Czar) is insisting," implying that it is going to take more than a demand from Gene to provide a legal basis to overcome this idiocy (and idiocy it is).
The very idea that every panel member in a "complex
hardware/software evaluation" will have competence to
effectively evaluate in every area is itself, as Vern points
out, incompetent.
Unfortunately, in relative status, the legal "Czar" probably holds the winning cards over the mere contracting staff of "business advisors" within the agency.
Perhaps we can find examples Gene can use as solid evidence. I know it is typical in competent agencies to apply specialized knowledge in these cases. Can we get past the sensitivity of source selection details to find support for Gene?
By formerfed on Wednesday, May 07, 2003 - 01:32 pm:
Anonymous of 5/7 - 12:22,
Good point.
Gene,
This is from Nash and Cibinic's "Formation of Government
Contracts:"
Objectivity is generally sought by having the same evaluators review each proposal or portion of a proposal. However, if it is impractical to do so, proposals can be evaluated by different personnel. See Design Concepts, Inc., Comp. Gen. Dec. B-186125, 76-2 CPD 365 (30 proposals divided among the evaluators); T.V. Travel, Inc., 65 Comp. Gen. 109 (B-218198), 85-2 CPD (37 proposals evaluated in six groups); Innovative Logistics Techniques, Inc., Comp. Gen. Dec. B-275786.2, 97-1 CPD (one evaluator reviewed both proposals and two evaluators each reviewed one proposal). Oral presentations can also be evaluated by different personnel, Quality Elevator Co., Comp. Gen. Dec. B-276750, 97-2 CPD. The key question is whether the evaluation fairly reflects the strengths and weaknesses of each proposal.
Agencies may change evaluators during the course of a procurement. For example, initial proposals may be evaluated by one set of evaluators, and BAFOs by another, Data Flow Corp., 62 Comp. Gen. 506 (B-209499), 83-2 CPD. Similarly, an agency may re-evaluate proposals using a more experienced group of evaluators, Pro-Mark, Inc., Comp. Gen. Dec. B-247248.
By Eric Ottinger on Wednesday, May 07, 2003 - 02:00 pm:
GeneJ,
Take a look at page 6 in the Army guide (which is available on the WIFCON site).
“The SSEB is usually comprised of multiple groups of evaluators
who are responsible for evaluating specific areas of the
proposal against the RFP requirement.”
http://dasapp.saalt.army.mil/library/Army_Source_Selection_Guide_Jun_2001.pdf
Arbitrary dicta, based on semantic simple-mindedness and overly
literal-minded readings of regulations, are common in our line
of work. From what I have seen in this forum, lawyers are prone
to this kind of thing, but other self-appointed contracting
experts fall into this trap regularly.
Formerfed,
Cibinic and Nash are a good authority, but your cites predate the Part 15 Rewrite.
Eric
By 12:22anon on Wednesday, May 07, 2003 - 03:02 pm:
Formerfed, I don't think the issue here is the unusual one of not having uniformity of evaluators across the proposals. I would try to avoid that myself when possible. If nothing else, it requires extra work to synthesize strengths and weaknesses as viewed by different eyes.
What Gene has is much more straightforward and much, much less
controversial. His issue, as I understand it, appears to be
resistance to the appointment of subject matter experts to
specific technical criteria groupings. The appointees would then
review all the proposals as they address the criteria within
their area of expertise.
The lawyer is a fool. The very idea of having, say, a database expert weigh in with equal weight as a network expert on network issues is as stupid as the reverse. It is a recipe for disaster in a "complex hardware/software evaluation," comparable to having the cost experts review database architecture and the database experts review cost issues.
That said, my preference would be to have the subject matter experts work in sub-teams after a general technical review of each proposal by all technical members. They would then go off for detailed review, prepare their observations, and report back in a general final review.
My reasons are simple and have been proven by experience. Even subject matter experts can have tunnel vision. The first general review will tend to identify cross-discipline strengths and weaknesses for precise analysis by the appropriate expert teams. The final review will then resolve both the specific and the cross-discipline issues into a coherent position. Each team's report is then incorporated into a full-team position, sometimes with some corrections.
I've seen it work. The result was much better than most of the botched and half-baked results of either full-team review, where non-expert opinions began to sway expert knowledge, or expert review, where the experts went off into a cave and delivered fine little pieces on system fragments, ignoring system-wide issues.
By GeneJ on Thursday, May 08, 2003 - 12:03 am:
WOW.
I must have been bad in my previous life to deserve this. Still, this is not my worst legal experience. For your amusement, and in gratitude (especially for that Army Guide), here are the two that were worse:
First, in Illinois, we had a lawyer who said that any non-lawyer reading a GAO case was practicing law without a license and would be reported to her boss and the Illinois Bar. She even CCed that email to the whole world.
The second was in Maryland, when we awarded to the offeror with the second-highest technical score (slightly) and the lowest cost (by a lot) out of several proposals. We then had to debrief the highest-scoring bidder without showing them the most probable cost adjustments on their own proposal, but on the debriefing charts showing the winner's cost we could only use the number after the most probable cost adjustment. The cost team had inflated both offers by about 10%, so in effect we had to tell the loser that they had both the low cost and the high score. AMC headquarters made us redo the procurement before it got to GAO. With a new lawyer, though.
In the interest of equal time, I feel I must now go to Jerry’s
email list and post two stupid Contracting Officer stories.
Unfortunately or fortunately we seem to draw all of our folks
from the same human race.
GeneJ
By formerfed on Thursday, May 08, 2003 - 07:19 am:
12:22anon,
I tried to highlight in boldface the part of the first sentence I quoted about "same evaluators review each proposal or portion of a proposal," but the formatting didn't work. I thought the "portion" part would get the point across.
GeneJ,
It sounds like you worked at Rock Island (or Joliet) and Aberdeen. Is that right?
By 12:22anon on Thursday, May 08, 2003 - 10:27 am:
Formerfed, I caught that. What I was picking up on was
the following sentence: "However, if it is impractical to do so,
proposals can be evaluated by different personnel." It can
be done, but I expect we would tend to agree that it is the hard
way to do the job and adds some risk.
On the other hand, to get into stories, I've had one or two cases where a single individual appeared to show more variance than might be found in such a scheme. The worst was one in which an evaluator found a particular conclusion and approach to be a weakness in one proposal, while the same evaluator rated another proposal, so similar that the fact was noted by other evaluators, as a strength.
This one evaluator did not seem at all biased. He just seemed to
have likes and dislikes that came and went by the day and even
the hour. Of course he was pretty worthless in the end. His
overall conclusion for all proposals was "good" and the overall
discussion of strengths and weaknesses came to a wash. He was a
useless null.
By Eric Ottinger on Thursday, May 08, 2003 - 11:44 am:
GeneJ,
Actually, the AFFARS does refer to SSETs in the plural. My first
thought was to look at the AF source selection guidance. I
didn’t find anything and defaulted to the Army guidance. On the
second time through the AFFARS I found --
5315.303(a)
…
(b) “The SSA shall: (1)(A) Appoint the SSET chairperson(s) and
the SSAC chairperson and PRAG chairperson (if the SSAC and PRAG
are used);”
It struck me that a software maturity evaluation would be
conceptually similar to a PRAG. I would hope that your lawyer
has heard of a PRAG.
I would agree that asking a nonspecialist to do a software
maturity evaluation makes about as much sense as asking an
amateur to install the electrical wiring in your house.
Since I don’t like to argue with [differently abled intellects]
I would probably finesse the situation as follows:
“Listen up. Our lawyer wants everyone to vote on the software
maturity rating. The software maturity team will do the ratings.
Then we will all vote by acclamation and approve such ratings,
(unless, of course, someone wants to write a minority report).”
I’m feeling affectionate toward lawyers right now, so I will
save my dumb lawyer stories for off-line discussions. However, I
do remember the time that I asked two different lawyers the same
question and received wildly divergent answers. The agency
general counsel berated me for “forum shopping.”
Good Luck,
Eric
By Vern Edwards on Thursday, May 08, 2003 - 03:21 pm:
Eric:
While I suspect that you're not being entirely serious, I have
to say that I would not finesse the situation that way. Voting
is not the same as evaluating and an uninformed vote such as you
described could become problematical. I wouldn't do it.
Unless someone in GeneJ's organization mounts a serious challenge to his legal advisor, his organization will continue to be in the hands of an incompetent person.
Vern
By Eric Ottinger on Thursday, May 08, 2003 - 03:56 pm:
Vern,
If I understand correctly, software maturity is an industry
standard in the same way that ISO 9000 is an industry standard.
Of course, the only people who can determine these ratings are
experts with specialized training and experience.
The lawyer’s obliviousness on this point makes the whole thing a
bit of a joke.
If the whole SSET is going to vote, the individual members of the SSET can either endorse the ratings determined by the software maturity experts or explain why they think they are smarter than the experts.
We had a discussion on the topic of "consensus" in another thread. Acclamation is one acceptable way to reach consensus, though it is always prudent to give dissenters a voice, to make sure that all issues are properly aired out.
I wasn’t being facetious.
I don't recommend to any of our less experienced participants that they fight with their supervisors or their counsel. It is always better to have a less-than-perfect approach with your supervisor and counsel on your side than the reverse.
Eric
By 12:22anon on Thursday, May 08, 2003 - 05:25 pm:
I have to disagree, Eric. It goes way beyond independent things like the software maturity evaluation. Gene specifically mentioned a "complex hardware/software" situation. I've been there and done that. I take Gene seriously when he says "complex," and I know more than a bit about hardware/software issues.
With no further information, I'd guess there are several specialized areas in software and several more in hardware that need more than a generalist. Worse, as I mentioned above, there are interface and integration issues where specialists need to be prodded to consider general system issues. It isn't going to help much for a network specialist to be stovepiped so tightly that networks are all they consider. The key, and the trick, is to get the expert evaluators working toward the global issues (and Vern is right that voting is not evaluating!).
It isn't up to any of us to "recommend to any of our less experienced participants that they fight with their supervisors or their counsel," but I have to wonder whether it "is always better to have a less than perfect approach" when less than perfect is more likely another failed government IT effort. All we can do is advise on what is at stake. They will have to decide whether to fight or not. A good bit is at stake here.
This is an area done so poorly, so often, at such great expense that it is a hidden scandal. It can be difficult even with an excellent match of contractor to requirements. It is a certain disaster if a bad match is made. In my considered and experienced opinion, people who know better just going along with the ignorant, if not incompetent, agency powers is a large part of that problem.
It is always easier to go along. It can also bring long-term unease as you watch another failure. Success with some scars is more immediately painful, but there are long-term satisfactions. You won't win them all. You won't win completely most of the time. But giving up that easily is a good way to lose all the time and watch a string of dismal failures trail behind.
By Eric Ottinger on Thursday, May 08, 2003 - 09:19 pm:
12:22anon,
You may be right. I simplified the problem down to one SSET and
one software maturity team. This may not be what GeneJ had in
mind. However, I trust that after he shows his lawyer the AFFARS
and the Army guide he can have as many SSETs as he wants.
I share your feelings regarding botched IT procurements.
Eric
By GeneJ on Friday, May 09, 2003 - 12:40 pm:
All is well. We ended up as Eric predicted. He's not stupid, just incredibly literal. I don't mind too much, because I'll be making mistakes in the opposite direction.
GeneJ
By Vern Edwards on Monday, May 12, 2003 - 01:11 pm:
GeneJ:
If the selection of the winner turned on a vote by uninformed persons, and if the voters did not look at the proposals themselves, at the evaluators' documentation, or at a well-documented briefing or report, then I hope you don't get a protest.
By Eric Ottinger on Monday, May 12, 2003 - 02:06 pm:
Vern,
Two points—
Based on the previous posts, GeneJ will have all of the SSETs that he wants, including a small team specifically for software maturity.
However --
If you could cite a case where the votes of SSET members were
considered to be “uninformed” because a few members voted to
endorse the results of a review accomplished by a subcommittee
of subject matter experts, I’m sure that you would have cited
the case. I would bet that the case doesn’t exist.
The Comp. Gen. is much more interested in the soundness of the
evaluation than the mechanics of the SSET process.
Eric
By Vern Edwards on Tuesday, May 13, 2003 - 11:50 am:
Eric:
I'm writing from Paris and I'm not inclined to do any case
searching for you. I'm having enough trouble with this European
keyboard as it is, so forget about it.
My point is this: What does the vote signify, on what
information is it based, and how will the SSA make use of it?
GeneJ had better be pretty clear about all that in his mind,
especially if he gets a protest.
Frankly, as a CO, I would not allow a full panel to develop a
finding or a recommendation for an SSA based on a vote by
persons who had not read the proposals themselves, read a
well-documented subcommittee report, or received a
well-documented subcommittee briefing. I especially would not do
it in order to satisfy the whim of an incompetent legal advisor.
Moreover, as the chief of a purchasing office I would chastise
any CO who proposed such a solution.
By Eric Ottinger on Tuesday, May 13, 2003 - 12:04 pm:
Vern,
A well-documented subcommittee report is exactly what I (and, I think, Gene) had in mind.
Please do a little research on software maturity and see if you still have an issue.
Note my May 8, 3:56 post.
Wishing I was in Paris--
Eric
By Vern Edwards on Wednesday, May 14, 2003 - 08:06 pm:
Eric:
I see no reason to do any research about software maturity. It is not germane to the discussion, which is about sound evaluation procedure. I don't consider voting to be sound evaluation procedure. I think that at one time one of the military services banned the development of evaluation findings by voting, but I can't remember which one. Air Force?
Paris has been nice. I have to go to Singapore on Friday, then
Cambodia, so I'm off to buy a supply of surgical masks.
Vern
By GeneJ on Thursday, May 15, 2003 - 01:03 pm:
Hi
One of the old Army FAR supplements banned voting: AAC 96-3, June 23, 1999, "Establish scores by evaluator consensus and not by vote." We hold to that pretty tightly even today, with everything done by consensus. We do often refer to establishing consensus as "voting day," even though there may not be any actual voting. Most SSEB chairs will discuss a split in opinion before getting down to establishing consensus. On the one occasion we used professional facilitators, they never discussed the number of folks on each side of an issue.
I'm missing part of the discussion Eric and Vern are enjoying. The only way I've ever seen a panel or sub-part of an SSEB used was to have them do their work and present their evaluation results to the whole SSEB. Often there were issues that people felt belonged in a different write-up, and more often some specific strengths and weaknesses were not supported and required a rewrite or a better explanation.
At one time we had an ongoing problem evaluating the offerors' EVMS. We were looking for cost tracking but not requiring EVMS. We never found anyone who worked in the EVMS area who didn't think every offeror was dreadful. The uninformed SSEB did accept some of their findings, but some were left on the cutting room floor. Some, but not all, of the original findings made their way into minority reports.
GeneJ
By Eric Ottinger on Thursday, May 15, 2003 - 01:39 pm:
GeneJ,
"[H]ave them do their work and present their evaluation results
to the whole SSEB" is what I was suggesting.
I think Vern has already agreed to that. So I'm not sure there
is a dialogue, and I doubt you are really missing anything.
Eric