
What Training Should Source Selection Team Members Have? (Part 2)
By joel hoffman on Thursday, October 12, 2000 - 08:51 am:

Eric, et al. The FAR rewrite addressed several years of GAO Protest decisions. FAR 15.308 intended to make it clear that the SSA has the responsibility to make the decision and document the tradeoffs and reasons for the selection, regardless of advice offered by evaluation teams. This had been and still is a recurring problem in source selections and in successful protests.

I believe that the wording "Although the rationale for the selection decision must be documented, that documentation need not quantify the tradeoffs that led to the decision" stems from the GSBCA's "B3H Decision" from circa 1995 or similar cases. I feel that was one of the cases which cost the GSBCA its stranglehold jurisdiction over Federal Information Processing (FIP) acquisitions.

Anyway, in that decision, even though the SSA had over 47 pages of deposition testimony to justify his selection decision during the protest, the GSBCA ruled that the Government should have quantified all benefits considered in the trade-off analysis, including intangible benefits. I know that the Corps of Engineers and others encouraged the Air Force to appeal this ridiculous decision. The Air Force appealed the B3H decision in Court, because of the principles involved. Fortunately, the GSBCA was overturned. The FAR 15 rewrite later added wording to make it clear that not every advantage has to be "quantified" (priced) in the trade-off analysis.

Yes, I always ensure that the SSA will be able to consider any minority dissenting opinion, in their decision making. However, I've personally never had any dissenting opinions in my boards. I'm sure there are situations elsewhere which would generate valid dissent. Then, the SSA's job is more challenging but that's when they get to fully exercise their independent judgement. Happy Sails!


By Eric Ottinger on Thursday, October 12, 2000 - 01:08 pm:

Joel,

It is more exact, I believe, to say that the Comp. Gen. was an active participant in the Rewrite.

This is an outtake from a true story-- Years ago I was approached by the president of a small firm. He wanted to have a little private chat about another PCO’s source selection. He had heard that at the competitive range his lower cost proposal had been out in front, but somehow, he wound up second in the final ranking. He wanted to know whether he should protest.

Obviously, somebody had talked out of school. Reading between the lines in a lot of protests, you can see that there would not have been a protest, if the protestor had not had access to inside information.

On the whole I would rather people vent inside the team and keep their mouths shut otherwise.

I may have had a minority report on some occasion, but I can’t remember when.

It is human nature. If you try to ramrod strong-minded people, they will find a way around you. If you give them a safety valve, they will, more likely than not, support the decision.

CTTO and the decision-making tools that we have been discussing are different animals. If there is a common thread it is simply that you don’t really understand these tools unless you understand their limitations. I agree regarding the GSBCA. However, we still have CTTO. But the Comp. Gen. is more realistic about what can be quantified and what can’t be quantified.

Anyway, it sounds like we have converged nicely.

Eric


By Ramon Jackson on Thursday, October 12, 2000 - 11:34 pm:

Eric, I have more of a problem with your reliance on consensus than with varied opinions. I find that consensus reports, unless there is overwhelming clarity in the data itself, generally trend toward mush. It is rare that there is such clarity of choice that the consensus is sharp and clear too. I find them something like things written by committee. For decision making I'd prefer majority and minority reports, as they tend to sharpen the issues upon which a decision must be made.

A good SSA would possibly be safer and on sounder ground going with a minority report, documenting exactly why, than one full of compromises to reach consensus. A really good SSA would probably call the support staff onto the carpet to clarify.

Source selection is too expensive and bears too much long term risk in my view to have people off in corners playing games with their own evaluation systems as you mention earlier. I've seen it and it is damaging. They changed their role from evaluating the material to support a decision to being ad hoc source selection planners. They were not forming a majority or a minority view on the material at hand. They were forming a minority view on the process itself.

I have no problem with venting within the team. Pound tables, yell if necessary. Poor form perhaps, but better that than selection plan spin-offs and interesting, but not particularly revealing, personal evaluation systems that lack clarity in their product.

Neither do I have a problem with someone noting that something under consideration might be seen with more clarity if subjected to some special analysis technique they know. More than once (mainly in other fields) I've seen a subject that was a struggle until someone hit on a way to untangle the knot. No problem.

Explain what is going to happen, what went on, and the results, so the others can share in the knowledge. I would have a problem with a "magician" going behind a curtain to do something and then announcing superior knowledge, as I suspect some of your fiddlers tend to do. The first assists in expanding real knowledge; the second is mere grandstanding. A swift exit would be nice -- and, if they talk out of school, a real penalty nicer.

I believe the selection teams are too often thrown together without real thought, planning or understanding of what has been prepared and their specific role in that plan. Teams are also too often structured to meet internal political needs rather than present reasoned analysis to the SSA. As someone mentioned, the SSA is often chosen for position rather than real knowledge or ability. They need sound advisors. An ineffective (but perhaps happy) team is a waste of time and opens the door (if stiffly protested) to some of those court decisions we've looked at with amusement and bit of horror.

I'm reminded of a Far Side cartoon that has seen repeated use covering various stages in acquisition. A pile of horses, tack, human limbs and stuff lies before the Sheriff's office in an Old West town. The Sheriff is on the porch telling the deputy that a posse requires organization. People in our organizations often don't like to be organized. We too often see the results in these things gone bad.


By Eric Ottinger on Friday, October 13, 2000 - 12:32 pm:

Ramon,

I think Joel and I converged on “Consensus is the outcome of a process, not an end in itself.”

I agree with the Comp. Gen. It is normal that strong-minded evaluators will have differences of opinion. Nobody ever lost a protest simply because the evaluators didn’t agree.

My pragmatic observations are, (1) I want a strong, experienced chairman to manage the team, and (2) strict consensus (i.e. unanimity) will work only if the team makes that choice. Otherwise, it isn’t a good idea.

When I was at Infantry Basic, an old Sergeant (probably quite a bit younger than I am now) told us, based on his experience in two wars, that he had crawled across some objectives and walked across others. They were all different.

It was good advice.

I like to have a diverse group of evaluators. Among other benefits, we are less likely to get into technical team versus the PCO problems if the team is not in-bred. I know some authorities who think that the team should be a supervisor and two of his/her immediate subordinates. I think this is profoundly silly advice, and any PCO who takes it deserves the problems that are going to follow.

True story. Years ago we had-- not an agency protest-- but a gentlemanly complaint at a high level. Discreet inquiries were made and the answer was something like the following: "That bunch wouldn't agree on the time of day without an act of Congress. There is no way that anyone could have rigged that source selection." Nevertheless, the Director of our agency decreed that henceforth there would always be outside representatives on each selection team. He understood that it is important to be fair and it is also important to be fair in a way that everyone understands. In my experience, this is self-evident to everyone but some 1102's and lawyers.

Eric


By Vern Edwards on Saturday, October 14, 2000 - 10:59 am:

"Consensus is the outcome of a process, not an end in itself" isn't much of a convergence. What else would it be?

That statement begs the questions: What is a consensus? And what kinds of processes are appropriate to use in order to develop a consensus? Those are among the questions that need to be answered during source selection training.


By Eric Ottinger on Sunday, October 15, 2000 - 06:38 pm:

Vern,

In the Air Force Guide which you initially cited, for Basic source selections “…a consensus decision is the NATURAL OUTCOME OF THE PROCESS. In the rare instances when there is disagreement among the team members … the CO, as the SSA, is final authority …”

If a team wishes to achieve strict unanimity, my advice is to keep talking longer. If the team wishes to avoid an indecisive split vote for a particular factor and achieve "general" consensus, my advice is to focus more effort on the factors where there is a close vote and see if they can swing those factors one way or another.

If there are wide differences in scoring, you need to go back and make sure everyone has the same understanding of the factors and the standards. However, this should have been accomplished at the start.

If you still have wide differences you need to make sure that everyone has an opportunity to clearly explain the basis for the rating.

This is all common sense. I will admit that I have known a few people who could use remedial training in common sense. I don’t want them involved in my source selections.

I can think of other means to force consensus, but they are morally repugnant.

What means did you have in mind to obtain consensus?

Eric


By Vern Edwards on Monday, October 16, 2000 - 10:39 am:

Eric:

Before I describe how I would try to build a consensus, we better see if you and I agree on the meaning of that word. Then we have to talk about what the members of a source selection team should try to reach a consensus about.

When I use the word consensus, I am referring to a group decision that each member of the group understands and is willing to live with, even though one or more members do not think that it is the best possible decision and are not entirely satisfied with it. When I use consensus I mean an agreement to pursue a course of action despite the fact that not everyone thinks that it is the best course of action. Based on that definition, voting would not be an acceptable way to build a consensus, since voting merely imposes the will of the majority on everyone.

Do you accept that definition of consensus?


By Kennedy How on Monday, October 16, 2000 - 12:18 pm:

I would accept Vern's definition of consensus, but I'd also say that voting might be one way of getting TO the point of a consensus. It shouldn't be the last part of determining a consensus, but rather a step to determine who likes what.

Kennedy


By Vern Edwards on Monday, October 16, 2000 - 01:30 pm:

Kennedy:

I agree that polling (I prefer that word to "voting") by show of hands would be a useful way to determine the members' initial positions on the decision to be taken.

But let's not get ahead of ourselves. Eric and I need to see if we understand consensus in the same way before we can talk about the ways in which a team can build a consensus.

Vern


By Eric Ottinger on Monday, October 16, 2000 - 07:29 pm:

Vern,

Let’s be clear. I am not advocating “consensus”. In my view, a word that can mean anything from strict unanimity to a group “sentiment” is dangerously ambiguous and likely to cause problems.

Merriam-Webster:

“1 a : general agreement : UNANIMITY b : the judgment arrived at by most of those concerned
2 : group solidarity in sentiment and belief”

However, I am comfortable with the following definition:

DUHAIME'S LAW DICTIONARY
Consensus
“A result achieved through negotiation whereby a hybrid solution is arrived at between parties to an issue, dispute or disagreement, comprising typically of concessions made by all parties, and to which all parties then subscribe unanimously as an acceptable resolution to the issue or disagreement.”

Let’s start with two facts and one opinion.

1. The Comp. Gen. doesn't care. The Comp. Gen. doesn't have any problem with differences of opinion among the evaluators. Nobody has ever lost a protest merely because the evaluators didn't agree. However, any number of protests have been lost because clumsy or inappropriate things were done to prevent or cover up disagreements.

2. The Chairman is not obligated to present minority opinions in the briefing to the SSA, normally doesn’t do so, and normally wouldn’t do so, unless (1) the minority specifically requests an opportunity to present a minority position to the SSA or (2) the Chairman thinks the issue is so important that both points of view should be briefed. (The Comp. Gen. will review the information which the SSA uses to make his/her decision. Errors or omissions in the briefing to the SSA are not subject to protest unless there is something arbitrary or stupid. It is the element of arbitrary or stupid which may provide a basis to sustain a protest, not the error or omission per se.) In short, irrespective of the process which the team uses to prepare for the briefing, the briefing is normally a “consensus” presentation.

3. Personal Opinion: I would rather have a file reflecting spirited debate than a file indicating lockstep conformity. It will look better in the event of a protest, and all of the participants will be better satisfied with the outcome, including the offerors. Further, I would prefer to select evaluators reflecting diverse points of view and avoid loading the evaluation team with like-minded evaluators.

Just for clarity, let's state that our disagreement is not so much a question regarding "consensus" as it is a question of roles and responsibilities. For a Basic level source selection it is clear that the AF expects every evaluator to be equally involved in evaluating the factors, preparing the narrative comments, "integrating" the evaluation, and making the decision. But this is explicitly for a "noncomplex," low-dollar source selection performed by a small team of "normally two people" (a technical person and a PCO/SSA). For more complex source selections, roles and responsibilities are more defined and less amorphous; evaluators rate, comment (and prepare quantitative or non-quantitative tradeoffs for CTTO, if required), but the "integration" is normally done at a higher level.

In any case, I am not advocating “consensus.” I regard this choice as a prerogative of the SSET. Obviously, if the team chooses strict unanimity there is a hazard that the process will bog down on some point where the team can’t reach agreement, and a disproportionate amount of time and energy might be spent on a small issue. On the other hand, if the team signifies a desire to work by consensus, they will usually make it work.

As for general consensus, this would appear to be common sense. I see no particular need to insist on a point which should be self-evident.

Most of the regulations seem to contemplate a “general” consensus rather than a strict consensus.

However, the Army prefers consensus to voting. I have no idea what this means. Perhaps our Army participants can enlighten us.

AFARS 15.305 Proposal Evaluation.

“(a)(1) Cost or price evaluation. Always evaluate and consider cost or price. Do not score cost or price or combine it with other aspects of the proposal evaluation.


(a)(3) Technical evaluation. Do not average or otherwise manipulate individual evaluator or unit scores to produce a single raw score for any factor or subfactor. Establish scores by evaluator consensus and not by vote. When divergent evaluations exist, and none of the evaluators have misinterpreted or misunderstood any aspects of the proposals, consider providing the SSA with written majority and minority opinions.”

I think Stan has the right idea in terms of putting more emphasis on the comments and less on the ratings. But I believe that I have already said as much.

In short, Vern, it would be better if you address the question to someone else; perhaps, one of our Army participants.

Eric


By Vern Edwards on Monday, October 16, 2000 - 09:01 pm:

Eric:

You say that our disagreement is not so much a question regarding consensus as it is a question of roles and responsibilities. I don't think that we disagree about anything in that regard. All I have done in this thread is recommend that the source selection team receive training in business evaluation and decision-making processes. For some reason this prompted you to accuse me of advocating some "hippie commune"/"participative democracy" thing. Goodness knows why.

The SSA is in charge--FAR says so. I never disputed that or suggested anything different. No need for you to go on about it or quote the GAO at length. The issue was settled back in 1976 in the Grey Advertising, Inc. decision, 55 Comp. Gen. 1111. I discuss the SSA's authority and responsibility on pages 69-70 of my new book, Source Selection Answer Book (Vienna, VA: Management Concepts, Inc., 2000), under the heading, "Who is in charge of the source selection process."

I don't disagree with you. Chill, dude.

As to the Army's definition of consensus, the Army describes it as follows in The Art of Teaming Guidebook (U.S. Army Materiel Command, Integrated Product and Process Management Working Group, June 1999):

"A consensus is a decision reached by the team that everyone can live with and no one opposes. A consensus decision is not necessarily a unanimous vote since some members may not feel it is the best solution. It also, therefore, does not necessarily result in everyone being totally happy. But a consensus decision should indicate that all members can live with the decision, can support it, and will do their part to implement it."

But I can live with your definition of consensus. (I'm a little surprised that you don't advocate it, since you said that you believe in it. But no matter.)

Now that we've reached an agreement as to the meaning of the term consensus, I'm ready to answer the question you asked me about how I would build one. But before I do, we should decide what the evaluators (not including the SSA) should try to reach a consensus about (assuming that they will seek consensus). There are several possibilities:

1. They should seek a consensus about how well each offeror did on each of the evaluation factors, but they should leave the integrated assessment of each offeror to the SSA.

2. They should seek a consensus about (1) how well each offeror did on each of the evaluation factors and (2) the integrated assessment of each offeror, but they should leave the assessment of the relative merits of the offerors to the SSA.

3. They should seek a consensus about (1) how well each offeror did on each of the evaluation factors, (2) the integrated assessment of each offeror, and (3) the relative merits of the offerors, ranked from best to worst, but not on a recommendation to the SSA. (Don't want to try to force that decision, you know.)

4. They should seek a consensus on all of the things that I listed in 3 above, and on a recommendation to make to the SSA if asked to make one.

You pick.


By Eric Ottinger on Thursday, October 26, 2000 - 05:53 pm:

Sorry for the slow response, Vern. I will be happy to give you a week to think.

If I am the only playmate, we should both take a hint.

Since this thread is taking a long time to load and I think we have gone way beyond the scope of the question that Ramon asked, I am going to preempt Bob and start a continuation thread.

Eric


By Eric Ottinger on Thursday, October 26, 2000 - 06:01 pm:

Vern,

We do disagree.

I agree with the regulations and the Comp. Gen. precedent. In part, this is because policy is policy. In part, I think the policy incorporates common sense. I agree with the common sense element as well as the explicit policy. I don’t see why anyone would wish to do it differently, unless they have a propensity for self-inflicted pain.

For instance-- Why would anyone set up a scenario where the SSA somehow appears to override the recommendation of the SSET? If a contracting officer uses good common sense and doesn't make a recommendation (unless asked), this is unlikely to happen.

In my view--

1. The SSA should be a senior official in the agency with a broad view of the agency’s requirements.
2. The key strategic decisions should be made by a small group of people including the PCO, the customer (who may or may not be the Chairman) and the SSA.
3. Evaluators normally evaluate in accordance with the RFP, the SSP and the direction (call it “training” if you wish) that they are given by the PCO and the Chairman.

You assume--

1. The SSA is routinely going to abdicate responsibility and ask the SSET to make a recommendation based on an integrated assessment.
2. The SSA is routinely going to ask the SSET, as a whole, to prepare an integrated assessment collectively and make a recommendation.

If I may quote your previous postings--

“The second part of the training is more important than the first part, because it will teach the TEAM how TO MAKE A SOUND BUSINESS DECISION, which is the main objective in source selection. Two or three days or more should be devoted to this training, depending on the backgrounds of the members of the team.”

“It is clear from this passage that THE AIR FORCE EXPECTS THE TEAM TO REACH A CONSENSUS DECISION ABOUT WHICH FIRM SHOULD RECEIVE THE CONTRACT. In order to do that effectively, the team should receive some training in effective decision-making processes.”

“One reason that it is important for the evaluation team members to receive training in effective evaluation and decision principles and procedures is that SSAs often (but not always) want award recommendations from them. In order to make a recommendation, the evaluators must reach a consensus decision among themselves, even though the SSA can reject it.”

(Actually, the AF expects that a two-person team will reach "consensus," sort of like the proverbial "two-car funeral." I don't believe the AF expects "consensus" for anything but the small-dollar "Basic" level.)

I won’t say that it can’t happen. It just isn’t my experience. In my experience the SSA is either the Director of Contracts (now Deputy), a senior manager at the Deputy level, or a Division Director. In my experience, none of these folks has ever had any difficulty making a decision independently. None of them has asked for help. Of course, they take counsel and they think things through very carefully.

Of course, the SSA has, on occasion, asked for additional input with regard to specific, well-defined issues.

This is going to sound hopelessly authoritarian to some, but I believe that people with greater experience, knowledge, rank and responsibility should have a larger voice in key decisions. I believe in unity of command. In my experience, projects with diffuse responsibility and poorly defined leadership roles are very likely to produce poor results.

My reference to hippies, communes, the 60s, and the Left generally, was not casual. I am conservative on this kind of issue. I find it strange and disconcerting when DoD and corporate America adopt the trappings of the 60s Counterculture.

I most certainly expect every evaluator to contribute to the discussion within the team. I expect the Chairman to have the experience and leadership skills to make that happen. And if the Chairman is ineffective, I will take over and lead the discussion myself. But I want the focus on the specific factors.

I am not very touchy-feely. I will admit that I had a mild allergic reaction when it was indicated that an agency is bringing “facilitators” in to help the evaluators work together and learn to express themselves. Surely, it shouldn’t be that difficult.

I would note that the AF Guide has a prohibition on rolling up subfactors. (2.2.2.1 “Mission Capability Subfactors. Subfactor ratings are not rolled up into an overall factor color rating.”) Like I said, “I constantly urge people not to force the correct answer or jump to the bottom line.” But, some teams have insisted on doing just that.

You aren't going to get me to agree on a meaning for "consensus" because the word has several possible meanings, and I don't really have any basis to determine which is correct in this context. Since the word isn't in the FAR or the DFARS, this is not my problem. My personal opinion is neither here nor there. If you wish to build an edifice starting with unsupported personal opinion, suit yourself, but let's not start with mine.

I think this is an unfortunate example of buzzword policy making. The Army definition specifies the output. It doesn’t tell us what approach we are expected to use to get to the output. A statement that “agreement is generally better than disagreement (unless you really can’t agree)” is a profoundly trivial statement. Ditto, a statement that we should all be willing to compromise a little bit. The real question is, “What process are we expected to use to get to this result?”

However, in many situations, a collective decision process is a legitimate choice out of several viable choices. It is, however, a painful, slow way of doing things, which is the reason that you don’t find many people living in communes these days. Unless people are very committed, communes, and “participative” decision making schemes in general, don’t work very well.

1102s are the nattering, controlling, busybody, obsessive-compulsive, bossy older sisters of the acquisition family. Whether "consensus" is good or not (keep in mind that the Comp. Gen. doesn't care), why is it our business to tell the evaluation team that they must do "consensus" rather than majority vote, or whatever else they might choose to do? Where is the compelling need to have a policy for this issue?

However, as long as we all understand that this is speculation, I will speculate a bit.

The workbooks are really just convenient scrap paper to scribble on, and the initial ratings are not much more significant. Some Contracting Officers have made a fetish of quality checking the workbooks and hounding the evaluators if the workbooks aren’t up to standard. They shouldn’t.

I did a source selection once with an evaluation board consisting of contracting officers. The normal Blue, Green, Yellow, Red rating scheme wasn’t good enough. They wanted Green Plus and Green Minus and Yellow Plus and Yellow Minus, etc.

I sense that higher management is saying, "Cut out the silliness. It shouldn't be that difficult or that complicated." In this context, "consensus" means, "Keep the focus on the things that matter (to the SSA and Comp. Gen.) and don't get bogged down trying to fine-tune or perfect workbooks and rating schemes (which really don't matter that much in the final analysis); put the effort where it matters: writing good narrative comments and preparing a solid briefing for the SSA." (See Stan's posting.)

Lest I be misunderstood-- I am not belittling the role of the technical evaluators. The ratings and narrative comments are the backbone of the briefing to the SSA and the decision memorandum. The work that the evaluators do is the most important part of the process. In many cases the results of the technical evaluation will substantially dictate the selection decision.

But, it is another thing altogether to say that the technical evaluation should be done with an eye toward dictating the final decision. An umpire, who calls balls and strikes with some idea that it is his responsibility to determine the outcome of the game, is self-evidently a hazard. I think we all understand and agree on that.

I would advise my evaluators to “Call them as you see them,” taking each proposal and each factor one at a time.

Vern, you gave me a flip answer when I raised the "consensus" question last year. I am glad that you are taking the issue seriously this time around. As for "Chill, dude," I think we are providing more entertainment for Joe Blow than we should; some of our readers would clearly prefer more decorum.

Joe,

Glad to see that you are still around.

Jane,

I don’t think we are related. But you never know.

You can call me anything you want. Just remember to call me for dinner.

Joe and Jane,

Do you agree that we are a "pathological" bunch? (Robert Lloyd does.) I find that just a little weird. What do you think?

To wrap up, let me take a little poll. All of those who gather the complete evaluation team, including the evaluators, for two days of training in the theory of correct business decision making, months before the proposals come in, at the start of the process, before you write the selection criteria and the SSP, please raise your hands. (Just curious-- How did you select and gather the evaluators before you did anything else?)

All of those who routinely give the evaluators the same authority, to strategize and to integrate the analysis, that you give to the PCO, Chairman and the SSA, please raise your hands.

Eric


By Vern Edwards on Friday, October 27, 2000 - 03:08 am:

Eric:

Dear Friend, I did not read your last post, even though it is addressed to me. It comes much too late and it is much too long. I stopped upon reading the words, "We do disagree."

If you say we do, then we must.

Vern
