
Total Points Scoring Evaluation Method

By Mike Wolff on Monday, December 17, 2001 - 11:25 am:

I've been researching evaluations that use the total points scoring method. (This evaluation would be used primarily for service contracts - mechanical, elevator, and janitorial - and construction projects under $2 million.)

In "Formation of Government Contracts" 3rd Edition, Cibinic and Nash state (on page 848 of the softbound) that "this evaluation technique totally obscures the tradeoff between price and the other evaluation factors...."

I don't understand this comment. For example, if price is assigned a weight, say 40 points, and the only other factor, past performance, is assigned 60 points, the tradeoff is specifically defined, and not obscure at all.

Any comments?


By Vern Edwards on Monday, December 17, 2001 - 12:14 pm:

Mike:

I agree with you, but this is one of those issues in which people come to the argument with their minds made up. You cannot win people over.

I have argued about this with Ralph Nash many, many times. Ralph dislikes numerical scoring schemes generally and total point scoring schemes particularly. He believes that converting dollars to numbers obscures how much you are paying for what you are getting. He believes that if you are comparing two offerors to each other on the basis of total points that include the dollars, you cannot see how much you would be paying A vs. B, and for what.

For example, if you are using a total point scale that includes both nonprice factors and price such that the best possible score is 100 points, and if Offeror A has 98 points and Offeror B has 89 points, you cannot tell by the scores what kind of tradeoff you are making.

The counter-argument is that it does not matter as long as the scoring system is designed on the basis of sound principles, since the total score shows that Offeror A is the better value of the two offerors based on the tradeoffs built into the system. Unfortunately, that "as long as" is a major hangup, because it is clear that most people do not understand how to design sound numerical scoring systems.

Although I think that total point systems are unobjectionable in theory, and although the GAO has said that their use is permissible, I discourage agencies from using them in light of the general level of innumeracy in our society and the fact that most contracting officers and source selection team members have had little if any training in the use of formal decision analysis techniques.

By the way, if you are using a total point system you should use it only to make initial tradeoffs, but you should not cite scores as the basis for the source selection decision. Instead, you should justify the source selection decision on the basis of the documented differences among the proposals in terms of the nonprice factors and the proposed prices and on an analysis of whether in each comparison of offerors the differences in the nonprice factors are worth the difference in price.


By Mike Wolff on Monday, December 17, 2001 - 12:30 pm:

Vern,

Thanks very much. The majority of the time the only non-price factor we use is past performance. We call the references using a survey similar to the following:

5 - Excellent
4 - Very Good
3 - Good
2 - Fair
1 - Poor
0 - N/A or No Response

1. How would you rate the firm's overall performance on your contract?
2. How well did the firm comply with the terms and conditions of the contract?
3. Please rate the overall quality of the work performed on your contract.
4. Rate the firm's timeliness in completing work in compliance with the contract.
5. Please rate the firm on their managerial performance, e.g., the quality of their supervisory employees, contract manager, corporate management involvement, responsiveness, communication, etc.
6. Would you enter into a contract with this firm again? Yes = 5 Maybe = 3 No = 0

We then total and average the scores. Using this procedure, and since past performance is the only non-price factor, would you still be against using this score as the basis for the source selection decision, either in conjunction with a price score or with price left unscored?
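
In rough code form, the totaling and averaging looks something like this (the reference names and ratings are made up, and excluding 0/N-A answers from the average is an assumption on my part, not a fixed rule):

# Rough sketch of the past performance totaling/averaging step.
# Reference names and ratings are illustrative only; whether a 0
# (N/A or no response) should be dropped from the average is an
# assumption here, not something the survey itself dictates.

surveys = {
    "Reference 1": [5, 4, 5, 4, 5, 5],   # answers to questions 1-6
    "Reference 2": [3, 3, 4, 0, 3, 3],   # one N/A response
}

def reference_average(ratings, exclude_na=True):
    usable = [r for r in ratings if r > 0] if exclude_na else list(ratings)
    return sum(usable) / len(usable) if usable else 0.0

# Average each reference's answers, then average across references.
overall = sum(reference_average(r) for r in surveys.values()) / len(surveys)
print(round(overall, 2))   # 3.93 for the made-up surveys above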

Mike


By Vern Edwards on Monday, December 17, 2001 - 01:00 pm:

Mike:

I would not combine the numerical past performance scores that you describe with price scores into a total score. Also, I see shortcomings in your numerical scoring system for past performance.

For one thing, I cannot tell whether your numerical scores for past performance are based on an ordinal scale or an interval scale. If the scale is ordinal in nature, it means that the numbers represent categories of performance rather than specific values. In that case, while I know that a 5 is better than a 4, I do not know how much better; nor do I know that all 5s are equal. (It may be that two offerors are excellent, but that one is more excellent than the other. Also, what is "excellent" to one reference might be only "very good" to another.)

Your scheme appears to be numerical but not quantitative; if so, then it is not appropriate to average the scores. I consider the system that you have described to be no different than an adjectival scheme, except that you are using numbers instead of adjectives.

Finally, I do not know how you assign the numerical scores to prices, how you weight the scores, or how you combine the past performance scores with the price scores.


By Dave Barnett on Monday, December 17, 2001 - 01:33 pm:

Numerical scores may provide a sweet picture, but (and I've been preaching this for years) a narrative trade-off analysis is needed, especially as the acquisition value rises.

Let's take the scenario presented: Price = 40 points max., past performance = 60 points max. Vendor A receives the max points for price with an offer of $100,000, but only scores 3 points on each of the 6 questions Mike posed. To normalize the past performance score to a scale of 60 points possible, vendor A receives 36 points (think of 6 questions times a possible 5 point max. equals 30, multiply that by 2 and a 60 point possible score is available; I know, I'm oversimplifying, bear with me). So vendor A has a score of 76 out of a possible 100. But vendor A is ranked only "good" in the past performance rating.

Now we come to vendor B, who receives 5 points on each of the six past performance questions and gets the max. of 60 points for past performance. Vendor B's price offer is $200,000, so a price score of 20 points is given, for a total of 80 points. According to the numerical source selection scenario, vendor B wins...but is the better past performance record sufficient cause to pay the additional $100,000? And what if, in the scenario, vendor A offered $1,000,000 and B offered $2,000,000, or $10,000,000 vs. $20,000,000, and so on?
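
Here is that arithmetic sketched in code. Note that the price-scoring rule (lowest price gets the full 40 points, higher prices scaled down in proportion) is inferred from the numbers above, not an established formula:

# Toy reconstruction of the scenario above. The proportional price rule
# is an inference from the example numbers, not an official method.

PRICE_WEIGHT, PP_WEIGHT = 40, 60
PP_QUESTIONS, PP_MAX = 6, 5

def price_score(price, lowest_price):
    return PRICE_WEIGHT * lowest_price / price

def past_performance_score(ratings):
    return PP_WEIGHT * sum(ratings) / (PP_QUESTIONS * PP_MAX)

vendors = {"A": (100_000, [3] * 6), "B": (200_000, [5] * 6)}
lowest = min(price for price, _ in vendors.values())

for name, (price, ratings) in vendors.items():
    total = price_score(price, lowest) + past_performance_score(ratings)
    print(name, round(total, 1))   # A -> 76.0, B -> 80.0

Because the price score depends only on the ratio between the prices, the totals come out 76 and 80 whether the offers are $100,000 vs. $200,000 or $10,000,000 vs. $20,000,000 - the score alone never shows how many dollars the extra past performance points are costing.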

Numerical ratings just don't cut it; a qualitative analysis must be done demonstrating that the superiority of a higher-priced offer justifies the additional cost.

I know I've kept the argument very simple and basic, but it does demonstrate the flaws with relying solely on the numerical score.


By Vern Edwards on Monday, December 17, 2001 - 01:53 pm:

Dave:

You may have given us an example of a poorly designed numerical scoring system, but your example does not prove your general proposition that "numerical ratings just don't cut it."


By Dave Barnett on Monday, December 17, 2001 - 02:11 pm:

Oh, but I think it does, Vern. Reliance on raw numerical scores does not cut it; the narrative analysis of trade-offs is the key in the source selection. And that is what my scenario intended to demonstrate.


By joel hoffman on Monday, December 17, 2001 - 02:24 pm:

Just curious, how many points per dollar difference are there in your price scoring system?

Does a price point directly correlate to a quality point? If so, you have apparently devised a system to quantify, in dollars, the differences in tangible and intangible quality. Do your dollars per quality point make sense in actual application?

If you haven't, I don't think it is appropriate to mix dollar points and quality points in an overall point scoring system.

Even the GAO, normally a rubber stamp (as long as the Government follows the RFP, regardless of whether the Government used any business judgement), tends to frown on a cookbook point scoring system. The Army acquisition folks got fed up with contractor complaints and protests over such systems and banned all point scoring last spring. happy sails! joel


By Mike Wolff on Monday, December 17, 2001 - 02:54 pm:

My problem with the narrative system is that it appears to allow too much leeway. When proposals are close enough, ask 10 different COs their opinion on who should get the award, and you'll get 10 different answers. A system like that is flawed.

Assume you have three offers with the following:
A - $100,000 3/5 avg. Past Perf. Score
B - $120,000 4/5 avg.
C - $140,000 5/5 avg.

The award criterion is "Past Performance is significantly more important than price." Without using a numerical scoring system, who would you award to? Reasonable arguments could probably be made for award to any of the three - to me, that is a problem.
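
For illustration only (hypothetical weights and a proportional price-scoring rule, not our actual procedure): once you commit to a concrete numeric scheme the answer is pinned down, although it shifts with the weights you choose:

# Toy illustration with hypothetical weights; not anyone's actual procedure.
# Price is scored in proportion to the lowest offer; past performance is
# scaled from the 1-5 average.

offers = {"A": (100_000, 3), "B": (120_000, 4), "C": (140_000, 5)}
lowest = min(price for price, _ in offers.values())

def total(price, pp_rating, price_weight, pp_weight):
    return price_weight * lowest / price + pp_weight * pp_rating / 5

for price_weight, pp_weight in [(40, 60), (60, 40)]:
    scores = {name: total(p, r, price_weight, pp_weight)
              for name, (p, r) in offers.items()}
    winner = max(scores, key=scores.get)
    print(f"price {price_weight} / past performance {pp_weight}: winner is {winner}")
    # 40/60 -> C wins; 60/40 -> A wins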


By Vern Edwards on Monday, December 17, 2001 - 03:36 pm:

Dave:

I guess I don't know what you mean by "numerical scores do not cut it." Are you saying that it is not possible to construct a totally numerical scoring system that can validly portray differences in value among competing offerors?

I don't like the term "narrative analysis," because it is a misuse of the word narrative, but I do agree with you that a valid assessment of each offeror's performance on each of the evaluation factors and rational tradeoffs are the keys to sound source selection decisions. However, I say that the results of offeror assessments can be reliably portrayed in numerical form, i.e., without words.

It's important to understand what "scoring" is. All evaluations entail two steps. The first step is to assess each offeror on each of the evaluation factors, nonprice and price. The second step is to compare the offerors to each other on the basis of the results of the first step in order to determine which is the best value. Scoring is the process of assigning to each offeror a symbol or set of symbols that indicates how well it performed on the evaluation factors, i.e., the results of the first step.

I say that it is possible to design a numerical scoring system that assigns to each offeror a single number that reliably portrays the outcomes of the first step, the offeror assessments, and that thus can be used to perform the second step without resort to what you call "narrative."

Let's be clear: No scoring system is a substitute for offeror assessment; scores must reflect such assessments. Scores combine and simplify complex data by portraying the results of offeror assessments in concise symbolic form--adjectives, colors, numbers, etc. Scores are aids to tradeoff analysis and decisionmaking. So I say again that it is possible to devise a numerical scoring system that produces a single number which reliably portrays the results of evaluators' assessments of an offeror's performance on the evaluation factors, including price. Properly devised, such a system will reliably indicate that an offeror with a higher score is a better value than an offeror with a lower score, based on the evaluation factors and weights described in an agency's solicitation. However, the development of such systems takes know-how that most government source selection officials, contracting officers and evaluators do not have.


By Anonymous on Monday, December 17, 2001 - 10:01 pm:

Let me put in some "outside the box" thinking. I have (sometimes) applied a price-per-point evaluation to assist in determining a best value. In this scenario, the technical and past performance are evaluated and given numerical scores. The total score is then divided into the total price.

I know this will probably cause lots of heartburn messages, but it has been helpful in understanding the tradeoffs of technical merit and price.
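
A minimal sketch of that calculation, with made-up prices and scores purely for illustration:

# Made-up numbers, purely to illustrate price-per-point: total evaluated
# price divided by the total technical/past performance score.

offerors = {
    "A": {"price": 500_000, "score": 80},
    "B": {"price": 600_000, "score": 95},
}

for name, o in offerors.items():
    dollars_per_point = o["price"] / o["score"]
    print(f"{name}: ${dollars_per_point:,.0f} per point")
# A -> $6,250 per point; B -> $6,316 per point. Under this approach the
# offeror with the lower dollars-per-point figure looks like the better value.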


By joel hoffman on Monday, December 17, 2001 - 10:32 pm:

Anon, years ago, we used the price per point method. My boss used it as the sole trade-off method to determine the winner. Our organization didn't do a true trade-off analysis.

Then I decided to use it for a short time, as one indicator, along with a trade-off analysis.

I ultimately ditched it altogether, because it is deceptive. For example, two offerors, one with a 3% higher price and 3% higher points than the other, will have the same $/point ratio. Plus, it assumes too much precision in the point assignment to each factor and too much precision in the point score rating. happy sails! joel
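
(A quick numerical check of the 3% example, with made-up base figures:)

# When both price and points rise by the same percentage, the
# dollars-per-point ratio does not move at all. Base figures are made up.

base_price, base_points = 1_000_000, 80
higher_price, higher_points = base_price * 1.03, base_points * 1.03

print(round(base_price / base_points, 2))      # 12500.0
print(round(higher_price / higher_points, 2))  # 12500.0 -- identical ratio,
# even though the second offer costs $30,000 more.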


By Dave Barnett on Tuesday, December 18, 2001 - 07:56 am:

You say tomato, I say tomahto...trade-off analysis, narrative analysis, offeror assessment, I think we're engaged in semantics. No, I don't believe you can set up a numerical scoring system that can stand up by itself. It may look pretty, but without a reasoned analysis of the strengths and weaknesses of an offeror's proposal, it can be very misleading. My previous example using a simple scoring system demonstrated just that...is paying double worth it?

Gentlemen, ladies, we have to exercise good business judgement...heck, we do it every day in our personal lives...do you buy the (gosh, here I show my age) Yugo because it's the least expensive car on the lot, do you buy the Rolls Royce because it has the reputation for being the top of the line, or do you decide to buy the middle-of-the-road car because the trade-off between price, reliability, looks, etc. represents the best value in your judgement? It's the same thing in buying for others (i.e., the taxpayer): you're the contracting officer, entrusted to make sound, reasonable decisions. I think as long as you are reasonable, you're pretty much on firm ground.

Then again, I've been wrong before...


By Vern Edwards on Tuesday, December 18, 2001 - 08:15 am:

Dave:

I have to smile whenever people reach the point in a discussion when one of them says that it's a matter of semantics. Of course it is, since semantics is the study of meanings. All discussion entails each of us trying to make our meanings clear to one another.

I can see that I can't change your mind about numerical scoring, but just to be clear: It appears that you think that a numerical system is an alternative to a reasoned analysis; you see it as a matter of either/or, numerical scoring or reasoned analysis. In my view, in a well-constructed numerical scoring system the numerical score is the product of a reasoned analysis, not an alternative to a reasoned analysis.

Vern


By Dave Barnett on Tuesday, December 18, 2001 - 08:42 am:

Sheesh, I never said that a numerical analysis is an alternative to the detailed proposal analysis. What I've been saying is that the numerical analysis, ON ITS OWN, is insufficient as a source selection tool. Whether you use numbers, colors, adjectives, or a +, -, x scoring system, the in-depth analysis has to be part of your documentation as to how you arrived at your source selection.

I was admonished (well, our procurement office was) by the GSBCA back in '86 for relying on numerical scores alone. We complied with the criteria in Section M but the GSBCA stated that we needed to do a cost/benefit trade-off analysis to support our numerical ratings.

If you have it, check out the June 1989 NCMA magazine article, "For Beginners Only: Source Selection: Another Way Of Doing The Same Thing".


By Vern Edwards on Tuesday, December 18, 2001 - 08:45 am:

I'm sorry I misunderstood you, Dave.

Vern


By Dave Barnett on Tuesday, December 18, 2001 - 09:03 am:

Heck Vern, I think we've both been on the same page, it's just that we learned this topic differently. This is one topic that I've been most sensitive about because I was burned early in my career due to my naivete. That's a war story for another forum, suffice to say, a young contracting officer was upgefughted by his project team who had a hidden source selection agenda of their own. Oh, it came out in the GSBCA decision, I think the word the judge used regarding the project office's evaluation methods was "egregious". And that young contracting officer didn't have a clue as to what was happening...until the hearing.
