Past Performance
The protester additionally contends that the agency erred in
deciding not to include past performance as an evaluation
factor. Protest at 7-8. Specifically, the protester argues that
the agency’s decision not to consider past performance is an
impermissible attempt to reduce its administrative burden by
avoiding the SBA certificate of competency process. Protester’s
Comments at 9-10. The protester further contends that,
contrary to the agency’s assertions that past performance has
not historically been a distinguishing factor in prior
procurements, past performance has, in fact, been a relevant
distinguishing factor in other comparable procurements, which it
has identified.[2] Protest at 8.
As noted above, the determination of a contracting agency’s
needs and the best method of accommodating them are matters
primarily within the agency’s discretion. Crewzers Fire Crew
Trans., Inc., supra at 3; G. Koprowski, supra at 3. Likewise, a
protester’s disagreement with the agency’s judgment concerning
the agency’s needs and how to accommodate them does not show
that the agency’s judgment is unreasonable. Cryo Techs., supra
at 2; G. Koprowski, supra. In this context, the FAR provides
that past performance need not be evaluated if the contracting
officer documents the reason past performance is not an
appropriate evaluation factor for the acquisition. FAR §
15.304(c)(3)(iii). For the reasons discussed below, we see no
basis to object to the agency’s decision not to consider past
performance in this procurement.
In this case, consistent with the FAR’s requirements, the agency
executed a D&F documenting its conclusion that considering past
performance would not provide any significant information for
the source selection authority. D&F at 2. The D&F noted that
this was the case for three reasons: (1) offerors with limited
or unknown past performance cannot be evaluated favorably or
unfavorably, which in a lowest-priced, technically-acceptable
context compels an acceptable rating; (2) in an SDVOSB
set‑aside, negative assessments of past performance would
necessitate a referral to the SBA for a Certificate of
Competency; and (3) past performance evaluations in two recent
similar contracting efforts had not provided a meaningful basis
to distinguish between offerors. Id.
As a preliminary matter, our Office has consistently explained
that pass/fail evaluations of capability issues, such as past
performance, are tantamount to responsibility determinations,
with the result that a rating of “unacceptable” in these areas
is the same as a determination of non-responsibility. See, e.g.,
Phil Howry Co., B-291402.3, B-291402.4, Feb. 6, 2003, 2003 CPD ¶
33. Consistent with this premise, in the context of a
lowest-priced, technically-acceptable selection scheme where the
contracting officer determines that a small business’ past
performance is not acceptable, “the matter shall be referred to
the Small Business Administration for a Certificate of
Competency determination.”[4] FAR § 15.101-2(b)(1).
As a result, in this context, our Office has previously
questioned the value of including past performance as a separate
evaluation factor precisely because the past performance
evaluation is ultimately reduced to a matter of the firm’s
responsibility, which will be evaluated, in any case, after
source selection. See Frontier Systems Integrators, B-298872.3,
Feb. 28, 2007, 2007 CPD ¶ 46 at 7 n.4. This is particularly true
given the difficulties associated with how to consider a neutral
rating in the context of a pass/fail evaluation, which, as noted
by the agency’s D&F in this case, is the rating required for
firms without any past performance record or where the record is
not available. Id. (citing FAR § 15.305(a)(2)(iv)); D&F at 1-2.
In support of its position, the protester identifies two other
allegedly similar contracts in which (1) a lowest-priced,
technically-acceptable selection scheme was employed and (2) the
Air Force terminated or modified the contracts due to alleged
poor performance. Protest at 8. The protester asserts that an
evaluation of past performance or a best‑value tradeoff
selection scheme would have prevented these allegedly
undesirable outcomes, but does not provide any evidence
supporting that claim or linking the evaluation criteria in
those procurements to the outcomes. Id. This argument represents
nothing more than a speculative attempt to substitute the
protester’s judgment for the agency’s. The agency considered two
recent base operation support procurements for air reserve bases
in reaching its conclusion, and the protester has provided no
evidentiary basis to question that selection. D&F at 2.
In sum, we see no basis to disturb the agency’s conclusion that
performing a past performance evaluation in the context of a
lowest-priced, technically-acceptable procurement, which is set
aside for small businesses, is essentially duplicative of the
agency’s responsibility determination, and the protester has
provided no basis to question the agency’s assessment of the
historical relevance of past performance evaluations in prior
base operation support procurements. We therefore see no basis
to question the agency’s decision not to include past
performance as an evaluation factor in this procurement. (Data
Monitor Systems, Inc. B-415761: Mar 6, 2018)
Disparate Treatment
Green Earthworks also alleges that the Air Force engaged in
disparate treatment by contacting one of All Phase’s past
performance references, while not doing so with any of its
references. See Protester’s Supp. Comments (Dec. 10, 2014) at
3-4. The protester also alleges that the agency disparately
evaluated proposals by not downgrading the awardee’s proposal
for failing to have at least three completed questionnaires,
while downgrading the protester’s proposal for being incomplete.
See Supp. Protest (Nov. 26, 2014) at 1‑2. We find that neither
objection provides a basis on which to sustain the protest.
Agencies are required to evaluate proposals on a common basis
and in accordance with the terms of the RFP. Agencies may not
engage in disparate treatment of offerors in the evaluation of
past performance. Apptis, Inc., B-299457 et al., May 23, 2007,
2008 CPD ¶ 49 at 17. It is an offeror’s responsibility, however,
to provide to the agency relevant past performance information
that is under its own control. Beretta USA Corp., supra, at 11.
Here, we do not find that the Air Force’s contact with one of
All Phase’s past performance references constituted disparate
treatment.
The Air Force contacted both Green Earthworks and All Phase
regarding the status of past performance questionnaires that had
not been returned by their references. CO’s Statement at 14. The
awardee notified the Air Force that its reference for two
projects had represented that there were limitations on his
ability to release the questionnaires to the Air Force. AR, Tab
30, Memorandum for Record (Dec. 5, 2014) at 1. In light of the
awardee’s notice, the agency contacted the reference to
ascertain whether the reference would be submitting
questionnaires for the two projects cited by the awardee. Id.
The reference explained that for the referenced projects,
contractor performance assessment reports (CPAR), which included
ratings above “satisfactory,” were still pending legal and
upper-level agency review. AR, Tab 18, Proposal Analysis Report,
at 012. The reference represented that pursuant to that agency’s
policies, that agency would not prepare past performance
questionnaires while CPARs remained pending. Id. Thus, the
agency’s follow-up with the awardee’s reference concerned the
reference’s preparation of questionnaires.
In contrast, the issues that the Air Force identified with the
protester’s questionnaires--which contributed to its lower past
performance rating--stemmed not from a reference’s questionnaire
response, but, rather, from the protester’s failure to provide
adequate information in its proposal and to properly prepare the
questionnaires. AR, Tab 18, Proposal Analysis Report, at
010-011. Thus, the circumstances surrounding the offerors’
questionnaires, and the agency’s need for additional
information, were materially different. See Apptis, Inc., supra,
at 17. Therefore, we see no basis to conclude that the agency
treated the offerors in a disparate manner.
We also do not find that the Air Force unreasonably waived a
“requirement” for All Phase to have submitted three completed
past performance references, while similarly not excusing Green
Earthworks’ failure to provide all of the past
performance-related information required by the RFP. Contrary to
the protester’s argument, the RFP established a maximum, not a
minimum, of three past performance references. RFP at 016.
Absent an express solicitation provision to the contrary, which
was not the case here, we have held that it is generally not
objectionable to evaluate an offeror’s past performance based on
fewer than the maximum possible number of references the agency
could have received. Fox RPM Corp., supra, at 4; M&N Aviation,
Inc., B‑409048, Dec. 27, 2013, 2013 CPD ¶ 299 at 5. In this
regard, we have also held that it is generally improper for an
agency to downgrade an offeror’s past performance rating solely
on the basis that a reference failed to complete and return a
questionnaire. See, e.g., Family Entm’t Servs., Inc.,
B‑298047.3, Sept. 20, 2006, 2007 CPD ¶ 59 at 5-6. In contrast,
the deficiencies in Green Earthworks’ past performance proposal,
namely its failure to properly fill out the questionnaires and
its teaming partner’s failure to provide required project
details, were entirely within its control. (Green
Earthworks Construction, Inc. B-410724, B-410724.2: Feb 2,
2015) (pdf)
Next, Amyx
challenges the agency’s evaluation under the past performance
and relevant experience criteria. Specifically, Amyx contends
that it was incongruous for its quotation to be rated good under
the relevant experience criterion yet outstanding under the past
performance criterion given that the SSET reviewed the same past
performance projects under both criteria. Protest at 17. The
protester contends that its quotation warranted the highest
possible rating under both criteria. The protester similarly
objects to the good rating assigned to 22nd Century’s quotation
under the past performance criterion in light of the
satisfactory rating assigned to the awardee’s quotation under
the relevant experience criterion. Supp. Protest at 19.
Generally, an agency’s evaluation under an experience factor is
distinct from its evaluation of an offeror’s past performance.
Commercial Window Shield, B‑400154, July 2, 2008, 2008 CPD ¶ 134
at 3. Specifically, as is the case here, the former focuses on
the degree to which an offeror has actually performed similar
work, whereas the latter focuses on the quality of the work. See
id. Where a solicitation calls for the evaluation of experience
and past performance, we will examine the record to ensure that
the evaluation was reasonable and consistent with the
solicitation’s evaluation criteria and procurement statutes and
regulations. Divakar Techs., Inc., B‑402026, Dec. 2, 2009, 2009
CPD ¶ 247 at 5. The evaluation of experience and past
performance, by its very nature, is subjective, and a vendor’s
disagreement with an agency’s evaluation judgments does not
demonstrate that those judgments are unreasonable. See Glenn
Def. Marine‑Asia PTE, Ltd., B‑402687.6, B‑402687.7, Oct. 13,
2011, 2012 CPD ¶ 3 at 7.
We find reasonable the agency’s evaluation of quotations under
the past performance and relevant experience criteria. As
outlined in the RFQ, the agency’s assessment under the past
performance criterion considered the “quality of performance and
successful performance relative to the scope, size, complexity
and duration to the work described in the RFQ.” RFQ at 65. In
contrast, the agency’s assessment under the relevant experience
criterion evaluated “the relevance and extent . . . that
[contracts submitted for review] are similar in size, scope and
complexity to the work described in the PWS.”[15] Id. at 64.
Thus, past performance related to how well a contractor
performed, while relevant experience pertained to the type of
work a contractor performed--two distinct criteria. Given the
fundamentally different nature of the evaluations, a rating in
one factor would not automatically result in the same rating
under the other.
More specifically, in rating Amyx’s quotation outstanding under
the past performance criterion, the record reflects that the
SSET focused its evaluation on the ratings and narratives
contained in past performance information questionnaires (PPIQs),
as well as information from the past performance information
retrieval system (PPIRS).[16] See AR, Tab E.1, SSET Report, at
24. For instance, as noted above, the SSET highlighted as a
significant strength that Amyx was rated exceptional in all
contractor performance attributes on its three PPIQs and that
the references’ comments were positive. Id.; see Tab E.4, Amyx
Past Performance Information, at 2, 10, 15.
On the other hand, with respect to relevant experience, the SSET
thoroughly evaluated whether the contracts submitted for review
were similar in size, scope, and complexity to the PWS. See AR,
Tab E.3, SSET Report Relevant Experience Summary, at 1-59. In
assigning the quotation a good rating under the criterion, the
SSET found that one of Amyx’s three contracts was relevant to
the PWS; the firm’s other contracts were deemed somewhat
relevant. Id. at 1-2; AR, Tab E.1, SSET Report, at 23. The SSET
further found that one of Amyx’s two subcontractors demonstrated
relevant experience on two previous efforts, but also found that
the other subcontractor’s experience had only “limited
similarity” to the PWS. AR, Tab E.3, SSET Report Relevant
Experience Summary, at 2-5; Tab E.1, SSET Report, at 23-24.
Based on these findings, the SSET reasonably assigned the
quotation a good rating under the relevant experience criterion.
Given that the past performance evaluation considered altogether
different aspects of the contracts submitted for review, we see
nothing in the record, beyond Amyx’s opinion of its own
quotation, to support a conclusion that the agency acted
unreasonably in assigning Amyx’s quotation a good rating under
the relevant experience criterion.[17] We find equally
unobjectionable the agency’s evaluation of 22nd Century’s past
performance and relevant experience. Consistent with the
evaluation criteria outlined above, under the past performance
criterion, the SSET evaluated how well 22nd Century and its
subcontractors had performed on previous efforts submitted for
review. The record reflects that 22nd Century earned a strength
because the firm received predominantly very good ratings (with
a couple exceptional ratings and a satisfactory rating) and all
positive comments on two PPIQs in the record. AR, Tab E.1, SSET
Report, at 94-95; Tab E.5, 22nd Century Past Performance
Information, at 1-7. The SSET identified as a significant
strength that one of 22nd Century’s subcontractors received all
exceptional ratings and “very positive” comments in a PPIQ for a
relevant contract; all exceptional ratings in a PPIRS report on
another relevant contract; and very good and exceptional ratings
and positive comments on a somewhat relevant contract. AR, Tab
E.1, SSET Report, at 94; Tab E.5, 22nd Century Past Performance
Information, at 33-51.
With respect to 22nd Century’s second subcontractor, the record
reflects that this firm’s past performance was considered both a
strength due to predominantly exceptional ratings and very
positive comments on one somewhat relevant contract, as well as
a weakness due to predominantly unsatisfactory ratings in a
PPIRS report on a relevant contract. AR, Tab E.1, SSET Report,
at 94-95; Tab E.5, 22nd Century Past Performance Information, at
13-32. Overall, the SSET concluded that the team demonstrated
“more favorable than unfavorable quality of performance and
successful performance,” and the SSET assigned the quotation a
good rating under the criterion. AR, Tab E.1, SSET Report, at
94-95.
Amyx disagrees with the agency’s conclusions, arguing that the
agency gave disproportionate weight to the subcontractor’s
significant strength. See Supp. Protest at 20; Comments at 38.
We conclude otherwise. As detailed above, the record reflects
that the agency reviewed the entirety of 22nd Century’s team’s
performance on previous efforts, taking into consideration both
the amount of work the entities were proposed to perform on the
task order, as well as the relevance of the contracts being
reviewed. The record also does not support the protester’s
assertion that the agency overlooked 22nd Century’s
subcontractor’s unsatisfactory PPIRS ratings. See Comments at
38. Instead, the record confirms that the agency identified this
performance as a weakness, but reasonably concluded that the
weakness was “more than offset” by the significant strength and
strengths due to performance on six other efforts. See AR, Tab
E.1, SSET Report, at 94. There simply is no basis for our Office
to conclude that the agency’s exercise of its judgment in this
regard was unreasonable. See Glenn Def. Marine‑Asia PTE, Ltd.,
supra.
Lastly, we find no merit to Amyx’s argument that the agency was
compelled to assign 22nd Century’s quotation a lower past
performance rating given that the SSET assigned the quotation a
satisfactory relevant experience rating. As noted above, the
evaluation under the two criteria involved distinct
considerations. See Shaw-Parsons Infrastructure Recovery
Consultants, LLC; Vanguard Recovery Assistance, Joint Venture,
B‑401679.4 et al., Mar. 10, 2010, 2010 CPD ¶ 77 at 14 (an
offeror’s experience is different from its past performance, and
an agency may reasonably provide for separate evaluation of
these factors in a solicitation). (Amyx,
Inc. B-410623, B-410623.2: Jan 16, 2015) (pdf)
Performance Confidence Evaluation
The four protesters next present a variety of allegations
concerning the elements of the agency’s performance confidence
evaluation. Generally, the protesters assert that the
performance confidence analysis was unreasonable and
inconsistent with the RFP where the agency overwhelmingly
emphasized the recency, relevance, and location of past
performance references over the quality of their performance.
More specifically, CSC and Booz Allen argue that the agency’s
recency assessment was arbitrary and unreasonable where several
of the protesters’ past performance references were discounted
as “less recent,” while awardees’ similarly recent past
performance references were considered “more recent.”
Additionally, all four protesters object to the agency’s process
of assessing “concerns” against their records of past
performance--which were used in the [Source Selection Decision
Document] SSDD as the principal differentiator between offerors
with the same overall performance confidence rating.
As a general matter, the evaluation of an offeror’s past
performance is within the discretion of the contracting agency,
and we will not substitute our judgment for reasonably based
past performance ratings. Clean Harbors Envtl. Servs., Inc.,
B-296176.2, Dec. 9, 2005, 2005 CPD ¶ 222 at 3. However, we will
question an agency’s evaluation conclusions where they are
unreasonable or undocumented. OSI Collection Servs., Inc.,
B-286597, B-286597.2, Jan. 17, 2001, 2001 CPD ¶ 18 at 6. The
critical question is whether the evaluation was conducted
fairly, reasonably, and in accordance with the solicitation’s
evaluation scheme, and whether it was based on relevant
information sufficient to make a reasonable determination of the
offeror’s past performance. Id. Here, we agree with the
protesters that the agency’s performance confidence evaluation
was unreasonable and inconsistent with the terms of the RFP.
Overall, our consideration of the record demonstrates that the
agency’s evaluation was inconsistent with the RFP’s evaluation
criteria concerning the performance confidence assessment. In
this respect, the RFP provided that the performance confidence
assessment was to “evaluate past performance through an
integrated analysis of recency, relevancy, and the quality of
work performed,” “focusing on the quality of the work performed
(as described in M3.6 and Table 3) and the relevancy (as
described in M3.5 and Table 2) to the acquisition.” AR, Tab 7,
RFP, at 165, 178.
However, the five-step evaluation process instituted by the
agency, as discussed in greater detail below, resulted in a
misleading view of the actual past performance records of the
various awardees by substantially elevating the importance of
“more recent” performance within the recency window, relevance
as defined by references’ application to the 27 relevance
subcriteria areas, and worldwide performance. While these
elements were set forth in the RFP in the context of
establishing “greater weight” to be afforded to past performance
references, the agency’s evaluation process elevated their
importance to minimum threshold criteria for consideration of an
offeror’s references in various elements of the evaluation,
without respect to the reference’s assigned relevance (i.e.,
relevant/highly relevant) or quality. Additionally, because the
agency only considered whether an offeror’s references were
“favorable” in determining whether the references would count
towards earning a substantial confidence rating, the evaluation
process led the agency to misunderstand the specific quality
distinctions between the offerors.
Overall, by severely minimizing the impact of quality, and even
the overall relevancy of past performance references, in favor
of the mechanical application of the agency’s “more recent,”
sustained, worldwide performance and subcriteria thresholds, the
agency overlooked meaningful differentiators, such as
performance quality, while emphasizing misleading
differentiators, such as “concerns.” This evaluation process
produced misleading results, and was unreasonable. We sustain
the protests on this basis, as discussed in more detail below.
Five-Step Performance Confidence Evaluation Process
The agency explains that its performance confidence assessment
group (PCAG) conducted its evaluation process in five steps.
First, the agency considered whether each past performance
reference was recent and sustained. This evaluation considered
(1) whether the reference was performed within the recency
evaluation window (November 4, 2007 – November 4, 2011), (2)
whether it was more recent or less recent within that window,
and (3) whether the reference demonstrated sustained performance
of more than one year within the window. See Contracting
Officer’s Statement, Booz Allen Protest, at 39-40. If the
reference was both “more recent” within the window and
sustained, the reference was considered to have “greater
impact.”
Second, the agency considered relevancy by assessing whether
each reference addressed each of the 27 subcriteria areas of the
five performance confidence areas (systems sustainment, web
service development, management, cost, and small business
participation). Id. at 35-36. The result of this process was
that each reference was assigned a relevancy rating for each of
the five confidence criteria areas--that is, a reference could
be considered highly relevant for systems sustainment, but only
somewhat relevant for web service development.
Third, the agency considered quality by rating the quality of
each reference under each of the five confidence areas, based on
quality assessment questionnaires and data from the Contractor
Performance Assessment Reporting System (CPARS). Id. at 38. If
the quality ratings under a confidence criteria area were
“positive” (apparently satisfactory or better), the reference
was considered “favorable” for that confidence area.
Fourth, the agency assigned each offeror its overall performance
confidence rating. As relevant here, in order to receive the
highest performance confidence rating of substantial confidence,
an offeror was required to demonstrate, in each of the five
confidence criteria areas, two or more past performance
references that were (1) “greater impact” (“more recent” and
sustained for more than one year), (2) relevant or highly
relevant, (3) favorable, and (4) performed at worldwide
locations. Id. at 40. If an offeror failed to demonstrate two
references meeting this threshold for any of the five criteria
areas, that offeror was limited to a satisfactory confidence
rating.
Fifth, after assigning the overall performance confidence
ratings, the agency returned to the 27 relevancy subcriteria,
and reanalyzed the offerors’ references to determine whether
each offeror had provided two references meeting the
“substantial confidence” thresholds--as set forth above--for
each of the 27 subcriteria. Where an offeror’s proposal did not
demonstrate two or more of these references for a subcriteria,
that subcriteria area was labeled as a “concern.” Id. at 41. In
the SSDD, these subcriteria areas were discussed as areas in
which the PCAG questioned the offerors’ ability to perform. The
“concerns” also served as the basis for the agency’s ranking
between offerors that shared the same overall performance
confidence assessment rating.
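Stripped to its logic, this step-four threshold operates as a
series of pass/fail gates, with quality entering only as a
binary “favorable” flag. The following sketch is purely
illustrative--it reconstructs the filter from the description
above, and the function and field names are invented rather
than drawn from any agency document:

    # Illustrative reconstruction of the step-four threshold
    # described above; field names are invented for this sketch,
    # not taken from any agency document.
    def counts_toward_substantial_confidence(ref: dict) -> bool:
        """A reference counts only if it clears every gate."""
        return (
            ref["more_recent"]                # ended late in the window
            and ref["sustained_years"] > 1.0  # "greater impact" prong
            and ref["relevance"] in ("relevant", "highly relevant")
            and ref["favorable"]              # quality reduced to pass/fail
            and ref["worldwide"]              # performance location
        )

    def overall_confidence_rating(refs_by_area: dict) -> str:
        """Substantial confidence requires two or more qualifying
        references in each of the five confidence criteria areas."""
        if all(sum(counts_toward_substantial_confidence(r)
                   for r in refs) >= 2
               for refs in refs_by_area.values()):
            return "substantial confidence"
        return "satisfactory confidence"

As the protesters’ arguments below illustrate, a reference that
fails any single gate--however relevant or highly rated--drops
out of this computation entirely.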
Recency Assessment
CSC and Booz Allen each assert that the agency’s assessment of
recency was arbitrary and unreasonable, because several of the
protesters’ references, performed at least in significant part
within the recency window set forth in the RFP, were considered
“less recent” and therefore “less impactful,” while similarly
recent references provided by the awardees were considered “more
recent” and “more impactful.” The protesters argue that this
error was significantly prejudicial where the “less recent”
determination prevented the references from being considered in
the agency’s analysis of whether an offeror warranted a
substantial confidence rating, or in the assessment of
“concerns,” without regard to the overall relevance or the
quality of the reference. The protesters argue that their
references were sustained, relevant, and high quality, but were
not given appropriate consideration in the agency’s analysis.
As a specific example, CSC points to three of its own
references, each of which started prior to the recency window
and ended in late 2009 or early 2010, as follows:
Offeror | Reference          | Start   | End     | Sustained
CSC     | CSC, Reference 1   | 11/2004 | 12/2009 | 2.15 Years
CSC     | CSC, Reference 9   | 03/2005 | 03/2010 | 2.40 Years
CSC     | CSC, Reference 10  | 03/2007 | 05/2010 | 2.57 Years
AR, Tab 137, SSDD Briefing, at 63. Although performed
substantially within the recency window of November 4, 2007
through November 4, 2011, these references were each considered
to be “less recent” and were not further considered for the
purposes of determining whether CSC warranted a substantial
confidence rating, or for the purpose of assessing “concerns”
against CSC’s record of past performance. In contrast, CSC
points to two references, provided by awardees, which
demonstrate similarly recent end dates but were considered “more
recent” by the agency:
Offeror   | Reference               | Start   | End     | Sustained
[DELETED] | [DELETED], Reference 9  | 07/2009 | 06/2010 | 1.00 Years
[DELETED] | [DELETED], Reference 10 | 02/2009 | 08/2010 | 1.58 Years
Id. at 140, 85. CSC asserts that the agency treated these
offerors unequally, because there is no substantive difference
between the end dates of CSC’s references, versus those of
[DELETED] and [DELETED], which would rationally support
permitting the awardees’ references to contribute to a
substantial confidence rating and to resolve “concerns,” while
preventing CSC’s references from being considered for either
purpose.
Concerning Booz Allen, four of Booz Allen’s past performance
references were considered “less recent.” Booz Allen alleges
that two of these four references ended in the latter half of
the recency evaluation window (both ending in December 2009),
and should have been considered “more recent.” Booz Allen
asserts that if consideration had been given to these two
references in the “concerns” evaluation, Booz Allen’s evaluation
would have reflected zero concerns, rather than [DELETED]
concerns. Booz Allen Comments at 32. Booz Allen further alleges
that the recency criteria, as applied, were unreasonable where
four of its references were eliminated from consideration in the
“concerns” evaluation, despite three of those references
demonstrating highly relevant past performance in four of the
five confidence criteria areas, exceptional quality ratings, and
worldwide performance.
The contracting officer responds that the recency evaluation was
not unreasonable or unequal because the agency selected a common
cut-off date for more recent versus less recent past
performance, and applied that cut-off date to all offeror’s
references. Although the common cut-off date is not apparent
from the record, the contracting officer asserts that the agency
selected June of 2010 as the threshold for demonstrating “more
recent” performance, with references ending during or subsequent
to that time considered more recent and therefore “greater
impact.” Supplemental Contracting Officer’s Statement, CSC, at
4; Contracting Officer’s Statement, Booz Allen, at 39. The
contracting officer contends that the application of this
cut-off date explains the determination that the protester’s
references were “less recent,” while the highlighted awardee
references were “more recent.”
As an initial matter, we afford little weight to the contracting
officer’s assertion that the [performance confidence assessment
group] PCAG applied a common cut-off date of June 2010 to all
offerors’ past performance references. We will generally
consider post-protest explanations that provide a detailed
rationale for contemporaneous conclusions and simply fill in
previously unrecorded details so long as those explanations are
credible and consistent with the contemporaneous record. ITT
Fed. Servs. Int’l Corp., B-283307, B-283307.2, Nov. 3, 1999,
99-2 CPD ¶ 76 at 6.
In this case, the contracting officer’s statement does not meet
the above standard, since it is contradicted by evidence present
in the contemporaneous record. Specifically, the contemporaneous
record contains meeting notes from the SSDD Briefing which quote
a discussion of the recency analysis. The discussion is
transcribed as follows:
Question--[SSA]: Was there a numerical cut-off for determining
recency, or for determining a citation to be more recent?
Response--[PCAG Chairperson]: Approximately one year was used
for recent.
[SSET Chairperson]: For more recent we looked for citations
towards the end of the window.
[PCAG Chairperson]: There wasn’t any specific cutoff though.
AR, Tab 138, SSDD Briefing Notes, at 3. Where the
contemporaneous record does not support the assertion that the
PCAG utilized a common cutoff date of June 2010 to determine
“more recent” past performance, we accord the contracting
officer’s assertion little weight.
Absent the application of the cut-off date of June 2010, which
is not supported by the contemporaneous record, there is no
discernable basis for why CSC’s reference ending in May 2010 was
materially “less impactful” than the awardee [DELETED]’s
reference ending in June 2010. In this regard, we see no
reasonable explanation for why [DELETED]’s reference was viewed
as supporting a substantial confidence rating and resolving
“concerns,” while CSC’s reference was excluded from
consideration for those purposes, notwithstanding the fact that
CSC’s reference was superior to [DELETED]’s reference in all
aspects of the past performance evaluation other than recency.
In this connection, CSC’s reference was evaluated as relevant in
one past performance area and highly relevant in three areas,
with exceptional quality, while [DELETED]’s reference was
evaluated as relevant in two areas and highly relevant in two
areas, with very good quality. AR, Tab 137, SSDD Briefing, at
64-67, 141-44. CSC’s reference was also more sustained than [DELETED]’s
reference, and was performed at worldwide locations, while [DELETED]’s
reference was performed only within the continental United
States. Id. at 63, 140. Nonetheless, because [DELETED]’s
reference contract ended one month more recently, only [DELETED]’s
reference was “higher impact” in the agency’s analysis. We
cannot conclude that the agency had a reasonable basis to
support this result.
In the final analysis, even to the extent the PCAG had utilized
a common cutoff date, the agency’s approach to considering “more
recent” versus “less recent” past performance was not
reasonable. In this regard, we conclude that it was not
reasonable to wholly exclude an offeror’s references from
consideration in assigning a substantial confidence rating, or
in resolving “concerns,” based on minor differences in recency
without consideration of whether less recent references
demonstrated highly relevant, exceptional quality, and/or
worldwide past performance. Accordingly, we sustain the
protests.
Assessment of “Concerns”
All four protesters allege that the agency’s assignment and
application of “concerns” in the performance confidence
assessment was significantly flawed. First, CSC, HP, and Booz
Allen argue that the agency unreasonably relied on “concerns” to
differentiate the quality of offerors assessed the same overall
performance confidence rating where the “concerns” evaluation
process did not meaningfully consider, and in fact
misrepresented, relative quality. Second, Booz Allen and Harris
allege that the assessed “concerns” could not reasonably stand
in for or represent an offeror’s ability to perform elements of
the work, and wholly ignored the offerors’ substantive records
of past performance set forth in their “lower impact”
references. We agree with the protesters on both issues.
First, as explained above, in order to determine areas of
concern, the agency conducted a reevaluation of the offerors’
past performance references against the 27 performance
confidence relevancy subcriteria, looking for two references
meeting the agency’s substantial confidence thresholds for each
subcriteria. For each subcriteria in which an offeror did not
provide two references meeting the thresholds, the offeror was
assigned a concern. In the SSDD briefings and SSDD, these
concerns were used as the primary differentiator between
offerors with equivalent overall performance confidence ratings.
For example, in comparing the offerors with substantial
confidence ratings, the SSDD reads as follows:
Moreover, not all offerors with a Substantial confidence
assessment are the same. For example, Booz Allen was rated
Substantial despite having [DELETED] areas of concern (where the
offeror did not address the [DELETED] criteria with at least two
citations with greater impact at worldwide locations) while
[DELETED] was assessed as substantial with [DELETED] areas of
concern.
AR, Tab 139, SSDD, at 14. Further, it is apparent that the SSA
used the differences in the number of “concerns” as reflective
of a difference in the relative quality of the offerors’ past
performance. For example, the SSA concluded, based on the number
of “concerns,” that “[DELETED].” Tab 138, SSDD Briefing Notes,
at 12.
However, because the substantial confidence thresholds
established by the agency only considered whether an offeror’s
past performance was “favorable,” and discarded references on
grounds unrelated to quality, an offeror’s number of “concerns”
did not actually indicate whether the offeror had a record of
higher quality past performance. As a consequence of this
disconnect, it is clear that many of the SSDD’s conclusions
concerning the relative merits of the offerors’ past
performance quality were unsupported.
For example, regarding Booz Allen, an analysis of the actual
past performance quality ratings assigned to Booz Allen’s
references, versus those of [DELETED] and [DELETED],
demonstrates that, notwithstanding its [DELETED] “concerns”,
Booz Allen’s overall past performance quality was roughly equal
to [DELETED]’s, and equal to or slightly higher than [DELETED]’s.
See AR, Tab 137, SSDD Briefing at 39 (Booz Allen), 116
([DELETED]), 149 ([DELETED]). Accordingly, the agency’s use of
the assessed “concerns” to differentiate between the past
performance quality of offerors with equal performance
confidence ratings was misguided where the “concerns” did not
reflect on the relative quality ratings of the offerors’ past
performance. Nor can we conclude that the “concerns” served as a
reasonable proxy for an integrated analysis of the underlying
factors contributing to the offerors’ overall performance
confidence assessments. Accordingly, we sustain the protests.
Second, we agree with the protesters that it was unreasonable
for the agency to interpret the assessed “concerns” as
reflecting on the offerors’ ability to perform in the
subcriteria areas. As noted by the agency, all 20 of the
offerors considered for award under the RFP were found
technically acceptable. Further, to the extent the agency viewed
the “concerns” as reflecting gaps in the offerors’ past
performance concerning the subcriteria areas, this conclusion
was also unreasonable because the agency’s process for assessing
concerns wholly discarded substantial evidence of relevant, high
quality past performance contained in references that did not
meet the agency’s supplemental “greater impact” and performance
location thresholds.
As examples, again turning to Booz Allen’s evaluation, the
agency’s performance confidence methodology led it to conclude
that there were “concerns” regarding Booz Allen’s proposal under
the “[DELETED]” and “[DELETED]” subcriteria of the system
sustainment performance confidence criteria area. As addressed
above, these “concerns” were assessed because Booz Allen’s
evaluation did not demonstrate two or more references addressing
the subcriteria areas that were (1) “greater impact” (“more
recent” and sustained for more than one year), (2) relevant or
highly relevant, (3) favorable, and (4) performed at worldwide
locations. Based on Booz Allen’s lack of qualifying references,
the agency recorded that “the PCAG had concerns about [Booz
Allen’s] ability to perform” in the [DELETED] and [DELETED]
areas, among other areas. AR, Tab 137, SSDD Briefing, at 40.
However, the underlying record of the agency’s evaluation shows
that Booz Allen in fact provided four past performance
references that addressed the [DELETED] subcriteria. Of these
four references, three were rated highly relevant to the system
sustainment criteria area with exceptional performance quality,
while the fourth reference was rated relevant in the criteria
area with very good quality. AR, Tab 137, SSDD Briefing, at
41-42. Additionally, two of the highly relevant/exceptional
references and the relevant/very good reference were sustained,
and all four of the references were performed at worldwide
locations. Id. However, because these references were not “more
recent” as defined by the agency, they did not meet the agency
threshold for consideration and were discarded, resulting in the
agency’s conclusion that there was a concern about Booz Allen’s
ability to perform in the [DELETED] subcriteria area.
Similarly, the record shows that Booz Allen’s proposal provided
six past performance references addressing the [DELETED]
subcriteria area. All six references were relevant or highly
relevant and exhibited very good or exceptional quality. In
addition, two of the six references were both recent and
sustained as defined by the agency. Despite this substantial
record of past performance in the subcriteria area, the agency
concluded that there was a “concern” in this area because, of
the two references that were recent and sustained, only one
reference was for work performed at worldwide locations.
Accordingly, because Booz Allen did not demonstrate two or more
references meeting all agency thresholds for consideration, the
SSA concluded that there were “concerns about Booz Allen’s
ability to perform” in the area of [DELETED]. AR, Tab 139, SSDD,
at 6.
We cannot find the agency’s “concerns” assessment process, or
the SSA’s conclusions, to be reasonable where Booz Allen in fact
presented an expansive history of relevant, high quality
performance in subcriteria areas that were flagged as
“concerns.” The record confirms that in the assessment of
“concerns,” the agency disregarded any references that did not
meet all of the agency’s “greater impact” thresholds, despite
the fact that the references met the requirements for
consideration set forth in the RFP, and demonstrated relevant,
high quality prior work in the areas.
The fact that an offeror’s references addressing a subcriteria
area were of “lesser impact” in the agency’s view did not permit
the agency to entirely disregard the contents of the references
in the assessment of “concerns.” The agency’s requirement that
an offeror demonstrate at least two “greater impact,” worldwide
references in each subcriteria area to avoid the assessment of a
“concern,” without regard to any combination of less recent,
less sustained, or non-worldwide references, was arbitrary and
did not reflect a reasonable, integrated assessment of the
offeror’s past performance in accordance with the RFP’s
performance confidence assessment criteria. We sustain each of
the four protests on this basis. (Computer
Sciences Corporation; HP Enterprise Services, LLC; Harris IT
Services Corporation; Booz Allen Hamilton, Inc., B-408694.7,
B-408694.8, B-408694.9, B-408694.10, B-408694.11: Nov 3, 2014)
(pdf)
The solicitation
provided for award on a "best value" basis considering price and
three non-price evaluation factors--technical, management, and
past performance. Past performance information was to be
submitted by June 25, 2010, while the technical, management, and
price proposals were due on July 20. With respect to past
performance, offerors were to list up to six contracts performed
in the past 3 years, with current points of contact. RFP sect.
L, at 3.4. The solicitation further provided as follows: "[the]
government will evaluate the quality and relevance of the
offeror's past performance. Relevant present and past
performance pertain to work that was: similar to or greater than
the kind of requirement appearing in this solicitation with
regard to location, size, scope, type, dollar value, complexity
and performed as prime." RFP sect. M, at 3.
On June 25, ICTC submitted a spreadsheet--with 12 entries
listing 13 prior contracts--as its past performance
information, and later timely submitted its technical,
management, and price proposals. On July 20, the agency notified
ICTC that its proposal was not evaluated because it had failed
to submit the required past performance volume on June 25. More
specifically, the agency found that the spreadsheet ICTC
submitted did not include full periods of performance, full
descriptions of the listed contracts, or points of contact, as
required by the solicitation. Accordingly, the agency rejected
the proposal because it could not evaluate ICTC's past
performance.
ICTC does not argue that the information in the spreadsheet the
agency evaluated should have been found acceptable. Rather, it
claims that it submitted a spreadsheet that did in fact include
the required information. In support of its claim, ICTC has
furnished a copy of the spreadsheet it allegedly sent to the
agency by e‑mail on June 25; it is different from the agency's
version--which supports its evaluation--and does include the
required information.
A finding that a protester submitted a proposal or other
information as claimed requires evidence that the agency
received the information; absent such evidence, there is no
basis for us to question the agency's representation as to what
was or was not received. See, e.g., Latvian Connection Trading
and Constr., LLC, B-402410, Feb. 25, 2010, 2010 CPD para. 58 at
3.
Notwithstanding that ICTC has furnished our Office an acceptable
version of the spreadsheet, there is no evidence in the
record--aside from the protester's unsupported assertion--that
this latter version of the spreadsheet is the one received by
the agency. In this regard, in response to a request from ICTC,
the agency searched for e-mails it received on June 25, and
reports that it did not locate an e‑mail with the spreadsheet
that ICTC now references. Beyond this total absence of
supporting evidence, it is not apparent--if ICTC submitted only
the acceptable spreadsheet, as it claims--how the agency
obtained the unacceptable spreadsheet that it evaluated.[2] We
conclude that, while the circumstances surrounding submission of
the spreadsheet are not clear, there is no basis to question the
agency's representation as to the spreadsheet it received, or
otherwise to conclude that the agency misevaluated the
information submitted by ICTC. Consequently, there is no basis
to question the agency's rejection of ICTC's proposal. (Industrial
Construction & Trading Company, B-403849, December 13, 2010)
(pdf)
As noted above,
the record reflects that the SEB divided its evaluation of the
firms' [past performance questionnaires] PPQs into two parts,
one based on the narrative information contained in the PPQs and
the second based on the individual PPQ chart ratings. For the
purpose of evaluating the PPQ chart ratings, the SEB, as
explained above, used a numerical scoring system, one which
assigned points for Superior and Acceptable ratings; using this
scoring system, the SEB calculated an average score for each
firm. In this regard, an average score of 36 points, or higher,
translated to a Superior rating. Under this scheme, Vanguard
received an Acceptable rating for its PPQ chart ratings--it had
an average score of less than 36 points. While Vanguard received
a narrative rating of Superior for its PPQs, it only received an
Acceptable rating for its PPQs overall, and an overall past
performance rating of Acceptable.
In its protest, Vanguard argues that its Acceptable PPQ chart
rating was unreasonable because the agency's scoring methodology
effectively penalized Vanguard for having submitted PPQs for two
less relevant contracts. Vanguard does not challenge the notion
of giving less relevant contracts less weight in the evaluation
per se; rather, Vanguard contends that the way in which the SEB
accounted for the less relevant contracts was fundamentally
flawed because the SEB effectively (and unreasonably) penalized
Vanguard by diminishing the value of its Superior ratings on its
relevant contracts. Specifically, in considering Vanguard's two
less relevant PPQs, the SEB assigned 7.5 points as opposed to 10
points for each Superior rating, and using these scores
calculated an average score for Vanguard. Vanguard maintains
that had these less relevant contracts not been submitted, its
past performance rating score would have been 36.25, thus
leading to a Superior rating, yet when the less relevant
contracts were included in the average, its score was reduced to
34.83, even though it received Superior ratings across the board
on the PPQs for the two less relevant contracts. According to
Vanguard, any reasonable consideration of its less relevant
contracts could only have increased its score since it received
all Superior ratings for these contracts. We agree.
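The arithmetic is easy to demonstrate. In the sketch below, the
individual PPQ chart scores are hypothetical (the record does
not reproduce Vanguard’s actual scores, and the 5-point value
assumed for an Acceptable rating is invented for illustration),
but the 36-point cutoff and the 10-point versus 7.5-point
values for Superior ratings are those described above:

    # Hypothetical illustration of the SEB's averaging method.
    # Assumed: four rated areas per PPQ chart; Superior = 10
    # points on a relevant contract, 7.5 points on a less
    # relevant one; Acceptable = 5 points (invented value); an
    # average of 36 or more yields an overall Superior rating.
    SUPERIOR_CUTOFF = 36.0

    relevant_scores = [40.0, 40.0, 35.0, 30.0]  # avg 36.25
    less_relevant_scores = [30.0, 30.0]         # all Superior: 4 x 7.5

    avg_relevant_only = sum(relevant_scores) / len(relevant_scores)
    all_scores = relevant_scores + less_relevant_scores
    avg_all = sum(all_scores) / len(all_scores)

    print(avg_relevant_only)   # 36.25 -> Superior
    print(round(avg_all, 2))   # 34.17 -> Acceptable

Because every possible score on a less relevant contract is
capped below the cutoff under this scheme, averaging such
contracts in can only pull an all-Superior offeror’s mean
downward--the same defect at issue in Olympus, discussed next.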
In Olympus Bldg. Servs., Inc., B-285351, B-285351.2, Aug. 17,
2000, 2000 CPD para. 178, we sustained a protest where the
agency used a scoring methodology to evaluate offerors'
experience which was similar to the one used by FEMA to evaluate
firms' past performance in this case. Specifically, the agency
in Olympus assigned different point scores depending on the
degree of relevance of an offeror's experience, deducted points
for less relevant experience, added the scores, and calculated
an average. The application of this formula penalized the
protester for two extra experience references, albeit for less
relevant experience. We noted that had the protester's score
been based on only the comparable references, the protester
would have received a higher score than it did when these
references were averaged with the less relevant references. We
concluded that the agency's scoring methodology improperly
penalized the protester for having included extra references for
its experience.
Here, Vanguard submitted six PPQs, two of which were identified
as less relevant. The agency deducted points for each of the
Superior ratings Vanguard received on these contracts, which,
when averaged with Vanguard's ratings for its relevant
contracts, had the effect of lowering its overall PPQ rating
score, notwithstanding the fact that Vanguard received the
highest adjectival ratings on its less relevant PPQs. Thus, as
in Olympus, the agency effectively penalized Vanguard for having
submitted references for additional, less relevant contracts and
used what was an arbitrary score for the purpose of its
evaluation. See also United Paradyne Corp., B‑297758, Mar. 10,
2006, 2006 CPD para. 47 (sustaining protest where the agency's
approach to evaluating past performance was unreasonable because
it had the effect of penalizing offerors with relevant
experience for their non-relevant experience).
The agency argues that Vanguard was not materially impacted by
this issue since the PPQ chart ratings were given minimal weight
and any change in those ratings would not have affected the
result in any event. Setting aside the fact that such post hoc
assessments made by agencies in the heat of the adversarial
process are given little weight by our Office, see Boeing
Sikorsky Aircraft Support, B-277263.2, B-277263.3, Sept. 29,
1997, 97-2 CPD para. 91 at 15, the agency's conclusions are not
supported by the underlying record. Vanguard received a Superior
rating for the narrative information contained in its PPQs--information
which FEMA claims was more important to the overall
evaluation--yet Vanguard only received an overall rating of
Acceptable for its PPQs. This was apparently due to its
Acceptable rating for its PPQ chart information, which, as
explained above, was derived from the agency's faulty scoring
system. Moreover, if Vanguard had received an overall Superior
rating for its PPQs, there is a reasonable possibility that it
would have received an overall past performance rating of
Superior. Thus, we cannot conclude, as the agency suggests, that
Vanguard was not prejudiced by this error. (Shaw-Parsons
Infrastructure Recovery Consultants, LLC; Vanguard Recovery
Assistance, Joint Venture, B-401679.8, B-401679.9, B-401679.10,
September 8, 2010) (pdf)
The evaluation of
past performance is a matter of agency discretion, and we will
review the evaluation only to ensure that it was reasonable and
consistent with the solicitation's stated evaluation criteria
and applicable statutes and regulations. Guam Shipyard,
B-311321, B-311321.2, June 9, 2008, 2008 CPD para. 124 at 3. The
evaluation by its very nature is subjective; the offeror's mere
disagreement with the agency's evaluation judgments does not
demonstrate that those judgments are unreasonable. SDV Telecomms.,
B-279919, July 29, 1998, 98-2 CPD para. 34 at 2.
PMT challenges the agency's evaluation of the PPIRS quality
records for past performance. It contends that the contracting
officer improperly diminished the value of PMT's purple rating
and irrationally concluded that PMT's higher quality rating did
not indicate a higher degree of confidence.
Regardless of the color rating assigned, such ratings are merely
guides for intelligent decision-making in the procurement
process; and where, as here, the agency considers the underlying
basis of the ratings and rationally determines that a color
rating does not provide higher confidence in performance, the
actual color rating assigned is inconsequential in the analysis.
Cherry Rd. Techs.; Elec. Data Sys. Corp., B-296915 et al., Oct.
24, 2005, 2005 CPD para. 197 at 12. Here, the contracting
officer credited PMT's purple rating under PPIRS and recognized
that this rating was higher than ATAP's. The contracting officer
also discounted the only written quality report for PMT, which
was negative, because of its age and a lack of information about
the reasons for this report. The contracting officer also
acknowledged that PMT delivered 50 line items as opposed to
ATAP's single line item, and found that, except for the
discounted quality report for PMT mentioned above, there were no
quality reports, negative or positive, about either offeror's
line items. Given this absence of specific data, the contracting
officer found that PMT's more numerous complaint-free deliveries
did not indicate "a higher degree of confidence should be
afforded PMT over ATAP" with regard to quality. Based on our
review, we find that the contracting officer reasonably
evaluated the offerors' respective quality past performance
consistent with the RFP's evaluation scheme, and reasonably
determined that the quality aspect of the offerors' past
performance was not a significant discriminator between
proposals. In this regard, we further find that the contracting
officer reasonably determined that PMT's superiority under this
factor did not indicate "a higher degree of confidence" that PMT
would be more likely to provide items that would meet the
agency's quality standards.
PMT also challenges the agency's evaluation of delivery past
performance. It complains that the contracting officer relied on
PPIRS scores without considering the "currency and relevance of
the information, source of the information, context of the data,
and general trends in contractor's performance," as required by
the FAR. PMT's Second Supplemental Comments at 7; FAR sect.
15.305(a)(2).
Our review of the record confirms that the contracting officer
considered all of the information required by the solicitation
and the FAR. In this regard, the contracting officer evaluated
the available PPIRS data, consulted with the Air Force PPIRS
liaison, and reviewed PMT's contract files to confirm the
validity of PMT's delinquencies, and to understand the reasons
for and circumstances surrounding those delinquencies. The
contracting officer also provided PMT with an opportunity to
correct any errors in its PPIRS data. Based on the information
available, the contracting officer determined that, even after
inaccuracies were corrected, PMT delivered items late 36 percent
of the time and was at fault for all of those late deliveries.
Air Force Response to PMT's Supplemental Comments at 7; AR, Tab
29, Reevaluation Decision Document, at 10. In fact, of 12 late
deliveries associated with Air Force contracts, 8 were in excess
of 60 days late. Air Force Memorandum of Law at 12. This
demonstrated to the contracting officer an "overwhelming[] . . .
pattern of disregard on the part of PMT in meeting their
contractual obligations for delivery." AR, Tab 29, Reevaluation
Decision Document, at 14. Based on our review of the record, we
find the contracting officer's assessment of PMT's delivery
record to be reasonable.
PMT asserts that the contracting officer's conclusion that
ATAP's delivery record was superior to PMT's is unreasonable,
given that ATAP's delivery score of 100 percent was based only
on one line item, whereas PMT's delivery record included many
line items and many on-time deliveries. The contracting officer
considered the number of line items upon which PPIRS scores were
based, as well as PMT's on-time deliveries, but noted that PMT's
significant number of late deliveries "warranted a diminished
degree of confidence in their ability to perform and deliver on
time." In so doing, the contracting officer did not give too
much weight to the delivery aspect of past performance, but
appropriately recognized it as a clear discriminator between the
proposals. Given the "critical" nature of the work, we find the
contracting officer's reasoning unobjectionable. AR, Tab 29,
Reevaluation Decision Document, at 15. (Precision
Mold & Tool, B-400452.4; B-400452.5, April 14, 2009) (pdf)
During the course
of this protest, Master Lock also argued that the agency’s
evaluation of Evergreen’s past performance was unreasonable. As
discussed above, Evergreen declined to accept order No. 2745,
which was issued under a different contract. DLA acknowledges
that it did not consider these events in its evaluation of
Evergreen’s past performance. AR at 8. The agency contends,
however, that it was not required to do so because the
submission of a quote by a vendor under an ID/IQ contract does
not result in a binding obligation. Thus, the agency argues,
because Evergreen did not accept the order, there was no
contract performance for the agency to evaluate.
The agency is correct that neither the submission of a quote by
a vendor nor the issuance of an order by an agency results in a
binding contractual obligation. Rather, the government’s order
represents an offer that the vendor may accept either through
performance or by a formal acceptance document. M. Braun, Inc.,
B-298935.2, May 21, 2007, 2007 CPD para. 96 at 3.
We disagree, however, with DLA’s view as to the relevance of a
vendor’s rejection of an order to a past performance evaluation.
A vendor who has been awarded an ID/IQ contract has a
contractual relationship with the agency that awarded the
underlying contract. Although the work required under any task
or delivery order will only become a binding obligation on the
parties if the vendor accepts the order, the underlying ID/IQ
contract may itself have obligations. For example, a contract
may require a vendor to accept orders placed by the agency
within certain parameters. If a vendor were required to accept
certain orders placed against the contract, but was unable to do
so, this could be a matter relevant to the evaluation of that
vendor’s past performance.
Our Office requested that the agency address “whether under the
terms of the contract against which orders could be placed with
Evergreen, vendors were required to fill orders, or whether
[Evergreen] was free to decline to accept the order based on an
inability to fill the order.” GAO Request for SAR, Apr. 29,
2008, at 3. The agency did not address our question, but instead
argued that the matter was not appropriate for review under the
past performance evaluation factor. SAR at 6. On this record, we
are unable to determine whether the agency’s conclusion
regarding the relevance of order No. 2745 to Evergreen’s past
performance was reasonable. (Master
Lock Company, LLC, B-309982.2, June 24, 2008) (pdf)
Guam protests the agency’s evaluation of its past performance as
marginal, contending that it has received accolades for
performance of similar work in the past. The protester contends
that if the agency had received information regarding its past
performance from the reference Guam identified in its proposal,
the information would have been favorable; in this regard, the
protester questions the adequacy of the agency’s attempt to
contact the reference by email, since the agency has not
demonstrated that the email message was in fact received by the
reference. The protester also challenges the agency’s reliance
on the adverse CPAR regarding an ongoing Guam contract with the
Navy for similar work. Guam has not refuted the substance of the
performance problems described in that CPAR, either in its
protest or when it had the opportunity to comment on the CPAR
findings before the report became final in the PPIR. Rather, the
firm only generally suggests that the agency’s consideration of
the CPAR was improper because Guam had not realized that the
CPAR had been posted on the PPIR even though the contract had not
yet been fully performed.
Our Office will examine an agency’s past performance evaluation
only to ensure that it was reasonable and consistent with the
stated evaluation criteria and applicable statutes and
regulations, since determining the relative merit of an
offeror’s past performance is primarily a matter within the
contracting agency’s discretion. See Pacific Ship Repair and
Fabrication, Inc., B-279793, July 23, 1998, 98-2 CPD para. 29 at
3‑4. In conducting a past performance evaluation, an agency has
discretion to determine the scope of the offerors’ performance
histories to be considered, provided all proposals are evaluated
on the same basis and consistent with the solicitation
requirements. Federal Envtl. Servs., Inc., B-250135.4, May 24,
1993, 93-1 CPD para. 398 at 12. An agency is only required to
use reasonable effort to contact an offeror’s references, and is
not required to make multiple attempts to contact a firm’s past
performance references. See OSI Collection Servs., Inc.; C.B.
Accounts, Inc., B‑286597.3 et al., June 12, 2001, 2001 CPD para.
103 at 9.
Here, the agency points out that the past performance reference
listed by the protester failed to respond to the agency’s email
inquiry for relevant survey information. While Guam suggests
that proof of the reference’s receipt of the email message is
necessary to show a reasonable attempt at contacting the
reference, the firm provides no support for its contention.
Rather, in light of the RFP’s emphasis on the importance of the
offeror providing reliable contact information (including, as
was used here, email addresses) and ensuring cooperation from
its references, we think the agency’s email inquiry was an
adequate effort to contact the reference; the failure of the
reference to respond does not show that the agency’s effort was
inadequate (particularly in view of Guam’s apparent failure to
ensure, or, at a minimum, encourage its reference to cooperate)
or that the past performance evaluation was improper.
As stated above, the agency, consistent with the terms of the
RFP, considered a comprehensive, recent, and relevant CPAR that
rated the firm’s past performance as marginal and showed that
Guam had problems with following procedures, safety issues,
accidents, obtaining material, scheduling, manpower, and
management. Guam had an earlier opportunity to respond to the
information reported in the CPAR and, despite seeking an
extension of time to do so, never refuted the CPAR, which then
became a final report in the PPIR and, in accordance with the
terms of the RFP, was properly considered as part of the firm’s
past performance evaluation. The protester has provided no basis
to question the reasonableness of the agency’s findings that
such unfavorable recent performance of similar work suggests
that similar instances of “re-work” or flawed performance could
be anticipated under this contract, and that there is also a
greater potential for agency involvement in monitoring
performance, which may increase the ultimate cost of performance
for the agency. Lastly, while Guam generally suggests that
accolades for good past performance have been given to the firm
in the past, as the agency points out, the firm did not include
such information in its proposal, nor was it in the CPAR, and
thus, we have no basis to object to the agency’s failure to
credit the firm with such information. (Guam
Shipyard, B-311321; B-311321.2, June 9, 2008) (pdf)
Where an agency utilizes a lowest price technically acceptable
source selection process, the FAR provides that past performance
need not be an evaluation factor at all. However, when it is
included, it cannot be utilized for the purpose of making a
“comparative assessment”; rather, past performance is to be
determined solely on a pass/fail basis. FAR sect. 15.101-2. Our
Office has long held that pass/fail evaluations of capability
issues, such as past performance, are tantamount to
responsibility determinations, with the result that a rating of
“unacceptable” in these areas is the same as a determination of
nonresponsibility. See, e.g., Phil Howry Co., B-291402.3,
B-291402.4, Feb. 6, 2003, 2003 CPD para. 33. Consistent with
this premise, in the context of a lowest price technically
acceptable evaluation scheme, where the contracting officer
determines that a small business’ past performance is not
acceptable, “the matter shall be referred to the Small Business
Administration for a Certificate of Competency
determination.”[4] FAR sect. 15.101-2(b)(1). By including past
performance as an evaluation factor in the RFP’s lowest price
technically acceptable evaluation scheme here, the Navy
essentially carved out one element of a responsibility
determination and utilized it as an evaluation factor in this
set-aside. This, however, did not alter the fact that the
pass/fail past performance evaluation in this context remained a
matter of responsibility since, if the Navy had found CSPS’ past
performance “unacceptable,” it would have been required to
submit that determination to the Small Business Administration
for a certificate of competency (COC) review. Since past
performance ultimately is a matter of responsibility, the agency
could look beyond an offeror’s compliance with the informational
requirements set forth in the RFP, and therefore retained the
discretion to find CSPS’ past performance acceptable despite
CSPS’ failure to submit three past performance questionnaires.
Because Frontier has not in any way challenged the Navy’s
affirmative determination of responsibility with respect to CSPS,
but instead only challenged CSPS’ failure to comply with the
informational requirements under the past performance evaluation
factor, there is no basis for our Office to conclude that the
award to CSPS was improper. Citing our decisions in Prudent
Techs., Inc., B-297425, Jan. 5, 2006, 2006 CPD para. 16 and
Menendez-Donnell & Assocs., B-286599, Jan. 16, 2001, 2001 CPD
para. 15 at 3 n.1, Frontier suggests that CSPS’ past performance
evaluation was not a matter of responsibility but rather a
question of technical acceptability and, as a consequence, the
Navy would not have been required to refer CSPS to SBA for a COC
review. Prudent Techs., Inc. and Menendez-Donnell & Assocs. are
distinguishable, however, since, unlike the case at hand, the
protesters in those cases disregarded the solicitation regarding
the submission of specific information required by the agency
for the purpose of evaluating experience or past performance
and, based on this failure, their proposals were found to be
technically unacceptable. In these cases, the agency did not
have any basis to assess or judge the protesters’ capabilities
because of their failure to comply with solicitation
requirements. Here, in contrast to the cited cases where the
protesters did not submit the required information, CSPS made an
effort to comply with all of the RFP’s informational
requirements, and in fact submitted four past performance
questionnaires. It was only as a result of the Navy’s assessment
of the information provided by CSPS that CSPS was deemed to have
submitted only two “relevant” questionnaires. Notwithstanding
this conclusion, however, the Navy determined that CSPS was
capable of performing the work and its past performance
acceptable--a determination ultimately regarding CSPS’
responsibility. (Frontier Systems
Integrators, LLC, B-298872.3, February 28, 2007) (pdf)
Dellew argues that it should have received a significant
confidence rating under the past performance factor; in light of
its low price, this would have moved it into line for award.
Specifically, Dellew notes that both firms performed base supply
subcontracts and contracts to provide services that are part of
base supply, and asserts that the reference ratings for its and
DCSI’s prior contracts are essentially equal. Dellew argues that
its receipt of two “very good” reference ratings rather than
uniformly “excellent” ratings is a “slight difference” that
“cannot justify rating Dellew below DCSI.” Protester’s Comments
at 4. The past performance evaluation was unobjectionable.
Although the SSA commented on Dellew’s relatively limited past
performance, the record shows that the selection ultimately
turned on Dellew’s and DCSI’s reference ratings. In this regard,
while both firms received exemplary ratings, and Dellew
emphasizes the similarity in the ratings, the fact is that the
ratings were not the same--DCSI’s ratings were uniformly
“excellent,” while Dellew received two “very good” ratings in
addition to its “excellent” ratings. As discussed above, one of
Dellew’s references also did not unequivocally state that he
“definitely” would award a contract to Dellew again--instead of
selecting “1” to indicate that he “definitely” would award a
contract to Dellew today, he wrote in “1.5,” indicating that he
was less than certain that he would make a new award to Dellew.
While these appear to be relatively minor distinctions, there is
no basis to conclude that the agency was not permitted to give
weight to them in its price/technical tradeoff. As noted above,
Dellew’s price advantage over DCSI was relatively
minor--$24,736--and we think the agency reasonably could
determine that DCSI’s edge under the past performance factor was
sufficient to offset Dellew’s price advantage. Dellew argues
that the SSA misread the past performance evaluation results.
Specifically, Dellew points to the TEP’s summary of Dellew’s
past performance, which stated, “1 contract not LRS type award.”
AR, Tab 11, Dellew Past Performance Evaluations, subtab B, at 1.
Dellew interprets this notation to mean that the TEP determined
that three of its four listed contracts were LRS‑related, and
that only one was not. Protester’s Comments at 2-3. Dellew then
points to language in the SSA’s Briefing to the Evaluation Team,
which reads: “Past performance shows 1 contract, however, it is
not an LRS type contract.” AR, Tab 9, Source Selection Authority
Briefing, at 13. Dellew concludes that the SSA based her award
decision on incorrect information. While the language in the
agency’s various evaluation documents is confusing, the record
shows that the SSA was fully aware that Dellew had performed
more than one contract, and that she determined that three of
Dellew’s contracts did not involve the complexity, scope, or
type of LRS work required under the RFP. Specifically, as noted
above, the SSDD and Post-Award Debriefing set forth the SSA’s
determination that Dellew did not “have much experience” with
LRS contracts, and had broad LRS experience under only one
contract. Dellew does not actually claim that three of its
listed contracts were LRS contracts; rather, it merely focuses
on the apparent misstatement in the TEP’s summary. This is not
sufficient to establish that the evaluation was unreasonable
given our finding that the SSA’s ultimate conclusions in fact
are supported by the record. To the extent Dellew believes the
agency should have found that three of its contracts were LRS
contracts covering the work under the RFP, it has neither
identified which of its four contracts it believes are of this
type, nor shown where in its proposal it described the work
under those contracts as broad LRS work. This argument therefore
provides no basis for questioning the evaluation. (Dellew
Corporation, B-298233.2; B-298233.3, September 13, 2006) (pdf)
Based on the record, we are not persuaded that the agency
received past performance information from two of the references
as asserted by Sayres, or that DOE improperly failed to consider
the questionnaire received from a third. In this regard, the
agency made repeated attempts to contact the references,
including documented e-mail attempts on December 2, 2004 and on
January 26, 2005,[6] but the references either did not respond
or, in the case of the DOE reference, did not respond in a
timely fashion. Although Sayres complains that the agency failed
to take additional steps between December 2 and January 26 to
contact the references, an agency is not required to make
multiple attempts to contact past performance references. See
OSI Collection Servs., Inc.; C.B. Accounts, Inc., B-286597.3 et
al., June 12, 2001, 2001 CPD para. 103 at 9 (agency's single attempt
to contact protester's reference found reasonable). An agency
need only make a reasonable effort to contact a reference, and
where that effort proves unsuccessful--such as here, where the
information either was not received or was received too late in
the evaluation to be reasonably considered--it is
unobjectionable for the agency to proceed with the evaluation
without benefit of that reference's input. See Lynwood Mach. &
Eng'g, Inc., B-285696, Sept. 18, 2000, 2001 CPD para. 113 at 5. (Sayres
& Associates Corporation, B-295946; B-295946.2, April 25,
2005) (pdf)
We also find no error in the agency's failure to contact the ACO
of Williamson's prior Marion contract. An agency is not required
to contact each and every reference listed by an offeror in its
proposal. Roca Mgmt. Educ. & Training, Inc., B-293067, Jan. 15,
2004, 2004 CPD para. 28 at 3. The agency explains here that during the
time of the evaluation, the ACO was on medical leave and could
not be reached. The agency instead contacted the contracting
officer's technical representative, who had worked with
Williamson for 3 years, and he reported that Williamson's
performance was "poor at best" under Williamson's Marion
contract. Although Williamson asserts that the ACO would have
given a more favorable report, the record does not support this.
In fact, the record contains a letter to Williamson from the ACO,
and several letters to Williamson from a different contracting
officer, complaining of performance issues during the contract
period. E.g., AR, Tab 33, Past Performance Documents, exhs. 1,
3, 5, 8; see also exh. 7 (internal agency e-mails reflecting
ACO's involvement in addressing performance issues with
Williamson). (Williamson County
Ambulance Service, Inc., B-293811.5; B-293811.6; B-293811.7,
December 15, 2004) (pdf)
We find from our review of the record that the agency had a
reasonable basis on which to conclude that the awardee's past
performance was acceptable. Although it is true that one
contract reference questioned Securitas's billing system and
staff quality, this reference nevertheless rated the firm's
overall performance as "average" and assigned an average
numerical rating of 7.9 on a 10-point scale, where the rating
sheet indicated that 10 was the highest quality score and 5 was
acceptable. Agency Report, Tab N, Securitas Past Performance
Evaluation, at 5-7. The awardee's other two contract references
rated the firm's past performance as "excellent" and assigned
average numerical scores of 9.8 and 9.3. With respect to
Securitas's performance of the postal service contract in New
Jersey, which was not considered in the agency's past
performance evaluation, the agency has provided a letter from
the United States Postal Service plant manager, who indicates
that Securitas's performance has been acceptable. Agency Report, Tab J,
Letter from United States Postal Service. Although AGS disagrees
with the agency's judgment that Securitas's quote was entitled
to an acceptable, "pass" rating based upon these references, its
disagreement with the agency's judgment does not demonstrate
that the evaluation was unreasonable. UNICCO Govt. Servs., Inc.,
supra, at 7. (American Guard
Services, Inc., B-294359, November 4, 2004) (pdf)
We find no basis upon which to question the past performance
evaluation. Although the letters from prior MEPS commanders
which Staten furnished with its proposal noted SIH's superior
performance on the MEPS contract, the Army reports, and the
record confirms, that SIH's performance had deteriorated in the
most recent contract year. In this regard, the contracting
officer had available monthly inspection reports, prepared by
the contracting officer's representative (COR), which documented
"discrepancies" in SIH's performance, including matters relating
to applicants' health and safety, such as unsafe lighting,
garbage strewn around the applicants' entrance, unsanitary food
handling, and soiled utensils; failures to comply with
specification requirements concerning the availability and
selection of food for the applicants; and failures to afford
applicants the same level of service and quality of facilities
offered to other guests. These evaluation reports (including a
report dated January 21, 2003) indicated that some of the
discrepancies were "repeat" discrepancies, previously raised
with the contractor but not yet resolved. Further, in addition
to the monthly inspection reports, the contracting officer also
had available negative evaluation sheets entitled "How Do We
Rate? "that had been completed by MEPS applicants after their
stay at the SIH facility (and had been retained so that the
criticisms therein could be addressed with the contractor).
These applicant evaluation sheets included a number of
complaints, such as the fact that there were not sufficient
amounts of food, that the food lacked flavor, and that the food
was cold. While the applicant evaluation sheets retained by the
agency may have represented only a portion of the evaluations
returned by the applicants, they appear to confirm the
criticisms in the COR's inspection reports and the agency's
position that, at least recently, SIH's performance of the MEPS
contract had been deficient. On this record, given the current
deficiencies in its performance, we find that the agency
reasonably concluded that SIH warranted no more than a
satisfactory rating for past performance. (The
Staten Island Hotel, B-292893.3, August 6, 2004) (pdf)
As indicated above, the CPARs were not the only source of
information used to evaluate AIROD's performance of the prior
contract for these services. In large measure, the agency's
failure to obtain the most recent CPAR information was mitigated
by use of past performance questionnaires and follow-up
interviews with agency officials familiar with the company's
performance on this contract. These materials generated ratings
ranging from satisfactory to exceptional, with most of the
ratings in the very good category. AR, Tab 6 at 16. In addition,
officials at the Defense Contract Management Agency reported
very favorably on their experience with AIROD on the earlier
contract. Their favorable comments included that AIROD meets
scheduling requirements, does a good job of handling unscheduled
work and expanded scopes of effort, has good management, and
makes impressive efforts to support customer needs. Id. In
short, the omission of 1 year's worth of favorable CPAR data did
not cause the agency to overlook the company's favorable past
performance on the predecessor contract. Moreover, we do not
think AIROD can reasonably claim it was prejudiced by the
agency's omission. Our Office will not sustain a protest unless
there is a reasonable possibility of prejudice, that is, unless
the protester demonstrates that, but for the agency's actions, it
would have had a substantial chance of receiving the award.
McDonald-Bradley, B-270126, Feb. 8, 1996, 96-1 CPD para. 154 at 3;
see Statistica, Inc. v. Christopher, 102 F.3d 1577, 1581 (Fed.
Cir. 1996). AIROD's challenges to its performance/confidence
rating must be considered in light of the evaluation results:
put simply, AIROD has already received the highest rating
available under the agency's evaluation scheme for its past
performance. For our Office to find prejudice here would require
several layers of speculation. First, given that the agency
obtained information about AIROD's performance of its predecessor
contract from sources other than CPAR data, we would have to
speculate that the missing CPAR information would materially
enhance the agency's already favorable view of AIROD's
performance. Second, we would have to speculate that this
additional information would have resulted in a perceived
difference between AIROD's and Singapore's exceptional ratings.
Finally, we would have to speculate that the agency would have
chosen to justify the expenditure of an additional $3 million to
obtain the benefit of this perceived difference. In our view,
there is nothing in this record to justify a conclusion that,
but for the agency's failure to obtain the 2003 CPAR report,
AIROD would have had a substantial chance of receiving the
award. (AIROD Sdn. Bhd., B-294127,
August 16, 2004) (pdf)
The evaluation here was unobjectionable; nothing in the RFP
prohibited offerors from submitting more than three contracts
for each team member or the agency from considering more than
three. While the RFP required that each offeror submit specified
information on its three most relevant contracts performed
within the past 3 years for itself and its critical
subcontractors, RFP § L-900(c)(1), it also specifically required
that offerors submit two additional pages listing all contracts
that the offeror was performing or had performed in the past 3
years and provided that the “Government may obtain and use
performance information on any or none of these programs.” Id. §
L-900(c)(3). Thus, there was nothing improper in the agency’s
consideration of five contracts in evaluating the past
performance of Cirrus’s subcontractor. (Efficiency
Management & Engineering Company, B-292676; B-292676.2,
October 31, 2003)
As the protester concedes, a
procuring agency properly may use traditional responsibility
factors as technical evaluation factors in a negotiated
procurement, where the agency is to perform a comparative
evaluation of those factors. Docusort, Inc., B-254852, Jan. 25,
1994, 94-1 CPD para. 38 at 6. The RFP here reflected such a
determination on the Coast Guard's part, expressly establishing
past performance as an evaluation factor, and the record shows
that, in evaluating the offers and making the award decision,
the agency relied on the past performance information provided
by the offerors. (Goode
Construction, Inc., B-288655; B-288655.2; B-288655.3,
October 19, 2001)