Past Performance Evaluation
The protester challenges an error in DHS’s
evaluation of VariQ’s past performance. In this regard,
the past performance evaluation team (PPET) report
incorrectly stated that VariQ received “good” past
performance questionnaire (PPQ) ratings for one of VariQ’s
contract references, a contract with the United States
Citizenship and Immigration Services for operations and
maintenance support to that agency’s verification
information system (VIS). AR, Tab 321, Consensus TET
Report, at 55. Based on these “good” ratings, the PPET
assigned a satisfactory confidence rating to the contract
reference. Id. In fact, however, VariQ received
“outstanding” PPQ ratings for the VIS contract, a fact
that was noted in the PPET report one page earlier. Id. at
54.
The agency contends that the reference to the “good” PPQ
ratings was a “typographical error,” which occurred
because an old set of PPQ ratings received for the same
contract was “not properly updated in the final evaluation
document” to reflect new PPQ ratings that had been
received. MOL at 36. DHS asserts that, notwithstanding
this error, the PPET was aware of the updated PPQ ratings
and fully evaluated VariQ’s performance based on those
ratings. Id. The agency reiterated this assertion in a
post-protest statement provided by the PPET Chair, which
represented that “[b]ased on the updated PPQ, the PPET
determined that the contract reference would still be
rated Satisfactory Confidence.” PPET Chair Statement at 4.
In reviewing an agency’s past performance evaluation, we
will question an agency’s evaluation conclusions where
they are unreasonable or undocumented. Logistics Mgmt.
Int’l, Inc., et al., B-411015.4 et al., Nov. 20, 2015,
2015 CPD ¶ 356 at 8. Additionally, we accord much greater
weight to contemporaneous source selection materials than
to representations made in response to protest
contentions. Celta Servs., Inc., supra, at 9. Further, we
give little weight to post hoc statements that are
inconsistent with the contemporaneous record. Caddell
Constr. Co., Inc., B-411005.1, B-411005.2, Apr. 20, 2015,
2015 CPD ¶ 132 at 11.
Based on our review of the contemporaneous record, we
conclude that the above error reflects an evaluation
mistake that broadly affected the agency’s past
performance evaluation and source selection decision,
rather than a minor typographical error. In this
regard, we note that the discussion of the incorrect PPQ
ratings appeared in the “finding” section of the PPET
report, which is the only section of the report that
explains the agency’s basis for assigning the VIS contract
a satisfactory confidence rating. AR, Tab 321, Consensus
TET Report, at 55. In that section, the report notes that
“VariQ received ‘Good’ ratings in Quality of Service,
Timeliness of Performance, and Business Relations,” and then
states, in the next sentence, “VariQ received partial in
scope and smaller in size and complexity – rating for this
contract past performance review – ‘Satisfactory
Confidence.’” Id. Accordingly, while the PPET chair
contends, in his post-protest declaration, that the agency
based its satisfactory confidence rating on the updated
PPQ, we find this representation to be inconsistent with
the contemporaneous record. The PPET report expressly
states that the satisfactory confidence rating assigned to
the VIS contract was based on the “good” PPQ ratings. See
id.
Additionally, we note that the SSA expressly discussed the
VIS contract as having received “good” PPQ ratings in the
best-value tradeoff determination. AR, Tab 333, SSDM, at 9
(quoting the above PPET report finding). While the SSA
provided a post-protest declaration representing that this
error did not affect the best-value tradeoff decision, we
note that this representation appears to be based, in
part, on the PPET Chair’s post hoc representation that
this error did not affect the PPET’s ratings.
Accordingly, it is unclear from the contemporaneous record
whether the PPET would have assigned the protester’s VIS
reference a confidence rating of satisfactory if the PPET
had relied on the correct PPQ scores. In this regard, we
note, for example, that the PPET contemporaneously
concluded that an Inserso reference warranted a
substantial confidence rating, where that reference
received outstanding PPQ ratings, despite being found to
be only partially similar in size, scope, and complexity.
See AR, Tab 367, TET Chair Statement Attach. 1, Corrected
PPET Report, at 7-8. Additionally, since VariQ’s other two
contract references were both assigned substantial
confidence ratings, it appears likely that an increase in its
rating for the VIS contract would have increased VariQ’s
overall past performance rating. In sum, we conclude that
the agency’s reliance on the incorrect PPQ ratings
contributed to the agency’s overall rating in this area,
and sustain VariQ’s challenge of the agency’s evaluation
of the protester’s past performance. (VariQ
Corporation B-414650.11, B-414650.15: May 30, 2018)
Where a protester challenges the past performance
evaluation and source selection, we will review the
evaluation and award decision to determine if they were
reasonable and consistent with the solicitation's
evaluation criteria and procurement statutes and
regulations, and to ensure that the agency's rationale is
adequately documented. Patricio Enters., Inc., B-412740 et
al., May 26, 2016, 2016 CPD ¶ 152 at 7. In reviewing an
agency's evaluation, we accord much greater weight to
contemporaneous source selection materials than to
representations made in response to protest contentions.
Celta Servs., Inc., B-411835, B-411835.2, Nov. 2, 2015,
2015 CPD ¶ 362 at 9. Further, we give little weight to
post hoc statements that are inconsistent with the
contemporaneous record. Caddell Constr. Co., Inc.,
B-411005.1, B-411005.2, Apr. 20, 2015, 2015 CPD ¶ 132 at
11.
Based on our review of the contemporaneous record here, we
conclude that the agency made substantial errors in
evaluating Inserso's past performance and, as a result of
these errors, failed to document a reasonable basis for
its evaluation. In this regard, the PPET report evidences
that the agency switched the background information for
Inserso's first and third contract references, listing the
latter in place of the former and vice versa. DHS then
compounded this error by expressly relying on the wrong
information in the remainder of the PPET report. For
example, the PPET report contains the following summary
table:
Contract Reference Number | Contract # | Recent | Size | Scope | Complexity | PPQ Final Rating
1 | FA3047-12-D-0005 DHA | Yes | $80M | Partial | Partial | Satisfactory Confidence
2 | [DELETED] | Yes | $292M | Partial | Partial | Substantial Confidence
3 | W81K04-13-D-0023 AFCEC | Yes | $96.4M | Partial | Partial | Substantial Confidence
AR, Tab 320, TET Consensus Report,
at 24. The agency concedes that, for contract reference
numbers one and three, the contract numbers and the size
values are incorrect, because they are based on the
swapped background information. For the "Recent," "Scope"
and "Complexity" columns, the values are the same for the
first and third contract references, such that reliance on
the incorrect background information did not result in
incorrect values being listed.
The agency repeated this mistake in the PPET report's
finding sections, which are the only sections of the
report that explain DHS's basis for its assignment of
confidence ratings to the two contract references. In
these sections, DHS expressly based its confidence ratings
for the first and third contract references on the
incorrect size information. See id. at 26, 32.
As a result of these errors, substantial portions of the
PPET report are either inaccurate or based upon
inaccurate information, including those portions of the
report where the agency documented its rating
determinations. See generally PPET Chair Statement Attach.
2, Redlined PPET Report, at 1-11. We further note that the
SSA, in turn, relied on these findings in the source
selection decision, even citing the incorrect size
information for Inserso's first and third past performance
references in the tradeoff section. See AR, Tab 332, SSDM,
at 22.
The above demonstrates that the contemporaneous record is
inconsistent with the post-protest statements provided by
the agency, which attempt to downplay the agency's
reliance on the incorrect size information. We therefore
give little weight to the agency's post hoc representation
that it would have reached the same result even if the
PPET had considered the correct size information. See
Celta Servs., Inc., supra. Further, we note there is
reason to question the correctness of this representation.
For example, while the agency contends that it would have
assigned a substantial confidence rating to the third
contract reference, which received outstanding past
performance questionnaire (PPQ) ratings but was only
partially similar in size, scope, and complexity to the
instant requirement, it contemporaneously assigned only a
satisfactory confidence rating to a contract reference
(reference number one) that it believed had similarly
received outstanding PPQ ratings but was only partially
relevant in size, scope, and complexity. See AR, Tab 320,
TET Consensus Report, at 26. It is therefore not clear
from the contemporaneous evaluation record what a past
performance evaluation that considered the correct size
information would have concluded. (Ace
Info Solutions, Inc. B-414650.10, B-414650.14: May 21,
2018)
SRTR argues that the RFP specifically required the agency
to consider the viability of the offerors' technical
approach. In this connection, the RFP provides: "DOE will
evaluate the offeror's key technical inputs, assumptions,
and justifications used to demonstrate the viability of
its technical approach and/or support its technical
understanding." RFP at 401. SRTR argues that the agency
simply failed to evaluate the considerations noted above
at the time it evaluated proposals and maintains that, had
it done so, it would have concluded that SRE's technical
approach was not viable. SRTR points out that, in support
of the viability of processing waste at a concentration of
[deleted]M, the SRE proposal relies entirely on the
findings of the DOE study noted above. AR, exh. F.2, SRE
Technical Proposal, at 113 (Chart Listing SRE's Technical
Inputs, Assumptions and Justifications). However, SRTR
argues that this study does not demonstrate the viability
of SRE's proposed approach, and instead shows that its
approach may in fact not be possible.
We sustain this aspect of SRTR's protest. As noted, the
RFP specifically required the agency to evaluate the
viability of the offerors' technical approaches and to
consider the offerors' key technical assumptions and
justifications. As the foregoing discussion demonstrates,
SRE's technical approach relies on processing waste at
[deleted] than has ever been demonstrated, either in a
laboratory setting or in practice. The agency's evaluation
record is devoid of any discussion of this concern, or any
conclusion regarding the agency's view of the probability
for success of the SRE technical approach.
During the protest, the agency submitted an affidavit
executed by the chairman of the technical evaluation board
(TEB). In his affidavit, he states that the agency was
aware of the report referenced by the protester and quoted
above, that the report was reviewed by the chairman and
several members of the TEB, and that the agency also
sought input from a subject matter expert on this
question. TEB Chairman's Affidavit at 1-2. The affidavit
goes on to represent that, during discussions with the
subject matter expert, it was noted that there were
several possible approaches to mitigating levels of Isopar
L in the DSS. Id. at 2. He also specifically stated the
following with respect to the findings of the test report:
"Further, it was noted that there was significant
uncertainty in the measurements that were made during the
testing and the expected concentrations of Isopar L under
SWPF specific conditions could not be predicted based on
this single set of tests." Id. (emphasis supplied).
Finally, he characterized any potential solution to this
problem as a "yet[-]to[-]be[-]fully[-]developed technical
improvement that would be required to facilitate the high
rates of processing contemplated by the RFP."
We reach several conclusions based on the record before
our Office. First, and perhaps most significantly, there
is nothing in the contemporaneous record that shows that
the agency actually considered this question during its
evaluation of proposals. The only evidence in the record
is the declaration prepared by the TEB chairman submitted
during the course of the protest. In addition, even the
evidence tendered--the TEB chairman's affidavit--suggests
that there are reasons for concern about the viability of
the SRE technical solution. As noted, SRE based its
proposal to process waste at [deleted]M entirely on the
results of the test report. However, the TEB chairman
himself characterizes the test results as significantly
uncertain and not a predictor of SRE's possible success in
implementing its technical solution. TEB Chairman's
Affidavit at 2. He also characterized solution of the
Isopar L concentration problem as a
"yet-to-be-fully-developed technical improvement." Id. In
other words, the TEB chairman himself concludes that the
test report--which provides the sole underlying scientific
support for SRE's never-before-tried technical
approach--provides, at best, uncertain evidence to
demonstrate its viability, and that resolution of the
Isopar L concentration problem itself is a
yet-to-be-fully-developed technical solution. Moreover, as
noted, although
the TEB sought the input of a subject matter expert, it
did not inform her that SRE proposed to process the waste
at [deleted]M; it follows that the TEB did not have the
benefit of her views concerning viability of SRE's
technical approach.
Finally, we note that SRE's proposed technical solution
afforded it a significant competitive advantage,
providing--at least in part--a way for it to significantly
reduce the overall amount of waste to be processed. As
noted, SRE proposed to prepare approximately [deleted]
Mgal. of waste for processing, compared to the [deleted]
Mgal. to be prepared by SRTR; in effect, SRE obtained a
significant competitive advantage because it proposed a
technical solution that--only if successful--would enable
it to process almost [deleted] percent less waste. Under
these circumstances, we conclude that the agency
unreasonably failed to evaluate the viability of SRE's
proposed technical solution. We therefore sustain this
aspect of SRTR's protest. (Savannah
River Technology & Remediation, LLC; Fluor Westinghouse
Liquid Waste Services, LLC B-415637, B-415637.2,
B-415637.3, B-415637.4, B-415637.5, B-415637.6,
B-415637.7: Feb 8, 2018)
TSI argues that the Bureau evaluated its quotation in an
unequal manner as compared to four awardees: CBE,
Performant, ConServe, and Coast. Specifically, TSI
challenges the agency's assessment of a weakness for TSI's
quotation under the quality control factor for failing to
mention HIPAA or the TOP regulations, but not assessing
any weakness for the same omission by the awardees.
Comments and Supp. Protest (B-414090.15) at 4.
In response, the agency asserts that TSI is requesting
that GAO reevaluate quotations and that TSI's failure to
address these two important laws and regulations in its
quotation left the TEP unable to determine whether TSI
would in fact provide training on those laws.
Additionally, the agency contends that TSI's allegations
of disparate treatment fail to show that "any awardee
received a higher rating for a quote that contained the
same or similar omissions as TSI." Supp. MOL at 4.
In TSI's comments on the supplemental agency report, TSI
demonstrated that the training plans under the quality
control factor for three of the four awardees failed to
specifically mention HIPAA or TOP regulations, and that
the plan for the fourth awardee mentions only HIPAA in its
response. See TSI AR, Tab 121, Performant Quotation, at
27-28; Tab 126, ConServe Quotation, at 7-8; Tab 116, CBE
Quotation, at 25-26; Tab 131, Coast Quotation, at 19-20.
It is a fundamental principle of government procurement
that competition must be conducted on an equal basis; that
is, the contracting agency must treat all offerors or
vendors equally; it must even-handedly evaluate offers
against common requirements and evaluation criteria.
Electrosoft Servs., Inc., B-409065 et al., Jan. 27, 2014,
2014 CPD ¶ 252 at 9.
Here, the record shows that the agency assessed TSI's
quotation a weakness for not addressing HIPAA or TOP
regulations in its discussion of training on laws,
regulations, and Bureau requirements. TSI AR, Tab 20, TSI Consensus
Evaluation, at 4. The record also shows that the
quotations of the four challenged awardees similarly did
not address HIPAA and/or the TOP regulations. See TSI AR,
Tab 121, Performant Quotation, at 27-28; Tab 126, ConServe
Quotation, at 7-8; Tab 116, CBE Quotation, at 25-26; Tab
131, Coast Quotation, at 19-20. Yet, these awardees'
quotations were not assessed weaknesses for their
omission. TSI AR, Tab 22, SSP Award Recommendation, at
1-4; Tab 10, SSD, at 13. On this record, we conclude that
the Bureau's evaluation of TSI's quotation was unequal as
compared to the evaluation of the quotations of CBE,
Performant, ConServe, and Coast. We therefore sustain the
protest on this basis. (Transworld
Systems, Inc.; Account Control Technology, Inc.
B-414090.13, B-414090.14, B-414090.15, B-414090.16: Dec
22, 2017)
The evaluation of proposals in a task order competition,
including the determination of the relative merits of
proposals, is primarily a matter within the agency’s
discretion because the agency is responsible for defining
its needs and the best method of accommodating them. See
Engility Corp., B-413120.3 et al., Feb. 14, 2017, 2017 CPD
¶ 70 at 10. An offeror’s disagreement with the agency’s
judgment, without more, is insufficient to establish that
the agency acted unreasonably. STG, Inc., B-405101.3 et
al., Jan. 12, 2012, 2012 CPD ¶ 48 at 7. In reviewing
protests of awards in a task order competition, we do not
reevaluate proposals but examine the record to determine
whether the evaluation and source selection decision are
reasonable and consistent with the solicitation’s
evaluation criteria and applicable procurement laws and
regulations. Technology Concepts & Design, Inc.,
B-403949.2, B-403949.3, Mar. 25, 2011, 2011 CPD ¶ 78.
The [request for task order proposals] RTOP informed
offerors that the government would evaluate proposed
staffing of personnel to ensure they possess the minimum
qualifications--experience level, education, background,
and clearance--to successfully meet the PWS requirements.
RTOP at 48. Offerors were on notice that the agency would
not assume the offeror possesses any capability,
understanding, or commitment not specified in its
proposal. Id. at 41. A proposal that merely offered to
perform work according to the RTOP terms but failed to
provide support and detail as specified in section L of
the RTOP would be deemed to have failed to meaningfully
respond to the proposal preparation instructions. Id. at
46. Specifically, with respect to the staffing matrix,
offerors were required to identify a proposed employee’s
experience level for a particular labor category or
position. AR, Tab 31, Staffing Matrix Instructions, Column
M. As noted above, the PWS listed the required labor
categories and both the minimum required qualifications
and the highly desired qualifications for each category.
AR, Tab 23, PWS § 7.3. The PWS required the contractor to
provide personnel with experience in one of several
military occupational specialties (MOS) “or equivalent.” Id. at
111‑125.
Here, the awardee’s proposed staffing matrix stated the
length of experience of the awardee’s proposed personnel,
but the staffing matrix did not specify the particular
military occupational specialty under which the proposed
employee had gained experience. See AR, Tab 15, CACI Final
Proposal, Staffing Matrix. Instead, CACI’s proposal
restated the requirement that the employee had experience
under a listed MOS “or equivalent,” without specifying
whether the personnel had experience in an MOS specified
in the PWS, or whether CACI was substituting experience in
an MOS that, in the awardee’s view, was equivalent. Id.
Based on the content of the awardee’s proposal, the agency
had no way to assess whether the proposed employees met
the minimum requirements, because the agency had no
knowledge of whether the awardee was relying on experience
in an “equivalent” MOS, and, if so, whether the agency
viewed that experience as equivalent to the solicitation
requirements. As noted above, agency scrutiny of BAE’s
proposed equivalent MOS contributed to the agency’s
assessment of the one weakness in BAE’s proposal. Because
the awardee’s proposal lacked critical information
regarding the qualifications of proposed personnel, we
sustain the allegation that the agency’s evaluation of the
awardee’s proposal was unreasonable. (BAE
Systems Technology Solutions & Services, Inc.
B-414931.2, B-414931.3: Dec 20, 2017)
We have acknowledged that, as a general matter, it is not
unusual for individual evaluators to reach different
conclusions and assign different scores or ratings when
evaluating proposals, since both objective and subjective
judgments are involved. Novel Pharm., Inc., B-255374, Feb.
24, 1994, 94-1 CPD ¶ 149 at 6. Moreover, evaluation
ratings under another solicitation are not probative of
the alleged unreasonableness of the evaluation ratings
under the solicitation at issue, given that each
procurement stands on its own. Parmatic Filter Corp.,
B-285288, B-285288.2, Aug. 14, 2000, 2000 CPD ¶ 185 at 7.
We have applied these general principles, even where there
are material similarities between related procurements.
For example, in National Government Services, Inc.,
B‑401063.2 et al., Jan. 30, 2012, 2012 CPD ¶ 59, our
Office declined to impose a duty to reconcile evaluation
results, notwithstanding that the requirements of two
related procurements were essentially the same and the
protester had submitted largely identical proposals in
response to the two procurements. Specifically, we noted
that: (1) the procurements had been conducted under
different solicitations; (2) the evaluations did not
involve a common source selection board (SSB) that
reviewed the consensus ratings prepared by the evaluators;
and (3) a different SSA was responsible for each award,
each of whom prepared a separate source selection
decision. Id. at 6‑7.
Our Office distinguished the circumstances in National
Government Services, Inc. from an exceptional case where
we sustained a protest alleging that the agency had failed
to reasonably reconcile different evaluation findings.
Specifically, in CIGNA Government Services, LLC,
B‑401062.2, B-401062.3, May 6, 2009, 2010 CPD ¶ 283, our
Office sustained a protest where the same SSB and SSA
failed to reasonably address divergent evaluation findings
for different regions where the record reflected that both
regions were simultaneously procured using a single
solicitation and offerors submitted a single common
proposal for both regions. Our Office explained that “it
was incumbent upon the SSB and SSA, when confronted with
the differing evaluation results of essentially the same
proposal, submitted by the same offeror under the same
solicitation, to seek some sort of explanation, or
otherwise arrive at an understanding, as to why this was
the case, especially where there were significant rating
differences in the respective evaluations.” Id. at 14. Our
decision in National Government Services, Inc., however,
clarified that “CIGNA addressed the limited circumstances
of a common SSB and SSA reviewing inconsistent evaluations
concerning identical proposals submitted under a single
solicitation.” National Gov’t Servs., Inc., supra, at 7.
AdvanceMed, while acknowledging the general principle that
each procurement stands on its own, nevertheless argues
that the exception to that principle articulated in CIGNA
should apply here. Specifically, the protester argues that
the exception is warranted because, among other
considerations, the task orders were issued against the
common UPIC SOW; the technical scenarios offerors were
asked to address, although differing somewhat by region,
were materially similar; the same evaluation criteria were
utilized; and the same SSA ultimately made the award
decisions. More specifically, AdvanceMed argues that it
was unreasonable for the agency to award Health Integrity
two strengths under the scenario responses factor for its
proposed use of a data sharing tool and its approach to
data analysis when materially similar features were not
evaluated as warranting strengths in the western region,
and for the agency to effectively downgrade AdvanceMed’s
past performance rating from very good to satisfactory
where the underlying past performance information
considered was materially similar. We find that the facts
of this case do not warrant extension of the exception set
forth in CIGNA.
As an initial matter, we note that AdvanceMed does not
challenge the reasonableness of the third strength
assigned to Health Integrity’s proposal under the scenario
responses factor. AR, Tab 11, South Western Region Health
Integrity TEP Rep., at 4 (assigning a strength for the
awardee’s understanding and experience with Texas’
Medicaid program). This strength, which is specific to the
unique technical scenarios that offerors were expected to
address in this RFP and is unchallenged by AdvanceMed,
highlights that while similar, the south western and
western region RFPs and the offerors’ respective proposals
submitted in response thereto were not identical in
nature.
In any event, even accepting the protester’s assertion
that there were material similarities in terms of the RFPs’
requirements, the evaluation criteria utilized, and the
contents of the offerors’ proposals, extension of the
limited exception in CIGNA would not be appropriate here.
In CIGNA, our Office was concerned with a scenario where a
common SSB and SSA failed to reasonably address differing
evaluation findings that were evaluated from a common
proposal submitted under a single solicitation. Our
concern was the agency’s failure to reasonably reconcile
differing evaluation findings that were simultaneously and
contemporaneously raised.
In contrast, here the agency procured the associated
services for the western region via a solicitation issued
on July 13, 2016, and issued a separate solicitation for
the south western region approximately eight months later
on February 3, 2017. See AdvanceMed Corp., B-414373, May
25, 2017, 2017 CPD ¶ 160 at 1; RFP at 1. The agency
convened a separate TEP for the western region, that TEP
issued its final consensus evaluation reports in January
2017, and the SSA issued the western region SSD on
February 2, 2017, which all occurred prior to the issuance
of the south western region RFP on February 3, 2017. See
AR, Tab 8a, Western Region SSD, at 1; Tab 8b, Western
Region AdvanceMed TEP Rep., at 1. Meanwhile, the agency
convened a separate TEP to evaluate the south western
region proposals, that TEP issued its consensus evaluation
reports in September 2017, and the SSA issued his SSD for
the south western region on September 14, 2017. See AR,
Tab 5, SSD, at 1; Tab 11, South Western Region Health
Integrity TEP Rep., at 1. In light of the different
solicitations, different proposals, different TEPs, and
significant passage of time between the procurements, we
find no reasonable basis to find that the SSA was required
to reconcile any alleged differences between the western
and south western region evaluations. (AdvanceMed
Corporation B-415360, B-415360.2, B-415360.3: Dec 19,
2017)
Background
Phase one of the procurement was
completed in April 2015 with the issuance of the phase two
RFP to five of the phase one competitors. The agency then
established a phase two source selection evaluation board
(SSEB), which conducted an initial evaluation of phase two
proposals, and engaged in discussions with the remaining
competitors. The SSEB evaluated revised proposals and
issued a phase two evaluation report to the source
selection authority (SSA) on September 9, 2015. The SSA
conducted a best-value tradeoff analysis and selected
Agile Infrastructure Services for award on September 19.
CSG and A/T each filed a protest of that award with our
Office. In response to the protests, the agency elected to
take corrective action and our Office dismissed the
protests on November 6. Construction Services Group, Inc.,
B-412343, Nov. 6, 2015 (unpublished decision);
Atkinson/Trend, B-412343.2, Nov. 6, 2015 (unpublished
decision).
Subsequent to taking corrective action, the agency issued
amendment 0008 to the RFP and allowed all offerors to
submit new revised phase two proposals. In order to
evaluate the revised proposals, the agency constituted a
new SSEB composed entirely of new members. This new SSEB
evaluated the revised proposals, conducted an additional
round of discussions, and thereafter issued a new
evaluation report to the SSA on June 6, 2016.
(sections deleted)
CSG alleges that the
agency’s evaluation of its proposal was arbitrary and
unreasonable because two significant strengths recognized
by the original 2015 SSEB report were not recognized in
the subsequent 2016 SSEB report. CSG also asserts that the
agency unreasonably assessed three weaknesses against its
proposal, and applied unequal evaluation standards
concerning sample task order schedule compliance.
The evaluation of an offeror’s proposal is a matter within
the agency’s discretion. IPlus, Inc., B-298020,
B-298020.2, June 5, 2006, 2006 CPD ¶ 90 at 7, 13. In
reviewing an agency’s evaluation, our Office will not
reevaluate proposals; instead, we will examine the record
to ensure that the evaluation was reasonable and consistent with the
solicitation’s stated evaluation criteria and applicable
procurement statutes and regulations. Metro Mach. Corp.,
B-402567, B-402567.2, June 3, 2010, 2010 CPD ¶ 132 at 13;
Urban-Meridian Joint Venture, B-287168, B-287168.2, May 7,
2001, 2001 CPD ¶ 91 at 2. An offeror’s disagreement with
the agency’s evaluation is not sufficient to render the
evaluation unreasonable. Ben-Mar Enters., Inc., B-295781,
Apr. 7, 2005, 2005 CPD ¶ 68 at 7. We see no errors in the
agency’s evaluation.
Concerning CSG’s allegation that the 2016 SSEB evaluation
was unreasonable where it failed to recognize two
significant strengths which had been assessed by the 2015
SSEB, there is nothing per se unreasonable about a new
technical evaluation panel, convened for a reevaluation
pursuant to corrective action, reaching different
conclusions from the prior evaluation panel. Corps
Solutions, LLC, B-409298.2, Aug. 21, 2014, 2014 CPD ¶ 244
at 7-8. Our Office has long recognized that different
evaluation panels may reasonably reach different
conclusions regarding the quality of an offeror’s proposal
given the subjective judgment necessarily exercised by
evaluators. Constellation NewEnergy, Inc., B‑409353.2,
B-409353.3, July 21, 2014, 2014 CPD ¶ 219 at 9.
In this case, the record shows that the 2015 panel
identified two significant strengths in its evaluation of
CSG’s proposal, one relating to CSG’s damage mitigation
approach and a second relating to its placement of safety
training on its schedule. The 2016 SSEB recognized CSG’s
damage mitigation approach, but concluded that because the
contractor was responsible for all site damage and was
required to repair damage under the RFP, CSG’s approach
represented compliance with the requirements of the RFP and
did not constitute a strength. With respect to safety
training, the 2016 SSEB assigned CSG a strength, as
opposed to a significant strength, on the basis that
“[t]he Technical Approach as a whole was well organized
with identification of major tasks to be performed and
recognition of safety concerns for the project,” which
encompassed CSG’s safety training schedule. AR, Tab 6b,
2016 SSEB Report, at 22. We see nothing unreasonable with
the SSEB’s conclusions here. CSG’s disagreement with the
SSEB’s conclusions does not provide a basis for our Office
to conclude that the evaluation was unreasonable. Mil-Mar
Century Corp., B‑407644, B-407644.2 et al., Jan. 17,
2013, 2013 CPD ¶ 39 at 6. (Construction
Services Group, Inc.
B-412343.3: Feb 27, 2017)
The protester argues that the agency’s evaluation of
SAWTST’s proposal under the mission capability factor is
flawed because the proposal did not merit a rating of
good. In support of this allegation, Glacier contends that
SAWTST’s technical proposal should have been evaluated as
unacceptable because the [technical evaluation team] TET
rejected the explanation provided by SAWTST to support
one of its proposed staffing methodologies.
The evaluation of an offeror’s proposal is a matter within
the agency’s discretion. National Gov’t Servs., Inc.,
B-401063.2 et al., Jan. 30, 2012, 2012 CPD ¶ 59 at 5. An
offeror’s disagreement with the agency’s judgment, without
more, is insufficient to establish that the agency acted
unreasonably. STG, Inc., B‑405101.3 et al., Jan. 12, 2012,
2012 CPD ¶ 48 at 7. While we will not substitute our
judgment for that of the agency, we will question the
agency’s conclusions where they are inconsistent with the
solicitation criteria and applicable procurement statutes
and regulations, undocumented, or not reasonably based.
Public Commc’ns Servs., Inc., B-400058, B-400058.3, July
18, 2008, 2009 CPD ¶ 154 at 17.
In reviewing an agency’s evaluation, we do not limit our
consideration to contemporaneously-documented evidence,
but instead consider all the information provided,
including the parties’ arguments, explanations, and any
hearing testimony. Remington Arms Co., Inc., B-297374,
B-297374.2, Jan. 12, 2006, 2006 CPD ¶ 32 at 10. While we
accord greater weight to contemporaneous source selection
materials as opposed to judgments made in response to
protest contentions, post-protest explanations that
provide a detailed rationale for contemporaneous
conclusions, and simply fill in previously unrecorded
details, will generally be considered in our review of the
rationality of selection decisions--so long as those
explanations are credible and consistent with the
contemporaneous record. NWT, Inc.; PharmChem Labs., Inc.,
B-280988, B‑280988.2, Dec. 17, 1998, 98-2 CPD ¶ 158 at 16
(citing Boeing Sikorsky Aircraft Support, B-277263.2,
B‑277263.3, Sept. 29, 1997, 97-2 CPD ¶ 91 at 15).
Based on our review of the record, we find the agency’s
technical evaluation of SAWTST under the mission
capability factor, and specifically under the management
approach subfactor, to be unreasonable. The RFP required
an offeror’s management approach to be based upon the
staffing levels recommended by the agency. RFP at 108. To
the extent that an offeror’s staffing plan deviated from
the agency-provided baseline staffing levels, the RFP
required those deviations to be justified in the cost
narrative section of an offeror’s proposal. Id. The
solicitation expressly advised that the Army would
evaluate an offeror’s management approach as it related to
the analysis of baseline staffing. Id. at 117.
Here, SAWTST’s initial offer was evaluated as deviating
from the agency-provided baseline staffing level. AR, Tab
9, Final Cost Evaluation, at 43. These deviations were
enough of a concern to the Army that they were addressed
in the evaluation notice that the agency sent to SAWTST
during discussions. Id. SAWTST responded to the evaluation
notice, but in spite of the firm’s attempts to explain how
the Army’s concerns would be addressed, the [cost
evaluation team] CET’s final evaluation report clearly
states that the TET rejected one of the two staffing
methodologies proposed by SAWTST in response to the Army’s
evaluation notice. Specifically, the Army’s TET rejected
SAWTST’s explanation for the firm’s decision to propose
[DELETED]. Although the CET’s final evaluation documents
the fact that the TET rejected this aspect of SAWTST’s
proposal, the TET’s final evaluation report is devoid of
any discussion explaining what consideration and weight it
gave to this concern. Compare AR, Tab 9, Final Cost
Evaluation, at 43, with Tab 19, Final SAWTST Technical
Evaluation, at 1‑8.
In this regard, we find the agency’s evaluation of
SAWTST’s proposal under the mission capability factor and,
more specifically, under the management approach subfactor,
to be unreasonable because the record reflects that even
though the TET rejected one of SAWTST’s proposed staffing
methodologies, it failed to acknowledge that rejection or
otherwise explain how that rejection factored into its
technical evaluation of the awardee’s proposal. That is,
the agency has not explained why SAWTST was rated as good
under the mission capability factor despite the TET’s
decision to reject one aspect of the firm’s staffing plan.
In response to this allegation, the agency argues that it
was reasonable for the Army to assign a rating of good to
SAWTST’s proposal under the mission capability factor
because only one portion of one subfactor was rejected by
the agency, particularly since this rejected portion of
SAWTST’s proposal was addressed by the Army through its
cost realism analysis, and because the rejected staffing
deviations reflected only a fraction of what the agency
assessed in determining its mission capability rating. As
discussed above, the RFP required
the Army to evaluate an offeror’s management approach as
it related to the offeror’s baseline staffing, under the
mission capability management approach subfactor. There is
nothing in the contemporaneous record to support the
agency’s position that it was reasonable for the TET to
assign a good rating to SAWTST’s proposal under the
mission capability factor, even though the TET rejected
one of SAWTST’s proposed staffing methodologies. We
sustain the protest because while the TET concluded that
one of SAWTST’s staffing methodologies should be rejected,
the contemporaneous record does not explain what weight or
effect the TET gave to this consideration when assigning
the good rating to SAWTST’s proposal under the mission
capability factor, and specifically the management
approach subfactor. (Glacier
Technical Solutions, LLC B-412990.2: Oct 17, 2016)
Mission Support
ASRC argues that the Air Force’s reevaluation of Aleut’s
proposal under the mission support subfactor is
unreasonable because the three areas of concern that
formed the basis of the deficiency–involving training and
certifications, minor ship repairs, and the de-mineralized
water system--remain. ASRC maintains that, as the Air
Force did not conduct discussions and obtain revised
proposals after our Office sustained its previous protest,
the agency reasonably could not find that Aleut’s proposal
met the solicitation’s requirements with respect to the
three identified areas of concern.
In reviewing an agency’s evaluation of proposals, we
examine the record to determine whether the agency’s
evaluation was reasonable and in accordance with the RFP’s
evaluation criteria, along with applicable procurement
statutes and regulations. Cherry Rd. Techs.; Elec. Data
Sys. Corp., B-296915 et al., Oct. 24, 2005, 2005 CPD ¶ 197
at 6.
Training and certifications
ASRC maintains that the Air Force
unreasonably determined that Aleut’s proposal addressed
the requirement to provide personnel with the required
training and certifications, as well as training and
certification programs for various crane personnel in
accordance with NAVFAC P-307 (PWS § 1.10.8.6). Protest at
10. ASRC further argues that Aleut’s proposal simply
repeats the PWS requirement without identifying the
personnel that it will provide, or how Aleut will provide
the training. ASRC Comments at 6-7.
The Air Force states that Aleut’s identification of the
required certifications and staff proposed to have those
certifications was an acceptable approach to meeting the
requirements of PWS § 1.10.8.6. The agency further states
that Table 5 of Aleut’s proposal included the appropriate
certifications and training qualifications for all
personnel associated with providing crane management
services, including the proposed work control supervisor
(which the agency asserts is also the ordnance manager),
the port/wharf supervisor, and the engineer. AR Legal
Memorandum at 8.
The record shows that the Air Force’s reliance on Table 5
of Aleut’s proposal is unreasonable. Table 5, which is
titled “Team Aleut Skill and Staffing Level Mix,” contains
three columns that include information on: (1) labor
category, full-time equivalents, and the responsibilities
assigned to that labor category by PWS paragraph; (2) an
explanation of the skill mix and staffing approach; and
(3) certification and training requirements. While Table 5
lists the certification and training requirements, it does
not identify whether the individuals proposed for
the labor categories meet these qualifications. For
example, for the work control supervisor, the table lists
“n/a” for certifications and lists training on the Air
Force’s automated work management system, Maximo, as the
only required training. AR, Tab 24, Aleut’s Final
Proposal, at 29. Moreover, we find nothing in Aleut’s
proposal to support the agency’s assertion that the
port/wharf supervisor and the ordnance supervisor are the
same individual. To the contrary, the narrative, supported
by Figure 5, states that Mr. [A], the Mission System
Operations & Maintenance Division Manager, would be
cross-utilized as the Ordnance Manager. See id. at 20, 22.
For the port/wharf supervisor, under the heading for
certification and training requirements, Table 5 lists
NAVFAC P-307, Chief of Naval Operations 5090.1, and other
required training, but again does not state that the
proposed individuals meet the qualifications for the
position. Id. at 30. Only for the engineer does Aleut’s
proposal state that the proposed individual actually has
the required certifications. In Table 14, Aleut proposed
the engineer as the certifying official required by PWS §
1.10.8.5, noting that “Team Aleut will designate a
qualified/certified [Deleted] Official (currently the
[Deleted]) to oversee the management and administration of
the program in the areas of instructing, examining and
certification of crane operators.” Id. at 78.
With regard to ASRC’s assertions that Aleut did not
demonstrate how it would meet the requirements of PWS §
1.10.8.6 to provide training and certification programs
for all crane personnel, the agency states that Aleut’s
identification of the need for appropriate training and
certifications was sufficient to be compliant with the RFP
and constituted an acceptable approach to the PWS section
in question. The Air Force further states that the RFP did
not require the offeror to provide a copy of its training
program, but rather to comply with RFP § L.2.1.1.2, which
required offerors to provide a listing of personnel
certifications and personnel training requirements. AR
Legal Memorandum at 8-9.
We agree with the protester that the Air Force
unreasonably determined that Aleut’s proposal met the
requirements of PWS § 1.10.8.6. Read as a whole, the RFP
advised offerors that proposals would be evaluated under
the mission support factor for demonstrating the ability
to, among other things, provide training and certification
programs for all crane operators, crane inspector, crane
mechanics, crane electrician, load test director, and
certifying official in accordance with NAVFAC P‑307. See
RFP at 190, 231. Here, Table 5 of Aleut’s proposal states:
Provide training and certification
programs for all crane operators, crane inspector, crane
mechanics, crane electrician, load test director and
certifying official IAW [in accordance with] NAVFAC
P-307. This includes ordnance certification IAW NAVFAC
P-307, PPE, Overhead Bridge Crane Safety, Hydra-Set
Training.
AR, Tab 24, Aleut’s Final Proposal,
at 30. As we stated in our prior decision:
[T]he offeror was required to provide
training and certification programs for all crane
operators, crane inspectors, crane mechanics, crane
electricians, load test directors and certifying
officials. Simply stated, nothing in the narrative
references relied on by the agency in responding to the
protest demonstrates that Aleut’s proposed work control
supervisor, port/wharf supervisor, or engineer possesses
the necessary training or certifications themselves, and
nothing in the narrative describes how Aleut will
provide the training (other than ordnance handling
training) required by the PWS to its applicable
personnel. (Emphasis added.)
ASRC Comms., Ltd., supra, at 7. While
an agency reasonably may come to a different conclusion in
reevaluating proposals, see IAP World Servs., Inc., infra,
our review of the record shows that there was no
reasonable basis for the agency to have concluded that
this concern was resolved. Moreover, the Air Force
erroneously relies on RFP section L.2.1.1.2 to support its
position as to the propriety of accepting lists of
certifications and training requirements to meet the
requirements of PWS § 1.10.8, as section L.2.1.1.2
provides instructions for the program management subfactor,
not the mission support subfactor. See id. at 175.
Based on the record before us, we cannot determine the
impact that the awardee’s failure to provide an approach
for providing personnel with the required training and
certifications, and training and certification programs for
various crane personnel in accordance with the
solicitation, would have had on the awardee’s proposal or
on the agency’s tradeoff decision. We therefore conclude
that ASRC was prejudiced by the flawed evaluation and
sustain this basis of protest. See, e.g., Information
Ventures, Inc., B-297276.2 et al., Mar. 1, 2006, 2006 CPD
¶ 45 at 6.
Minor ship repairs
With respect to the issue relating to
Aleut’s proposal of personnel with skill sets necessary to
perform minor ship repairs (PWS § 1.10.3), ASRC argues
that Aleut’s proposed personnel lack the skills necessary
to perform all of the work--specifically carpentry and
metal work--required under the PWS. Protest at 9.
The fact that a reevaluation varies, or does not vary,
from an original evaluation does not constitute evidence
that the reevaluation was unreasonable. It is implicit
that a reevaluation could result in different findings or
conclusions. IAP World Servs., Inc., B-406339.2, Oct. 9,
2012, 2012 CPD ¶ 287 at 3-4. The overriding concern for
our Office’s review is not whether the evaluation results
are consistent with the earlier evaluation results, but
whether they reasonably reflect the relative merit of the
offers. Spectrum Comm, Inc., B-412395.2, Mar. 4, 2016,
2016 CPD ¶ 82 at 7.
The Air Force states that it relied on Tables 14 and 5, as
well as Figure 5, of Aleut’s proposal to determine that
the requirements of PWS § 1.10.3 were met. AR, Tab 39,
Proposal Analysis Report (Re-Eval.), at 64; AR Legal
Memorandum at 16. Table 14 proposed using an [Deleted],
and a [Deleted] to meet the requirements of PWS § 1.10.3,
and stated that additional support could be made available
within the firm’s [Deleted] or from its subcontractor’s
reach-back capabilities. The evaluators concluded that
Figure 5, which identified Aleut’s corporate support
services, showed that Aleut’s [Deleted] could provide the
personnel--specifically sheet metal workers and carpenters--
to perform minor ship repairs. See AR Legal Memorandum at
16‑17; AR, Tab 44, Decl. of Technical Evaluator Mr. G, at
3-4.
Based on the record before us, we conclude that the Air
Force reasonably found Aleut’s proposal to have met the
requirement. With respect to PWS § 1.10.3, the RFP
required offerors to demonstrate the ability to provide
minor ship repairs including minor electrical, metal or
carpentry work not associated with vessel structural
integrity or seaworthiness. See RFP at 190, 230. The
protester does not dispute that the awardee’s proposal
demonstrated in Table 14 that it proposed personnel
skilled in minor electrical work, based on job titles.
Table 14 also referenced Aleut’s [Deleted] as a source of
additional support. AR, Tab 24, Aleut’s FPR, at 77. Thus
the evaluators reasonably looked to Figure 5, which listed
staffing for the various [Deleted]. As noted above, Figure
5 included a carpenter and a sheet metal worker under the
[Deleted]. Although ASRC contends that the RFP prohibited
the agency from looking to other sections of Aleut’s
proposal to determine whether Aleut met the solicitation
requirements, we see nothing in the RFP that would
preclude the agency from doing so. In the absence of
solicitation language expressly precluding the
consideration of certain proposal information, an agency
properly may evaluate a proposal by reading it as a whole.
See Vinculum Solutions, Inc., B-406760, B-406760.2, Aug.
22, 2012, 2012 CPD ¶ 249 at 6.
De-mineralized water system
With respect to the issue of Aleut’s
ability to meet the requirements of PWS § 1.10.4, ASRC
argues that Aleut’s proposal failed to provide an approach
to maintaining the Trident Wharf de-mineralized water
system. Protest at 9-10. The protester asserts that
Aleut’s use of a checklist to perform pre-arrival
inspections does not explain how Aleut will actually
maintain the water system. Comments at 5.
The record shows that the source selection evaluation
board (SSEB) concluded that Aleut’s use of checklists for
all required systems, including the demineralized water
system, was an acceptable approach to maintaining the
demineralized water system. The SSEB primarily relied on
Table 14 of Aleut’s proposal as well as narrative
generally addressing the requirements of PWS § 1.10, which
stated that it would use detailed pre-arrival inspections,
checklists, and sampling of the demineralized water system
for pH, resistivity, conductivity, and other features. AR,
Tab 39, Proposal Analysis Report (Re-Eval.), at 63-64. One
evaluator explains that Aleut’s proposed use of checklists
can reasonably be accepted as a maintenance approach
because checklists are a key component of ensuring
operational readiness and will identify maintenance needs.
AR, Tab 44, Decl. of Technical Evaluator Mr. G, at 5.
Although the protester maintains that the awardee’s
proposed use of checklists does not constitute an approach
as required by the solicitation, the protester’s
disagreement with the agency’s judgment does not render
that judgment unreasonable. Rome Research Corp., B-291162,
Nov. 20, 2002, 2002 CPD ¶ 209 at 4. (ASRC
Communications, Ltd. B-412093.4: Jul 1, 2016)
The first weakness assigned to Encentric’s quotation under
the personnel factor was based on Encentric’s failure to
clearly demonstrate that its proposed project manager
satisfied the minimum requirements for at least five years
of project management experience and at least three years
of project management experience on a PeopleSoft project.
AR, Tab 8, Consensus Technical Evaluation, at 14. As noted
above, the evaluators found that while Encentric’s
quotation stated that the proposed project manager had
over twenty years of IT project management experience, the
work experience described in the candidate’s resume
indicated only two years of qualifying project management
experience. Id. Further, according to the TEP chairperson,
the TEP concluded that while the proposed project manager
had experience with PeopleSoft, she did not have “a
project management role in any of her PeopleSoft
experience as she only held responsibilities commensurate
to an analyst position[;]” thus, she did not have
“experience managing projects or staff while working on a
People[S]oft project.” AR, Tab 3, TEP Statement at 1.
In challenging this weakness, the protester first argues
that the agency relied on an unstated evaluation criterion
when it concluded that the proposed project manager did
not have sufficient experience that matched the
description of the project management responsibilities to
be performed under the task order. AR, Tab 8, Consensus
Technical Evaluation, at 14. According to Encentric, the
agency relied on an unstated evaluation criterion when it
essentially faulted the protester for failing to copy and
paste all forty of the RFQ’s individual bullet points into
its description of its project manager’s experience.
Comments at 7. We disagree.
When evaluating a quotation, an agency properly may take
into account specific, albeit not expressly identified,
matters that are logically encompassed by or related to
the stated evaluation criteria. Sigmatech, Inc., B‑410933,
Mar. 18, 2015, 2015 CPD ¶ 110 at 8. Here, the RFQ
expressly described the duties of a project manager, and
set requirements for years of experience in that position.
The fact that the TEP relied upon that description when it
attempted to discern whether any of the work experience
listed in the resume qualified as project manager
experience is consistent with the terms of the
solicitation and does not involve reliance on an unstated
criterion. RFQ at 46.
Additionally, the record demonstrates that, contrary to
the protester’s assertion, the agency did not require it
to address each bullet point from the description in order
to demonstrate that its proposed key personnel met the
minimum experience requirements. In fact, the TEP credited
the protester’s proposed project manager with two years of
project manager experience based on a brief description of
the proposed project manager’s employment from September
2001 to August 2003. AR, Tab 8, Consensus Technical
Evaluation, at 14.
The protester next argues that, had the agency carefully
read the candidate’s resume, the agency would have found
that it clearly showed that she had at least six years of
experience performing project management tasks. Comments
at 8. Specifically, the protester argues that the agency
should have credited the proposed project manager with
project management experience based on her employment as a
senior business systems analyst between August 2003 and
August 2007. Protest at 17. In response, the agency argues
that while the candidate’s resume indicated that she
managed a project as a senior business systems analyst,
the resume did not provide sufficient information to allow
the TEP to determine whether the candidate had the type of
project manager experience required by the solicitation,
which included “experience managing people . . . team
performance, contracts and company resources.” AR, Tab 3,
TEP Statement at 2.
Essentially, the protester’s argument here is based upon
the assumption that the agency should be required to
cobble together and draw broad inferences from the
information provided in the resumes in order to conclude
that the requirements of the RFQ were met. We have
previously rejected that argument where, as here, the
solicitation required that resumes clearly demonstrate how
the proposed individuals meet the required experience
level. See Consummate Computer Consultants Sys., LLC,
supra, at 6. Ultimately, it is a vendor’s responsibility
to submit an adequately written quotation that establishes
its technical capability and the merits of its proposed
approach, and allows for a meaningful review by the
procuring agency in accordance with the evaluation terms
of the solicitation. Id.
Here, while the resume contained a bare assertion that the
candidate had “over 20 years of information technology . .
. project management in the commercial, state, and public
sectors with the past ten years emphasizing PeopleSoft
implementations and support for the federal government,”
the accompanying work histories did not clearly
demonstrate that the proposed project manager had the
experience required by the RFQ. AR, Tab 7, Encentric’s
Quotation, at 22. As such, we find no basis upon which to
conclude that the agency’s assignment of a weakness in
this regard was unreasonable.
(sections deleted)
The protest is
denied. (Encentric, Inc.
B-412368.3: Apr 19, 2016)
Arctic Slope contends that the agency’s evaluation of
technical proposals was unequal and otherwise
unreasonable. In reviewing protests of an agency’s
evaluation and source selection decision, our Office will
not reevaluate proposals; rather, we review the record to
determine whether the evaluation and source selection
decision are reasonable and consistent with the
solicitation’s evaluation criteria, and applicable
procurement laws and regulations. M&S Farms, Inc.,
B‑290599, Sept. 5, 2002, 2002 CPD ¶ 174 at 6. Here, we
have considered all of the protester’s arguments. For the
reasons discussed below, we conclude that the agency’s
evaluation of proposals was unreasonable.
Undue Emphasis on Corporate Experience
Arctic Slope asserts that the agency’s evaluation afforded undue weight to corporate experience. Protest at
63-64. In this regard, the RFP provided that corporate
experience was one of six technical subfactors, and was
less important than either the technical approach or
management plan subfactors. RFP at 68. According to the
protester, the agency unreasonably elevated the importance
of corporate experience, such that it was considered
substantially more important than any other evaluation
factor. The agency responds that while the solicitation
required offerors to demonstrate their present
understanding of the requirement and to discuss how they
would perform the work in the future, the TEP nevertheless
evaluated offerors’ past experience because “past work and
experience is the foundation of present understanding and
of the ability to perform work in the future.” AR, Tab 33,
Revised TEP Report, at 49.
In J.A. Jones Mgmt. Servs., Inc., B-254941.2, Mar. 16, 1994, 94‑1 CPD ¶ 244, we held that where, as here, the RFP
contains separate and independent technical evaluation
factors encompassing separate subject areas, with each
factor assigned separate weights under the RFP’s
evaluation scheme, an agency may not double count, triple
count, or otherwise greatly exaggerate the importance of
any one listed factor. In other words, where an RFP lists
a number of evaluation factors or subfactors of stated
importance, a single one cannot be accorded more than the
weight prescribed in the RFP’s evaluation methodology by
the agency repeatedly considering the same factor in
conjunction with the other major factors. Id. at 6‑9; see
Computer Sciences Corp., et al., B-408694.7 et al., Nov.
3, 2014, 2014 CPD ¶ 331 at 19 n.11; GlassLock, Inc.,
B-299931, B-299931.2, Oct. 10, 2007, 2007 CPD ¶ 216 at 6;
Management Servs., Inc., B-206364, Aug. 23, 1982, 82-2 CPD
¶ 164; Earth Envtl. Consultants, Inc., B‑204866, Jan. 19,
1982, 82-1 CPD ¶ 43; The Center for Educ. and Manpower
Res., B‑191453, July 7, 1978, 78‑2 CPD ¶ 21 (protest
sustained where agency considered offeror’s experience
under all of the other evaluation criteria rather than
restricting it to the experience evaluation factor). Here, the record shows that the agency effectively gave undue weight to corporate experience, one of the six technical subfactors, by evaluating each of the other technical subfactors through the lens of corporate experience, and in doing so it competitively disadvantaged Arctic Slope, which was not an incumbent offeror.
For example, the TEP assigned a weakness to Arctic Slope’s
proposal under the staffing subfactor for risks associated
with Arctic Slope not being able to hire a sufficient
number of incumbent personnel. In its initial evaluation,
the TEP assigned a weakness to Arctic Slope’s proposal to
hire the majority of the incumbent workforce as employees,
rather than as independent contractors. In its prior
protest, Arctic Slope challenged this weakness, arguing
that, under the terms of the solicitation, this approach
was the only acceptable approach. During its corrective
action, the TEP revised its report to explain that, while
attempting to hire incumbents was a reasonable initial
step, Arctic Slope’s proposal had very little discussion
of what it would do if its first step of attempting to hire incumbents did not prove successful, and that Arctic
Slope’s proposal “failed to provide any details regarding
its contingency plan.” AR, Tab 33, Revised TEP Report, at
66. Arctic Slope contends that the agency held other
offerors to a lower standard and did not require the same
level of detail regarding staffing and recruiting
resources as that required of Arctic Slope. Protest at
30-31.
The record indicates that the evaluation in this regard
was unequal, and resulted from the consideration of
corporate experience in other than the corporate
experience subfactor. As the protester notes, its proposal
expressly addressed how it would staff the project if it
was unable to hire a sufficient number of incumbent
personnel. Specifically, Arctic Slope’s proposal contained
several pages of discussion of the firm’s “contingency
plan.” AR, Tab 20, Arctic Slope Technical Proposal, at
58-62. The proposal identified nine sources for
identifying potential staff, including the resume database
of Arctic Slope’s corporate parent, which contains over
150,000 candidate resumes. Id. at 59. While the agency
contends that this approach was insufficient because
Arctic Slope’s proposal failed to “identify a single
individual, let alone the source, skill, or ability of any
individual . . . purportedly identified in this database,”
AR at 16, Arctic Slope’s proposal provided that:
We populate the database by sourcing
candidates through the external and internal
organizations, retired investigator websites, and
traditional employment search engines. We have posted
AFISS [Asset Forfeiture Investigative Support Services]
specific requisitions on several of these sites already
. . . . The database includes AFISS candidates across
all contract labor categories . . . . In 2013-2014,
[DELETED]% of our candidates within that database held a
clearance and [DELETED]% of the cleared individuals held
a TS/SCI [top secret/sensitive compartmented
information] or higher clearance.
AR, Tab 20, Arctic Slope Technical
Proposal, at 56, 59.
Further, other offerors’ proposals were not held to this
exacting standard. Rather, the agency relied on offerors’
corporate experience to justify higher ratings under the
staffing subfactor. For example, [DELETED] also proposed
as part of its staffing plan to hire incumbents as
employees and back-fill with individuals from an applicant
pool containing more than 1,000 resumes, including
[DELETED] pre-screened candidates that either have a top
secret clearance or are eligible to have one reinstated.
AR, Tab 12, [DELETED] Technical Proposal, at 56. As a
result, [DELETED] was assigned the following strength:
[DELETED]’s Staffing Plan is
complete, thorough, and highly detailed. [DELETED]
maintains a database of skilled and qualified candidates
with secret or top secret clearances, performs a
continuous recruitment effort to keep the database
updated, and recruits utilizing websites targeted at
former law enforcement agents. The eight-step staff
recruitment and deployment process set forth . . . [in]
its proposal has been successfully utilized and refined
in support of the PACS contract.
AR, Tab 33, Revised TEP Report, at
18.
The agency provides no reasonable justification for the
difference in treatment between the two offerors, and
cites no substantive difference between the two offerors’
contingency plans. Rather, the record shows that the
evaluators gave [DELETED] credit for its prior experience
under the incumbent contract. In this regard, the agency
stated that the “TEP is aware that [DELETED]’s primary
business is providing the highly specialized asset
forfeiture financial services that this RFP was seeking,
so [DELETED]’s resumes actually mean something, while
Arctic has very limited experience and is offering resumes
of a corporate parent that . . . does not do this kind of
specialized work at all.” AR at 19. However, both
proposals only identified key personnel, and the record
contains no indication that the evaluators had access to
the resumes in [DELETED]’s 1,000-resume database or the
resumes in Arctic Slope’s 150,000-resume database, or that
there was any other basis on which the TEP could conclude
that the resumes in [DELETED]’s database were superior to
those in Arctic Slope’s database. Thus, despite apparently
similar proposal approaches, Arctic Slope’s staffing
proposal was considered to be inferior to that of another
offeror based solely on the fact that the other offeror
could be presumed to have better candidates in its
databases or candidate pool due to its corporate
experience in this area.
In sum, the record indicates that rather than limiting the
evaluation of corporate experience to that subfactor, the
agency penalized nonincumbent offerors such as Arctic
Slope for a lack of experience when evaluating under the
other technical evaluation subfactors. The agency thus
greatly exaggerated the significance of the corporate
experience subfactor, which resulted in an unreasonable
evaluation that was not in accordance with the evaluation
scheme set forth in the solicitation.
Unequal Treatment
Arctic Slope also protests that the agency’s evaluation
was unequal in several other respects. For example, the
protester challenges the agency’s assignment of strengths
and weaknesses in evaluating whether teams demonstrated
prior experience working together under the corporate
experience subfactor. Arctic Slope also challenges the
agency’s evaluation under the staffing plan subfactor,
noting that Arctic Slope’s proposal was assigned a
weakness based on proposal language that was nearly
identical to language in another offeror’s proposal that
did not receive a weakness. Arctic Slope also notes
several areas of the evaluation in which incumbent
offerors were given the benefit of the doubt, while
nonincumbent offerors such as Arctic were held to a higher
standard. See Comments at 24.
It is a fundamental principle of federal procurement law
that a contracting agency must treat all offerors equally
and evaluate their proposals evenhandedly against the
solicitation’s requirements and evaluation criteria. Cubic
Applications, Inc., B‑411305, B-411305.2, July 9, 2015,
2015 CPD ¶ 218; Rockwell Elec. Commerce Corp., B‑286201 et
al., Dec. 14, 2000, 2001 CPD ¶ 65 at 5. Further, where an
agency treats offerors unequally by, for example, reading
some offerors’ proposals in an expansive manner and
resolving doubt in favor of the offeror, while reading
other offerors’ proposals narrowly and applying a more
exacting standard that requires affirmative
representations within the four corners of the proposal,
we have found such evaluations to involve disparate
treatment. Lockheed Martin Information Systems, B-292836
et al., Dec. 18, 2003, 2003 CPD ¶ 230 at 11-12.
Here, the record indicates that when evaluating the
proposals of the competitive range offerors, the agency
tended to be expansive, resolving doubts in favor of the
offeror. In contrast, when evaluating the proposal of
Arctic Slope, the agency tended to read the proposal narrowly
and apply a more exacting standard that required
affirmative representations within the four corners of the
proposal.
For example, in evaluating the resume of RA’s proposed
project manager under the key personnel subfactor, the TEP
was willing to assume that the project manager proposed by
RA, whose proposal was included in the competitive range,
had experience not identified in his resume. In this
regard, the RFP described the education and experience
requirements for the key personnel position of project
manager as follows: “An undergraduate degree is required
(graduate degree preferred and/or significant experience
with Forfeiture, Financial, or White Collar
investigations). Experienced in managing task orders of
similar size, scope and complexity.” RFP Attach. 2, AFISS
Labor Category Descriptions at 1. The resume for RA’s
proposed project manager stated that the individual had 25
years of experience working for the FBI, including duties
such as: (1) managing Asia Pacific Rim investigations and
intelligence for terrorism; (2) developing and managing
broad-based emergency response initiatives; (3) developing
private sector and local law enforcement terrorism
prevention programs for special entertainment groups, such
as the Emmy, Oscar, and Golden Globe awards; (4) managing
the FBI’s global telecommunications and computer networks;
(5) crisis management with kidnappings, hostage
situations, and a terrorist attack on an airport; and (6)
planning and monitoring the construction of a
multi-million dollar local network connection center. AR,
Tab 6, RA Technical Proposal, at 34. Although the RA key
employee’s resume contained no reference to experience
with forfeiture, financial or white collar investigations,
and also did not indicate a graduate degree, this
offeror’s staffing plan was assigned a strength for
proposing a contract manager and project managers that
“are well qualified with significant asset forfeiture and
law enforcement experience.” AR, Tab 33, Revised TEP
Report, at 44.
Arctic Slope’s proposed project manager, on the other
hand, submitted a resume that stated the following: “17
plus years as a Federal Special Agent and IRS Revenue
Officer conducting various security-related investigations
that include financial.” However, despite a specific
reference to experience in conducting financial
investigations in the resume, the agency assigned the
following weakness to Arctic Slope’s staffing proposal:
“Proposed Project Manager . . . has limited financial
investigative experience.” AR, Tab 33, Revised TEP Report,
at 65.
During the agency’s corrective action, the TEP’s evaluations and conclusions remained largely unchanged from the initial evaluation. However, the TEP added new sections to
its report addressing specific issues raised in the
initial round of protests. In addressing this apparent
unequal treatment between an offeror included in the
competitive range and Arctic Slope, the TEP explained that
the resume for RA’s proposed project manager was
considered to meet the experience requirements because
“[t]he TEP, based on its collective knowledge and
experience working with FBI offices in major cities, is
confident that these jobs would have given [the
individual] in-depth experience in financial, white
collar, and forfeiture work.” AR, Tab 33, Revised TEP
Report, at 67. In other words, in evaluating RA’s key
personnel, the TEP was willing to assume “in-depth”
relevant experience not listed in the key personnel’s
resume. On the other hand, despite a specific reference in
Arctic Slope’s proposed project manager’s resume to
experience with financial investigations, the TEP
complained that “his resume does not . . . provide any
specific details regarding the nature or extent of his
experience with financial investigations other than to
mention them.” AR, Tab 33, Revised TEP Report, at 66.
Thus, the agency read RA’s proposal in an expansive
manner, resolving doubt in favor of the offeror, while
reading Arctic Slope’s proposal narrowly and applying a
more exacting standard that required affirmative
representations within the four corners of the proposal.
This amounted to an unequal evaluation. See Lockheed
Martin Information Systems, supra.
In another example, Arctic challenges a weakness the TEP
assigned its proposal under the corporate experience
factor. Protest at 32. As set forth above, the RFP
provided that if a team was proposed, offerors were
required to describe the team’s experience working
together, particularly under similar contracts with the
federal government. RFP at 64. The RFP further provided
that the agency would give added preference where the team
had experience working together, particularly under
similar contracts with the federal government. RFP at 69.
Arctic Slope proposed to team with two other companies:
[DELETED] and [DELETED]. Arctic Slope’s proposal
demonstrated that it had experience working with [DELETED]
under several contracts, including an IRS asset forfeiture
contract, an EPA information management center contract
(EPA IMCS 3), and a DEA contract involving investigative
and analytical support for the El Paso Intelligence
Center. AR, Tab 20, Arctic Slope Technical Proposal, at
100. Arctic Slope did not indicate prior experience
working with its other teammate, [DELETED].
The evaluators assigned two weaknesses to Arctic Slope’s
proposal under the corporate experience subfactor, both of
which related to experience working with teammates:
a. While ASMS and [DELETED] have
established relationships and have worked together
extensively in the past, those projects appear to be
smaller in size and scope than AFISS.
b. ASMS does not indicate the extent of its experience
working with the proposed team member [DELETED].
AR, Tab 33, Revised TEP Report, at
69.
In contrast, another offeror, whose proposal was included in the competitive range, also proposed to team with a
company with which it had not previously worked.
Specifically, [DELETED] proposed to team with two
companies. [DELETED]’s proposal explained with regard to
the first teammate, [DELETED], that the two companies had
over a decade of experience working together, including
work under the incumbent PACS contract. AR, Tab 6,
[DELETED] Technical Proposal, at 47. In contrast, the
proposal did not indicate any prior experience working in
any context with RA’s second teammate, [DELETED]. See AR,
Tab 6, [DELETED] Technical Proposal, at 47-49. The
evaluators, however, did not assign a weakness to [DELETED]’s
proposal for failure to indicate the extent of its
experience working with [DELETED] as they did with Arctic
Slope. Rather, the evaluators assigned the following
strength:
[DELETED] clearly addresses its
existing relationships with its teaming partners and
specifically how the organizations have worked together
in the past on similarly scoped projects.
AR, Tab 33, Revised TEP Report, at
45.
In its corrective action, the agency did not revise the
above strengths and weaknesses, but instead sought in its
revised TEP report to provide an explanation for this
apparent unequal treatment. The report provided four “very
important facts” that the evaluators believed justified
the differing evaluation results with respect to Arctic
Slope and [DELETED]: (1) although [DELETED] and its
teammate have not previously worked together, “[DELETED]
has established, through its proposal and through the
Department’s extensive real world experience with [DELETED]’s
performance under the current PACS contract, that
[DELETED] is fully capable, by itself, of performing all
work required under the AFISS procurement”; (2) because
[DELETED] is fully capable of performing the work by
itself, [DELETED] did not propose its teammate to perform,
or assist in performing, the day-to-day work required by
the AFISS solicitation, and [DELETED]’s teammate instead
will be providing analytical and reports expertise; (3)
“from past experience, the TEP understood [DELETED]’s
reporting role as proposed by [DELETED] clearly, as
[DELETED] is currently providing financial management
services to AFMS under a different contract”; and (4) the
teammate “has in-depth experience with and knowledge of
the Department’s Asset Forfeiture Program.” AR, Tab 33,
Revised TEP Report, at 73‑74. Thus, the TEP concluded that
since [DELETED] and the teammate with which it had
never worked both had “considerable knowledge and
experience regarding the AFP [asset forfeiture program],”
the assignment of a weakness was not appropriate here.
The evaluation in this regard was unreasonable. As an
initial matter, we note that under the language of the
solicitation, offerors were to be given added preference
for demonstrating prior experience working with teammates
on similar projects, not penalized for failing to
demonstrate such prior experience working together. In
this regard, we do not believe that the assignment of a
weakness in this area is encompassed by the RFP’s promise
of “added preference” for those teams that demonstrated
experience working together. That is, the RFP led offerors
to believe that the proposal of a team with no prior
experience working together on similar projects would be
fully acceptable, and would not result in a downgrading of
the proposal. Offerors were not on notice that failure to
strive for the “added preference” would result in the
assignment of one or more weaknesses. RFP at 69.
Furthermore, nothing in the RFP supported the agency’s
unequal approach of assigning two separate weaknesses to
Arctic Slope’s proposal for failure to sufficiently
demonstrate prior experience working together, while
assigning a strength to [DELETED]’s proposal for
“specifically [addressing] how the organizations have
worked together in the past on similarly scoped projects,”
even though [DELETED] had never previously worked with one
of its two teammates. While the agency seeks to justify
the apparent inconsistency by reference to other
considerations, such as [DELETED]’s ability to perform the
work itself or the agency’s familiarity with [DELETED]’s
experience, the solicitation preference for teaming with
firms with which the offeror had previously worked did not
provide for considering either of these two factors. Thus,
the evaluation in this area was unreasonable. (Arctic Slope Mission Services, LLC, B-410992.5, B-410992.6: Jan 8, 2016)
Export 220Volt challenges the agency’s evaluation of
AEEA’s proposal. The protester contends that AEEA’s
products fail to comply with the RFP’s technical
requirements in certain essential regards: (1) that they lack one U-Blade USA grounded receptacle; (2) that the
universal receptacles are not UL, CSA, or TUV listed; and
(3) that AEEA proposed a basic auto-transformer without a
polarity warning light. Export 220Volt also contends
that the agency failed to conduct its evaluation in
accordance with the solicitation, insofar as the RFP
required offerors to submit extended specifications and
product literature to substantiate compliance, and AEEA
submitted only its own representations regarding the
alleged features of the proposed products.
The agency argues that its technical evaluation was
reasonable because AEEA’s proposal represented on its face
that it met all specifications, including those which the
protester challenges, and took no exception to any RFP
requirements. The agency asserts that it reasonably relied
on AEEA’s representations when finding the awardee to be
technically acceptable.
AEEA, the awardee, argues that its proposed products meet
all RFP requirements, and it provided (in response to the
protest) a “Sunkax Technical Sheet” whereby AEEA re-listed
the purported product specifications. AEEA Comments, Nov.
23, 2015, Technical Sheet, at 1. AEEA also states that the
picture enclosed with its proposal--which the protester
contends demonstrates a noncompliant product--was the
manufacturer’s standard picture, and that the transformers
are “custom design[ed] and provided as per buyer’s
requirement.” AEEA Comments, Nov. 23, 2015, at 1.
It is a fundamental principle that a contracting agency
must evaluate proposals consistent with the terms of the
solicitation and, while the evaluation of offerors’
proposals generally is a matter within the procuring
agency’s discretion, our Office will question an agency’s
evaluation where it is unreasonable, inconsistent with the
solicitation’s stated evaluation criteria, or
undocumented. Tantus Techs., Inc., B-411608, B-411608.3,
Sept. 14, 2015, 2015 CPD ¶ 299 at 6; Exelis Sys. Corp.,
B-407111 et al., Nov. 13, 2012, 2012 CPD ¶ 340 at 5.
Further, it is well-established that contracting agencies
do not have the discretion to announce in the solicitation
that they will use one evaluation plan, and then follow
another. The Emergence Group, B-404844.7, Feb. 29, 2012,
2012 CPD ¶ 133 at 7. Once offerors are informed of the
criteria against which proposals will be evaluated, the
agency must adhere to those criteria in evaluating
proposals and making its award decision, or inform all
offerors of any significant changes made in the evaluation
scheme. Id.; Tantus Techs., Inc., supra, at 7. Because the
record evidences that the agency did not adhere to the
announced evaluation scheme in evaluating AEEA’s technical
capability, we conclude that the agency’s technical
capability evaluation of AEEA’s proposal was unreasonable
and improper.
As set forth above, the RFP required each offeror’s
proposal to provide “at a minimum, existing product
literature to substantiate the acceptability of [its]
offered products . . . in accordance with th[e]
solicitation.” RFP at 26. Quite simply, the RFP
established that for this procurement, an offeror’s
self-representations that its proposed products met all
technical requirements would be inadequate. The record
reflects that AEEA’s proposal failed to include the
information required by the RFP. Even if it could be
argued that AEEA’s self-prepared “seller bid
specifications” met the requirement for extended
specifications, it is clear that AEEA failed to provide
existing product literature from the manufacturer or
third-party source substantiating the acceptability of its
offered products. AR, Tab 6, AEEA Proposal, at 1-23. Nor
did the picture which AEEA submitted make up for the lack
of product literature: the picture does not by itself
establish a compliant product; moreover, AEEA acknowledges
that the picture is of a generic Sunkax transformer that
does not represent what it intends to deliver. Therefore,
we find the State Department’s technical evaluation to be
unreasonable because AEEA failed to submit with its
proposal the information which the solicitation mandated
that offerors provide.
The State Department does not argue that AEEA submitted
the required product literature substantiating the
acceptability of its proposed products. Rather, the
agency contends that the step-down transformers are a
common commercial item, and that AEEA’s self-representations
that its products would comply with all technical
requirements were sufficient. We disagree. While AEEA’s
proposal represented that its offered transformer met all
of the specifications, such blanket statements of
compliance or an offeror’s belief that its products will
meet the agency’s needs are not sufficient where the
solicitation requires the offeror to provide product
literature to demonstrate that its offered item complies
with the specifications listed in the solicitation. See
Sea Box, Inc., B-405711.2, Mar. 19, 2012, 2012 CPD ¶ 116
at 5. In sum, the agency’s argument, that it could rely
upon AEEA’s self-representations of compliance, has no
merit because the solicitation here required AEEA to also
provide product literature to substantiate the
acceptability of its proposed products, and AEEA failed to
do so. (Export 220Volt, Inc., B-412303.2: Jan 20, 2016)
Evaluation of [key performance attributes] KPAs
PTF raises two essential challenges to the agency’s
evaluation of the awardees’ proposals as technically
acceptable for non-ballistic materials under the KPAs.
Supp. Protest at 2. First, with respect to KPAs 8, 9, and
10, PTF asserts that the language of Section L of the RFP,
as set out above, required offerors to provide test
documentation for all 13 KPAs demonstrating that their
BCSs met all of the requirements defined in RFP, section
M, Table 1. Supp. Protest at 6-12. In addition to its
contentions regarding test documentation, PTF argues the
awardees failed to otherwise meet the requirement of KPA
10, air permeability. Id. at 7-12. PTF also contends that
the agency improperly changed certain requirements during
award without amending the RFP. Protest at 10-13.
In reviewing a protest challenging an agency’s evaluation,
our Office will not reevaluate proposals, nor substitute
our judgment for that of the agency, as the evaluation of
proposals is a matter within the agency’s discretion.
Computer World Servs. Corp., B-410513, B-410513.2, Dec.
31, 2014, 2015 CPD ¶ 21 at 6. However, we will review the
record to determine whether the agency’s evaluation was
reasonable and consistent with the stated evaluation
criteria and with applicable procurement statutes and
regulations. Id.
In answering this protest, the agency disputes PTF’s
understanding of the RFP’s terms, arguing that Section L
must be “read in full context to fully understand what
test documentation was required to be provided.” Supp. AR
at 3. The agency contends in this regard that the RFP did
not require test documentation for KPAs 8, 9, and 10;
rather, the agency argues that “test data for most of the
non-ballistic requirements in the PD are to be determined
during First Article Testing”; and that section L
instructs offerors to provide “a technical narrative that
clearly addresses and demonstrates” the 13 KPAs. Id. at 1,
3. The agency explains, further, that this means offerors
were “required to submit clear declarative statements for
each of the thirteen KPAs . . . .” Id. at 3-4.
On this record, even adopting, for purposes of this
discussion, the agency’s reading of the RFP concerning
test documentation, we find that the agency’s evaluation
of the awardees’ narratives addressing the requirements
for KPA 10 was unreasonable and not consistent with the
stated evaluation criteria.
Again, section L of the RFP required offerors to provide a
narrative that clearly addressed and demonstrated
compliance with each KPA. RFP, § L, at 71. Section M
stated that each KPA would be rated as acceptable or
unacceptable, and that, to be considered for an award, a
proposal had to be rated as acceptable for all 13 KPAs.
Id., § M, at 79.
As shown above, for KPA 10, air permeability, Table 1
lists as the minimum requirements that the product will be
tested, and that (1) it will meet the PD requirements, and
(2) the testing will be performed using the identified
ASTM method. RFP at 80. To address compliance with the
KPAs, Carter included a table that reproduced Table 1,
with an additional column titled “Requirement Met?” For
the air permeability KPA, Carter inserted, as its
narrative, a statement that its product had been tested
using ASTM D 737, with a reference to attached test
results. However, the attachment showed that its product,
in fact, did not comply with the [Original Purchase
Description] PD requirements. AR, Tab 8a, Carter Initial
Tech. Proposal, at 7, 230.
In evaluating the acceptability of Carter’s proposal with
respect to this KPA, the agency’s evaluation record simply
states “acceptable.” AR, Tab 3a, Carter Tech. Eval., at 3.
The agency ignored the test data Carter submitted, stating
in the agency report that it did not consider the data
because, according to the agency, the data was not
required under the RFP. Supp. AR at 4. We find the agency’s evaluation unreasonable. Given that here, (1) test results were provided, and (2) the proposal’s narrative on which the agency based its evaluation relied on that test data, we do not find reasonable the agency’s
determination to ignore such evidence in reaching its
determination of acceptability. Moreover, the agency has
not explained, in this regard, how a mere statement that
the product “was tested”--the only thing left in the
narrative addressing this KPA, after test results are
ignored--satisfied the requirement for a narrative that
“clearly addresses and demonstrates” the performance
attributes.
Similarly, we find unreasonable the agency’s evaluation of
both Point Blank’s and Short Bark’s proposals for KPA 10.
As with Carter, the agency’s evaluation of both of these
offerors’ narratives for KPA 10 simply contained the
notation “acceptable,” with no rationale given. AR, Tab
3b, Point Blank Initial Tech. Eval., at 3; AR, Tab 3d,
Short Bark Initial Tech. Eval., at 3. Neither of these
offerors’ proposals provided a meaningful narrative for
this KPA. Point Blank’s proposal simply stated that its
product had been tested using the prescribed method,
without any narrative concerning compliance with the PD
requirements. AR, Tab 9a, Point Blank Initial Proposal, at
11. Short Bark stated that [deleted]. However, there is no
mention of any method of testing, whether past or future.
AR, Tab 11a, Short Bark Initial Tech. Proposal, at 16.
Accordingly, it does not appear that either Point Blank or
Short Bark provided narratives that “clearly address[ed]
and demonstrate[d]” compliance with the minimum
requirements for KPA 10.
An offeror has the obligation to affirmatively demonstrate
that its proposal will meet the government’s needs, where
required to do so. See TRS Research, B-274845, Jan. 7,
1997, 97-1 CPD ¶ 6 at 3; Discount Machinery & Equip.,
Inc., B-253094, Aug. 2, 1993, 93-2 CPD ¶ 68 at 4. Although
all of the awardees provided a blanket statement that
their products would comply with the RFP, it is
well-established that such statements are not enough to
demonstrate technical acceptability. See United Satellite
Sys., B-237517, Feb. 22, 1990, 90-1 CPD ¶ 201 at 4.
In sum, the awardees’ proposals did not demonstrate how
their products met the requirements set forth in KPA 10,
as required by the evaluation criteria. Further, the
agency provided no explanation of its rationale for
finding that these offerors’ proposals were technically
acceptable for this KPA. Accordingly, we find that the
agency’s evaluations of the awardees’ narratives for KPA
10 were unreasonable and not consistent with the RFP’s
evaluation criteria. We sustain the protest on this
ground.
RECOMMENDATION
We recommend that the Army either: (1) reasonably
re-evaluate the proposals in accordance with the RFP as
written, and make new source selection decisions; or (2)
re-examine the RFP’s minimum requirements and evaluation
criteria, to determine whether they accurately and
unambiguously reflect the agency’s needs, and, if
appropriate, amend the RFP, re-open discussions, obtain
and evaluate revised proposals, and make new source
selection decisions.
We also recommend that the protester be reimbursed its
reasonable costs of filing and pursuing this protest,
including attorneys’ fees. Bid Protest Regulations, 4
C.F.R. § 21.8(d)(1). The protester’s certified claim for
such costs, detailing the time expended and costs
incurred, must be submitted directly to the agency within
60 days after receipt of this decision. 4 C.F.R. §
21.8(f)(1).
The protest is sustained. (Protect the Force, Inc., B-411897.2, B-411897.4: Nov 24, 2015)
SRA protests that the decision to reject its offer was
unreasonable. According to SRA, the letter of commitment
submitted for the infrastructure lead meets the
requirements of the solicitation because: (1) it is a
signed letter of commitment, and (2) it states that the
employee is available to begin work on the project start
date. In SRA’s view, the agency unreasonably rejected the
letter of commitment because the proposed individual might
not be available if he died or became ill.
In reviewing a protest against an agency’s evaluation, we
will not reevaluate quotations but we will examine the
record to ensure that an agency’s evaluation was
reasonable and consistent with the evaluation criteria in
the solicitation, as well as procurement statutes and
regulations. See RightStar Sys., B-407597, Jan. 16, 2013,
2013 CPD ¶ 35 at 5. An agency’s evaluation of technical
quotations is primarily the responsibility of the
contracting agency, since the agency is responsible for
defining its needs and identifying the best method of
accommodating them. See Affolter Contracting Co., Inc.,
B-410878, B-410878.2, Mar. 4, 2015, 2015 CPD ¶ 101 at 4.
The agency explains that it was necessary that proposed
key personnel commit to be available on the day
performance under the contract began, since months could pass between the date for proposal submission and
the date when performance would begin. Therefore, the
agency wanted assurance that the signatory was not
contemplating life or career changing decisions that could
deprive the team of his or her services between proposal
submission and the project start date. CO Statement at 3.
The agency also explains that it did not reject SRA’s
proposal because the proposed key person might die or get
sick, but because his proposed availability was contingent
upon actions under his control, such as taking a leave of
absence. Id. Since the proposed infrastructure lead stated
that he could be unavailable based on events within the
control of his employer (such as being moved to a
different internal business group) or himself, the agency
reasonably concluded that he was not unequivocally
committed to being available when performance began.
Therefore, we find the agency’s conclusion that SRA’s
proposal was unacceptable to be reasonable.
Disparate Treatment
SRA also complains that the agency subjected it to
disparate treatment in its evaluation of letters of
commitment with respect to two offerors--SAIC and American
Systems. Comments at 14-17. SAIC submitted letters of
commitment for its key employees which read as follows:
Based on a direct match of your
experience with the requirements as well as your deep
domain knowledge . . . we are pleased to notify you of
our intent to propose you for the Key Personnel position
. . . For all Key Personnel, we are required to submit a
signed letter of Commitment to the Government. Signing
this letter of Commitment indicates your commitment to the following:
You are committed to continuing to work on this contract
in the event SAIC is awarded this task order.
* * * * *
By signing below you authorize SAIC
to use your resume . . . .
I [employee] accept the terms as outlined above. I
understand that although I am a current SAIC employee,
this does not constitute a guarantee of employment.
See, e.g., AR, Tab 34, SAIC Letter of
Commitment, at 1.
SRA asserts that SAIC’s offer should have been rejected as
unacceptable because SAIC’s letters of commitment do not
state that its employees will be available on the date
performance begins. SRA also argues that SAIC’s letters of
commitment are unacceptable because the proposed key
person is not guaranteed employment. Therefore, SRA argues
that it is foreseeable that the key person will be
unavailable to perform. Comments at 14-15.
These arguments are without merit. As an initial matter,
we find that the agency reasonably concluded that if the
individual is committed to continuing to work on the
contract, he or she will be available when performance
starts. Further, we find no merit in SRA’s argument that
the letters of commitment are unacceptable because they
fail to guarantee employment. The solicitation here did
not require binding bilateral employment agreements, but
rather, only letters that clearly showed the intent on the
part of the signatory to work for the offeror. The letters
submitted by SAIC met this requirement. See USATREX Int’l,
Inc., B‑275592, B-275592.2, Mar. 6, 1997, 98-1 CPD ¶ 99 at
9 (letters of commitment that demonstrate employee’s
intent to work for offeror are sufficient even though they
did not amount to binding employment contracts where
solicitation did not require binding bilateral employment
agreements); Laser Power Technologies, Inc., B-233369,
B‑233369.2, Mar. 13, 1989, 89-1 CPD ¶ 267 at 14.
With regard to American Systems, its proposal contained
letters of commitment for its proposed key personnel that
stated as follows:
If AMERICAN SYSTEMS is awarded a
contract from the aforementioned solicitation, I hereby
commit to be employed and available to begin work on the
project start date.
See, e.g., AR, Tab 11, American
Systems Letter of Commitment, at 1. SRA complains that
American Systems’ proposal should have been rejected
because the letters of commitment it provided for its
proposed key personnel did not show that they were current employees or contingent employees. SRA asserts
that the letters of commitment are further defective
because they state the authors commit to being available
on the contract start date, not that they are available on
the contract start date. Comments at 17.
As indicated above, the solicitation required the letters
of commitment to include a statement that the proposed key
person is employed. In the course of the procurement, a
potential offeror asked about proposing contingent hires
as key personnel. In response, the agency stated that
contingent hires were acceptable, and that this
information should be included in the letter of
commitment. AR, Tab 44, Question and Answer 66.
The agency reasonably accepted the letters of commitment
submitted by American Systems as adequate. Based on the
wording of the letters, specifically that the employee
committed “to be employed” by American Systems, the
letters of commitment indicated that the signatories were
contingent employees who would become employees if the task order were issued to American Systems. Further, we find no
merit in SRA’s argument that American Systems’ letters are
insufficient because they state that the employee
“commits” to be available on the start date, rather than
stating that the employee “will” be available on the start
date. In our view, stating that the author of the letter
commits to being available to start work on the contract
start date is a commitment that the author will be
available on that date. Since the solicitation required
letters of “commitment,” we find that the agency
reasonably concluded that this language satisfied the
solicitation’s requirements. (SRA International, Inc., B-411773, B-411773.2: Oct 20, 2015)
Where, as here, an agency issues an RFQ to FSS contractors
under FAR subpart 8.4 and conducts a competition, we will
review the record to ensure that the agency’s evaluation
is reasonable and consistent with the terms of the
solicitation and applicable procurement laws and
regulations; our Office will not reevaluate the
quotations. Carahsoft Tech. Corp., B-401169, B-401169.2,
June 29, 2009, 2009 CPD ¶ 134 at 5; OPTIMUS Corp.,
B-400777, Jan. 26, 2009, 2009 CPD ¶ 33 at 4.
Here, we find that the agency’s evaluation was
unreasonable and inconsistent with the terms of the RFQ
for two reasons: first, the SSA failed to evaluate whether the personnel proposed by Intellizant were capable of performing the PWS tasks; and second, the agency failed to
evaluate whether Intellizant’s quotation demonstrated the
ability to provide the personnel it proposed. As
previously discussed, technical acceptability was one of
the RFQ’s three non-price evaluation factors. RFQ amend.
002, at 0196. Vendors were to be rated acceptable or
unacceptable based on their ability to meet the RFQ’s PWS
requirements. Id. The solicitation’s evaluation criteria
stated that a vendor was “responsible for making
sufficient representations to demonstrate that it can meet
the requirements in the PWS.” Id. The PWS required vendors
to have the ability to provide on-site support staff with
certain qualifications. Id. at 0165. The RFQ’s
instructions notified vendors that quotations that merely
proposed to perform in accordance with the solicitation
would be deemed ineligible for award. Id. at 0193.
Furthermore, the solicitation’s instructions required
vendors to submit a technical appendix with attachments
such as transcripts or letters of commitment. Id.
The record shows that the agency failed to evaluate
whether the awardee proposed personnel who could perform
the PWS tasks. The first PWS task, “[f]unctional knowledge
transfer and training,” required a vendor to demonstrate
its ability to provide “personnel with an in-depth
knowledge of PSC’s implementation of Oracle Federal
Financials,” who also “have an in-depth knowledge of
federal accounting and PSC’s specific accounting needs.”
Id. at 0165. Moreover, the PWS stated that it was
important for a vendor’s personnel to meet these
requirements because of the task’s critical nature, which
included 13 “[f]unctional knowledge transfer and training
requirements.” Id. at 0165‑0166.
(paragraph deleted)
On this record, we
cannot conclude that the SSA’s evaluation was reasonable.
Under the RFQ, to be found acceptable, vendors were required to demonstrate their ability to perform all of the PWS tasks.
RFQ amend. 002, at 0196. The SSA testified that she
believed that a vendor’s ability to perform the task order
was based entirely on the qualifications of the staff
provided by a vendor. Tr. 49:20-22; 50:1-2. In spite of
her belief, the SSA indicated that she did not evaluate
whether the personnel proposed by Intellizant had the
qualifications required to perform the PWS tasks but
instead focused her evaluation on Intellizant’s transition
plan. The SSA failed to address whether the personnel
proposed by Intellizant had the experience and
qualifications required in the PWS tasks. AR, Tab 5d, SSD,
at 0474-0483. Moreover, we find unpersuasive the SSA’s
explanation that she did not evaluate Intellizant’s
proposed personnel against the minimum requirements of the
functional knowledge transfer and training task because
she evaluated their resumes under the resource
requirements section of the RFQ. In this regard, the SSA
testified that she did not know, and had not performed a
review to confirm, whether the experience and
qualifications identified under the resource requirements
section of the PWS were sufficient to demonstrate that a
vendor would meet the minimum requirements of the
functional knowledge transfer and training task. Tr.
88:11-22; 89:1-10. Thus, the SSA could not have reasonably
determined whether Intellizant’s proposed personnel met
the minimum requirements of the functional knowledge
transfer and training task. Accordingly, because the
record does not demonstrate that the SSA properly
considered Intellizant’s ability to provide personnel
qualified to perform the functional knowledge transfer and
training task, we find that the SSA’s evaluation was not
conducted in accordance with the RFQ’s evaluation criteria
and we sustain the protest. (Starry Associates, Inc., B-410968.2: Aug 11, 2015)
Unequal Treatment
Finally, Electrosoft argues that [Department of Homeland
Security] DHS evaluated the vendors’ quotes under the
technical approach factor unequally. We agree.
It is a fundamental principle of government procurement
that competition must be conducted on an equal basis; that
is, the contracting agency must treat all offerors or
vendors equally; it must even-handedly evaluate offers
against common requirements and evaluation criteria. See
Diebold, Inc., B-404823, June 2, 2011, 2011 CPD ¶ 117 at
4; Tidewater Homes, Realty, Inc., B-274689, Dec. 26, 1996,
96‑2 CPD ¶ 241 at 3.
Electrosoft notes several areas where DHS’s evaluation of
the vendors’ quotes reflects more positive comments for
the awardee, despite what the protester contends were
similar approaches, or similar levels of detail, in the
vendors’ quotes. For example, the TET identified a
strength for PKH’s quote for demonstrating the ability to
develop an Enterprise PIV program oversight plan and
integrate the personal identity verification (PIV) task into
an ICAM integrated master schedule. See AR, Tab 14,
Technical Evaluation Report, at 11. The protester contends
that its quote also described in detail its experience
with similar PIV projects; described how it would support
DHS in developing and implementing the PIV Logical Access
Control System solution; and described how it would
develop an Enterprise PIV program oversight plan and
integrate the PIV task into the ICAM Integrated Master
Schedule. See AR, Tab 8, Electrosoft Quote, at 11-19. The
protester also notes that its quotation addressed the PIV
logical access requirements and future target state ICAM
requirement and experience using Attribute-Based Access
Control architecture. See id. The protester argues that
the agency treated the vendors unequally here because the
agency found that the awardee’s quote provided a “very
solid understanding” of the PIV requirements and provided
a lengthy description of the benefits of its approach,
while noting that the protester’s quote reflected “an
understanding” of the requirements. See AR, Tab 14,
Technical Evaluation Report, at 8, 11.
Similarly, Electrosoft contends that the agency’s
evaluation of the vendors’ knowledge and understanding of
DHS business management and analysis requirements ignored
similar details in the vendors’ quotes, and unreasonably
assigned more favorable comments--and a higher rating of
superior--to the awardee’s quote. See id.
In response to arguments by the protester that its
quotation was comparable to the awardee’s, DHS acknowledges,
in several instances, that the vendors’ quotes were
similar. See Second Supp. AR (Dec. 9, 2013) at 13‑16.
Nonetheless, the agency contends that even where it used
more favorable language to describe the awardee’s quote,
despite the similarities to the protester’s quote,
Electrosoft was not prejudiced because any such errors
would likely not have changed the result of the
competition. See id. As discussed above, however, we
conclude that there were other errors in the agency’s
evaluation of the vendors. For this reason, we do not
agree with DHS that its apparent unequal treatment of
Electrosoft under the technical evaluation was a
non-prejudicial error.
Moreover, the record produced in the course of this
protest provides little explanation about why PKH’s
quotation was rated superior under the technical approach
factor, while Electrosoft’s quotation was rated only
satisfactory. In areas where the protester has highlighted
similarities in the quotations, the agency has, in
essence, conceded the similarities and acknowledged that
the evaluators had little basis for the distinctions made
in the contemporaneous record. While it is not the job of
our Office to evaluate quotations, and while there may
have been a basis for drawing distinctions between these
quotations, given the record developed here, we find no
basis to conclude that the agency treated the vendors
equally. We therefore sustain this basis of protest.
(Electrosoft Services, Inc., B-409065, B-409065.2, B-409065.3: Jan 27, 2014)
NRDC argues that the agency
improperly evaluated the adequacy of its proposed staffing
based on a comparison of the overall number of FTEs
proposed to an undisclosed government estimate, which
resulted in a downgrading of the proposal’s rating under
the staffing plan/key personnel factor. As explained
below, we agree with the protester.
While an agency may rely on its own estimates of the
staffing levels necessary for satisfactory performance in
evaluating proposals for the award of a fixed-price
contract, it is improper for an agency to downgrade a
proposal simply because the offeror’s overall proposed
FTEs differ from the government’s estimate, where the
government’s estimate was not disclosed to the offerors,
the agency failed to conduct discussions with the offeror
concerning the discrepancy, and the agency did not look
beyond the bottom line numbers to determine whether there
were specific areas in which the offeror’s proposed
staffing was inadequate. See Olympus Bldg. Servs., Inc., B-285351, B-285351.2, Aug. 17, 2000, 2000 CPD ¶ 178 at 10;
see also NCI Info. Sys., Inc., B-405589, Nov. 23, 2011,
2011 CPD ¶ 269 at 7.
This is precisely what happened here--NRDC’s proposed
staffing was found to present a performance risk solely
because the protester’s overall number of FTEs was below
an undisclosed agency estimate, without discussions
providing adequate notice of the agency’s concern or
analysis of whether the protester’s proposed staffing in
particular areas was sufficient.
Regarding the FTE comparison, there is simply no evidence
of any analysis as to specific areas in which the
protester’s final staffing numbers were considered
insufficient, notwithstanding the fact that NRDC’s
proposal included a breakout of its FTEs by labor category
and functional area, as did the agency estimate. Rather,
the record reflects that the agency mechanically compared
NRDC’s total FTE count with the total FTE set forth in the
government estimate where the contracting officer simply
explained that the agency had concerns with NRDC’s
staffing because it provided “over [deleted]% fewer
personnel than identified in the Government estimate of
[deleted] FTE’s.” Contracting Officer’s Statement of Facts
at 14. Such a comparison, without any analysis or
explanation, was fundamentally unreasonable.
We recognize that the agency raised the issue of
inadequate staffing with the protester during discussions.
During those discussions, however, the agency specifically
identified as too low the number of proposed FTEs for
lodging housekeeping, laundry services, and mail; that is,
the agency did not advise NRDC that it considered the
protester’s staffing numbers in other areas to be too low.
The protester addressed the identified staffing shortfalls
in its final proposal. NRDC increased its proposed FTEs
for mail handling to a level in excess of the number in
the agency estimate, and increased its number of
housekeeping/laundry workers to a level that, while below
the agency staffing estimate, was apparently sufficient
given that it exceeded the number of FTEs proposed by
Bestway.
In sum, we conclude that the agency acted unreasonably in
finding NRDC’s overall level of staffing to be a weakness
representing performance risk without providing the
protester with reasonable notice of, and the opportunity
to address, the discrepancy between the agency’s overall
staffing estimate and its own overall number, and without
providing any analysis as to specific areas in which the
protester’s final staffing numbers were considered
insufficient. As a result, we sustain the protest. (Native Resource Development Company, B-409617.3: Jul 21, 2014)
MSI next argues that the
contracting officer’s best value decision was flawed because
“the final TEC consensus scores upon which he relied are
inconsistent with the final scores assigned by the individual
TEC members.” Comments at 19. MSI also asserts that, because the
record contains no explanation of why the scores changed between
the individual evaluation worksheets and the final consensus
report, the protest must be sustained for failure to adequately
document the source selection decision. Id. Our review of the
record affords us no basis to question the agency’s evaluation.
It is not unusual for individual evaluator ratings to differ
significantly from one another, or from the consensus ratings
eventually assigned; indeed, the reconciling of such differences
among evaluators’ viewpoints is the ultimate purpose of a
consensus evaluation. J5 Systems, Inc., B-406800, Aug. 31,
2012, 2012 CPD ¶ 252 at 10. Our overriding concern is not
whether an agency’s final ratings are consistent with
preliminary ratings, but whether they reasonably reflect the
relative merits of the proposals, consistent with the
solicitation. Id. Further, our Office has consistently held that
numerical point scores are useful only as guides for intelligent
decision-making and are not generally controlling for award
because they often reflect the disparate, subjective judgments
of the evaluators. National Medical Seminars Tempharmacists,
B-233452, Feb. 22, 1989, 89-1 CPD ¶ 191 at 2.
We are unaware of any requirement that every individual
evaluator’s scoring sheet track the final evaluation report, or
that the evaluation record document the various changes in
evaluators’ viewpoints. J5 Systems, Inc., supra, at 13 n.15. More
importantly, our review of the record does not lead us to
conclude that the agency’s evaluation was objectionable. The
alleged inconsistencies upon which the protester asks us to
sustain this protest amount to nothing more than quibbling with
the minutiae of the agency’s scoring of proposals. The probative
question here, however, is not whether the agency’s point scores
were off by one or two points, but whether the agency properly
justified paying a price premium for IRG’s technically superior
solution. We next turn to MSI’s contention in this regard.
(Management Systems International, Inc., B-409415, B-409415.2: Apr 2, 2014)
In summary, the record here establishes that: (1) SAIC’s
proposal, and the agency’s evaluation thereof, was based on the
technical approach, resources, and costs associated with “old”
SAIC; (2) “old” SAIC did not intend to perform as the prime
contractor, but rather intended that another entity, with
substantially fewer resources, that was completely separate from
“old” SAIC, would be the prime contractor; (3) notwithstanding
all of the evidence indicating that “old” SAIC would not be the
entity performing this contract, the SSA relied on “old” SAIC’s
“strikingly lower probable cost,” which it attributed to its
proposed technical approach; (4) the substitution of “new” SAIC
as the prime contractor may well have a material effect on both
the costs incurred and the technical approach employed during
contract performance; and (5) the agency gave no meaningful
consideration to either the technical approach or probable costs
associated with “new” SAIC’s performance of the contract
because, in the agency’s words, “there was no definitive
information to consider.” On this record, the awardee’s
proposal, and the agency’s evaluation thereof, failed to
reasonably reflect the manner in which the contract will be
performed, the level of costs likely associated with
performance, and the corporate entity that will perform the
contract. Accordingly, there is inadequate support for the
agency’s assessment of projected cost savings and, similarly, no
reasonable basis for award.
The protest is sustained. (Wyle
Laboratories, Inc., B-408112.2, Dec 27, 2013) (pdf)
Technical Evaluation Errors
While BAE’s protests present a variety of allegations concerning
the Navy’s technical and engineering judgments with regard to
multiple aspects of its own, and Raytheon’s technical
evaluations, our Office’s review of the record in this case
reveals straightforward errors in the Navy’s application of the
RFP’s stated evaluation scheme, and in its documentation of the
evaluation record. First, our review of the record demonstrates
that the Navy failed to conduct an assessment of proposal risks
concerning the “Aeromechanical/Air Vehicle” sub-element of the
design approach technical subfactor, which was required pursuant
to the terms of the RFP. Second, our review has identified
multiple technical areas in which a weakness or risk concerning
Raytheon’s proposal was identified, was not resolved through
discussions, but nonetheless disappeared from the evaluation
without explanation.
The evaluation of an offeror’s proposal is a matter within the
agency’s discretion. VT Griffin Servs., Inc., B-299869.2, Nov.
10, 2008, 2008 CPD ¶ 219 at 4; IPlus, Inc., B-298020,
B-298020.2, June 5, 2006, 2006 CPD ¶ 90 at 7, 13. In reviewing a
protest of an agency’s evaluation of proposals, our Office will
examine the record to determine whether the agency’s judgment
was reasonable and consistent with the stated evaluation
criteria and applicable procurement statutes and regulations.
Shumaker Trucking & Excavating Contractors, Inc., B-290732,
Sept. 25, 2002, 2002 CPD ¶ 169 at 3. While we will not
substitute our judgment for that of the agency, we will sustain
a protest where the agency’s conclusions are inconsistent with
the solicitation’s evaluation criteria, undocumented, or not
reasonably based. DRS ICAS, LLC, B-401852.4, B-401852.5, Sept.
8, 2010, 2010 CPD ¶ 261 at 4-5. (BAE
Systems Information and Electronic Systems Integration Inc,
B-408565, B-408565.2, B-408565.3, Nov 13, 2013) (pdf)
Small Business Subcontracting Plan Weakness
As noted above, the evaluators identified as a weakness in SRA’s
proposal the protester’s omission of a copy of exhibit 5 from
its small business subcontracting plan, and the failure of the
plan to specify the extent of the protester’s commitment to use
subcontractors through enforceable (vs. nonenforceable)
agreements. SRA disputes both of these findings.
The evaluation of technical proposals is a matter within the
discretion of the contracting agency, since the agency is
responsible for defining its needs and the best method for
accommodating them. Visual Connections, LLC, B-407625, Dec. 31,
2012, 2013 CPD ¶ 18 at 3. In reviewing an agency’s evaluation,
we will not reevaluate technical proposals, but instead will
examine the agency’s evaluation to ensure that it was reasonable
and consistent with the solicitation’s stated evaluation
criteria and with procurement statutes and regulations. Id. at
4.
In connection with the first finding, the protester argues that
it was unreasonable for the evaluators to fault it for failing
to furnish a copy of exhibit 5 with its proposal given that it
provided all of the required information in a proposal chart.
The agency argues that SRA omitted required information from its
proposal by failing “to provide 25% . . . of the information
required by Exhibit 5 . . . where they were to provide the
dollar amount of total planned subcontracts.” Agency Memorandum
of Law, Aug. 30, 2013, at 11. We are not persuaded by the
agency’s argument, as it unreasonably reflects an elevation of
form over substance.
The RFP instructed offerors to provide a breakdown of their
proposed small business subcontracting goals by small business
category. In this regard, the RFP included the following chart
as an example for a contract with a maximum ordering value of
$100 million and estimated subcontracts of $50 million:
Category | Column A: Percent of ID/IQ Maximum Ordering Value | Column B: Dollar Value | Column C: Percent of Subcontracting Value
Small Business Concerns | 25 percent | $25,000,000 | 50 percent
(The following subcategories are inclusive of the above Small Business percentage.)
Small Disadvantaged Business Concerns | 5.5 percent | $5,500,000 | 11 percent
Women Owned Small Business Concerns | 9 percent | $9,000,000 | 18 percent
Historically Black Colleges and Universities | 1.5 percent | $1,500,000 | 3 percent
HUBZone Small Business Concerns | 1.5 percent | $1,500,000 | 3 percent
Veteran Owned Small Business Concerns | 2.5 percent | $2,500,000 | 2.5 percent
Service-Disabled Veteran-Owned Small Business Concerns | 1.5 percent | $1,500,000 | 3 percent
RFP, amend. 000001, at 41. Based on the instructions
accompanying the chart, offerors were to propose their small
business subcontracting goals as a percentage of the ID/IQ
maximum ordering value in column A; enter the proposed dollars
to be subcontracted for each category in column B; and express
those goals as a percentage of total subcontract dollars in
column C.
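To make the chart’s arithmetic concrete, the relationship among the three columns can be sketched as follows (an illustrative example only, using the RFP’s sample figures of a $100 million maximum ordering value and $50 million in estimated subcontracts; the variable names are illustrative, not the RFP’s):

    # Illustrative sketch of the column A/B/C arithmetic from the RFP's
    # sample chart; figures mirror the "Small Business Concerns" row.
    max_ordering_value = 100_000_000  # ID/IQ maximum ordering value
    total_subcontracts = 50_000_000   # estimated subcontract dollars

    column_a = 0.25                           # goal as a percent of the maximum ordering value
    column_b = column_a * max_ordering_value  # proposed subcontract dollars: $25,000,000
    column_c = column_b / total_subcontracts  # percent of total subcontract dollars: 50%

    print(f"Column B: ${column_b:,.0f}")  # Column B: $25,000,000
    print(f"Column C: {column_c:.0%}")    # Column C: 50%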
The solicitation instructed offerors to complete a separate
chart entitled Small Business Subcontracting Plan Goals at
exhibit 5. The exhibit 5 chart requested the same information
pertaining to small business subcontracting goals as the sample
chart on page 41 of the RFP, as set forth above. The only
difference was that exhibit 5 repeated the dollar value column
(Column B above) under the proposed goals as a percentage of the
value of the contract and under the proposed goals as a
percentage of subcontracts. The exhibit 5 chart had the
following headings:
BUSINESS CATEGORY | Proposed Goals Based on ID/IQ Maximum Ordering Value of $26M | Proposed Goals Based on Total Planned Subcontracts of $TBP
| Dollar Amount* | Percentage | Dollar Amount* | Percentage
The repetition of the dollar
amount in the exhibit 5 chart did not require offerors to
provide any additional information as compared to the chart on
page 41. Rather, exhibit 5 simply required offerors to include
their proposed dollar amounts for the various small business
subcontracting categories twice. In this regard, the asterisk in
both dollar amount columns specifically established that the
dollar amounts in the two columns “should be the same.” RFP,
exhibit 5.
The record reflects that the protester furnished a breakdown of
its proposed small business subcontracting goals in a chart
using the format of the sample chart on page 41 of the
solicitation, but did not submit a separate exhibit 5 chart.
That is, it furnished its goals for subcontracting with the
various categories of small businesses in a chart that contained
a single dollar value column, rather than two dollar value
columns as contemplated by exhibit 5.
As detailed above, however, the chart used by the protester
(page 41 from the solicitation) and the chart at exhibit 5
conveyed identical information, just in slightly different
formats, with the result being that SRA’s proposed dollar
amounts for the subcontracting categories appeared once, rather
than twice. Thus, we fail to see how the protester’s failure to
furnish the same dollar amounts twice may reasonably be
characterized as an omission of required information, as the
agency has argued. Accordingly, we conclude that this finding of
weakness lacks a reasonable basis.
SRA further argues that its failure to furnish copies of
agreements demonstrating its commitment to use particular
subcontractors in several of the small business categories
should not have been considered a weakness because the
solicitation required offerors to connect work to particular
subcontractors and to specify the extent of the commitment to
use those subcontractors (enforceable vs. nonenforceable
agreements) only to the extent the subcontractors were known.
According to the protester, it complied with the requirement by
furnishing copies of enforceable agreements for the
subcontractors whom it identified in its proposal, and thus
should not have been assigned a weakness. In response, the
agency argued that the finding was reasonable because RFP
section 4.2 provided for evaluation of the extent of the
offeror’s commitment to use small business subcontractors.
We find unpersuasive the agency’s explanation regarding SRA’s
purported failure to address the extent of subcontractor
commitments. We will read a solicitation as a whole and in a
manner that gives effect to all its provisions. ArmorWorks
Enters., LLC, B-405450, Oct. 28, 2011, 2011 CPD ¶ 242 at 3. Here,
in addition to the language cited by the agency, the RFP
contained language (at section 3.6) that required offerors to
“connect the work to the subcontractor and specify the extent of
commitment to use the subcontractor(s)” only if the
subcontractor(s) were known. Consistent with these provisions,
the protester furnished copies of enforceable agreements for its
known subcontractors; thus, we fail to see a reasonable basis
for finding that SRA failed to provide information as
contemplated by the RFP. (SRA
International, Inc., B-408624, B-408624.2, Nov 25, 2013)
(pdf)
Corporate Experience Evaluation of Savvee
Savvee next objects to the agency’s evaluation of its quotation
under the corporate experience factor, arguing that many of the
weaknesses assessed in its quotation under this factor were
unreasonable.
The RFQ provided that the agency would assess under the
corporate experience factor whether the vendor had at least 3
years of corporate experience providing the types of services
required in the SOW to either a government or commercial entity.
RFQ amend. 2, at 2. Vendors were required to address: (1) the
organization’s number of years of corporate experience in
providing the services offered; (2) the organization’s size,
experience in the field, and resources available to enable it to
fulfill requirements of the size anticipated under any resultant
task order; and (3) the information demonstrating the vendor’s
organizational and accounting controls, and manpower presently
in-house or the ability to acquire the types and kinds of
personnel proposed. Id.
Savvee’s quotation described its corporate experience and the
experience of its subcontractor (XL, Inc.), which totaled 32
years with the DDTC, DOS, and other federal agencies. AR, Tab 7,
Savvee Technical Quotation, at 8. Savvee’s quotation also
addressed the vendor’s size, resources, and experience by
detailing that Savvee/XL supported more than 15 DOS bureaus and
had more than 500 employees in over 40 countries, which Savvee
stated provided it with substantial experience in the field. Id.
Savvee also described its organizational and accounting
controls, and ability to acquire necessary personnel. Id. at 9.
In addition to describing generally the recruiting efforts to be
utilized if additional personnel were required, Savvee stated
that it would provide the entire incumbent (XL) staff and
submitted resumes for these individuals demonstrating the
required skills, experience, and security clearances. Id. at
9-10; Appendix A at A-1 – A-76.
The TEP rated Savvee’s corporate experience as good, noting nine
strengths and five weaknesses. AR, Tab 25, TEP Report, at 1-19.
As explained below, we find, however, that most of the
weaknesses identified in Savvee’s quotation under this factor
were not assigned reasonably or in a manner consistent with the
stated evaluation criterion.
First, the TEP found that Savvee satisfied the requirement to
demonstrate at least 3 years of experience providing the services
set forth in the SOW. Id. at 4. Nonetheless, the TEP also
identified as a weakness that Savvee had not shown any
experience providing similar services to the private sector. Id.
at 15. The RFQ, however, required vendors to demonstrate their
experience with either a government or commercial entity. There
was no requirement that vendors have experience with both
government and private entities. Therefore, we find that
assessing a weakness for Savvee’s lack of private sector
experience--when it or its subcontractor had significant
experience performing these services for the government--was not
reasonable.
Next, the TEP also identified as a weakness that Savvee itself
did not have experience with DDTC, stating that “[t]he Bureau
and Directorate experience mentioned . . . only extends to the
proposed subcontractor. The Directorate does not have any
current or past experience with the proposed prime contractor.”
Id. at 11. Although the RFQ required that vendors have relevant
experience with any government or commercial entity, it did not
require specific experience with DOS or DDTC. As there was no
requirement that vendors have specific experience with DOS or
DDTC, the weakness assigned to Savvee’s quotation for not having
such experience was unreasonable.
Also, the agency determined that Savvee possessed the size and
resources needed to fulfill all anticipated SOW requirements;
indeed, the record shows that the TEP found that the “Savvee/XL
team combined has a wealth of experience sufficient to meet
current and future needs.” Id. at 15. Nonetheless, the TEP
identified a weakness in Savvee’s quotation because of the
evaluators’ “[c]oncern they may be too big to focus on needs of
DDTC.” Id. at 15. The contemporaneous record and the agency’s
arguments in response to the protest provide no support or
explanation for such a conclusion. Contracting agencies are
required to adequately document their evaluation results, and
sufficiently support the findings on which award determinations
are made. Here, the RFQ encouraged vendors to demonstrate the
breadth and depth of their resources and experience, and to
explain how these resources and experience would enable them to
fulfill all possible task order requirements. The agency has
failed to explain how having broad experience and resources
indicates that the vendor is “too big,” and may not focus on the
very job it spent time and money pursuing. Absent explanation,
we find that DOS acted unreasonably in assigning this weakness.
The TEP also identified as a weakness that Savvee failed to show
it had ever successfully recruited necessary personnel.
Specifically, the TEP stated that, “[i]n our experience with the
proposed sub-contractor, out of 23 current positions, they only
recruited for one position. All other employee resumes were
passed to them from previous contracts. Therefore it is
difficult to rate the offeror’s past success in successful
recruitment.” AR, Tab 25, TEP Report, at 12. The RFQ provided,
however, under the corporate experience factor, that the agency
would evaluate whether vendors demonstrated that they presently
possessed the manpower in-house or had the ability to recruit
the required personnel. RFQ amend. 2, at 2. Savvee’s quotation
demonstrated that it already possessed all required manpower
in-house through its subcontractor XL, the incumbent contractor;
the agency does not dispute this representation. As a result, we
find this weakness to be unreasonable. Quite simply, the
solicitation required vendors to demonstrate that they either
possessed the manpower in-house, or the ability to recruit the
needed personnel, not both.
In sum, the record demonstrates the rating assigned to Savvee’s
quotation under the corporate experience evaluation factor was
based in substantial part on weaknesses that were not
reasonable, or were inconsistent with the stated evaluation
criterion.
Corporate Experience Evaluation of Edmonds
Savvee also challenges the evaluation of Edmonds’ quotation
under the corporate experience evaluation factor. In this
regard, Savvee again argues that the agency applied an unstated
evaluation criterion when it identified as a strength Edmonds’
understanding of DDTC’s future directions with respect to
information technology (IT) support.
The TEP rated Edmonds’ corporate experience as excellent, noting
six strengths and no weaknesses. AR, Tab 25, TEP Evaluation
Report, at 13-19. Also as relevant here, the TEP stated in its
evaluation summary that, “[w]hile both proposals more than
adequately addressed all areas to be considered . . . Edmonds
briefly addressed future directions of the Directorate with
respect to IT support. This is a plus as the Directorate is
highly dependent upon the automation of our day to day
operations.” Id. at 5.
The TEP made similar findings in earlier versions of its
evaluation report. Specifically, the agency evaluators found as
a technical discriminator that Edmonds “is much stronger in
understanding the future direction of IT and the present office
environment.” AR, Tab 8, TEP Report (Sept. 26, 2012), at 4.
The record shows, however, that the contracting officer
questioned the appropriateness of such a finding: “How does this
relate to our evaluation criteria of [m]anagement approach,
corporate experience, personnel and past performance.” AR, Tab
15, Contracting Officer Email to TEP, Jan. 15, 2013, at 3. The
TEP responded that “IT was included in the SOW, Task 4.” Id. The
contracting officer replied as follows: “My concern is that this
does not tie clearly/specifically to a particular evaluation
criteria. My view is that all comments that form the basis for
the decision have to clearly link to an evaluation criteria,
period.” Id. The TEP’s final report nevertheless identified
Edmonds’ understanding of DDTC’s future IT direction as a
quotation advantage and a discriminator between these vendors.
We find that the agency’s decision to view this issue as a
technical discriminator was unreasonable. As a preliminary
matter, our review indicates that Edmonds’ quotation--within the
management approach section--did little more than “parrot back”
the SOW requirements regarding long range IT strategy. AR, Tab
5, Edmonds Technical Quotation, at 8. Moreover, the TEP’s
assessment--as the contracting officer originally
indicated--failed to show how understanding the agency’s
long-range IT strategy related to any of the evaluation
criteria, including corporate experience.
As set forth above, the corporate experience factor was
essentially retrospective in nature; it looked at previous
efforts to assess performance risk going forward. In contrast,
the TEP credited Edmonds for its strong technical understanding
of what the agency’s future IT requirements would
be. While Edmonds may have had a strong understanding of
DDTC’s future IT needs, this technical understanding was not
reasonably related to, or encompassed by, the corporate
experience evaluation factor.
In light of the unreasonable weaknesses assigned to Savvee’s
quotation, and the favorable assessment assigned to Edmonds’
quotation (based on an unstated criterion), it is unclear
whether the adjectival ratings (good and excellent,
respectively) would have remained the same had the quotations
been assessed reasonably under this evaluation factor. (Savvee
Consulting, Inc., B-408416, B-408416.2, Sep 18, 2013) (pdf)
BOQ also protests that the agency improperly assigned NCH a
strength because the “[c]ontracts presented were large and very
similar to our needs. They provide good evaluations from large
contracts.” NCH Technical Evaluation at 3. BOQ asserts that
NCH’s proposal does not demonstrate that its past performance
was on contracts that were large or otherwise similar in scope
to the solicitation.
We agree with BOQ that the information in NCH’s proposal does
not demonstrate that the contracts listed were large or similar
in size to the solicitation. Specifically, the requirements as
defined by the solicitation are for a minimum of 20,000 room
nights and a maximum of 200,000, RFP § B.5.1, or between 500 and
700 apartments per night. RFP, SOW, §§ C.3.6, 3.7. For two of
the contracts listed to support NCH’s past performance (one with
the Federal Reserve Board and one with Lexicon Relocation),
however, the proposal did not indicate how many room nights or
apartment nights NCH provided. For a third contract (one with
the Federal Emergency Management Agency), the proposal
indicated that NCH provided 60 apartments for 90 days, and 200
rooms for an undisclosed amount of time, NCH Proposal at B-1,
B-2, a number of rooms which does not approach the number
contemplated by the solicitation. Accordingly, based on the
information in its proposal, it was unreasonable to award NCH a
strength on the basis that it performed large contracts. (Basic
Overnight Quarters, LLC, B-406964, B-406964.2, Oct 9, 2012)
(pdf)
Grunley argues that GSA’s evaluation of its proposal under the
project management approach evaluation factor was unreasonable.
For the reasons discussed below, we agree.
The evaluation of offerors’ technical proposals, including the
determination of the relative merits of proposals, is primarily
a matter within the contracting agency’s discretion, since the
agency is responsible for defining its needs and the best method
of accommodating them. Highmark Medicare Servs., Inc., et al.,
B-401062.5 et al., Oct. 29, 2010, 2010 CPD ¶ 285 at 12. In
reviewing an agency’s evaluation, we will not reevaluate the
proposals, but will examine the record of the evaluation to
ensure that it was reasonable and consistent with the stated
evaluation criteria and procurement statutes and regulations,
and to ensure that the agency’s rationale is adequately
documented. Carothers Constr., Inc., B-403382, Oct. 28, 2010,
2010 CPD ¶ 268 at 6.
The solicitation provided under the project management approach
factor that the agency would evaluate an offeror’s
understanding of the contract
requirements including design phase services and the Offeror’s
ability to manage the construction including risk management,
quality control, scheduling, protection and construction in
special spaces, cost control including contingency management,
change order management, and “open book accounting.”
RFP at 37-38.
The agency assigned a rating of 5.9 out of 10 points to
Grunley’s proposal under the project management approach
evaluation factor. In doing so, the agency evaluated Grunley’s
proposal as evidencing a number of “strengths,” including its
“familiarity with the site and their management and support
staff as reflected in the details” provided. The agency also
evaluated as strengths Grunley’s inclusion of a “[DELETED],”
both of which were considered by the agency to be “critical
elements of the design phase services.” AR, Tab 19, SSEB Report,
at 32. The agency continued, by noting as strengths, Grunley’s
“proposed use of [DELETED] during the design phase,” as well as
Grunley’s “philosophy to be willing to work effectively and
efficiently with their subcontractors, as well as other
contractors on the site.” Id. Finally, the agency noted that
“Grunley provided a transition plan for use with [DELETED] and
addressed potential [DELETED].” Id.
The agency also evaluated Grunley’s proposal under the project
management approach factor as evidencing weaknesses.
Specifically, the SSEB report states as follows:
Weaknesses:
Grunley did not discuss within their revised technical
proposal a shift transition plan, nor did they specifically
explain how [DELETED] within their manpower-loaded schedule
would operate. Grunley’s schedule did not include float, which
explicitly does not take into account suspensions of work or
demonstrate a proactive approach to unforeseen conditions,
high project risk, change orders, and project mutability. The
overall discussed approach, in conjunction with the schedule,
demonstrates a more reactive- than proactive- approach to
project management. This prevented Grunley from achieving a
higher score on this factor.
AR, Tab 19, SSEB Report, at 32.
The protester argues that the agency’s evaluation of Grunley’s
proposal under the project management approach factor was
unreasonable because, contrary to the finding of the agency, its
proposal did provide a shift transition plan and include float
in its schedule.[5] In support of its argument, the protester
points to specific pages in its proposal addressing its shift
transition plan, and points out that its schedule includes float
for numerous activities. Protest at 18-19; Protester’s Comments
at 9-10; Protester’s Supp. Comments at 7, Exh. A, Schedule, at
1-10; AR, Tab 13, Grunley’s Final Revised Proposal, at 71-72.
With regard to shift transition, GSA’s response to Grunley’s
argument notes that the SSEB also found that the protester
provided a shift transition plan, and that this aspect of its
proposal constituted a strength. The agency also asserts,
without further explanation, that “[a]s set forth in the SSEB
Report, Grunley’s receipt of a score of 5.9 for [the project
management approach factor] was not due to a lack of a
transition plan, but rather due to the SSEB’s reasonable
determination that Grunley’s proposal suffered from a variety of
weaknesses.” AR at 15.
We cannot find this aspect of the agency’s evaluation
reasonable. As discussed above, the record shows that while the
SSEB noted as a strength that “Grunley provided a transition
plan for use with [DELETED],” it also noted as a weakness that
“Grunley did not discuss within their revised technical proposal
a shift transition plan.” AR, Tab 19, SSEB Report, at 32.
Despite the fact that Grunley pursued this issue in detail
during this protest, the agency has not meaningfully explained,
or even acknowledged, what appears to be an inconsistency in the
SSEB Report. In sum, we cannot find this aspect of the agency’s
evaluation of Grunley’s proposal to be reasonable, given the
apparent inconsistency in the record and the agency’s failure,
during the course of this protest, to meaningfully respond to
the protester’s arguments.
In response to Grunley’s assertion that its proposed schedule
included float, GSA points to a sentence in Grunley’s proposal,
under the section pertaining to the issue of work stoppages,
which the agency contends demonstrates a lack of float. This
sentence provides as follows:
Work Stoppages: In
evaluating the scope of work and period of performance, the
Government’s and Grunley’s schedule is void of any free float,
or any float at all for that matter.
AR, Tab 13, Grunley’s Final
Revised Proposal, at 73-74. The agency concludes, based on this
sentence, that “[t]he SSEB’s determination that the schedule
lacked float is reasonable because Grunley did not provide any
float for the project as a whole and stated so in its proposal.”
Supp. AR (Feb. 13, 2013) at 5.
Again, despite the fact that the protester has pointed out that
the 10-page schedule in its proposal included float in numerous
instances, the agency has made no effort to further explain this
aspect of its evaluation. Protester’s Comments, Attach. A,
Schedule, at 1-10. That is, the agency, in responding to the
protester’s arguments, has simply ignored the fact that the
protester’s schedule included float in numerous instances, and
has made no effort to explain why the sentence quoted above, in
context, was found by the agency to nullify the numerous
instances of float in the protester’s schedule. While we
recognize that the portion of Grunley’s proposal, cited above,
raises a question as to whether its proposal was inconsistent
with regard to float, we cannot find the agency’s assessment of
a weakness because “Grunley’s schedule . . . did not include
float,” to be reasonable, given the agency’s failure to
acknowledge or fully respond to the fact that the protester’s
schedule clearly included numerous instances of float. AR, Tab
19, SSEB Report, at 32. On this record, we sustain the protest.
(Grunley Construction Company, Inc.,
B-407900, Apr 3, 2013) (pdf)
As discussed above, Exelis’s challenge to the agency’s
evaluation of PAE’s proposed staffing focuses on the difference
between what PAE and Exelis proposed, as well as the difference
between PAE’s proposed staffing and its staffing under the
predecessor contract. Exelis does not challenge any particular
area of PAE’s base year staffing. The protester also has not
demonstrated that the agency’s evaluation is inherently
unreasonable, or that PAE’s use of the Whitestone model (along
with its incumbency knowledge) to identify appropriate staffing
was an unreasonable approach to establishing an appropriate
level of effort to accomplish the sample task requirements. In
sum, we conclude that the agency’s evaluation of PAE’s proposed
staffing for the base year of the sample task was reasonable.
The record shows, however, that PAE significantly reduced the
level of effort it proposed to perform the sample task in the
later years of the contract. Specifically, the record shows
that, although its original proposed level of effort called for
[deleted] FTEs ([deleted] [American nationals] ANs, [deleted]
[third-country nationals] TCNs and [deleted] [local Iraqi
nationals] LNs), it gradually reduced the number of FTEs
throughout the life of the contract, and also replaced a large
number of TCNs with LNs; by the final year of contract
performance, it proposed only [deleted] FTEs ([deleted]
ANs, [deleted] TCNs and [deleted] LNs). PAE Proposal,
Attach. A, Staffing Plan, at 15. As a result of reducing the
number of FTEs overall, as well as exchanging LNs for TCNs over
the course of the sample task, the record shows that PAE
effectively reduced its proposed level of effort by more than
[deleted] percent, even though the workload over the life
of the sample task remained constant. PAE’s proposal offers no
explanation for how it will be able to perform what amounts to
constant requirements with a progressively lower level of
effort.
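Because the decision redacts PAE’s actual FTE counts, the size of the reduction can only be illustrated with invented figures; a minimal sketch of the percent-reduction arithmetic, under assumed staffing levels, follows:

    # Hypothetical illustration only: the decision redacts PAE's actual
    # FTE counts, so these staffing levels are assumed for the example.
    base_year_ftes = 1000   # assumed proposed level of effort, base year
    final_year_ftes = 600   # assumed proposed level of effort, final year

    reduction = (base_year_ftes - final_year_ftes) / base_year_ftes
    print(f"Level-of-effort reduction: {reduction:.0%}")  # 40% under these assumptions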
The evaluation record also is silent on the question of PAE’s
reduction in its proposed level of effort and makes no attempt
to explain how PAE might be able to perform the requirement in
the later years of the contract. As discussed above and in our
prior decision, the RFP required the agency to give
consideration to the level of effort proposed to perform the
sample task. Exelis Systems Corp., B-407111 et al., Nov. 13,
2012, 2012 CPD ¶ 340 at 12-13. Here, while the agency reasonably
evaluated the adequacy of PAE’s proposed level of effort to
perform the base year of the sample task, there is nothing in
the record to show that the agency considered how PAE would
perform the same requirements in the out years of the contract
with substantially less staff. Accordingly, we agree with Exelis
that the agency’s evaluation in this area was unreasonable. We
therefore sustain this aspect of Exelis’s protest. (Exelis
Systems Corporation, B-407111.5, B-407111.6, B-407111.7, May
20, 2013) (pdf)
Global Dynamics protests the exclusion of its proposal from the
competitive range, arguing that the weaknesses identified by the
evaluators under the personnel methodology factor are without
basis and/or could have been resolved without major revision of
its proposal.
(sections deleted)
The protester
disputes several facets of the evaluators’ first finding of
weakness, arguing that its proposal did provide a detailed plan
for the recruitment of the incumbent contractor’s staff and
additional qualified candidates. With regard to the evaluators’
specific examples of missing detail, GD pointed out that its
proposal expressly identified a target for the retention of
incumbents by stating that its “goal” was “to retain [deleted]
of all incumbent [health care providers] HCPs.” Protester’s
Technical Proposal at 30. The protester further argued that it
was improper for the evaluators to fault its proposal for
failing to specify whether GD would focus first on local
recruitment given that the solicitation did not require offerors
to provide this sort of detail; the protester also argued that
only minor clarification of its recruitment plan would have been
required to address the evaluators’ concerns.
Based on our review, we find the agency’s first finding of
weakness to be unsupported. In response to the protest, the
agency argued that while the protester’s proposal “discuss[ed]
the Protester’s methodology for contacting incumbents,” it “did
not explain conceptually how [the protester’s] incumbent
retention plan would unfold” and did not identify a target for
incumbent retention. The agency maintained that the protester’s
“stated aspiration to retain [deleted] of the incumbent
workforce” was “not a retention target,” but rather was “a wish,
unsupported by a detailed plan to make that wish come true.”
Agency Report at 10-11.
Contrary to the agency’s assertions, the proposal did not simply
outline the protester’s methodology for contacting incumbent
employees; it also discussed the protester’s approach to
addressing incumbent employee concerns (through the [deleted]),
educating incumbent employees regarding employment with GD
(through the distribution of written information about the GD
team, [deleted]), and [deleted] employees of the incumbent
contractor to remain in their positions (by offering certain
incumbent employees [deleted]). Protester’s Technical Proposal
at 29-30. Moreover, the proposal furnished a “goal” for the
retention of incumbent employees, i.e., [deleted]. To the extent
the agency draws a distinction between a “goal” and a “target,”
it is a distinction without a material difference. Both
reasonably reflect a firm’s objective to retain incumbent
employees. Also, with regard to the proposal’s alleged failure
to indicate whether the protester would focus first on local
recruiting or recruit on a national level from the outset, the
record reflects that the proposal described an approach to
recruiting that involved the use of national-level databases to
locate qualified personnel throughout the process. Id. at 19-20.
Thus, we fail to see a reasonable basis for the criticism.
We also conclude that the third weakness identified by the
evaluators under the recruitment plan subfactor was
unreasonable. As quoted above, the third weakness states that
the protester’s proposal incorrectly referred to pediatric and
orthopedic certifications “as required,” which indicated a lack
of understanding of the solicitation’s requirements. Our review
of the record shows that the evaluators have misquoted, and not
presented fairly, the statements in the proposal on these
issues.
For example, in the proposal’s representations about pediatric
and orthopedic nurses, there is no statement that certification
is required by the solicitation. Protester’s Technical Proposal
at 11. Instead, the evaluation worksheet appears to be quoting a
section--located on the same page--describing the qualifications
of medical surgery nurses. This sentence in its entirety states:
[deleted]. Our comparison of the evaluation worksheet with the
proposal leads us to agree with the protester that the weakness
identified is not supported by the underlying record.
The protester also disputes the evaluators’ findings regarding
the second weakness pertaining to nursing qualifications,
arguing that to the extent its proposal made some mistaken
references to required qualifications for certain nursing
specialties, the errors were minor matters that could easily
have been remedied through discussions, as opposed to weaknesses
requiring major revision of the proposal. The agency argues in
response that the errors were significant in that they reflected
a misunderstanding of the solicitation’s requirements. As set
forth below, even this assessed weakness does not bear close
scrutiny.
The evaluators’ criticism pertaining to the orthopedic and
plastic surgery nurses is essentially that the protester’s
description of these nursing specialties includes functions that
the nurses sought here will not be required to perform because
they will be working in clinics only--i.e., clinic nurses
generally do not assist with operations. Even assuming that the
protester’s proposal reflects a failure to understand that the
orthopedic and plastic surgery nurses here will not be required
to assist with operations, it is not apparent how such a
misunderstanding may reasonably be characterized as a
significant matter requiring major revision of the protester’s
proposal. Deleting assistance with operations from the
descriptions of orthopedic and plastic surgery nurses would not
require significant rewriting of the position descriptions, nor
would it have an impact on the qualifications of the individuals
recruited to perform those positions. Further, it is not
apparent that the protester’s failure to include ACLS
certification as a required credential for burn unit nurses
indicated a lack of understanding of the solicitation’s
requirements, as opposed to simply an easily correctable
oversight on the part of the protester.
In our view, the record here does not support the agency’s
evaluation of the protester’s proposal under the recruitment
plan subfactor of the personnel methodology factor. Elimination
of one, two, or all three weaknesses could easily have resulted
in the assignment of a rating of [deleted], rather than
[deleted], under the subfactor, particularly given that in
addition to the discussed weaknesses, the evaluators identified
a strength under the subfactor. Moreover, increasing the
proposal’s rating under the subfactor from [deleted] to
[deleted] might have resulted in a different competitive range
determination, particularly given that [deleted]. Accordingly,
we sustain the protest. (Global
Dynamics, LLC, B-407966, May 6, 2013) (pdf)
The record shows that the AID evaluators first prepared
individual evaluation worksheets, which listed the point scores
assigned, and the strengths, weaknesses, significant weaknesses
and deficiencies identified for the proposals under each
evaluation factor. The evaluators then met to prepare a
consensus TEC report. This procedure occurred during the initial
evaluation, and again during the evaluation of the offerors’
revised proposals. In arriving at the consensus point scores for
each offeror, the evaluators calculated a mathematical average of
the point scores assigned by the individual evaluators, rather
than developing a consensus score and rationale. AR, exh. 7,
Initial TEC Report, at 3; AR, exh. 13, Final TEC Report, at 4,
21. The record shows that the agency’s reliance on the
mathematical average of the scores assessed by the individual
evaluators appears to have resulted in mechanically-calculated
scores that did not reflect the consensus judgment of the
evaluators. While the use of a mathematical average in scoring
proposals is not per se improper, in this case it resulted in
the scores being distorted in the ways discussed below.
First, not all of the strengths and weaknesses identified by the
individual evaluators were carried forward to the consensus TEC
evaluation report. For example, in the final evaluation, one of
the individual evaluators assigned only a single weakness to the
Nexant proposal under the personnel evaluation factor, and
assigned the proposal a numeric score of only [deleted] out of
20 available points. AR, exh. 14, Final Evaluator A Scoresheet (Nexant),
at 6.[3] The weakness assigned by evaluator A that resulted in
the [deleted] point score (rather than a higher score), however,
was not carried forward to the final TEC report. AR, exh. 13,
Final TEC Report, at 28-29. Since the weakness identified by
evaluator A appears not to have been accepted in the consensus
evaluation, the evaluators’ averaging method for calculating the
consensus scores resulted in Nexant’s score being artificially
lower than it should have been.
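The distortion GAO describes here can be seen in a small numerical sketch (the point scores below are invented, since the actual scores are redacted): when one evaluator’s score embeds a weakness that the panel does not adopt at consensus, a mechanical average still carries that low score into the final result.

    # Hypothetical sketch of how averaging individual scores can distort
    # a consensus rating; scores (out of 20) are invented for illustration.
    individual_scores = [18, 17, 10]  # the 10 reflects a weakness the
                                      # panel did not adopt at consensus

    mechanical_average = sum(individual_scores) / len(individual_scores)
    print(f"Averaged 'consensus' score: {mechanical_average:.1f}")  # 15.0

    # Had the low sheet been rescored once the weakness was rejected
    # (say, to 17), the consensus result would differ materially:
    rescored_scores = [18, 17, 17]
    print(f"Rescored average: {sum(rescored_scores) / 3:.1f}")  # 17.3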
Second, the record reflects instances where Nexant’s individual
score was based on a weakness or significant weakness assessed
by an individual evaluator, even though the consensus evaluation
report shows that the evaluators expressly found that the
weakness or significant weakness had been resolved during
discussions. For example, evaluator B assigned the Nexant
proposal a significant weakness under the management approach
evaluation factor for not having adequate key personnel in the
third year of the contract, and assigned the proposal a numeric
score of only [deleted] out of 20 available points. AR, exh. 14,
Final Evaluator B Scoresheet (Nexant), at 2. In the final TEC
evaluation report, however, the evaluators specifically found
that this significant weakness had been eliminated by Nexant
during discussions. AR, exh. 13, Final TEC Report, at 25-25. The
record therefore shows that inconsistencies such as these also
had the effect of artificially lowering Nexant’s consensus
scores.
Third, the record reflects similar errors regarding the
individual evaluator scores for Deloitte’s proposal. For
example, evaluator C originally identified a significant
weakness in Deloitte’s proposal under the management approach
factor for failing to present the awardee’s capacity in South
Asian countries; the evaluator assigned the Deloitte proposal a
score of [deleted] out of 20 available points under that
evaluation factor. AR, exh. 8, Initial Evaluator C Scoresheet
(Deloitte), at 2. This significant weakness was carried forward
to the initial TEC report. AR, exh. 7, Initial TEC Report, at 9.
After discussions, evaluator C continued to note this same
significant weakness in the Deloitte proposal, but for reasons
that are not explained in the record, evaluator C raised
Deloitte’s score under the management factor to [deleted]. AID
concedes that this scoring was in error. See Supp. AR at 10.
Therefore, Deloitte’s score was artificially increased, even
though the significant weakness remained.
Finally, the record shows that the TEC’s consensus evaluations
contained internal inconsistencies that affected Nexant’s
evaluation score, as well as the agency’s source selection
decision. For example, under the management approach evaluation
factor, the initial TEC report identified a weakness based on
Nexant’s failure to adequately describe the roles and
responsibilities of its program staff. AR, exh. 7, Initial TEC
Report, at 34. After conducting discussions with Nexant, the
evaluators expressly found that: “In its response to question
16, the Offeror fully addresses the TEC’s question regarding
staff responsibilities and roles in implementing tasks.” AR, exh.
13, Final TEC Report, at 26. Notwithstanding this express
finding, however, the final TEC report stated elsewhere, in its
summary rationale for the protester’s final score for the
management approach factor, that this remained an unresolved
weakness. Id. at 28. This weakness, along with a second resolved
weakness, is identified in the FMN as a basis for distinguishing
the Nexant proposal from the Deloitte proposal. AR, exh. 15, FMN,
at 29.
In sum, the record reflects a lack of rationality and
consistency on the part of the agency evaluators in assigning
point scores to the proposals, such that the scores relied on by
the SSA cannot be said to be reliable or reflective of the
comparative merits of the proposals. (Nexant,
Inc., B-407708, B-407708.2, Jan 30, 2013) (pdf)
Next, Exelis argues that DOS’s evaluation failed to identify
strengths based on unreasonable evaluations or unequal treatment
as compared to PAE. The protester raises numerous examples,
certain of which were not specifically pursued in its comments
on the agency report, and others of which we conclude lack
merit. Discussed below are areas where we agree that the agency
has not provided a reasonable explanation in the record for the
unequal evaluation of PAE’s and Exelis’ proposals.
For evaluation factor 1, Exelis argues that DOS identified
strengths for PAE’s proposal, but failed to recognize that the
protester had proposed a similar strength. The strength assigned
for the awardee’s proposal was as follows:
PAE has approximately 6,200
personnel supporting O&M services around the world, including:
facilities management, electrical/power distribution, water
treatment, [heating, ventilation & air conditioning] HVAC,
fire suppression, security access
control, carpentry, billeting, construction, warehousing,
vehicle maintenance, janitorial, pest control, landscaping
services, and [information technology] IT network and
telecommunications support. PAE has aligned key partners with
in-place supply chain capabilities and local labor serving all
DoS Iraq locations.
AR, Tab 13, TEP Consensus Report,
at 15. This strength was also cited as one of the discriminators
in favor of award in the tradeoff between PAE’s and Exelis’
proposal in the CO award recommendation. AR, Tab 19, CO Award
Recommendation, at 14.
Exelis argues that its proposal should have merited a similar
strength, as its proposal demonstrated a similar, if not greater
level of experience and capabilities as compared to PAE. In this
regard, the TEP Chair’s evaluation scoresheet noted the
following strength for Exelis’ proposal:
Exelis is an experienced
Operations and Maintenance (O&M) contractor, currently
managing more than 13,000 employees in the region. Exelis uses
a multi-level training approach that appears to [be] an effective
program. Exelis’ proposal supports the BEC OMSS contract with
fully trained, certified, qualified, experience[d] personnel.
AR, Tab 8, Exelis TEP Evaluator
Scoresheets (TEP Chair). Notwithstanding this finding, the
agency did not identify a strength for Exelis’ proposal in the
TEP consensus report, the CO award recommendation, or the SSD.
The record and the agency’s response to the protest do not
explain why a strength was credited to PAE for this aspect of
its proposal but not for Exelis. Instead, the agency argues that
the basis for PAE’s higher rating was unrelated to the unequal
treatment cited by the protester, and that the protester’s
arguments express no more than disagreement with the agency’s
judgment. AR at 22. As discussed above, however, the disputed
strength was cited in the CO award recommendation as a
discriminator between the offerors’ proposals.
For evaluation factor 2, Exelis argues that DOS found a strength
for PAE’s proposal because it demonstrated experience in 30
areas of the SOW. AR, Tab 13, TEP Consensus Report, at 16-17.
The protester states that the agency identified a strength for
Exelis’ proposal based on the same list, but included only 15 of
the 30 areas of the SOW. See id. at 23-24. The protester contends
that its proposal demonstrated experience in the same areas as
PAE, but did not receive equal credit. See Supp. Protest (Sept.
7, 2012) at 6-8, citing AR, Tab 6, Exelis Proposal, vol. I, Tab
2.3; Supp. Protest (Sept. 20, 2012), at 29-32. The protester
also argues that the agency unreasonably failed to assign a
strength or otherwise recognize the value of its proposed
personnel, citing as an example its operations manager, who, the
protester contends, demonstrated qualifications and experience
that exceed the RFP requirements. See Protest (Aug. 2, 2012) at
33, citing AR, Tab 6, Exelis Proposal, vol. I, Tab 2.1, at 1X.
Here again, the agency did not directly respond to the
protester’s arguments.
Because the agency does not meaningfully respond to the
protester’s arguments above, we find no basis to conclude that
the evaluation here was reasonable. Where, as here, the agency
has assessed strengths for the awardee’s proposal--particularly
those that were cited as discriminators between the proposals in
the award recommendation--and has not provided a meaningful
explanation as to why it did not assign similar strengths to the
protester’s proposal, we conclude that the evaluation was
unreasonable. See TriCenturion, Inc.; SafeGuard Services, LLC,
B-406032 et al., Jan. 25, 2012, 2012 CPD ¶ 52 at 17. In sum, we
sustain the protests concerning unequal evaluation to the extent
discussed herein. (Exelis
Systems Corporation, B-407111, B-407111.2, B-407111.3,
B-407111.4, Nov 13, 2012) (pdf)
SAIC argues that [Defense Intelligence Agency] DIA improperly
failed to consider CSC’s proposal of “large numbers” of
uncleared personnel as a risk or weakness because it indicates
that the firm does not understand the ORION program. SAIC
acknowledges that the TOR did not prohibit offerors from
proposing a mix of cleared and uncleared personnel. However,
SAIC asserts that where, as here, the awardee will need access
to a significant level of [DELETED] systems, materials, and
information, it was incumbent upon the agency to consider CSC’s
extensive use of uncleared personnel to perform the work.
SAIC explains that the ORION program is deployed on a [DELETED]
network and is intended for use by cleared personnel. The
[DELETED] systems at issue here all reside at [DELETED]
government facilities and the awardee will be required to
perform work on such systems at such sites routinely. SAIC
states that, while uncleared personnel could participate in
early development activities, only fully cleared staff would be
permitted to participate in integration, test, and deployment
activities on a [DELETED] system. DIA does not contest SAIC’s
characterization of the program’s [DELETED] nature. DIA’s only
response, which appears to agree with SAIC, is that part of the
task order can be conducted outside of a [DELETED] facility
prior to introduction of the systems into the [DELETED]
networks. Id. The record, as discussed below, shows that DIA
believed CSC’s use of “large numbers” of uncleared personnel was
an issue it needed to consider, but it also shows that DIA
failed to meaningfully consider the matter.
In its price proposal, CSC indicated that it intended to use
“large numbers” of uncleared personnel in performance of the
contract. Specifically, its price proposal read as follows:
By using a large numbers of [uncleared]
individuals in our proposed solution, CSC is able to propose
skilled individuals without having to include the salary
premiums normally associated with cleared personnel in our
proposed price. [DELETED]
CSC truly believes that using highly qualified [uncleared]
personnel has enabled us to propose the lowest executable
price without introducing additional risk to the ORION
Analytics effort.
AR, Exh. 7.2, CSC Initial Price
Proposal at 1-2.
CSC’s price proposal does not specify how many or which of its
proposed personnel were to be uncleared, and is otherwise silent
on the matter.
In its report on the price evaluation, the price panel included
the following statement indicating that the SSEB found
acceptable CSC’s use of uncleared personnel:
CSC’s proposed solution is to
utilize a large number of [uncleared] individuals, while
ensuring that critical personnel have [DELETED] clearances.
The use of [uncleared] personnel was briefed as acceptable by
the requiring activity and the SSEB was fully aware that CSC’s
proposal utilized [uncleared] personnel.
AR, Exh. 11.1, Price Panel Report,
at 3.
There is no contemporaneous record of the SSEB’s consideration
of this issue, however, and, even if the SSEB were “fully aware”
of CSC’s approach, as the price panel report suggests, there is
no evidence of the SSEB’s views on the matter.
In responding to the protest, the agency asserts that the SSEB
confirmed that CSC’s mix of cleared and uncleared workers was
“appropriate,” citing a statement made by the CO in response to
the protest. Specifically, the CO explained that upon realizing
that CSC’s price was significantly below the government cost
estimate, the CO:
[r]eviewed the amount and type of
labor CSC had stated would be conducted by personnel without
[DELETED] clearances (which was allowed). The labor for these
individuals is often significantly lower than personnel with
[DELETED] clearances. The CO then provided the SSEB with those
sections of the CSC proposal where uncleared personnel were
used in order to assure the SSEB considered this mix a viable
solution. The SSEB noted that the level of CSC’s use of
uncleared personnel was acceptable.
CO’s Statement at 6.
As discussed below, we find that the CO’s Statement does not
reasonably comport with the underlying record.
First, as noted by SAIC in its comments, the CO could not have
reviewed the “amount and type of labor CSC had stated would be
conducted by noncleared personnel,” because no such information
exists in CSC’s proposal. Second, the CO could not have provided
the SSEB with any sections of CSC’s proposal “where uncleared
personnel were used” because, aside from the price proposal,
which only generally indicates CSC’s utilization of “large
numbers” of uncleared individuals, there are no such sections.
Third, there is no contemporaneous evidence that the SSEB “noted
that the level of CSC’s use of uncleared personnel was
acceptable,” and, more important, the record does not show how
the SSEB could have identified CSC’s level of use of uncleared
personnel or its mix of cleared/uncleared personnel based on the
general statements in CSC’s price proposal. Despite two
opportunities to respond to the protester’s allegations, DIA has
not provided any statement from any SSEB member concerning what
the SSEB knew or concluded concerning CSC’s use of “large
numbers” of uncleared personnel.
Rather, in response to SAIC’s comments, the CO provided a
supplemental statement in which he concedes that CSC’s proposal
did not “specifically dictate the number of personnel or
specific labor categories that will be cleared versus uncleared.”
Supp. CO Statement at 3. The CO explains that he identified
passages in CSC’s proposal as areas where “the use of uncleared
personnel or [DELETED] became apparent,” which “indicated” to
him “a use of uncleared resources,” and that he provided these
passages to the SSEB. Id.
CSC’s technical/management proposal, however, does not contain
any passages that “indicate” CSC’s use of uncleared personnel.
Indeed, the CO also concedes that, “standing alone, these
sections do not convey the use or extent of use of uncleared
personnel.” Id. Thus, even if the CO had given sections of CSC’s
price proposal to the SSEB, and even if the SSEB had found CSC’s
use of uncleared personnel acceptable in principle (an assertion
for which there is no contemporaneous evidence), DIA has not
shown it was even possible for the SSEB to ascertain the extent
to which CSC proposed to use uncleared personnel. As a
consequence, DIA’s claim that it had “adequate information by
which to identify risk, assess risk, and conclude that there was
no risk to successful performance” based on CSC’s proposal of
“large numbers” of uncleared personnel is unsupported.
Accordingly, we conclude that DIA did not reasonably consider
CSC’s proposed use of “large numbers” of uncleared personnel to
perform the solicitation requirements in evaluating CSC’s
proposal, and sustain the protest on this basis. Id. (Science
Applications International Corporation, B-407105,B-407105.2,
Nov 1, 2012) (pdf)
Technical Evaluation
The protester also challenges the agency’s evaluation of its
technical proposal, arguing that many of the weaknesses
identified by the evaluators were based on an incomplete or
incorrect reading of the proposal. As discussed below, we agree
with the protester that a number of the disputed weaknesses
lacked a reasonable basis.
In reviewing protests challenging an agency’s evaluation, our
Office will not reevaluate proposals; rather, we will review the
record to ensure that the agency’s evaluation was reasonable and
consistent with the terms of the solicitation. Unisys Corp.,
B-406326 et al., Apr. 18, 2012, 2012 CPD ¶ 153 at 8.
Among the weaknesses that the protester disputes are the
following:
• Indicate they will provide Tech Writer/Documentation Specialist that can [deleted] and this is not with scope of this task order. (Technical approach)
• Required description of benefits to incumbent staff which was requirement of this section, deferred until another section. No statement of approach to obtain and/or replace qualified staff. (Management plan)
• Statement [deleted] assumes they are PMing projects on this task order. That indicates an erroneous assumption and lack of understanding. They are not responsible for project plans under this TO. (Management plan)
• There is no indication [if] [deleted] will be charges so not clear if government will be responsible. (Management plan)
• Ability to quickly obtain staff appears weak as it relies on [deleted]. (Staffing plan)
• Approach to conflict resolution is included only as an Appendix, not part of technical proposal which was requirement of PWS. (Technical approach)
• Resolution Management process not included in Technical Proposal as instruction indicated but instead included as an Appendix. (Technical approach)
• Conflict management approach is detached from technical proposal. (Management plan)
Technical Panel Summary Report, at 13-15.
With regard to the first weakness, one of the categories of
support staff that the contractor was to provide was Technical
Writer/Documentation Specialist. In describing its technical
approach, the protester stated that it would provide experienced
technical writers to help develop and maintain a variety of
document types [deleted]. Protester’s Technical Proposal at 3.
As noted above, the evaluators identified as a weakness the
protester’s inclusion of [deleted] on the list of documents with
which its technical writers might assist on the grounds that
developing [deleted] plans was not within the scope of the task
order. In this connection, the contracting officer asserts that
“there are numerous pages of tasks listed for the Tech
Writer/Documentation Specialist, and none of them are for
[deleted].” Contracting Officer’s Statement at 5.
The protester directly rebuts the contracting officer’s
assertion by pointing to several places in the request for task
order proposals (RFTOP) where [deleted] is included in the
description of the duties to be performed by a Technical
Writer/Documentation Specialist. For example, the RFTOP
describes the duties of the above position as including
[deleted]. (Emphasis added.) Accordingly, we agree with the
protester that this assessment of a weakness in its proposal
lacks a reasonable basis.
With regard to the second weakness, the protester points out
that contrary to the evaluators’ assertion, the RFTOP did not
require offerors to furnish a description of the benefits to be
provided incumbent staff as part of their management plans;
rather, that information was to be furnished as part of their
staffing plans. See RFTOP at 56 (offerors’ description of their
staffing plans should include “[a] description of any special
benefits to incumbents as recruiting incentive as well as
overall benefits package for all employees”). Along the same
lines, the RFTOP instructed offerors to provide a description of
their approach to obtaining and/or replacing qualified staffing
resources as part of their staffing--as opposed to their
management--plans. To the extent the agency argues that what the
evaluators really meant was that Emergint had not described its
methodology for attracting incumbent staffing (which, per the
RFTOP at 55, was to be included in offerors’ management plans),
the record does not support this position either. The record
reflects that Emergint included a description of its approach to
attracting incumbent staffers in its management plan. See
Protester’s Technical Proposal at 6 (approach to attracting
incumbent staffing includes [deleted].) Thus, we also find this
weakness to be unsupported.
With regard to the third weakness, the agency concedes that the
finding was in error, but argues that the protester was not
prejudiced by the finding. In this regard, the agency contends
that the TEB chairperson, who identified the weakness, increased
the score that she assigned the protester’s proposal under the
management plan factor by 5 points after the consensus meeting
to correct for the erroneous finding. Contracting Officer’s
Statement at 5. This argument is not supported by the underlying
record. While there is some indication in the record that one of
the evaluators increased the score s/he assigned Emergint’s
proposal under the management factor after the consensus meeting
(i.e., the scores originally assigned by the evaluators, when
averaged, do not equal the consensus score), there is nothing in
the contemporaneous record establishing that it was the TEB
chairperson who revised her scoring, or that any change made in
scoring was to correct for this error. Moreover, the weakness
was carried through in the final consensus report that was
reviewed by the contracting officer. Technical Evaluation
Summary Report at 14. In sum, we do not agree with the agency
that the protester suffered no prejudice as a result of the
erroneous finding.
With regard to the fourth weakness, Emergint’s proposal directly
contradicts the agency’s finding that the protester did not
clearly address whether the government would be responsible for
[deleted]. Specifically, Emergint’s proposal described its
organizational structure, including [deleted], in section 2.1 of
its technical proposal and specifically provided that [deleted].
Protester’s Technical Proposal at 5. Similarly, with regard to
the fifth weakness, Emergint’s proposed approach to recruiting
staff did not rely exclusively on [deleted], as the agency’s
finding implies; rather, the protester’s proposal described its
[deleted] in its proposal, and then stated that its [deleted].
Id. at 14. Based on this record, we agree with the protester
that neither weakness was reasonable.
Finally, with regard to the last three weaknesses, the protester
argues that it provided descriptions of its conflict resolution,
issue resolution, and conflict management processes in its
technical proposal, and that it provided more detailed diagrams
of all three processes in an appendix, as permitted by the terms
of the RFP. In response, the agency maintains that while the
solicitation instructed offerors that they could include
supporting documents, including graphics and tabular content, in
appendices--where they would not be counted against the 15-page
limit on technical proposals--the RFP clearly stated that “the
graphs and tables should only be used for supplemental
information” and offerors “should not use graphs and tables as
the majority of the proposal text.” RFTOP at 51. According to
the agency, Emergint’s inclusion of the diagrams, which provided
the majority of its description of the above processes, in an
appendix rather than in the text of its proposal permitted the
protester to exceed the limitation on the length of technical
proposals, which was unfair to other offerors.
We are not persuaded by the agency’s argument that considering
the information in the appendices to the protester’s proposal
would have been unfair. As pointed out by the protester, DB also
included significant information required to be provided as part
of its technical proposal, i.e., a description of its employee
benefits package, in an appendix, and the evaluators did not
identify this as a weakness. Thus, contrary to the agency’s
position, the evaluators acted unfairly when they considered the
content of the appendices to DB’s proposal in their evaluation,
while failing to consider the content of the appendices to
Emergint’s proposal.
Based on our conclusion that some of the evaluators’ technical
findings, which provided the basis for their point scoring of
the protester’s proposal, were unreasonable, and our conclusion
that the evaluators improperly conducted a realism analysis of
the protester’s proposed pricing, which the contracting officer
relied on in making his trade-off determination, we sustain the
protest. (Emergint Technologies,
Inc., B-407006, October 18, 2012) (pdf)
Both protesters allege that the agency, in evaluating proposals,
improperly used an undisclosed government staffing estimate to
determine whether their respective proposed staffing was
adequate. According to the protesters, whenever any of the
proposals included fewer than the number of full-time equivalents
(FTE) used to prepare the undisclosed staffing estimate, the
agency automatically assigned a deficiency, which caused their
proposals to be rejected as unacceptable.
While an agency properly may rely on its own estimates of the
manning levels necessary for satisfactory performance, absolute
reliance on estimates can have the effect of arbitrarily and
unfairly penalizing an innovative or unusually efficient offeror.
KCA Corp., B-255115, Feb. 9, 1994, 94-1 CPD ¶ 94 at 6-7; The
Jonathan Corp.; Metro Machine Corp., B-251698.3, B-251698.4, May
17, 1993, 93-2 CPD ¶ 174 at 11. Thus, it is inappropriate to
determine the acceptability of proposals by the mechanical
application of undisclosed government estimates, since doing so
fails to assess whether a firm’s proposed workforce is
particularly skilled and efficient, or whether, because of a
unique approach, a firm could satisfactorily perform the work
with different staffing than estimated by the agency. KCA Corp.,
supra.
The record here shows that in every instance where the agency
identified a deficiency or weakness in any offeror’s proposed
staffing, it was because the proposed staffing was less than the
level identified by the agency as necessary to perform the
requirement in its undisclosed government estimate, even where
the variation was seemingly de minimis in comparison to the
overall requirement.
(sections
deleted)
In evaluating the
awardee’s proposal under the resources subfactor, the agency
used the same approach as above, although the agency’s criticisms
in this area were couched in terms of proposal weaknesses rather
than deficiencies. For example, the agency’s evaluators found a
weakness in CLS’s proposal under the resources subfactor as
follows: “Per proposal by offeror, Figure ES-5 Page 6, they do
not show enough manpower. They show [deleted] FTE’s for
Maintenance Electrician when 14 FTE’s are needed in Buildings &
Structures (C.13, CLS 31).” OAR, exh. Q, SSEB Report, encl. 2,
Technical Approach Factor Resources Subfactor Report for CLS’s
Proposal, at 1.
Our review of the record shows that, consistent with these
examples, the agency used the same undisclosed staffing estimate
to assess the adequacy of all of the offerors’ proposed
staffing. These staffing figures originate in two documents in
the record provided to our Office: (1) the agency’s independent
government cost and labor estimate, OAR, exh. J; and (2) a
derivative document referred to as the SSEB Manpower Guide. OAR,
exh. K. A comparison of the staffing estimates included in the
SSEB Manpower Guide to the staffing references included in the
agency’s technical evaluation materials shows that, in every
instance, the agency mechanically applied its undisclosed
staffing estimates to the offerors’ proposed staffing to
evaluate and determine the acceptability or unacceptability of
the proposals.
The agency contends that it only used the undisclosed staffing
estimate as a tool to assess the reasonableness of the staffing
proposed by the offerors. See CAR, exh. V, Decl. of SSEB
Chairman (June 15, 2012), at 1. However, the SSEB evaluation
documents quoted above indicate that this estimate was more than
just a tool to assess the reasonableness of the proposed staffing;
instead, it was used as a benchmark of the resources the agency
had concluded were required to perform this effort. Contrary to
the agency’s arguments, there is no evidence in the
contemporaneous record that the agency considered whether any
approach to staffing other than that reflected in its
undisclosed government estimate might be adequate. Instead, the
evaluation documentation reflects that the agency stated what
staffing it “needed” or “required” for each labor area, and that
this staffing number was that reflected in the undisclosed
government estimate.
In its defense, the agency asserts that it advised the
protesters during discussions of the areas where their staffing
was considered inadequate and that these discussions should have
been sufficient to lead the offerors into the areas where the
staffing levels were considered unacceptable. However, where the
agency believed that the undisclosed government estimate
reflected its minimum requirements, as is shown by this record,
it was required to disclose this to the offerors during
discussions rather than leaving the offerors to guess what
staffing would satisfy the government’s requirements. See
OneSource Energy Servs., B-283445, Nov. 19, 1999, 2000 CPD ¶ 109
at 12.
Our review of the record also indicates that the agency
subjected the protesters’ proposed staffing to a stricter level
of scrutiny in its evaluation as compared to its evaluation of
the awardee’s proposal. As noted above, the agency found
deficiencies with the protesters’ proposals where even minor
deviations from the undisclosed staffing estimates were found,
but found weaknesses when similar issues were identified in the
awardee’s proposal.
For example, the agency assigned the Orion proposal a deficiency
for proposing [deleted] plumber/pipefitter FTEs where the agency
found that seven would be needed. OAR, exh. P, SSEB Report,
encl. 2, Technical Approach Factor Resources Subfactor Report
for Orion’s Proposal, at 1. Similarly, the agency assigned the
Chenega proposal a deficiency for offering [deleted] FTEs for
elevator mechanics where the agency found that three would be
needed. CAR, exh. P, SSEB Report, encl. 2, Technical Approach
Factor Resources Subfactor Report for Chenega’s Proposal, at 2.
In contrast, the agency only assigned the awardee’s proposal a
weakness rather than a deficiency when it found that ATL
proposed only [deleted] maintenance electrician FTEs where the
agency thought 14 would be required. CAR, exh. Q, SSEB Report,
encl. 2, Technical Approach Factor Resources Subfactor Report
for ATL’s Proposal, at 1. The evaluation record contains no
explanation regarding why the agency reached apparently
inconsistent conclusions regarding the adequacy of the
protesters’ proposed staffing on the one hand, and the awardee’s
proposed staffing on the other, even though this evaluation
resulted in unacceptable ratings for the protesters’ proposals
and an acceptable rating for the awardee. We sustain the
protests on these bases. (Orion
Technology, Inc.; Chenega Integrated Mission Support, LLC,
B-406769, B-406769.2, B-406769.5, August 22, 2012) (pdf)
The solicitation sought proposals to develop and provide MAST
units, which are intended to allow users to simulate threats
posed by man-portable air defense systems (MANPADS), i.e.,
shoulder-launched surface-to-air antiaircraft missiles. The MAST
units will be used to train aircrews to respond to MANPADS
threats. The solicitation statement of work (SOW) explained that
"[t]he abundance of MANPADS, particularly among terrorist
organizations, has emerged as an increasing threat to both
civilian and military aircraft operating in support of Overseas
Contingency Operations." RFP, SOW, at 1. The SOW identified
"Russian-made MANPADS such as the SA-7, SA‑14, SA-16, and SA-18"
as popular models that pose a threat to aircraft. Id. The
proposed MAST units will be required to integrate an existing
technology used to simulate MANPADS threats, known as the
multiple integrated laser engagement system (MILES), as well as
provide additional simulation capabilities.
(sections
deleted)
DRS challenges the
Army's evaluation of its technical proposal, and the agency's
evaluation of offerors' prices. As discussed below, we conclude
that the Army's evaluation of DRS's proposal had numerous
prejudicial errors under the first two non-price evaluation
factors--technical approach, and path forward for system
performance and schedule. We conclude that the protester's
challenge to the agency's evaluation of offerors' proposed prices
is untimely.
The evaluation of an offeror's proposal is a matter within the
agency's discretion. IPlus, Inc., B‑298020, B-298020.2, June 5,
2006, 2006 CPD para. 90 at 7, 13. A protester's mere
disagreement with the agency's judgment in its determination of
the relative merit of competing proposals does not establish
that the evaluation was unreasonable. VT Griffin Servs., Inc.,
B-299869.2, Nov. 10, 2008, 2008 CPD para. 219 at 4. In reviewing
a protest against an agency's evaluation of proposals, our
Office will not reevaluate proposals but instead will examine
the record to determine whether the agency's judgment was
reasonable and consistent with the stated evaluation criteria
and applicable procurement statutes and regulations. See
Shumaker Trucking & Excavating Contractors, Inc., B-290732,
Sept. 25, 2002, 2002 CPD para. 169 at 3. While we will not
substitute our judgment for that of the agency, we will question
the agency's conclusions where they are inconsistent with the
solicitation criteria, undocumented, or not reasonably based.
Public Communications Servs., Inc., B-400058, B-400058.3, July
18, 2008, 2009 CPD para. 154 at 17.
This decision is based, in part, upon testimony the Army SSEB
Chair and SSA provided during a hearing conducted by our Office
on August 21, 2010. In reviewing an agency's evaluation of
offerors' proposals, we do not limit our consideration to
contemporaneously-documented evidence, but instead consider all
the information provided, including the parties' arguments,
explanations, and any hearing testimony. Navistar Def., LLC; BAE
Sys., Tactical Vehicle Sys. LP, B‑401865 et al., Dec. 14, 2009,
2009 CPD para. 258 at 6. While we generally give little or no
weight to reevaluations and judgments prepared in the heat of
the adversarial process, Boeing Sikorsky Aircraft Support,
B-277263.2, B-277263.3, Sept. 29, 1997, 97-2 CPD para. 91 at 15,
post-protest explanations that provide a detailed rationale for
contemporaneous conclusions, and simply fill in previously
unrecorded details, will generally be considered in our review
of the rationality of selection decisions--so long as those
explanations are credible and consistent with the
contemporaneous record. NWT, Inc.; PharmChem Labs., Inc.,
B-280988, B-280988.2, Dec. 17, 1998, 98‑2 CPD para. 158 at 16.
(section
deleted)
A. Technical Approach Factor Evaluation
The protester raises several challenges to the evaluation of its
proposal as marginal under the technology insertion and open
architecture subfactor, and the target sensitive subfactor of
the technical evaluation factor. As discussed below, we find
that there were errors in five of the weaknesses identified by
the agency in DRS's proposal under these two subfactors.
1. USB ports
DRS argues that
the agency unreasonably assessed a weakness in its proposal
under the technology insertion and open architecture subfactor,
based on the protester's proposed use of USB ports.
In the RFP performance specification, the "Data Transfer
Interface" requirement required offerors to provide an
input/output interface for uploading and downloading data. RFP,
Performance Specification, sect. 3.7.1.2. The relevant provision
is as follows: "Note: Due to Information Assurance requirements
all MAST related USB ports, if part of the MAST design, shall
preclude the use of commercial standard USB physical connections
through keying or some other physical means." Id.
As relevant here, keying is a hardware-based means of
encryption, which is considered among the most advanced forms of
security for USB devices. See Tr. at 196:19-22. DRS's proposal
stated in several places that it would use "keyed External USB
ports" for data transfer. See AR, Tab 7A, DRS Technical
Proposal, at 6-7, 14, 23.
In the revised evaluation report, the agency identified a
weakness in DRS's proposal because "[t]he offeror does not
provide a methodology to satisfy the [information assurance] USB
requirement or an alternate solution of transferring data." AR,
Tab 18, Revised Evaluation Report, at 19. The selection decision
also stated that "no methodology is provided to satisfy the
[information assurance] USB requirement." AR, Tab 22, SSD, at 6.
The agency contends that the plain language of the solicitation
prohibited the use of any USB ports. AR at 63-64. The SSEB Chair
testified that the intent of the provision was to prohibit the
use of USB connections of any kind. Tr. at 197:5-12. In this
regard, the agency argues that the word "preclude" refers to all
USB ports, and the phrase "through keying or some other physical
means" refers to clarifying examples of what kinds of USB ports
are prohibited. AR at 64.
We think that the plain language of the solicitation provision
does not support the agency's interpretation that all USB ports
are prohibited. The first part of the provision states that "all
MAST related USB ports, if part of the MAST design. . ." RFP,
Performance Specification, sect. 3.7.1.2. We think this phrase
plainly indicates that MAST related USB ports are allowed. The
next part of the provision states that the USB ports that are
part of the MAST design "shall preclude the use of commercial
standard USB physical connections through keying or some other
physical means." Id. We think that the second phrase plainly
states that USB ports are permitted, provided that they
"preclude the use of commercial standard physical USB
connections" through the method specified, i.e., "keying or some
other physical means." Id.
Based on the plain language of the solicitation, we think that
the protester's interpretation of the RFP as permitting keyed
USB ports is reasonable, and that the agency's interpretation of
the RFP as barring use of all USB ports is unreasonable. Thus,
we think the agency's assessment of a weakness regarding DRS's
proposed use of USB ports was not reasonable under the technical
evaluation factor.
(section
deleted)
(DRS
ICAS, LLC, B-401852.4; B-401852.5, September 8, 2010)
(pdf)
On the record before us, we conclude that the VA unreasonably
evaluated DCFD's quotation with a "fail" rating for the
geographic coverage factor for lack of a plan to cover the
geographic area, where the awardee's quotation, which was
assigned a "pass" rating under the geographic coverage factor,
did not include any more specific information than DCFD's
quotation.
See Kathryn Huddleston and Assocs., Ltd., B‑289453, Mar. 11,
2002, 2002 CPD para. 57 at 7. While it may be that the agency
believed that Metro West identified more resources than DCFD
that could be utilized to satisfy the geographic coverage
factor, this does not provide a basis for rating DCFD's
quotation as "fail" under this factor. As noted above, in order
to find DCFD to have "failed" under this factor, the agency was
required to conclude that based on DCFD's "qualifications and
experience, extreme doubt exists that the offeror will
successfully perform the required effort." RFQ at 62. On this
record, the agency has provided no basis for a determination
that "extreme doubt exists" that DCFD "could successfully
perform the required effort," particularly given that DCFD was
the incumbent for these services and indicated in its quotation
that it would continue to provide them, as in the past, over the
same required geographic coverage area. Furthermore, we note
that DCFD's quotation responded to this factor in a manner
similar to Metro West's response, which the agency found to be
acceptable. (Douglas County Fire
District #2, B-403228, October 4, 2010) (pdf)
Velos also protests the reasonableness of the contracting
officer's assignment of a performance risk to its proposal under
business experience/history because Velos did not provide a D&B
report as current as Medidata's.
This factor included evaluating "the overall viability of the
Offeror in performing for the term of the contract including
financial viability and organizational stability." RFP sect.
M.1. Item 3 of the agency's FPR request was for "financial
information to show current financial status" to include "your
most recently available D&B report." AR, Tab 12C, Request for
FPRs (Feb. 3, 2009), at 1-2. In response to item 3, Velos
provided its last D&B report dated August 2008; updated
financial information from that previously submitted, including
an updated income statement and balance sheet; and additional
explanations addressing its financial viability. AR, Tab 14B,
Velos FPR (Feb. 3, 2009) at 3-4, attachs.
The contracting officer has not expressed, nor does the record
establish, that the agency had a specific basis to challenge the
actual financial viability of Velos. Indeed, the contracting
officer explained that the only concern was based on the date of
the D&B report. Tr. at 79-83. Velos states that it furnished the
most current financial reports on its company in its FPR and
that the D&B report dated August 2008 provided by Velos was the
most recent report in its possession. In the absence of any
evidence that the contracting officer had any specific concerns
about the financial viability of Velos, it was not reasonable to
assign a performance risk solely on the basis of the date on
Velos's D&B report, particularly since the agency did not
request that Velos have its D&B report updated. Thus, this
discriminator between the proposals also lacks a reasonable
basis. (Velos, Inc., B-400500.8;
B-400500.9, December 14, 2009) (pdf)
Trusted Base complains that DOS misevaluated its proposal under
every evaluation factor. In reviewing protests of alleged
improper evaluations, it is not our role to reevaluate
proposals. Rather, our Office will examine the record to
determine whether the agency’s judgment was reasonable and in
accord with the stated evaluation criteria and applicable
procurement laws and regulations. Worldwide Language Res., Inc.,
B-297210 et al., Nov. 28, 2005, 2005 CPD para. 211 at 3. A
protester’s mere disagreement with the agency’s judgment does
not establish that an evaluation was unreasonable. UNICCO Gov’t
Servs., Inc., B-277658, Nov. 7, 1997, 97-2 CPD para. 134 at 7.
We have reviewed all of Trusted Base’s arguments and find that
none has merit. We address some of Trusted Base’s arguments
below.
Trusted Base asserts that its proposal was impermissibly
downgraded under the understanding of the requirements and
technical approach evaluation factor. The protester complains
that it was penalized for failing to adequately address the
approach/knowledge of office administrative support evaluation
subfactor, even though this subfactor had been deleted from the
evaluation scheme. Protest at 5; Protester’s Comments at 5.
Although DOS agrees that this evaluation subfactor was
deleted--and acknowledges that despite its deletion the
subfactor remained on the previously-prepared evaluation
materials--it explains that its concern was that the protester,
unlike AGTech, did not propose any FTEs for the [deleted]
position, a staffing requirement set forth in the TOR’s “Service
Offering Estimate.” Thus, in the agency’s view, it was not
Trusted Base’s failure to discuss its general approach to, or
knowledge of, office administrative support that resulted in the
assessment of a weakness in Trusted Base’s proposal. Rather, it
was the omission of an important position. The agency notes in
this regard that this omission raised concerns about the firm’s
understanding of the comprehensive TOR staffing requirements.
Trusted Base argues that it was unreasonable for the agency to
assume that the firm’s proposed [deleted] FTEs for the base year
staff did not include the [deleted] function position. See
Protester’s Comments at 5. In fact, Trusted Base points out that
its proposal clearly indicated that it would satisfy this
requirement by using its [deleted] support to ESOC. Id.
While the record shows that Trusted Base’s proposal explained
that its [deleted] support, we do not think it was unreasonable
for the agency to be concerned about the proposal’s failure to
address the requirement for a [deleted]. In addition, this
position was specifically identified as a requirement in the TOR,
and in other areas the agency found that other aspects of
Trusted Base’s proposed staffing indicated a lack of
understanding of the TOR’s comprehensive requirements. To the
extent the protester believes that the information in its
proposal explaining how it would use its [deleted] (a position
not usually associated with [deleted]) should have addressed the
agency’s concerns, the protester is simply disagreeing with the
agency’s judgment; the protester’s disagreement does not make
the agency’s concerns unreasonable. See UNICCO Gov’t Servs.,
Inc., supra.
In any case, Trusted Base’s central arguments focus on the
agency’s judgment that the protester’s proposed staffing for the
base and option years was understated and created doubt as to
the company’s ability to successfully perform the TOR services.
The protester disagrees with this evaluation finding on the
basis that the data in the “Service Offering Estimate” was
simply an estimate of the staffing hours required. As such, the
protester asserts it could propose staffing hours that, in its
business judgment, would be sufficient to perform the required
services. Protester’s Comments at 2-5.
Again, while Trusted Base is correct that the staffing hours and
labor categories in the TOR were estimates, the protester’s
argument misses the point. Potential offerors were instructed to
use this information--which was based on historical workload
data and TOR requirements--in preparing their proposals. Thus,
this information provided a level playing field for reviewing
how offerors would perform--and the relative differences in
their cost to perform--the time-and-materials effort at issue
here. When the protester elected to generate its own estimates
of the future workload, it assumed the risk that its fewer
proposed staffing hours would result in an evaluation finding
that the firm’s staffing plan was understated. Here again, the
protester’s disagreement with the agency’s evaluation judgment
does not render it unreasonable. Coastal Drilling, Inc.,
B-285085.3, July 20, 2000, 2000 CPD para. 130 at 4.
(Trusted Base, LLC, B-401670,
October 27, 2009) (pdf)
First, AINS objects to the agency's finding that its product
failed "to fully meet" critical requirement [deleted], which
requires the vendor to propose [deleted]. RFQ, Att. D
(Requirements Matrix), at 24. The protester responded to this
requirement by noting in its self-assessment that this feature
was targeted for release in version 7.0 of its FOIAXpress
software, which was due out in July 2009. During discussions,
the agency noted that the requirement had to be met by the time
of user acceptance testing, and that AINS's proposed date of
July 2009 appeared to be well beyond the user acceptance testing
date. The agency also noted that the proposed solution would be
in an AINS product (version 7.0) that was not yet in existence.
Accordingly, the agency asked AINS to confirm whether it could
meet the requirement by the time of user acceptance testing and
to explain how it would do so and the state of development of
its software with regard to meeting the requirement.
AINS responded by stating that it would have critical
requirement [deleted] met by the time of user acceptance
testing. The protester explained that it would accomplish this
by enhancing the latest version of its FOIAXpress software to
include the ability to [deleted], and that after this had been
accomplished, a user with the appropriate permission would be
able to [deleted], which could then be [deleted]. The protester
furnished a detailed timeline for completing and testing the
enhancement.
While the contemporaneous record lacks detail as to the basis
for the evaluators' finding that AINS's response failed to
demonstrate full compliance with requirement [deleted], the
agency furnished the following explanation in its report:
although AINS's revised quotation had indicated that, once
enhanced, the protester's software would be able to [deleted],
and that these reports could then be exported into various
formats, "it [did] not address making the underlying [deleted],
which is what Section [deleted] specifically requires." AR at
16. The agency also noted that the protester's response made
clear that its software was in a developmental stage.
The protester argued in response that it had addressed the
requirement for saving the [deleted] in a format that would
allow it to be made available to the public by explaining in its
discussion question response that its upgraded software would
permit the [deleted]. The protester further argued that the
evaluators had treated the vendors unequally by failing to
identify Privasoft's response to the requirement as a weakness,
even though it also failed to address making the [deleted]
available to the public.[8] In its report on the protest, the
agency responded to the protester's allegation of unequal
treatment by conceding that Privasoft had not directly addressed
the requirement pertaining to [deleted]; DOJ maintained that it
reasonably had not questioned Privasoft's compliance, however,
because Privasoft had "elsewhere indicated unequivocally it
could meet the requirement." The agency noted in this connection
that Privasoft had represented in its self-assessment that the
capabilities required by [deleted] were "currently available" in
its software. Supp. AR at 16.
We fail to see a meaningful distinction between the two vendors'
responses to the requirement pertaining to [deleted], neither of
which specifically addressed the capability to make the data
available outside the context of a report.[9] Accordingly, we do
not think that the agency has demonstrated a reasonable basis
for finding that AINS, but not Privasoft, failed to "fully meet"
the above requirement. (AINS,
Inc., B-400760.2; B-400760.3, June 12, 2009) (pdf)
SBSS asserts that the technical evaluation of IST’s proposal was
inconsistent with the cost/price evaluation, in that the
technical proposal was not reevaluated in light of the staffing
adjustments made for purposes of the most probable cost
adjustment. According to the protester, an offeror “could
propose a technical approach which included staffing well in
excess of the Government’s optimal staffing levels (and thus
receive a higher technical rating) and not be penalized” in the
technical evaluation even where the agency assumed a lower
staffing level in evaluating cost. SBSS Comments at 2.
Where the cost and technical evaluation of a proposal reach
contradictory conclusions, the agency generally is required to
reconcile the evaluations. See Serco, Inc., B-298266, Aug. 9,
2006, 2006 CPD para. 120 at 7; Information Ventures, Inc.,
B-297276.2 et al., Mar. 1, 2006, 2006 CPD para. 45 at 6.
Here, there was an ultimate disparity between the technical
proposal staffing and the staffing level used in the cost
evaluation--based on its estimate of required FSA staffing, the
Air Force assumed lower FSA staffing for IST in the cost
evaluation than the level proposed in IST’s technical proposal,
resulting in a downward adjustment in IST’s cost/price. While,
on its face, this appears to present an evaluation
inconsistency, in fact, there is no indication that IST’s
proposal received a higher evaluation rating as a result of the
higher staffing level in its technical proposal; the record
indicates that IST’s proposal in fact received only an
acceptable rating, without any evaluated strengths, under the
technical approach/support subfactor of the mission capability
factor. Since IST received no evaluation benefit from its higher
proposed staffing level--there is no argument that IST’s
staffing should have been found less than acceptable--there is
no basis for us to object to this aspect of the evaluation of
IST’s proposal. (SelectTech
Bering Straits Solutions JV; Croop-LaFrance, Inc., B-400964;
B-400964.2; B-400964.3, April 6, 2009) (pdf)
As an initial
matter, we are aware of no statute or regulation that requires
an agency to create a consensus report in evaluating proposals,
nor is there any requirement that every individual evaluator’s
scoring sheet track the final evaluation report. See Andrulis
Corp., B-281002.2, June 2, 1999, 99-1 CPD para. 105 at 5 (a
consensus score need not be the same score as initially scored
by the individual evaluators). In short, these matters alone
will not lead our Office to conclude that there was an
impropriety in the agency’s evaluation or a violation of
procurement law or regulation. (Smart
Innovative Solutions, B-400323.3, November 19, 2008) (pdf)
As an initial matter, we find the protester’s criticisms of the
SEB’s assignment of adjectival ratings and point scores and its
corresponding documentation of this process misplaced. In
Wackenhut’s view, the number of strengths and weaknesses found
for each offeror “has to be the most important basis for an
adjectival rating and score,” Protester’s Comments at 16, and,
given that the “significant strengths” were the chief
discriminator between the proposals, “if [Wackenhut’s] having
twice as many significant strengths as Coastal is not a
meaningful difference, then nothing in the SEB’s process can
be.” Id. at 18. Wackenhut also maintains that in those instances
where the SEB in fact gave some significant strengths more value
than others, it failed to create a record explaining its
rationale in this regard, thereby, in the protester’s view,
rendering the SEB’s exercise of its discretion “unreviewable”
and “illegal.” Id.
In a hearing conducted by our Office, during which we heard
testimony from the SSA and the SEB Chairperson, the SEB
Chairperson described the evaluation process and explained that
each proposal was evaluated independently against the criteria
set forth in the RFP, with the SEB reaching consensus on its
findings (the various strengths and weaknesses identified for
each proposal). Based upon these findings, the SEB assigned, on
a consensus basis, adjectival ratings, and then point scores. In
assigning ratings and scores, the SEB Chairperson explained
that, with respect to significant strengths, it was the content
of the offerors’ proposals, as reflected in the specific
findings, which was used to rate and score offerors. The SEB
Chairperson stated that the process of assigning ratings and
scores was not merely a “numbers game” driven by the mere number
of significant strengths identified in an offeror’s proposal.
Hearing Transcript (Hr. Tr.) at 82.
We find the agency’s evaluation process unobjectionable. At the
core, Wackenhut seeks a mathematical or mechanical consideration
of the number of significant strengths in determining the
evaluation ratings and attributing points to the offerors’
proposals. Adjectival ratings and point scores, however, are
mere tools in the evaluation and selection process and should
not be mechanically derived or applied. Rather, it is the
agency’s qualitative findings in connection with its evaluation
of proposals--in this case the documented written narratives
underlying and justifying the SEB’s findings of particular
significant strengths--that govern the reasonableness of an
agency’s assessment of offerors’ proposals. MCR Federal, Inc.,
B-280969, Dec. 14, 1998, 99-1 CPD para. 8 at 9.
Here, the offerors’ proposals were evaluated independently
against the RFP’s evaluation criteria and the evaluation ratings
and scores simply served as an expression of the agency’s
exercise of its discretion in making its qualitative findings.
As a consequence, there is nothing per se improper with Coastal
and Wackenhut receiving similar ratings and scores under the
mission suitability factor, and related subfactors,
notwithstanding the fact that Wackenhut’s proposal had more
significant strengths as compared to the proposal submitted by
Coastal. All Star Maintenance, Inc., B-271119, June 17, 1996,
96-1 CPD para. 278 at 4 (holding that agency’s evaluation of two
offers as essentially equal was not unreasonable notwithstanding
the fact that the protester had five strengths while the awardee
had only two strengths). Absent some basis for concluding that
the technical findings underlying the ratings and scores were
flawed, there is no basis for our Office to conclude that the
SEB’s ratings were unreasonable. Moreover, with respect to the
documentation issue, the relevant material is NASA’s underlying
technical findings, which have been amply documented by the SEB,
and it is those findings which are reflected in the SEB’s
qualitative findings and which formed the basis of the
adjectival ratings and point scores. (Wackenhut
Services, Inc., B-400240; B-400240.2, September 10, 2008) (pdf)
PHASE-IN
ManTech challenges NASA's evaluation of offerors' approaches to
phase-in at the start of the contract. In this regard, the RFP,
which included a 30-day phase-in period, provided that the
government would evaluate under the management plan subfactor
(under the mission suitability factor) each offeror’s phase-in
approach
for continuity and a smooth
transition with the incumbent Contractor during the 30-day
phase-in period. The Government will evaluate how clearly the
phase-in plan demonstrates an ability to assume full contract
responsibility on the effective date of the contract. The
Government will evaluate how the phase-in plan specifically
address[es] . . . the proposed . . staffing plan. . . .
RFP sect. M.3. ManTech asserts
that the agency failed to recognize its advantage as the
incumbent contractor.
Analex’s proposal was rated overall excellent under the
management plan subfactor. This rating was based in part on two
evaluated significant strengths, one for proposing an
“extremely well-planned, comprehensive and detailed phase-in
plan that provides for a smooth transition and demonstrates a
superb ability to assume full contract responsibility,” and the
other for a superior approach to using bonuses tied to the
award fees earned by Analex and other bonuses as an incentive to
maximize overall employee morale and performance. SSD at 4;
Integrated Evaluation Panel Revised Final Report at 41-53. In
addition, Analex’s proposal received four strengths under the
management plan subfactor, for (1) proposing a good system (its
online task order management system discussed above) to process
task orders and manage multiple ongoing tasks, (2) a good
approach for mitigating performance risks and for interfacing
with government personnel, (3) a robust fringe benefits package,
which would aid in recruiting and retaining employees, and (4) a
thorough, comprehensive Mission Assurance Plan which
demonstrated Analex’s knowledge of and compliance with GSFC’s
Mission Assurance Guidelines. Analex’s only evaluated weakness
under this subfactor was an inadequate location allowance and
severance pay policy. Integrated Evaluation Panel Revised Final
Report at 41-53.
ManTech’s proposal likewise was rated overall excellent under
the management plan subfactor. ManTech’s rating was based in
part on a single evaluated significant strength for a superior
approach to phase-in, involving no major changeover of
personnel, processes or procedures, which would significantly
reduce the risk of phase-in. In addition, ManTech’s proposal
received six strengths under the management plan subfactor, for
proposing (1) policies and incentives that have a high
probability of retaining employees, maintaining high morale and
increasing productivity in a union environment, (2) a good task
order and property management system, (3) a good approach to
managing workload variability, (4) desirable benefits for
part-time workers, (5) an Integrated Knowledge Environment
portal that demonstrated a good approach to managing contract
work, and (6) a robust total compensation plan. In addition,
ManTech’s proposal received a weakness on the basis that, in
contrast to Analex’s thorough, comprehensive mission assurance
plan, ManTech’s significantly less detailed mission assurance
plan was inadequate, failing to demonstrate compliance with
GSFC’s Mission Assurance Guidelines. SSD at 7; Integrated
Evaluation Panel Revised Final Report at 104-18. Thus, the
record indicates that NASA in fact recognized ManTech’s advantage
as the incumbent, and assigned ManTech’s proposal a significant
strength under the management plan subfactor on account of its
superior approach to phase-in.
NASA specifically recognized that, given ManTech’s policies and
incentives, reflected in a 95 percent employee retention rate
over the past 5 years, ManTech had a high probability of
maintaining its incumbent workforce, thereby significantly
reducing transition risk. Integrated Evaluation Panel Revised
Final Report at 104-06. However, NASA also recognized that
Analex had proposed an “exceptional strategy,” warranting a
significant strength, in support of its proposal to hire
[REDACTED] percent of the incumbent employees. In this regard,
the agency noted that Analex had proposed a very attractive
package for recruiting incumbent employees, including:
[REDACTED]. In addition, NASA concluded that Analex’s proposed
superior approach to bonuses (for which it earned a second
significant strength) would likewise increase the likelihood of
recruiting incumbent employees. In this regard, the agency noted
that Analex had proposed bonuses [REDACTED], in a total amount
of up to approximately [REDACTED] percent of salaries, with the
bonuses commencing when Analex earned an award fee of at least
[REDACTED] percent. (In contrast, in ManTech’s proposed approach
of offering bonuses tied to the earned award fee, ManTech
commenced the bonuses only when ManTech earned an award fee
above [REDACTED] percent, and it only encouraged, but did not
require, participation by its subcontractors.) NASA also
determined that Analex’s robust fringe benefits package would
aid in recruiting employees. Finally, the agency determined that
Analex’s plan for [REDACTED], would greatly improve the
likelihood of a successful transition. Integrated Evaluation
Panel Revised Final Report at 41-45; Agency Supplemental Report,
Mar. 17, 2008, at 4-11; ManTech FPR Mission Suitability Proposal
at 113, 118; Analex FPR Mission Suitability Proposal at C-26.
We conclude that NASA reasonably determined that Analex offered
an exceptional transition approach, with many incentives for
incumbent employee retention, which warranted a significant
strength under the management plan subfactor. Furthermore,
ManTech has not shown that, given Analex’s overall two
significant strengths and four strengths under the management
plan subfactor, it was unreasonable for the agency to rate
Analex’s proposal as excellent in this area. Certainly the fact
that ManTech was the incumbent contractor here did not
automatically entitle it to a higher rating under the management
plan subfactor. See Karrar Sys. Corp., B‑310661, B-310661.2,
Mar. 3, 2008, 2008 CPD para. 51 at 4-5. (ManTech
International Corporation, B-311074, April 15, 2008) (pdf)
The evaluation of proposals is a matter within the discretion of
the contracting agency, and in reviewing protests against
allegedly improper evaluations, it is not our role to reevaluate
proposals. Rather, our Office examines the record to determine
whether the agency’s judgment was reasonable, in accord with the
evaluation factors set forth in the RFP, and whether the agency
treated offerors equally in its evaluation of their respective
proposals and did not disparately evaluate proposals with
respect to the same requirements. Hanford Env’t. Health Found.,
B-292858.2, B-292858.5, Apr. 7, 2004, 2004 CPD para. 164 at 4;
Rockwell Elec. Commerce Corp., B-286201 et al., Dec. 14, 2000,
2001 CPD para. 65 at 5.
As stated above, the contemporaneous record does not evidence
any consideration, or even awareness, by the Army of the stated
“assumptions” in Fluor’s TEP that there would be a period
between TO award and NTP during which it would receive
incremental funding and could perform various tasks, such as
procuring material and deploying equipment. See Fluor TEP,
Scenario Approach, at 3, 33-34. The fact that the agency may
have been unaware of the assumptions on which Fluor’s TEP stated
it was predicated does not change the fact that Fluor’s
assumptions were not consistent with the terms of the scenario
that the other offerors, as well as the agency, treated as
mandatory. It is a fundamental principle of federal procurement
that a contracting agency must treat offerors equally, and the
agency’s acceptance of Fluor’s proposal stating that it was
predicated on the above-mentioned assumptions was improper and
unfair to the other offerors.[9] See Farmland National Beef,
B‑286607; B-286607.2, Jan. 24, 2001, 2001 CPD para. 31 at 8 (a
proposal that fails to conform to one or more of a
solicitation’s material terms is technically unacceptable and
cannot form the basis for an award); Loral Terracom; Marconi
Italiana, B-224908; B‑224908.2, Feb. 18, 1987, 87-1 CPD para.
182 at 9 (agency’s acceptance of a proposal based upon a
methodology different than that set forth in the solicitation
was improper where the agency did not inform all offerors that
the agency’s requirements were not as rigid as indicated in the
solicitation). Moreover, we cannot find the agency’s evaluation
of Fluor’s TEP under the technical evaluation subfactors
reasonable, given that it was based upon a misreading of Fluor’s
proposal with regard to the stated “assumptions.” The record
does not show that this is an instance where an agency noted
that a proposal was taking exception to a solicitation
requirement in some respect, but ultimately determined that the
proposed approach would meet the agency’s needs. Rather, the
contemporaneous record, as well as the agency’s arguments during
the course of these protests, evidence that the Army simply
misread or altogether overlooked the stated “assumptions” in
Fluor’s proposal regarding the period of time and availability
of funding for tasks to be performed between TO award and NTP.
(Contingency Management Group, LLC;
IAP Worldwide Services, Inc., B-309752; B-309752.2;
B-309752.3; B-309752.4; B-309752.5; B-309752.6; B-309752.7,
October 5, 2007) (pdf)
GAP asserts that the agency’s proposal scoring scheme was flawed
in that it essentially “negated” the technical distinctions
among the proposals. In this respect, the record shows that the
agency evaluated proposals by assigning numeric scores between 0
and 5 points for each of the 13 evaluation subfactors and then
multiplying the raw score by the weight assigned to each of the
subfactors to arrive at weighted scores. According to the
protester, because all of the acceptable proposals (that is,
proposals that it describes as likely to have been included in
the competitive range) would in practice be assigned raw numeric
scores of either 3 or 4, the effect was to artificially narrow
the range of possible total scores, such that it would appear
from the numeric scores that all of the proposals were
technically equal. According to the protester, this effectively
left the source selection to be based on low price rather than
on technical considerations which were to have received
paramount consideration under the terms of the RFP. This
argument is without merit. It is well established that ratings,
be they numerical, adjectival, or color, are merely guides for
intelligent decision making in the procurement process. Business
Consulting Assocs., LLC, B-299758.2, Aug. 1, 2007, 2007 CPD
para. 134 at 4. Where the evaluators and source selection
official reasonably consider the underlying bases for the
ratings, including advantages and disadvantages associated with
the specific content of competing proposals, in a manner that is
fair and equitable and consistent with the solicitation, a
protester’s disagreement over the actual adjectival or numeric
ratings assigned essentially is inconsequential in that it does
not affect the reasonableness of the judgments made in the
source selection decision. Id. The record here shows that the
agency performed an evaluation of the proposals consistent with
the RFP evaluation factors and prepared detailed narrative
materials reflecting the evaluators’ findings. AR exhs. D12,
D19. The record also shows that those findings were specifically
considered in detail by the agency’s source selection official
in making the agency’s award decision. Indeed, because the
scoring was so close among the four proposals, the source
selection official expressly queried the evaluators “to
determine if there was a proposal that contained technical
aspects that would clearly set it apart from any of the other
ones.” AR exh. G7, Source Selection Decision, at 8. The
technical panel concluded that no proposal had technical
advantages or discrepancies that would set it apart from the
other proposals and, as a result, the proposals were determined
to be technically equal. GAP does not challenge any of the
agency’s underlying substantive findings with respect to the
relative merits of the competing proposals. The protester does
not, for example, allege that the agency unreasonably failed to
identify strengths that were present in its proposal or
unreasonably identified weaknesses that were not present; nor
does it challenge the agency’s underlying findings with respect
to the other proposals in the competitive range, including the
awardee’s. Moreover, GAP also has not alleged or demonstrated
that it should have been assigned higher numeric scores, or that
the awardee should have been assigned lower numeric scores,
based upon the strengths or weaknesses identified by the agency
in its evaluation. In these circumstances, given the absence of
any substantive challenge to the agency’s detailed narrative
evaluation findings, we simply have no basis to question the
agency’s evaluation results. We therefore deny this aspect of
GAP’s protest. (Gap Solutions,
Inc., B-310564, January 4, 2008) (pdf)
To the extent that Sherrick argues that the selection decision
did not accurately tally the number of strengths and weaknesses
for each proposal, or that the selection decision focuses on
specific discriminators between the proposals instead of their
evaluation ratings, these arguments are unavailing. The
evaluation of proposals and the assignment of adjectival ratings
should generally not be based upon a simple count of strengths
and weaknesses, but upon a qualitative assessment of the
proposals consistent with the evaluation scheme. See Smiths
Detection, Inc., B-298838, B-298838.2, Dec. 22, 2006, 2007 CPD
para. 5 at 5-6. Moreover, it is well established that ratings,
be they numerical, adjectival, or color, are merely guides for
intelligent decision-making in the procurement process. Citywide
Managing Servs. of Port Washington, Inc., B-281287.12,
B-281287.13, Nov. 15, 2000, 2001 CPD para. 6 at 11. Where an
agency reasonably considers the underlying bases for the
ratings, including advantages and disadvantages associated with
the specific content of competing proposals, in a manner that is
fair and equitable and consistent with the terms of the
solicitation, the protesters’ disagreement over the actual
adjectival or color ratings is essentially inconsequential in
that it does not affect the reasonableness of the judgments made
in the source selection decision. See id.; National Steel &
Shipbuilding Co., B-281142, B-281142.2, Jan. 4, 1999, 99-2 CPD
para. 95 at 15. The record here shows that the agency had a
rational basis for its evaluations and source selection
determination. Although our decision does not address every
challenge raised by Sherrick regarding the number of strengths
and weaknesses or adjectival ratings assigned to each offeror’s
proposal, we have reviewed all of the issues raised and find
that none has merit. (Sherrick
Aerospace, B-310359.2, January 10, 2008) (pdf)
Pemco first challenges the agency’s assignment of “satisfactory”
confidence ratings to both Pemco’s and Boeing’s proposals under
the past performance factor. In this regard, Pemco complains
that the agency “unreasonably failed to rate Pemco higher than
Boeing,” and that the agency “was unreasonable in rating both
Boeing and Pemco equally.” Protest at 15, 17.
As our Office has frequently stated, adjectival ratings are only
guides to assist agencies in evaluating proposals; information
regarding particular strengths and weaknesses of proposals is
the type of information that source selection officials should
consider, in addition to ratings and point scores, to enable
them to determine whether and to what extent meaningful
differences exist between proposals. See, e.g., TPL, Inc.,
B-297136.10, B-297136.11, June 29, 2006, 2006 CPD para. 104 at 17. In
this regard, proposals with the same adjectival ratings are not
necessarily of equal quality, and an agency may properly
consider specific advantages that make one proposal higher
quality than another. See Pueblo Envtl. Solution, LLC, B-291487,
B‑291487.2, Dec. 16, 2002, 2003 CPD para. 14 at 10; Oceaneering
Int’l, Inc., B-287325, June 5, 2001, 2001 CPD para. 95 at 13.
Here, the record clearly establishes that the agency did not
evaluate Boeing’s and Pemco’s proposals as being equal with
regard to past performance; rather, Pemco’s proposal was found
superior to Boeing’s. Specifically, although both proposals
received overall ratings of “satisfactory” confidence, the
agency’s contemporaneous evaluation documents show that, after
considering multiple strengths and weaknesses in both offerors’
past performance record, Pemco’s overall past performance rating
was considered to be at “the high end of satisfactory” and
Boeing’s overall rating was considered to be [deleted]. PAR at
159; Email from SSA (Riechers) to SSAC Advisors, Aug. 5, 2007.
Similarly, the SSA unambiguously stated in his source selection
decision: “I consider Pemco to have a better past performance
record than Boeing.” SSDD at 22. Accordingly, there is no merit
to Pemco’s protest that the two proposals were evaluated
“equally” with regard to past performance; to the contrary, the
agency considered this aspect of Pemco’s proposal to be better
than Boeing’s. (Pemco Aeroplex, Inc.,
B-310372, December 27, 2007) (pdf)
Bio-Rad
finally argues that the agency’s evaluation of IDEXX’s proposal
as “satisfactory” under the small disadvantaged business
participation factor was unreasonable. The proposal preparation
instructions stated with regard to the small disadvantaged
business participation factor that offerors “[s]hall identify in
this section of the technical proposal specific large business,
small business, HUBZone small business, small disadvantaged
business, [WOSB], [VOSB], and HBCU/MI and Service Disabled
Veteran-owned small business subcontract participation in
contract performance.” RFP at 56. The RFP provided here that
“[t]he proposed participation of subcontractors will be
evaluated based on the information submitted in the
Subcontractor Information Sheet,” a copy of which was provided
as an attachment to the solicitation. RFP at 57. The section of
IDEXX’s technical proposal addressing the small disadvantaged
business participation factor consists of one sentence, which
states that “[t]here is no subcontract participation in the
contract performance.”[15] AR, Tab 13, IDEXX’s Technical
Proposal, at 18. The Subcontractor Information Sheet attached to
IDEXX’s proposal contained a single notation of “‑NA‑.” Id. at
19. The agency evaluated IDEXX’s proposal under the small
disadvantaged business participation factor as follows:
[IDEXX] does not propose any subcontracting under this
contract and did not submit a subcontracting plan. Therefore,
the firm is technically neutral in this area and will receive
a “Satisfactory” rating.
AR, Tab 6, TEB Consensus Memorandum, at 11; Tab 7, BCM, at 23.
The Source Selection Memorandum provides some explanation as to
the reasoning behind this aspect of the agency’s evaluation,
stating:
The firm did not intend to subcontract any portion of the
subject contract to large or small business entities if it
received award. Therefore, in accordance with the [FAR] and
the [SSP] I found the firm satisfactory in the area of small
business subcontracting.
AR, Tab 8, Source Selection Memorandum, at 4.
In responding to Bio-Rad’s protest, the agency does not defend
the propriety of its evaluation of IDEXX’s proposal under the
small disadvantaged business participation factor as
“satisfactory.” With that said, we note that contrary to the
apparent belief of the SSA, there is no provision in the FAR
that either requires or allows an agency, under the
circumstances here, to evaluate a proposal as “neutral” and thus
“satisfactory” under a factor designed to evaluate the
participation of small disadvantaged businesses and the other
types of small businesses, where the proposal expressly provides
that there will be no participation by such businesses if the
offeror is awarded the contract. Additionally, and contrary to
the view of the SSA, there is nothing in the SSP or RFP
providing or otherwise indicating that the evaluation of IDEXX’s
proposal as “satisfactory” under the circumstances here was
appropriate, particularly given that firm’s large business
status.[16] Accordingly, the agency’s evaluation of IDEXX’s
proposal under the small disadvantaged business participation
factor as “satisfactory” was not reasonably based. (Bio-Rad
Laboratories, Inc., B-297553, February 15, 2006) (pdf)
The record shows that, in calculating the number of hours
necessary for SCLINs 0002AAA and 0003AA, the IGCE standard
apparently had included the staffing required for two other
SCLINs (SCLINs 0004AAA and 0005AAA, the program management
functional area SCLINs for Hungary and Macedonia). Amended KTR
Program Management Document, May 27, 2005, at 1. The record
further shows that there were two variables that were considered
by the agency in making these changes, the number of hours per
year for FTEs and the number of FTEs overall for each functional
area. We find nothing objectionable in the agency’s
recalculation of the number of labor hours per FTE; because the
IGCE standard had used what was subsequently considered an
unreasonably high number of hours per FTE, the agency reasonably
averaged the offerors’ number of hours per FTE to arrive at a
revised figure for use in the LSA standard. Id. However, the
agency’s explanation for arriving at the number of FTEs required
to perform these functional areas is problematic. The record
shows that the agency used KBR’s proposed staffing approach as
the basis for arriving at the distribution of FTEs among the
various SCLINS. Specifically, the cognizant agency personnel
determined that, because KBR’s staffing approach was similar to
the IGCE staffing approach, they would use KBR’s proposed
staffing to arrive at the appropriate number of FTEs for the
various program management functional area SCLINS; they
concluded that this would provide a more realistic FTE and hour
count for the government standard. Amended KTR Program
Management Document, May 27, 2005, at 1. Both the agency and the
awardee maintain that there was nothing improper in the agency’s
actions because, by using KBR’s proposed staffing approach as
the basis for establishing the agency’s evaluation standard, the
agency actually reduced the FTEs under SCLINs 0002AAA and 0003AA
more than if ITT’s proposed staffing had been used. The agency’s
actions were improper. The agency’s use of an offeror’s staffing
approach to arrive at the government’s standard was not a
reasonable substitute for an agency-generated estimate based on
historical workload data and the PWS requirements. While the
agency’s estimate and the staffing approach proposed in the KBR
proposal could be similar, in the absence of an analysis from
the agency reconciling the two, there was no reasonable basis
for the agency to use the KBR staffing approach as its benchmark
for evaluation purposes. Using the KBR staffing approach without
such an analysis had the effect of possibly giving KBR an
improper competitive advantage since, obviously, KBR’s staffing
automatically would be deemed acceptable, while ITT’s would not.
We note as well that the agency performed this recalculation of
the government standard after receiving FPRs. Thus, in addition
to the standard being problematic in and of itself, ITT was not
afforded the opportunity to align its proposed staffing with the
revised LSA standard for program management. (ITT
Federal Services International Corporation, B-296783;
B-296783.3, October 11, 2005) (pdf)
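To make the two adjustments concrete, the sketch below restates
them with hypothetical figures; none of the numbers or SCLIN-level
values come from the record, which redacts the actual staffing:

    # Sketch of the two adjustments described in ITT (all numbers
    # hypothetical). Averaging the offerors' hours per FTE was found
    # reasonable; adopting KBR's FTE counts as the government
    # standard, without a reconciling analysis, was not.

    # 1. Revised hours per FTE: the mean of the offerors' figures,
    #    replacing the IGCE's unreasonably high number.
    proposed_hours_per_fte = {"KBR": 1900, "ITT": 2000}
    revised_hours = (sum(proposed_hours_per_fte.values())
                     / len(proposed_hours_per_fte))
    # revised_hours == 1950.0

    # 2. FTE distribution: the benchmark was copied from KBR's
    #    proposal, so KBR matches the standard by construction while
    #    any other offeror shows a variance.
    standard_ftes = {"0002AAA": 40, "0003AA": 25}   # taken from KBR
    itt_ftes      = {"0002AAA": 46, "0003AA": 28}
    variance = {sclin: itt_ftes[sclin] - standard_ftes[sclin]
                for sclin in standard_ftes}
    # variance == {"0002AAA": 6, "0003AA": 3}: ITT appears overstaffed
    # only because the yardstick is the awardee's own staffing.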
KBR
asserts with regard to the contingency plan factor that the
agency overlooked a number of strengths, and assessed a number
of weaknesses that were unreasonable. The agency responded in
cursory fashion that KBR’s proposal was “more general” and
provided “limited details,” and contended that the protester’s
arguments reflect only “mere disagreement” with the agency. AR
at 17; SAR at 9. Our review of the record shows more than “mere
disagreement.” In its protest filing, KBR provided citations to
its proposal showing where in KBR’s proposal the firm addressed
each of the areas identified in the RFP for this factor, and
showing where it addressed items the agency stated were
weaknesses. KBR also made detailed arguments why its proposal
was deserving of strengths or significant strengths, including
pointing out where other offerors received similar strengths.
See, e.g., KBR’s Protest at 14-23. The agency has failed to
respond to these specific allegations. Moreover, the TEB report
contains only one short statement that “KBR’s only support for
their [contingency] response plan was to reference past projects
and provided a general overview of their plans for contingency
response,” and identifies the following unelaborated weakness in
KBR’s proposal under the contingency plan factor:
Very limited detail provided on existing resources and how
these resources will be coordinated. Contractor mainly used
past contracts to show responses. Very limited detail provided
for subcontractor agreements, pre-positioned materials,
equipment or people. AR, Tab 21, TEB Report, at 74-75.
Our review of the record does not show that KBR’s contingency
plan is more limited or general than the awardees’ plans.
Although URS proposed a plan that was approximately twice as
long as the other offerors’, KBR, Fluor, and ACC all provided
contingency plans of similar length. From our review, it is not
apparent that the level of detail in any of the plans is
significantly different. Notwithstanding KBR’s specific protest
contentions, the agency has failed to provide any specific
examples of where the plans are dissimilar, and none are
apparent from the record. All offerors appear to have addressed
the requirements of the RFP and all offerors cited past projects
as examples to demonstrate how their plan would successfully be
implemented. Although KBR perhaps cited a few more examples than
the other offerors, the agency has not explained why this is a
weakness and not a strength, given that the examples appear to
demonstrate that the proposed contingency plan has been
implemented successfully. Given that the record does not, on its
face, support the agency’s ratings, and the agency has otherwise
failed to explain the difference in ratings, we sustain the
protest on this ground. (Kellogg
Brown & Root Services, Inc., B-298694; B-298694.2;
B-298694.3, November 16, 2006) (pdf)
Regarding Honeywell's proposed staffing, the SEB identified nine
"strengths" in Honeywell's proposal and one "weakness" in
evaluating Honeywell's proposal under the technical performance
subfactor paragraph relating to the staffing plans. Among the
strengths noted by the SEB were "Honeywell has provided
[DELETED]." AR, Tab 14, Honeywell's Proposal's Strengths and
Weaknesses, at 7. Additionally, the record reflects that, based
upon its review of two exhibits in Honeywell's proposal
pertaining to its proposed staffing plan (detailing Honeywell's
staffing by, among other things, PWS section, skill category,
number of personnel, and minimum qualification standards
proposed), the SEB found that one exhibit "demonstrates that
Honeywell has thought through the critical skill mix required to
operate the [centers]," and that the other exhibit "further
demonstrates a strong knowledge of the qualification standards
required by each permanent member of the [contract work team]."
Id. As mentioned previously, Honeywell's proposal received an
overall rating of "excellent" under the technical performance
subfactor. AR, Tab 14, SSA Initial Briefing, at 40. However,
during its cost realism evaluation of Honeywell's proposal, the
same SEB found that the proposal had "limited supporting
rationale for being 45 FTE's below the Government estimate from
day one." AR, Tab 43, Honeywell Cost Realism Rationale, at 1.
The contemporaneous documentation of the agency's cost realism
evaluation of Honeywell's proposal with regard to proposed
staffing is two pages long, with only one page actually
addressing the agency's "rationale" for the adjustments made. On
this page, the agency identified each of the specific skill
categories where adjustments were found to be necessary in the
proposed staffing, typically concluding in each case, with
little further elaboration, that Honeywell had not provided
"sufficient rationale" for its proposed staffing. This
evaluation resulted in a significant increase in the evaluated
most probable cost of Honeywell's proposal, as well as a
significant deduction from the proposal's mission suitability
score, such that Honeywell's proposal score fell from the
"excellent" range to the "very good" range. The contemporaneous
record, however, does not discuss or attempt to reconcile the
SEB's conclusions in the cost evaluation, where Honeywell's
staffing plan was deemed "inadequate" by 43 FTEs, with the
evaluation of the staffing plan under the technical performance
subfactor, where Honeywell's plan was found to contain
"strengths" and an "appropriate" number of proposed personnel.
Thus, the contemporaneous record provides no basis to conclude
that the SEB's evaluation of Honeywell's proposed staffing,
whether under the technical performance subfactor or with regard
to cost realism, was reasonable. (Honeywell
Technology Solutions, Inc.; Wyle Laboratories, Inc.,
B-292354; B-292388, September 2, 2003) (pdf)
It is apparent from the foregoing that the TEB here re-rated
CMR's performance under the various projects using different
questions and a different rating scale than those used by the
evaluators who conducted the reference checks. Given the
difference between the rating scales used and the questions
asked, there is no way that we can be certain that the ratings
on the forms completed by the TEB here accurately reflect the
opinions expressed by the individuals contacted. For example,
there is no evidence that any of the references contacted
expressed an opinion as to whether CMR's performance conformed
to its quality control plan, yet the TEB rated CMR's performance
as having met the standard (but not exceeded it) with regard to
all four projects. Further, there is no evidence that it was
reasonable for the TEB to translate the references' original
ratings (of excellent/good/satisfactory/etc.) into virtually
across-the-board ratings of "met the standard." In this regard,
the agency has furnished us with neither the original rating
sheets, nor with an explanation as to how it translated the
scores. For example, it has not been explained (and it is
unclear to us) whether ratings of "met the standard" reflect
original ratings of satisfactory (as would seem appropriate) or
original ratings of good (pursuant to the reasoning that "met
the standard" was the second highest rating under the original
scale, whereas "good" was the second highest under the scale
used by the TEB). Without such information, we have no basis
upon which to conclude that the agency's evaluation of the
protester's past performance was reasonable. Further, as noted
above, the record reveals that Pizzarotti's past performance was
initially rated using a different, more liberal rating scale
than either of those described above. According to the statement
of the individual who conducted Pizzarotti's reference checks
pursuant to an earlier solicitation, he explained to the
references whom he contacted that they were to rate Pizzarotti's
performance in accordance with the following scale: met or
exceeded the standard, close to the standard, departure from the
standard with increased risk, departure from the standard with
significant risk, unacceptable departure from the standard. This
scale differs from those used to rate CMR's past performance in
that it collapses the top two (in the case of the exceeded the
standard/met the standard/etc. scale) or possibly three ratings
(in the case of the excellent/good/satisfactory/marginal/poor
scale) into the single top category of met or exceeded the
standard. It would obviously have been easier for Pizzarotti to
attain top ratings under this scale than for CMR to attain top
ratings using either of the others. In our view, the use of
these different rating scales calls into question the
even-handedness of the evaluation of CMR's and Pizzarotti's past
performance. (Cooperativa Muratori
Riuniti, B-294980; B-294980.2, January 21, 2005) (pdf)
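The translation problem the decision describes can be stated
compactly. Both mappings below are assumptions offered only for
illustration; the record contains neither the original rating
sheets nor the agency's conversion rule:

    # Two defensible ways to translate the references' adjectival
    # ratings onto the TEB's standard-based scale (only the top tiers
    # are shown; the decision truncates both scales with "etc.").

    # "By meaning": satisfactory corresponds to meeting the standard,
    # so good and excellent both exceed it.
    by_meaning = {"excellent": "exceeded the standard",
                  "good": "exceeded the standard",
                  "satisfactory": "met the standard"}

    # "By position": the second-highest rating on one scale maps to
    # the second-highest on the other.
    by_position = {"excellent": "exceeded the standard",
                   "good": "met the standard"}

    sample = ["good", "good", "excellent"]
    print([by_meaning[r] for r in sample])
    # ['exceeded the standard', 'exceeded the standard',
    #  'exceeded the standard']
    print([by_position[r] for r in sample])
    # ['met the standard', 'met the standard', 'exceeded the standard']
    # The two translations disagree on every "good" rating, which is
    # precisely the ambiguity the missing rating sheets leave open.
    # Pizzarotti's references, by contrast, rated against a scale whose
    # single top tier ("met or exceeded the standard") collapses two or
    # three tiers of the other scales, making top marks easier to earn.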
Since an offeror is not bound to provide goods or perform
services that it has not offered to provide or perform, agencies
may properly evaluate proposals only on the basis of the
information and representations contained therein. See, e.g.,
Microcosm, Inc., B-277326 et al., Sept. 30, 1997, 97-2 CPD ¶ 133
at 6-7. Here, as shown above, the agency credited ATK’s proposal
with meeting the required level of counter‑countermeasure
performance--even though ATK’s proposal did not address the
[deleted] necessary to achieve the specified level of
performance. Rather, the agency’s determination that ATK’s
proposal “include[d] a full description of design feature[s]
that enable stated performance” was based on an agency advisor’s
perception of the knowledge and capabilities of an ATK
subcontractor. Since ATK’s proposal did not address the
[deleted], it is not clear that all of the costs associated with
ATK’s proposed [deleted] were included in its proposal, nor that
ATK, or its subcontractor, is bound to provide the [deleted] the
agency assumed to be part of ATK’s proposal. Based on this
record, the agency’s evaluation of ATK’s proposal lacks a
reasonable basis. (Lockheed
Martin Corporation, B-293679; B-293679.2; B-293679.3, May
27, 2004) (pdf)
In sum, because the agency essentially ignored Locus’s revised
proposal and failed to consider price in the evaluation, we
sustain the protest. (Locus
Technology, Inc., B-293012, January 16, 2004) (pdf)
The task
of gathering data regarding the offerors' past performance was
assigned to a TEP coordinator . . . . He gathered past
performance evaluations from the offerors' contract references
and recorded the information onto worksheets . . . . The
worksheet identifies six separate aspects of past performance
to be evaluated . . . and utilizes six separate adjectival
ratings -- Outstanding, Excellent, Good, Fair, Poor and N/A [not
applicable]. The [TEP coordinator] recorded the references'
evaluations . . . [and] then transcribed this data to a summary
matrix, recording a score of “5” for each “Outstanding” rating,
a score of “4” for each “Excellent” rating, etc. . . . The
ratings provided by two of Shaw's references . . . for “Overall
Performance” were incorrectly recorded on the summary matrix as
[deleted] rather than [deleted]. . . . The [TEP coordinator]
then utilized these incorrect scores to arrive at an average
score for Shaw's overall performance.
In addition to the evaluation
errors discussed above--correction of which would decrease the
margin of Shaw's total point score superiority from [deleted] to
[deleted]--ManTech's supplemental protest identifies other
apparent errors, including another apparent transcription error
relating to Shaw's rating under the most heavily weighted
evaluation factor, personnel. Specifically, ManTech points out
that the final evaluation record incorporates an evaluator's
rating of [deleted] for Shaw's proposed analytical
chemist/manager, but the contemporaneous evaluator worksheets
reflect a score of either [deleted] or [deleted]. Accordingly,
ManTech maintains that Shaw's evaluated advantage with regard to
proposed personnel was improperly inflated due to transcription
errors in much the same way the agency acknowledges occurred
with regard to Shaw's past performance rating. As discussed
above, it is clear that, even considering only the past
performance evaluation errors, which the agency acknowledges,
the initial cost/technical tradeoff, on which the source
selection decision was based, reflected consideration of a
technical point score superiority for Shaw that was more than
double the margin that existed following correction of the
agency errors. Correction of these errors, alone, eliminates
Shaw's evaluated advantage under the second most important
evaluation factor--past performance--which, in part, formed the
basis for the agency's conclusion that Shaw's technical
superiority outweighed ManTech's [deleted] cost advantage.
Further, as discussed above, the contemporaneous evaluation
record regarding the most important evaluation
factor--personnel--suggests that Shaw's slightly higher point
score under that factor may have been, similarly, inaccurately
recorded. In short, it appears that the evaluation record
supporting Shaw's evaluated technical superiority, including the
total point score ratings on which the initial source selection
decision appears to significantly rely, was materially flawed. (ManTech
Environmental Research Services Corporation, B-292602;
B-292602.2, October 21, 2003) (pdf)
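The transcription-and-averaging process the decision describes
reduces to simple arithmetic, sketched below with hypothetical
ratings (the actual figures are [deleted] in the public record):

    # Adjectival ratings are converted to a five-point scale and
    # averaged; "N/A" ratings are skipped. All ratings here are
    # hypothetical.
    SCALE = {"Outstanding": 5, "Excellent": 4, "Good": 3,
             "Fair": 2, "Poor": 1}

    def average_score(ratings):
        """Average the numeric equivalents of the adjectival ratings."""
        scores = [SCALE[r] for r in ratings if r != "N/A"]
        return sum(scores) / len(scores)

    # Suppose two references rated "Overall Performance" as Good (3),
    # but the summary matrix recorded them as Excellent (4):
    actual      = ["Outstanding", "Good", "Good", "Excellent"]
    transcribed = ["Outstanding", "Excellent", "Excellent", "Excellent"]

    print(average_score(actual))       # 3.75
    print(average_score(transcribed))  # 4.25, inflated by the two slips

As the decision reflects, errors of this kind propagate directly
into the point-score margins on which the cost/technical tradeoff
relied.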
As discussed, the record reflects numerous instances where the
agency either unreasonably reached conclusions relating to the
EDS offer in light of the language included in the proposal (for
example, in the areas of single sign-on access and the provision
of Oracle database software), or apparently failed to thoroughly
evaluate the proposals critically, and in a manner that would
have revealed inconsistencies or deficiencies in what was being
offered (for example, in the remote access, IMAC and small
business areas). We find the agency’s conclusions troubling in
light of its evaluation of the LMIS proposal. For example, one
of the discriminators relied on by the agency to make award to
EDS was its conclusion that the LMIS proposal did not actually
provide mainframe and distributed systems operating system or
database software. AR, exh. 30, at v, 36; AR, exh. 32, at 2. The
agency reached this finding based on the wording of the software
support core function narrative of the LMIS PWS, concluding that
the proposal was “carefully worded to exclude the actual
provision of software.” AR, exh. 30, at v. LMIS vigorously
contests the agency’s finding in this regard, maintaining that
other sections of its PWS, as well as the list of products
contained in its CWBS, made clear that the firm had in fact
offered the software in question. We observe that, in reading
the two proposals, the agency seems to have applied a double
standard. On the one hand, when reading
the EDS proposal, the agency tended to be expansive, resolving
doubt in favor of EDS (which, as we have found, led it to reach
conclusions not warranted by the actual language of the firm’s
offer). On the other hand, when reading the LMIS proposal, the
agency applied a more exacting standard, requiring an
affirmative representation within the four corners of each
section of the PWS before it was prepared to conclude that one
or another requirement was being met. We need not resolve which
standard should have been applied in the agency’s evaluation
but, to the extent that HUD essentially applied a more exacting
standard in reviewing one proposal than it did in reviewing the
other proposal, this was improper. (Lockheed
Martin Information Systems, B-292836; B-292836.2;
B-292836.3; B-292836.4, December 18, 2003) (pdf)
To argue that the agency's failure to meaningfully evaluate a
material part of Orenda's proposal is excusable on the basis
that the plan's “process itself is negotiable” and will be
considered and negotiated after award ignores the contracting by
negotiation process set forth in part 15 of the FAR. That
process generally provides that in negotiated acquisitions, an
agency is to evaluate proposals in accordance with the
evaluation factors set forth in the solicitation, document its
evaluation of proposals, if appropriate, negotiate with offerors
through the conduct of discussions and allow offerors to revise
their proposals, and select a proposal for award based upon the
selection criteria set forth in the solicitation. FAR §§
15.304-15.308. The negotiated acquisition process as described
in the FAR and set forth in this RFP does not contemplate the
selection of an inadequately evaluated proposal for award with
the intent to negotiate the process by which the offeror will
accomplish the agency's requirements at some later point in
time. See Global Assocs. Ltd., B-271693, B‑271693.2, Aug. 2,
1996, 96-2 CPD ¶ 100 at 5 (protest sustained where the awardee's
proposal was at best unclear as to its compliance with a
material requirement of the solicitation, and the agency's
post-award communications with the awardee to clarify the
awardee's proposal in this regard constituted improper post-best
and final offer discussions). Accordingly, we sustain
Sabreliner's protest on the basis that the agency's evaluation
of Orenda's proposal as acceptable under the transition
subfactor to the technical evaluation factor was not reasonably
based. (Sabreliner Corporation, B-290515; B-290515.2; B-290515.3,
August 21, 2002) (txt
version)
Failure of individual evaluators to
comment upon proposals under each evaluation factor in
evaluation worksheets does not render evaluation flawed--there
is no general requirement for such all-inclusive
comments--particularly where consensus evaluation sheet
contained references to proposal advantages and disadvantages,
as well as consensus scores. (North
State Resources, Inc., B-282140, June 7, 1999)
Specifically, the protester
complains that two original TEB members assigned its proposal
three points, the maximum possible score, while, without
explanation from the original or reconvened TEB, two others
assigned it zero points. This argument is without merit.
First, disparate scoring among evaluators by itself does not
suffice to establish an improper evaluation. Unisys Corp.,
B-232634, Jan. 25, 1989, 89-1 CPD ¶ 75 at 6. (General
Security Services Corporation, B-280388; B-280388.2,
September 25, 1998)