Non-Numerical RFP Evaluations: Making a Good Business Decision
The written offers in response to your Request for Proposal (RFP) have been evaluated and a winner has been selected. Using a traditional numerical rating system, the evaluation spreadsheet shows that, out of a possible 100 points, the first-place vendor received 95 points, the second-place vendor 94.5 points, the third-place vendor 93 points, and so on.
You have a winner, but you do not have a clear winner. How do you explain to the second-place offeror that they lost by one-half of a point? Where exactly did they lose that one-half of a point? Realistically, there is no reasonable explanation for such an insignificant difference in score.
Many years ago, when I worked as a purchasing agent in private industry, I was tasked with selecting a coffee supply company to provide each of our departments with packaged soup and coffee on a twice-weekly basis. We solicited offers from five different suppliers and evaluated them all on several different criteria. Price, of course, was an important consideration. Along with price, however, we also requested product samples and had several of our employees give their opinions about how the coffee tasted. We checked customer references. We compared delivery schedules among the suppliers to determine which one best met our needs. And, finally, we interviewed the delivery person to assess their background, ability, and desire to provide the level of customer service we expected.
Once our individual evaluations were complete, the review team met, discussed the strengths and weaknesses of each supplier, and decided which vendor should receive the contract. Essentially we made a business decision as to which vendor we felt would give us the best “bang for the buck.” Nowhere in this process did we use a numerical rating system. The end result was that we hired a vendor who consistently provided an excellent product at a reasonable price and in a timely manner.
But this example is from private industry, which, unlike public procurement offices, has the ability to use a non-numerical evaluation process. Right? Not necessarily. With regard to the evaluation of proposals received in response to an RFP, the Colorado Procurement Code and Rules state that “Numerical rating systems may be used.” In essence, this rule says that the use of numerical rating systems is optional and is not a requirement under Colorado law.
In the summer of 2002, the State of Colorado issued an RFP to select a vendor (or vendors) for its procurement, fleet, and travel card programs. Within the RFP document, the state described the evaluation process to be used:
“Multiple evaluation committees, through a phased evaluation process, will evaluate the merits of proposals received in accordance with the evaluation factors stated in this RFP and formulate a recommendation. While a numerical rating system may be used to assist the evaluation committee in selecting the competitive range (if necessary) and making an award decision, the award decision ultimately is a business decision that will reflect an integrated assessment of the relative merits of the proposals using the factors and their relative weights disclosed in the RFP.”
In this particular RFP, a numerical rating scheme was utilized, but merely as a tool to identify the strengths and weaknesses of each written offer. The final score for each offeror did not necessarily result in a final determination as to which vendor(s) would be recommended for award.
Phase I
In Phase I of the evaluation, separate evaluation committees for each of the three card programs reviewed and scored proposals submitted for their respective programs. For each card program, general factors and sub-factors were assigned relative numerical weights. Each factor's weight was then multiplied by a “Standards Score” from 1 to 5, with 5 being the best possible score. Standards Scores were based on an adjectival description for each general factor.
For example, a score of 5 (excellent) was reserved for a proposal that met all requirements and exceeded most mandatory requirements in tangible, clearly advantageous ways, required no clarifications or revisions to comply with the state's requirements, and carried very little, if any, risk of unacceptable or late performance. A Standards Score of 1 (poor) signaled an approach that did not substantially comply with the state's requirements in many respects, an offeror with a very limited understanding of the scope of work, and a high risk of unacceptable or late performance.
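To make the arithmetic concrete, here is a minimal sketch of how such a weighted scoring scheme can be computed. The factor names, weights, and scores below are hypothetical illustrations only; they are not the actual values from the Colorado RFP.

```python
# Hypothetical weighted scoring sketch; the factor names, weights, and
# scores are illustrative and not the actual values from the RFP.

# Relative weights for the general factors (here they sum to 100).
weights = {
    "technical_approach": 40,
    "price": 30,
    "past_performance": 20,
    "implementation_plan": 10,
}

# Standards Scores (1 = poor ... 5 = excellent) assigned by an evaluator.
standards_scores = {
    "technical_approach": 4,
    "price": 5,
    "past_performance": 3,
    "implementation_plan": 4,
}

# Multiply each factor's weight by its Standards Score and sum the
# products to get the proposal's total weighted score.
total = sum(weights[f] * standards_scores[f] for f in weights)

# The maximum possible score is every factor rated 5 (excellent).
maximum = 5 * sum(weights.values())

print(f"Weighted score: {total} out of {maximum}")  # prints 410 out of 500
```

As the process below makes clear, a total like this was only an input to the committees' discussions, used to surface strengths and weaknesses, not to dictate the award by itself.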
The Phase I committees then discussed scores and modified individual ratings as appropriate. For the purpose of conducting discussions with offerors, each proposal response was classified into one of three categories: “unacceptable” (a proposal which was so deficient that it would require extensive amounts of state involvement to change the approach and achieve successful completion), “potentially acceptable” (although deficient in several areas, clarifications and/or revisions could possibly make the offer acceptable), or “acceptable” (the proposal clearly met the state’s requirements with few or no clarifications and/or revisions needed).
Each Phase I committee then composed an executive summary which described the strengths and weaknesses of each offer. Numerical ratings were used in this phase of the evaluation as a mechanism to help identify the merits as well as deficiencies of each proposal response. The executive summaries also designated which offers each Phase I team recommended to be included in the “competitive range.” In this case, the competitive range consisted of those proposals that were classified as either potentially acceptable or acceptable.
Phase II
Upon completion of the initial Phase I review, one key individual from each Phase I team migrated to a single Phase II team. In addition, three new team members became a part of the Phase II evaluation committee. The Phase II team reviewed the findings and recommendations from each Phase I committee and decided which of the offerors should be in the competitive range and subsequently invited to give oral presentations.
Both Phase I and II teams attended the oral presentations. Upon completion of the oral presentations, Phase I committee members were allowed to adjust their original scores as appropriate and advised the Phase II team of these adjustments. At the same time, a Phase III committee consisting of three individuals, all at the executive level, also attended the oral presentations.
Phase III
Ultimately, this Phase III team reviewed the findings and recommendations of the Phase I and II committees and then decided which of the vendors should be sent a request to submit a Best and Final Offer (BAFO). BAFOs were then analyzed by the Phase I, II, and III teams. The Phase I and II committees submitted their comments, along with revised scores and recommendations, to the Phase III team for its consideration. When all was said and done, it was the Phase III team that made the final award recommendations. A selection memorandum was written which described, in general, the evaluation process used, the strengths and weaknesses of each of the finalists, and, finally, the discriminating factors that led to the final award recommendations.
This was a highly complex and rigorous source selection process. Along the way, the state utilized various subject matter experts to assess each vendor’s information technology capabilities as well as to evaluate some very complex financial issues (rebates and incentives). All in all, approximately thirty individuals throughout the State of Colorado were somehow involved in either evaluating proposals or providing technical advice.
This particular RFP evaluation was painstaking due to the complex nature of the procurement and the potential dollar volume involved (an estimated one billion dollars over a possible eight-year term). Many, if not most, non-numerical evaluations need not be as exacting as this one.
By not restricting itself solely to a numerical rating system, the state was able to make a business judgment as to which vendors provided the most advantageous offers (price and other factors considered). Although numerical ratings proved to be useful in assisting with these determinations, the state did not base its source selection exclusively on the final score for each of the vendors.
It should be noted that several of the offerors chose to inspect the procurement file after the award of this RFP. What they saw in their review of the file was an evaluation process that demanded a thorough and equitable analysis of each proposal response. As a result, although some vendors were understandably disappointed by not having received an award, the state received no formal protests against the award recommendations.
So, instead of finding yourself backed into a corner trying to explain to an unsuccessful offeror why they lost an award by one-half of a point, consider the benefits and advantages of using a non-numerical evaluation system. When used properly, it’s a fair process, it’s a reasonable process, and it’s a completely justifiable and defensible means of selecting the vendor who will give your agency the best “bang for the buck.”