
Chapter 3 Federal Government Source Selection

There are only two ways to acquire new business from the federal government: through a sole source award or through competition. Sole source awards are relatively rare. They occur only when time constraints exist or when only limited sources are available to the government. Sole source awards also run counter to the government’s desire to get the best possible deal. Therefore, the vast majority of government business is acquired through competition.

New business is competitively awarded through either a sealed bidding process or negotiated procurement. Sealed bids are used when the product or services to be procured are very well defined. This process allows the government to shop for the best price. The government solicits bids from qualified suppliers through an invitation for bid (IFB) or a request for quote (RFQ) and then awards a contract to the lowest qualified bidder.

Sometimes the government uses a two-step sealed bidding process. Offerors first provide the government with a technical proposal to verify they are qualified to perform the contract. All qualified bidders then submit their prices, and the award goes to the lowest bidder. IFBs and RFQs follow the same uniform contract format as requests for proposal (RFPs, described in Chapter 2), although the content of specific sections varies depending on the nature of the procurement.

Source selection for a sealed bid acquisition is fairly simple: The low bid wins. If a two-step procedure is used, the government first evaluates the technical proposals using a binary—go/no go—scoring system. Then the government makes an award to the lowest-priced qualified bidder.

Source selection procedures become far more complicated for competitive, negotiated acquisitions. Under this process, the government solicits proposals through an RFP and then evaluates the proposals to select the winner. Understanding the process the government uses to evaluate your proposal and arrive at a selection decision is critical to winning new business.

I do not know anyone who would enter a high-stakes poker game without first knowing the rules. Likewise, I cannot imagine a professional golfer competing in a tournament without first reviewing the course. Yet every day companies spend hundreds of thousands of dollars competing for government business without understanding the rules of the game. Sometimes they get lucky, just like gamblers, or they have a great game and win the tournament without any knowledge of the course. Over the long haul, however, this lack of knowledge proves to be very expensive, and it yields competitive advantage to someone else.

If you want to win new business from the government consistently, you must understand how your proposal will be evaluated and how the government selects winners. Otherwise, you must rely upon luck or having a better day than the rest of the players.

THE FEDERAL ACQUISITION REGULATION

The Federal Acquisition Regulation (FAR) is the primary source of procurement regulations used by all federal agencies to acquire products and services. The FAR system became effective April 1, 1984. It contains all the provisions and clauses used in government contracting, including those that govern source selection. For example, commercial acquisitions are governed by FAR Part 12; simplified acquisitions, by FAR Part 13; sealed bids, by FAR Part 14; and negotiated acquisitions, by FAR Part 15. Negotiated acquisitions require bidders to submit a technical and cost proposal in response to a government RFP.

Some major procuring agencies (e.g., Department of Defense, NASA, Department of Energy) implement the FAR with their own regulations or supplements. For example, DoD procurements are governed by the Defense FAR Supplement (DFARS). Air Force procurements are further implemented by Air Force FAR Supplements (e.g., AFFARS Part 5315). Separate commands, such as the Air Force Materiel Command, can supplement these regulations even further (e.g., AFMCFARS).

Depending on the particular procuring agency, up to four levels of regulations and supplements can be involved. However, there are two important points to keep in mind. First, the basic FAR may not be violated because it governs all federal procurements. Second, you must be familiar with the specific source selection policy and process used by the agency to which you are bidding. The good news is that all agencies use a common source selection process. They vary only in the specific details used to implement that process.

THE FEDERAL GOVERNMENT ACQUISITION PROCESS

The acquisition process starts as soon as a federal agency becomes aware that a need exists and efforts begin to fulfill that need. Acquisition planning is normally initiated well in advance of the fiscal year in which contract award or order placement is anticipated. The amount of time devoted to planning varies enormously depending on the scope, complexity, and urgency of the need. Planning to acquire a new product is generally far more complex than planning to acquire services, especially if the new product requires new or innovative technology.

An acquisition team, typically consisting of technical, legal, and contracting personnel, builds an initial acquisition plan, often drawing from information from previous plans. They define the need in terms of capability or performance, costs, applicable conditions (e.g., compatibility with existing systems), risk, quantity, quality, and delivery requirements, as well as any other information that will aid the acquisition process.

Multiple key activities take place during acquisition planning. They include conducting market research, developing an acquisition strategy and methodology, establishing a source selection organization, preparing an RFP, and preparing a source selection plan.

Acquisition agencies differ in the specifics of how they perform acquisition planning. Gaining competitive advantage requires that you understand the acquisition planning process of your government customers and that you actively participate in that process. Participation includes helping customers define requirements and acquisition strategy. It also involves collecting information essential to developing an effective bid strategy (see Chapter 8) and creating with the customer a favorable image of your organization and its ability to successfully meet that customer’s need.

Market Research

Once the need is sufficiently defined, the government conducts market research. The extent and type of research depend on such factors as urgency, estimated dollar value, complexity, and past experience. The focus is on finding viable sources capable of providing products or services to fulfill the need. Sometimes the government also uses information collected by means of market research to better define the requirement, identify required technologies, or refine initial budget estimates.

Typical market research techniques include:

Contacting knowledgeable persons in government and industry regarding market capabilities to meet requirements.

Reviewing the results of recent market research to meet similar requirements.

Querying the government-wide database of contracts and other procurement instruments available at www.contractdirectory.gov, as well as other government and commercial databases that provide information relevant to agency acquisitions.

Obtaining source lists of similar items from other contracting activities or agencies, trade associations, or other sources.

Publishing formal announcements on the Federal Business Opportunities website (www.fbo.gov). These can take the form of a Sources Sought announcement or a request for information. In other cases the government may publish a similar request in trade journals or business publications.

Conducting interchange meetings or holding pre-solicitation conferences to query potential bidders early in the acquisition process. These meetings include Industry Days, which all prospective bidders are invited to attend; RFP conferences following the release of a draft or final RFP; and one-on-one sessions, at which members of each organization meet separately with the government acquisition team.

It is extremely important to participate in market research activities. Doing so enables you to gain early insight into program requirements and might offer the potential to influence those requirements in a way that will be favorable to your organization.

Acquisition Strategy and Methodology

A key strategy decision during acquisition planning involves selecting the type of contract to be issued and the role cost will play in the selection process. Cost may play a dominant selection role in cases where the requirement is clearly defined and risk of unsuccessful contract performance is minimal. Alternatively, in cases where requirements are less well defined, more development work is required, or higher performance risk exists, cost or price might play a less dominant role. Here, technical or past performance considerations might play a primary role in source selection. The government may choose from a combination of models that enable it to select the most advantageous offer with respect to the relative importance of cost or price and non-cost factors.

The government uses three different source selection models for negotiated procurements: trade-off or best value model, lowest price technically acceptable model, and performance-price trade-off model.

Trade-Off or Best Value Model

In the trade-off model, the government performs an integrated assessment of technical performance, past contract performance, risk, other non-cost factors, and proposed price to arrive at a best-value solution. Best value is also referred to as a trade-off model because the government performs a trade-off between better performance, risk, and price to determine the combination that is deemed most advantageous to the government. This model gives the government the latitude of selecting other than the low-price bidder. It is used when the government believes measurably different solutions will be proposed by different bidders and where the government is willing to pay for the benefits represented by better performance or lower risk.

Lowest Price Technically Acceptable Model

This model is used for non-complex, routine products or services or whenever the government does not expect significant differences between bidders or is unwilling to pay a premium for those differences. In the lowest price technically acceptable model, the government evaluates the technical proposal as acceptable or unacceptable. The government then makes an award to the bidder with an acceptable technical proposal and the lowest price. The lowest price technically acceptable model is used when best value is expected to result by selecting the technically acceptable proposal with the lowest evaluated price. This model does not consider past contract performance.
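For readers who like to see the mechanics spelled out, the short Python sketch below reduces this model to its essentials: screen out technically unacceptable offers, then pick the lowest price. The bidders, prices, and ratings are invented for illustration; no agency selects winners by running a script.

```python
# Minimal sketch of a lowest price technically acceptable (LPTA) selection.
# The bidders, prices, and acceptability flags are hypothetical examples.

bids = [
    {"bidder": "A", "price": 4_200_000, "technically_acceptable": True},
    {"bidder": "B", "price": 3_900_000, "technically_acceptable": False},
    {"bidder": "C", "price": 4_050_000, "technically_acceptable": True},
]

# Step 1: screen out proposals that fail the acceptable/unacceptable technical review.
acceptable = [b for b in bids if b["technically_acceptable"]]

# Step 2: among acceptable proposals, the lowest evaluated price wins.
winner = min(acceptable, key=lambda b: b["price"])
print(winner["bidder"])  # -> "C": lowest price among technically acceptable offers
```

Note that bidder B, although lowest in price, never reaches the price comparison because its technical proposal was judged unacceptable.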

Performance-Price Trade-off Model

The performance-price trade-off (PPT) model is considered a variant of the trade-off model. It permits a trade-off between price and past performance and is applicable to the following types of procurements:

Replenishment spares

Non-complex operational contracting acquisitions

Some types of construction contracting

Non-developmental, non-complex service or supplies

Service contracts with only pass/fail technical requirements

Low-technical-complexity, “build to print” contracts.

Under this model the technical proposal is scored as acceptable or unacceptable. For bidders that submit an acceptable proposal, the government then performs a trade-off between past performance and price to arrive at the decision that is deemed most advantageous to the government. This model also gives the government the latitude to award to a higher-priced bidder if the government believes the past performance of that bidder is sufficient to justify the higher price. In actual practice, however, it appears the government is more likely to award to the lowest-price bidder with an acceptable technical solution and acceptable past performance.
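The following sketch, again with invented data, illustrates the practical form of the PPT decision just described: screen for technical acceptability, check past performance, and let price drive the award among the remaining offers.

```python
# Sketch of a performance-price trade-off (PPT) in its common practical form.
# Bidders, prices, and ratings are invented for illustration only.

bids = [
    {"bidder": "A", "price": 2_400_000, "tech_acceptable": True,  "past_perf_acceptable": True},
    {"bidder": "B", "price": 2_250_000, "tech_acceptable": True,  "past_perf_acceptable": False},
    {"bidder": "C", "price": 2_300_000, "tech_acceptable": False, "past_perf_acceptable": True},
]

# Step 1: pass/fail technical screen.
eligible = [b for b in bids if b["tech_acceptable"]]

# Step 2: in a true trade-off the government may pay more for stronger past
# performance; in practice it often takes the lowest-priced offer whose past
# performance is also acceptable.
candidates = [b for b in eligible if b["past_perf_acceptable"]]
winner = min(candidates, key=lambda b: b["price"])
print(winner["bidder"])  # -> "A" with these invented ratings and prices
```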

The PPT model has been around since 2005. It saw increased usage in 2009 and 2010 and might become the dominant acquisition model going forward. Several factors make the PPT model appealing to government acquisition agencies. First, it simplifies proposal evaluation by scoring the technical proposal as acceptable or unacceptable rather than using the more complex scoring systems currently in place. Second, it simplifies selecting a winner by making price the primary discriminator. Third, it reduces the likelihood that losing bidders will protest and the probability that such protests will succeed. Finally, PPT may reflect growing sentiment that the government is paying too much for routine products and services, yet it still retains past performance history as an evaluation criterion.

Development of RFP Evaluation Factors, Subfactors, and Criteria

During acquisition planning the government eventually translates its need into an RFP document. Different aspects of the need are described in the various RFP sections (as described in Chapter 2). Additionally, the government establishes the factors that will be used to evaluate proposals in response to the RFP. The factors are divided into cost and non-cost categories. Cost or price is always an evaluation factor. Non-cost factors include the technical area and past performance. Sometimes proposal risk or delivery schedule is identified as a non-cost factor. Other less common factors include areas like small business subcontracting, adherence to RFP terms and conditions, and compliance with special RFP requirements—for example, complying with U.S. arms control treaties and policies.

Technical performance evaluation factors are typically further divided into subfactors. Subfactors are supposed to emphasize areas that will enable the government to discriminate among bidders or where the government expects measurable differences in proposed solutions. The following illustrates the breakout of evaluation factors and subfactors:

Factor 1: Performance Capabilities (Technical)

Subfactor 1: Specification Compliance

Subfactor 2: Mission Assurance

Subfactor 3: Logistics Support

Subfactor 4: Systems Engineering and Test

Subfactor 5: Program Management Plan

Factor 2: Schedule

Factor 3: Past Performance

Factor 4: Cost/Price

Each technical subfactor has a set of corresponding evaluation criteria. These criteria specify the general basis on which the subfactor will be evaluated. For example:

Subfactor 5: Program Management Plan

Evaluation Criteria: The proposed program management plan will be evaluated for its adequacy to correlate and organize contractor resources and any subcontractor efforts to fulfill the requirements of the statement of work and ensure timely, high-quality, cost-effective performance of work activities.

Each evaluation criterion has one or more associated evaluation standards. Standards establish the minimum level of compliance required to fulfill RFP requirements. They provide an objective basis for evaluating individual proposal responses. Evaluation standards can be either quantitative or qualitative. For example:

Description: This standard applies to the terminal control processor.

Standard: The standard is met when the terminal control processor is a handheld computer provisioned with 1 GB random access memory and a solid state hard drive with a minimum of 64 GB capacity.

Qualitative standards are more subjective. They require the evaluator to determine by degree the extent to which the standard is met. For example:

Description: This standard addresses the offeror’s proposed method of managing and controlling integrated logistic support. This is to include the logistic support program of any subcontractor and their respective management structure(s) and the interrelationship with the offeror’s management structure.

Standard: The standard is met when the proposal adequately identifies the management structure of the fundamental logistic areas to include Logistic Support Analysis, Technical Publications, Training, and Contractor Logistic Support. The type of network proposed between these areas should ensure that all necessary/pertinent information is communicated between the logistic areas to ensure a smooth-running support system.

Government evaluators use evaluation standards as the objective basis for scoring technical proposals. Even though the standards are mostly subjective themselves, they nonetheless are used to consistently score each bidder’s technical proposal. Each technical proposal is evaluated at the evaluation standard level. Evaluators read your proposal, compare it to the evaluation standard, and give it a score. Scores for individual standards are combined to derive a score for each evaluation subfactor.

Figure 3-1 shows the hierarchical relationship between factors, subfactors, evaluation criteria, and standards. Section M of the RFP lists evaluation factors and subfactors and identifies their relative weight in determining the winning bid. Section M also describes the evaluation criteria, but it very rarely provides evaluation standards. The standards are contained in the government’s source selection plan, which is not available to offerors.

Figure 3-1. The Hierarchical Relationship between Factors, Subfactors, Evaluation Criteria, and Evaluation Standards

Source Selection Organization

Source selection is performed by a formal source selection organization. The size and specific composition of this organization vary depending on the size, importance, and complexity of the procurement. Figure 3-2 shows a representative source selection organization for a major procurement.

Figure 3-2. Representative Source Selection Organization for a Major Acquisition

The source selection authority (SSA) makes the ultimate source selection decision. The SSA is supported by a source selection advisory council (SSAC) and a source selection evaluation board (SSEB). Led by a chairperson, the SSEB is divided into major areas that correspond to the factors used to evaluate proposals. At a minimum, the SSEB includes separate teams to evaluate technical, past performance, and cost factors. Typically, the technical evaluation team is further subdivided into teams that correspond to the technical evaluation subfactors. The example in Figure 3-2 shows an SSEB divided into separate teams for three technical subfactors—logistics, technical, and management.

The source selection organization for a non-major procurement or streamlined acquisition is the same as that shown in Figure 3-2, except there is no separate advisory council and the evaluation board is referred to as a source selection evaluation team (see Figure 3-3).

Figure 3-3. Representative Source Selection Organization for a Non-Major Acquisition

The role of the advisory council is performed either by the SSA or by a smaller team appointed and often chaired by the SSA. Otherwise, the basic roles and responsibilities of source selection evaluation team members remain the same. Individual evaluation panels or teams are still responsible for evaluating individual proposals. For non-major acquisitions, the procuring contracting officer (PCO) may serve as the source selection authority.

Each portion of the source selection organization has well-defined roles and responsibilities, as described below.

Source Selection Authority (SSA)

The SSA is responsible for the acquisition. He or she makes the final selection decision and ensures the selection process is conducted properly. Additional SSA responsibilities include:

Approving the acquisition plan or source selection plan used to guide source selection

Appointing the chairperson and members of the advisory council

Approving competitive range determinations and elimination of offerors from the competitive range

Authorizing release of the RFP and approving execution of the contract.

Source Selection Advisory Council (SSAC)

The advisory council consists of senior government personnel who advise the SSA on how to conduct the source selection. They also perform a comparative analysis of the results of the evaluation performed by the SSEB. They report their results to the SSA and may make a selection recommendation.

Additional roles of the SSAC include:

Developing the evaluation criteria contained in Section M and assigning relative weights to these criteria

Appointing the chairperson and members of the SSEB

Reviewing and weighing the findings of the SSEB

Approving the RFP.

For non-major procurements, the role of the advisory council is performed by the SSA or a small team appointed by the SSA.

Procuring or Principal Contracting Officer (PCO)

The PCO oversees source selection to ensure that the process complies with applicable acquisition regulations. He or she serves as a staff advisor to the SSA, SSAC, and SSEB and may chair the SSAC. The PCO is also responsible for contract terms and conditions, conducts negotiations with offerors, and serves as a single point of contact for bidders once the RFP is released. In addition, the PCO:

Ensures that the evaluation criteria in the source selection plan are properly reflected in the RFP

Makes competitive range recommendations

Decides whether to conduct discussions and how they should be conducted

Leads the contract team, which conducts discussions with bidders.

Source Selection Evaluation Board or Team (SSEB)

Led by a chairperson, the SSEB is responsible for evaluating proposals. Separate teams evaluate technical, cost/contract, and past performance factors. Evaluators evaluate each proposal against objective standards and do not compare proposals against one another. Typically, a team leader responsible for overseeing a particular portion of the evaluation leads each evaluation team. The evaluation board prepares an evaluation report for each bidder, which is provided to the advisory council. The SSEB chairperson oversees day-to-day evaluation processes and coordinates the evaluation between the SSEB and the SSAC, as well as between separate evaluation teams.

Source Selection Plan

As the RFP is being developed, the government also builds a source selection plan. This plan defines how the source selection will be conducted. Generally, it defines the need; states the acquisition goals and methodology; identifies the source selection organization (SSO); lists RFP requirements, evaluation factors, subfactors, criteria, and standards; identifies the relative importance of all cost and non-cost factors in selecting a winner; and lists the acquisition/source selection milestones.

The source selection plan is not available to bidders; it is an internal and confidential government planning document. The final RFP is not released until the SSA approves the plan.

OVERVIEW OF THE SOURCE SELECTION PROCESS

Once the final RFP is approved, it is released to industry. Each interested bidder prepares a proposal in response to the RFP and submits it to the government at the prescribed time and place. Competitive negotiation (FAR Part 15), formally called “source selection,” begins at this point.

After proposals are received, they are checked to see if they meet submittal requirements. For page-limited proposals, the government counts the pages. If the proposal exceeds the limit, pages over the limit are removed from the back of the proposal and returned to the bidder. Typically, source selection is conducted in a secure location. In today’s world, that means a secure local area network. No one outside the source selection team is allowed access to the proposals. However, many agencies currently use contractors to help with source selection.

The evaluation team is instructed on how source selection will be conducted. This includes security, schedules, work hours, rules of conduct, evaluation procedures, proposal scoring systems, evaluation criteria, and how to document source selection results. The proposal is divided among the various evaluation teams. For example, the technical team sees only the technical proposal. Typically they do not get to see cost data or past performance information. However, data may be shared between teams to facilitate the evaluation process based on the specific rules established by each acquisition agency. For example, the technical team might have access to cost information, such as labor hours, if that information is required to verify or support the technical proposal. Sometimes agencies request a version of the cost proposal without actual cost information so they can share it with the technical evaluation team.

The teams then begin evaluating proposals. Normally, only one proposal is evaluated at a time. Again, this is intended to preclude comparing proposals from different bidders. The order of evaluation is determined randomly. At least two people typically evaluate each subfactor, although the number may be much larger. On large proposals, and depending on the availability of evaluators, teams evaluate only the section of the proposal that deals with the factor or subfactor they are evaluating. On smaller proposals, they might evaluate larger sections of the proposal or the entire technical proposal for their area.

Proposals are evaluated at the lowest level of evaluation criteria. Results of the technical, past performance, and cost evaluation are integrated and passed to the SSAC or its equivalent. The SSAC compares the different offers and establishes a competitive range. Potentially successful proposals are identified and included in the “competitive range” (short list) based on price and non-cost factors. Bidders outside the competitive range are eliminated from further competition and are notified in writing. Unless an award is made without discussions, oral or written discussions are conducted with the offerors in the competitive range to clarify their bids and to eliminate proposal deficiencies. Each of those offerors is then given an opportunity to submit a final proposal revision that updates its proposal. Final proposal revisions are evaluated, and a comparative evaluation is conducted based on the updated proposals. A contract award is made to the bidder whose proposal is judged to be most advantageous to the government based on the stated evaluation criteria and the acquisition strategy. Unsuccessful bidders are then notified promptly in writing, and debriefings are held with offerors that request them.

The source selection process has traditionally been dominated by DoD procurements. The following discussion is based on this model. Some terminology differences exist between agencies and in the specific application of evaluation procedures. This is especially true for NASA and Department of Energy procurements. Nevertheless, the overall process is common across agencies.

Separate evaluations are conducted for the technical, past performance, and cost portions of the proposal.

TECHNICAL EVALUATION

Evaluators read their assigned proposal sections to determine what is being offered. Then they compare what they have found with the corresponding evaluation standards. Evaluation standards serve several purposes. First, they provide an objective and uniform basis for evaluating all proposals. That is, each proposal is scored against a set of objective standards rather than against other proposals. Therefore, every proposal is evaluated against the same criteria. Second, standards provide the basis for evaluating proposal responses. That is, evaluators compare your proposal response to the evaluation standard and assign it a rating or score based on the type of scoring system being used.

Scoring Systems

As previously stated, evaluators read your proposal and determine the extent to which it meets, exceeds, or fails to meet the standard. Procuring agencies use a variety of scoring systems to evaluate technical proposals. The Air Force uses a scoring system based on color codes. Some DoD agencies use a rating system, and yet other federal agencies, such as NASA, use a point scoring system. Despite these variations, all accomplish the same objective: to determine the extent to which the standard is met.

Figure 3-4 illustrates the Air Force color scoring system. It uses four colors—blue, green, yellow, and red—to denote cases where requirements are exceeded, met, not met but correctable, and unacceptable, respectively.

Figure 3-4. Air Force Evaluation Scoring System

Other color rating systems add a fifth category to distinguish between proposals that merely exceed requirements and those that significantly exceed requirements. Point scoring systems provide yet a finer distinction between how well requirements are met. The relationship between the different systems is shown in Figure 3-5.

Figure 3-5. Different Scoring Systems and the Relationship between Them

Each system has pros and cons. Generally, finer-grained scoring systems result in more variability among individual evaluators. In some cases, evaluators use a simple “plus, check, and minus” system to denote proposals that exceed, meet, or do not meet requirements. They use this system to evaluate proposals at the lowest level and then combine the ratings at the subfactor level using a finer-grained system.

Regardless of the system being used, evaluators score your proposal at the lowest level of evaluation criteria. Scores for individual evaluation criteria are then combined to determine a subfactor score. Sometimes subfactor scores are then combined to determine a factor score; in other instances the scores remain at the subfactor level.

Technical Strength, Weakness, Uncertainty, and Deficiency

Technical evaluators also identify proposal strengths, weaknesses, uncertainties, and deficiencies. A technical strength is a significant, outstanding, or exceptional aspect of a proposed approach that has merit and exceeds specified performance or capability requirements in a way beneficial to the government. A strength can also be any aspect of a proposal that appreciably decreases the risk of unsuccessful contract performance or that represents a benefit to the government.

A technical weakness is any aspect of a proposed approach that increases the risk of unsuccessful contract performance. A weakness can also be a case in which the proposal failed to completely satisfy the evaluation standard.

An uncertainty is any aspect of the proposal for which the intent of the proposed approach is unclear because there might be more than one way to interpret the offer, or because inconsistencies indicate there might be an error, omission, or mistake in the proposal. Examples include a mistake in calculation or measurement or contradictory statements in the proposal. An uncertainty can also be assigned whenever a portion of the standard is not fully addressed.

A deficiency is a material failure of a proposal to meet a government requirement or a combination of significant weaknesses in a proposal that increase the risk of unsuccessful contract performance to an unacceptable level. Examples of deficiencies include proposal statements that a requirement cannot or will not be met, a proposed approach that clearly does not meet a requirement, and omission of information required to assess compliance with the requirement. Deficiencies that arise from missing information are most often due to placing proposal information where evaluators cannot find it or simply failing to provide the required information.

Technical strengths and weaknesses are assigned in addition to the technical score. Your proposal might receive a Satisfactory technical score because it meets the requirements of the evaluation standard. If the evaluator believes aspects of your proposed approach mitigate potential risks, or offer a benefit to the government, that proposal section may be assigned a technical strength or multiple strengths. Alternatively, if the evaluator believes aspects of your proposed approach increase risk or the potential for unsuccessful performance, that proposal section may be assigned a technical weakness or multiple weaknesses. Note: The assignment of strengths and weaknesses is not necessarily related to whether you meet or exceed the requirement of the evaluation standard. Instead, it is based on how you propose to meet the requirement. These are judgment calls by evaluators based on their “comfort level” with how you have proposed to meet a requirement. This judgment is based solely on the information in your proposal.

Proposal Risk

There are two types of risk: proposal risk, which is based on your proposed approach, and performance risk, which is based on your past contract performance. Technical evaluators assess proposal risk if it is included in the RFP as an evaluation consideration. (Not all agencies assess proposal risk.) Performance risk is evaluated separately by a different group of evaluators.

Proposal risk is the uncertainty associated with a bidder’s proposal. Technical evaluators charged with scoring your technical proposal also assess the amount of risk associated with your proposed approach. Normally this is accomplished by assigning a proposal risk rating of high, moderate, or low. The following are standard definitions used to evaluate proposal risk:

High—Likely to cause serious disruption of schedule, increase in cost, or degradation of performance even with special contractor emphasis.

Moderate—Can potentially cause some disruption of schedule, increase in cost, or degradation of performance. However, special contractor emphasis will probably be able to overcome difficulties.

Low—Has little or no potential to cause disruption of schedule, increase in cost, or degradation of performance. Normal contractor effort will probably be able to overcome difficulties.

The Air Force adds a fourth category of proposal risk:

Unacceptable—The existence of a significant weakness (or combination of weaknesses) that is very likely to cause unmitigated disruption of schedule, drastically increased cost, or severely degraded performance. Proposals with an unacceptable rating are not awardable.

The Air Force also allows evaluators to assign plus (+) and minus (-) scores to the proposal risk rating. This enables a finer-grained risk evaluation.

Proposal risk is separate from the technical score and independent of whether you meet the evaluation standard requirement. You can receive an excellent or satisfactory technical score and still receive a proposal risk rating of moderate or high. The assignment of proposal risk is a judgment call based on how you propose to meet the requirement. Like strengths and weaknesses, it is based solely on the information provided in your proposal.

In many source selections, proposal risk is viewed as equal in importance to the technical score and may be established as a separate evaluation factor. (Many current Air Force evaluations list proposal risk as an evaluation factor.)

Proposal Risk versus Technical Weaknesses

There is a subtle difference between proposal risk and technical weaknesses. A subfactor can be judged as having a technical weakness when the source selection standards are not fully met. A weakness can be anything an evaluator interprets as deficient with respect to the standard. Proposal risk is based on the probability that, if the proposed course of action is followed, the government requirement or objective will not be attained or will not be met within the specified constraints of cost, schedule, and required performance.

In general, technical weaknesses in a subfactor create a corresponding risk to achieving program requirements. Therefore, sometimes you can be doubly penalized for a technical weakness: You receive a higher proposal risk rating in addition to the weakness.

There are, however, sources of proposal risk other than weaknesses. For example, the development approach (including schedule time, test scope, etc.) proposed by the offeror may, in the opinion of the evaluator, be unlikely to succeed even though no specific standard was written on the subject. In such a case it is possible to have a proposal risk without a corresponding technical weakness.

Summary of Technical Evaluation

Evaluators read your proposal and compare it to the evaluation standards. They assign a score to each subfactor and identify any associated strengths, weaknesses, uncertainties, or deficiencies. They also judge the risk associated with your proposal and assign it a risk rating of high (H), moderate (M), or low (L). The evaluation results for each evaluation standard are then combined to determine a composite evaluation for each technical subfactor. Typically, several evaluators independently score the same proposal section. Once they are finished, they share their individual evaluations to arrive at a single consensus evaluation. The lead evaluator for that section then writes a brief narrative summary of what he or she thinks you proposed.

The narrative is used in conjunction with the technical scoring system to indicate a proposal’s strengths, weaknesses, and risks. The narrative supplements the technical score. It describes each proposal’s relative strengths, weaknesses, and risks to the source selection authority in a way that adjectives, colors, and numbers alone cannot. Eventually, the SSAC or its equivalent uses the narratives to compare proposals. The narratives provide a reasonable and rational basis for the selection decision in addition to the technical scores alone.

It is important to note that a poor score for a single evaluation criterion can outweigh good scores for the other criteria. Likewise, an unacceptable score on one subfactor can result in the entire factor’s being judged as unacceptable. The key point here is this: There are no unimportant or trivial proposal sections. Everything counts. Sometimes the “little things” can kill you. I can recount many instances where bidders shot themselves in the foot by overlooking or underemphasizing a proposal section they viewed as trivial or unimportant. Imagine proposing the best engineering solution and losing the competition because your safety write-up missed the mark, or because you failed to take the small business plan seriously. It happens every day.

Bidders repeatedly lose programs because their competitors submitted a better proposal. More often than not, the difference between winning and losing is small. The advice I always give is to treat every part of the proposal as if it were the single factor that determines whether you win the competition. There is a natural tendency to emphasize the most heavily weighted evaluation factors. Indeed, that is important. Yet if you neglect the less highly weighted factors, you stand a very good chance of losing, especially if your competitors know better.

Figure 3-6 illustrates a case in which a poor rating on a single subfactor pulled down the overall score for the factor being evaluated. In this case, an evaluation of “yellow” was assigned to one of four subfactors, while the other three subfactors received a “green” rating. When the evaluators summed up the subfactors to arrive at a factor score, they assigned a “yellow” to the entire factor.

Figure 3-6. Combining Subfactor Scores
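The roll-up shown in Figure 3-6 amounts to a “worst rating dominates” rule. The sketch below illustrates that idea only; in an actual evaluation the roll-up is a consensus judgment supported by narrative, not a mechanical formula.

```python
# Illustrative only: a "worst color dominates" roll-up of subfactor ratings.
# Real evaluations rely on evaluator judgment, not a mechanical rule.

RATING_ORDER = ["blue", "green", "yellow", "red"]  # best to worst

def roll_up(subfactor_ratings):
    """Return the worst (lowest) rating among the subfactor ratings."""
    return max(subfactor_ratings, key=RATING_ORDER.index)

print(roll_up(["green", "green", "yellow", "green"]))  # -> "yellow"
```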

Winning the Technical Evaluation War

I repeatedly emphasize the point that most procurements are won by a narrow margin—that elusive “one point” that separates winners and losers. Rare is the case where there are not two or more bidders who meet the technical requirements of the solicitation. Each will likely receive a Satisfactory technical score. How, then, does the government determine a winner for the technical evaluation? Are all Satisfactory bidders in this case viewed as equal? No, winners and losers are determined by the strengths and weaknesses their proposals receive and the perceived risk of their proposed approaches. Consider the following scenarios:

Two bidders have a Satisfactory technical score, and both proposals are considered to have a Low proposal risk. However, proposal A receives several technical strengths and no weaknesses, whereas proposal B receives no strengths but two technical weaknesses. In this case, proposal A is the technical winner.

Three bidders submit technically acceptable proposals with a Low risk rating. Two bidders have no technical strengths or weaknesses. Bidder C has several technical strengths and hence wins the technical evaluation.

Three bidders submit technically acceptable proposals, and they have the same number of strengths and weaknesses. However, two proposals receive a Moderate risk rating, whereas proposal B receives a Low risk rating. In this case, proposal B wins the technical evaluation.

Strength ratings are achieved by the little things you put in your proposal that give evaluators confidence you will accomplish what you propose without negatively affecting contract performance. Weaknesses arise when evaluators lack confidence you will achieve what you propose without impacting contract performance. Here are some real-world examples:

We were bidding to unseat an incumbent contractor performing aircrew training and logistics services for an Air Force cargo aircraft. Transitioning to a non-incumbent is always considered a risk because there is no guarantee the existing workforce will remain with the new contractor. We solicited employment letters of intent for over 80 percent of the incumbent workforce (contingent upon our winning the contract) and provided a detailed explanation of how we would secure the remaining employees. We received a strength rating for hiring the incumbent workforce. Every other non-incumbent bidder proposed to hire the incumbent employees; we just proposed to do it in a way that gave evaluators confidence we would succeed.

We were competing for a contract that included a large software development effort. We provided a table showing each required software module, the required lines of code, the number of lines we could reuse from existing software, and the new lines of code that would be required for the contract. Overall, we showed that only 2 percent of the software needed to be developed from scratch. We received a strength rating for this section of our proposal and a Low proposal risk rating. The strength rating came from providing the substantiating details, not just the fact that we had to develop only 2 percent of the required software code.

Part of a bid required us to propose a major modification to upgrade courseware to match the aircraft configuration that aircrews were being trained to operate. We thought a rapid courseware modification effort would be well received because the existing differences between the courseware and the aircraft configuration represented a potential safety risk. We proposed a five-month schedule for this effort. We received a corresponding weakness and a Moderate proposal risk rating for our efforts. The government thought a five-month schedule was too ambitious, and we had failed to explain the measures we had taken that would enable us to finish the courseware modification in five months. We met the RFP requirement—courseware modification—and received a Satisfactory technical score, but we failed to convince the evaluators we could accomplish the modification without negatively affecting contract performance. Our overall technical evaluation for that proposal section suffered accordingly.

Everyone focuses on meeting or exceeding RFP requirements. Indeed, doing so is a prerequisite for preparing an acceptable proposal. Yet, competitive advantage goes to the bidder that maximizes the technical strengths and minimizes the weaknesses assigned to its technical proposal and concurrently achieves the lowest proposal risk rating. This proposal feat is accomplished by how information is presented in your technical proposal. It is largely the outcome of how well you play the proposal game. It is based on the structure and content of proposal narrative and how good a job you do of convincing evaluators you will be able to successfully accomplish what you have proposed. It rarely depends on whether you meet the requirement. Nearly everyone will achieve that minimum competitive threshold.

Chapter 10 provides some valuable tools and guidelines to help you win the technical evaluation war.

PAST PERFORMANCE EVALUATION

The government views a bidder’s performance on past and current contracts as one indicator of the bidder’s ability to successfully perform the work being solicited. With few exceptions, an evaluation of past performance is required for negotiated, competitive procurements. (The lowest price technically acceptable model does not evaluate past performance.) Past performance is an important source selection criterion. It is not uncommon to see past performance account for one-fourth to one-third of the entire evaluation criteria.

The government’s contract team normally evaluates past performance. Some agencies use a special group, referred to as a performance risk assessment group (PRAG), to perform this evaluation (see Figure 3-2). The specific method used to evaluate your past performance may be described in the RFP, typically in Section M.

Sources of Past Performance Information

The assessment of performance risk is based on information about your performance on other current and recent contracts. Two elements make up the assessment: the relevance of past contracts to the effort being proposed and how well you fulfilled your past contract obligations with respect to performance, schedule, and cost.

A variety of methods are used to collect this information. The federal government maintains a central database of contract performance in the Past Performance Information Retrieval System (PPIRS). This database can be accessed via the Internet at www.ppirs.gov.

Every government prime contract with a value above a minimum threshold receives an annual evaluation referred to as a Contractor Performance Assessment Report (CPAR). Prime contractors receive copies of CPARs and are allowed to comment or appeal the CPAR rating and narrative. Once approved, CPARs go into past performance databases, including PPIRS. Some acquisition agencies maintain their own past performance database.

Often a past performance questionnaire is included in the RFP. Bidders are required to send the questionnaire to their past or current customers, who complete the questionnaire or interview and return it to the procuring agency. In addition, members of the contract team may contact the program manager or contracting officer from your past contracts, as well as the government agency that audited your cost proposal.

Many RFPs require bidders to submit a separate past performance proposal volume. Here bidders list the recent contracts most relevant to the work they are bidding on. Typically, this volume requires administrative information about past contracts and points of contact for people who administered and managed the contracts. Sometimes bidders are given the opportunity in the past performance volume to summarize these efforts and show how they are relevant to the proposed effort.

Past Performance Assessment

Government contracts personnel review and evaluate your past and current contract performance and read your past performance volume. The first part of the evaluation is to determine which past contracts relate most closely to the requirements of the solicitation. Past contracts are normally scored as Highly Relevant, Relevant, Slightly Relevant, or Not Relevant. Past efforts viewed as most relevant and most recent receive the highest weighting. Some aspects of relevancy include (1) type of effort (e.g., development, production, logistics support, repair), (2) nature of the business areas involved, (3) required levels of technology, (4) contract types, (5) materials and production processes, (6) scope and complexity of work effort, and (7) skills required to provide the service.

Past performance is scored for each past contract. Currently, past contracts receive a risk or confidence rating, and the evaluation may include strengths and weaknesses. As in the technical evaluation, strengths are aspects of past performance that give evaluators confidence you will fulfill contract requirements. Weaknesses are aspects of past performance that weaken the evaluator’s confidence in your ability to fulfill contract requirements.

Past performance scores for each contract are summarized, and a final past performance risk/confidence rating is determined. Sometimes past performance is evaluated at the subfactor level using the technical evaluation subfactors.

The following is a typical past performance rating system:

Exceptional/High Confidence/Very Low Risk—Based on the offeror’s performance record, no doubt exists that the offeror will successfully perform the required effort.

Very Good/Significant Confidence/Low Risk—Based on the offeror’s performance record, little doubt exists that the offeror will successfully perform the required effort.

Satisfactory/Satisfactory Confidence/Moderate Risk—Based on the offeror’s performance record, some doubt exists that the offeror will successfully perform the required effort. Normal contractor emphasis should preclude any problems.

Neutral/Unknown Confidence—No performance record identifiable (see FAR 15.305 (a)(2)(iii) and (iv)).

Marginal/Little Confidence/High Risk—Based on the offeror’s performance record, substantial doubt exists that the offeror will successfully perform the required effort.

Unsatisfactory/No Confidence/Very High Risk—Based on the offeror’s performance record, extreme doubt exists that the offeror will successfully perform the required effort.

The Air Force reduces the six performance ratings to five ratings by collapsing the two highest ratings into a single rating of “Substantial Confidence.” Other agencies use a simple high, moderate, or low risk rating system comparable to the proposal risk rating system.
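To make the idea of weighting by relevance and recency concrete, the sketch below combines invented per-contract ratings into an overall confidence score. No agency publishes a formula of this kind; the actual assessment is a judgment by the evaluation team, so treat the weights and thresholds as purely hypothetical.

```python
# Hypothetical illustration of rolling up per-contract past performance into an
# overall confidence rating. Weights, scores, and thresholds are invented;
# real assessments are judgment-based, not formula-based.

RELEVANCE_WEIGHT = {"highly relevant": 1.0, "relevant": 0.6, "slightly relevant": 0.3}

contracts = [
    {"relevance": "highly relevant",  "recency_weight": 1.0, "performance": 0.95},
    {"relevance": "relevant",         "recency_weight": 0.8, "performance": 0.70},
    {"relevance": "slightly relevant","recency_weight": 0.5, "performance": 0.85},
]

weighted = [
    RELEVANCE_WEIGHT[c["relevance"]] * c["recency_weight"] * c["performance"]
    for c in contracts
]
total_weight = sum(
    RELEVANCE_WEIGHT[c["relevance"]] * c["recency_weight"] for c in contracts
)
score = sum(weighted) / total_weight  # 0.0 (no confidence) to 1.0 (high confidence)

rating = "substantial confidence" if score >= 0.85 else \
         "satisfactory confidence" if score >= 0.65 else "limited confidence"
print(round(score, 2), rating)  # -> 0.87 substantial confidence with these inputs
```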

Tips for Maximizing Past Performance Scores

You cannot change your past performance, but you can help yourself in this area. First, make absolutely sure that all the information contained in your past performance proposal (e.g., names, addresses, phone numbers, contract numbers) is correct. Double-check the accuracy of this information. If you mail out questionnaires, follow up to ensure they were received and then check that they were submitted to the government on time. It is your responsibility to ensure each completed questionnaire is returned to the procuring agency on time.

If you have the chance to provide a summary or narrative of past efforts, make sure the relevance of the past effort is clearly explained. Failing to explain the relevance of past contracts can cost you. Most evaluators will not understand the relevance of your past work efforts unless you explain it in the past performance section of your proposal. This could cause a highly relevant contract, with excellent past performance, to receive less weight in the evaluation than it might otherwise simply because the evaluators failed to understand its relevance.

If you have had a problem on a past contract, be honest. Identify the problem and explain what you did to fix it or recover from a bad situation. Make sure it is clear that safeguards are in place to prevent a recurrence of the problem on the contract for which you are bidding. This is extremely important. The government does not expect bidders to be perfect; problems are a normal occurrence in government programs. Whether you have had problems is not the main issue. How you dealt with them and the likelihood that they will recur count the most.

There is no substitute for an unblemished record of on-schedule and on-budget cost performance. In the absence of a perfect history, however, do everything you can to achieve the best possible past performance rating. Actually, even if you have a perfect record, you should still do everything possible to maximize your past performance evaluation. If you are proposing key personnel who have performed on successful contracts in the past, make sure you highlight that information in your proposal. The past performance of key people also affects your ability to successfully perform the proposed effort. This can be especially beneficial if the agency to which you are bidding knows the key person proposed, or if the past effort is a good match with the requirements of the proposed effort.

COST EVALUATION

Cost is not scored, but it is evaluated. It can also receive a risk rating based on your past cost performance and the risk associated with your proposed approach. For some major acquisitions, cost risk can be a separate evaluation factor.

The type of cost evaluation the government performs depends on the type of contract. Contracting officers are responsible for purchasing products and services at fair and reasonable prices. Two types of analyses may be performed: price comparison/analysis and cost realism analysis.

Price analysis examines a bidder’s proposed price to determine whether it is fair and reasonable without evaluating its separate cost elements and proposed profit. Price analysis involves comparison with prices from other proposals. Normally, competition itself establishes cost reasonableness for fixed-price contracts.

Cost realism analysis is the independent review and evaluation of specific elements of each bidder’s proposed cost estimate. Its purpose is to determine whether proposed costs are realistic for the work to be performed, reflect a clear and complete understanding of the requirements, and are consistent with the bidder’s proposed technical approach. A cost realism analysis is required for all cost-reimbursable contracts to determine the most probable cost for each offeror.

For cost-reimbursable contracts, the cost you propose is not necessarily the same as your evaluated cost. The government may adjust your proposed cost to arrive at what is called the “most probable cost to the government.” If the government views your approach as risky, or if it does not accept as legitimate your basis of estimate, it may add cost to what you bid. Under these circumstances, you could bid the lowest cost but still lose on cost because of government adjustments. The RFP must specify how cost evaluations will be conducted, so pay close attention to this information and use it to prepare your cost proposal. In addition, some RFPs require a very elaborate cost volume and ask you to provide a detailed basis of estimate for each cost element. Follow the cost volume instructions provided in RFP Section L and take care to provide clear and convincing bases of estimates for all proposed costs (see Chapter 14).

Here is the important point: The only information available to determine the cost realism of your bid is what you provide in your cost proposal. Do a poor job of presenting and explaining your proposed costs, and you risk having the government increase your evaluated cost. Sometimes the adjustment can be substantial and can represent the difference between winning and losing. Likewise, you have a problem if the government detects an imbalance in your pricing. Unbalanced pricing exists when, despite an acceptable total evaluated price, the price of one or more contract line items is significantly overstated or understated, as indicated by cost or price analysis techniques.
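To see how a cost realism adjustment can change the competitive picture, consider the notional example below. The figures and adjustments are invented; the point is simply that the government compares most probable costs, not necessarily the costs you bid.

```python
# Hypothetical illustration of a cost realism adjustment producing a
# "most probable cost." All figures and adjustments are invented.

proposals = {
    "A": {"proposed_cost": 9_500_000, "adjustments": [0]},                # estimate accepted as-is
    "B": {"proposed_cost": 8_800_000, "adjustments": [600_000, 350_000]}, # understated labor, missing test effort
}

for bidder, p in proposals.items():
    p["most_probable_cost"] = p["proposed_cost"] + sum(p["adjustments"])

# Bidder B proposed the lowest cost but is evaluated higher after adjustments.
ranked = sorted(proposals.items(), key=lambda kv: kv[1]["most_probable_cost"])
print([(b, p["most_probable_cost"]) for b, p in ranked])
# -> [('A', 9500000), ('B', 9750000)]
```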

Cost realism is more important for cost-reimbursement-type contracts, but it may be applied to fixed-price contracts with incentive or award fees. (The government will not adjust fixed prices, but it can assign risk.) Cost completeness refers to whether your cost proposal addresses all program requirements and elements.

If the government determines that your bid prices are unrealistic or materially imbalanced, you can be eliminated from the competition before you have a chance to correct the prices.

INTEGRATED ASSESSMENT

The results from each separate evaluation team are collected and integrated to create a complete proposal evaluation for each bidder. This integrated assessment includes:

Technical

– Rating/score for each technical subfactor

– Technical strengths and weaknesses identified

– Uncertainties and deficiencies listed

– Proposal risk rating for each technical subfactor

Past Performance

– Risk rating or confidence score (may be for entire effort or for each factor and technical subfactor)

– Past performance strengths and weaknesses identified

Cost/Price

– Actual proposed cost or most probable cost

– Potential cost risk assessment.
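One way to picture the integrated assessment is as a single record per bidder that rolls up the outputs listed above. The field names, rating scales, and values below are hypothetical; agencies document these results in their own formats.

```python
# Hypothetical roll-up of one bidder's integrated assessment. Field names
# and rating scales are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class IntegratedAssessment:
    bidder: str
    technical_ratings: dict            # subfactor -> rating or score
    proposal_risk: dict                # subfactor -> risk rating
    strengths: list = field(default_factory=list)
    weaknesses: list = field(default_factory=list)
    past_performance_confidence: str = "Unknown Confidence"
    proposed_cost: float = 0.0         # $M, as bid
    most_probable_cost: float = 0.0    # $M, after cost realism adjustments

assessment = IntegratedAssessment(
    bidder="Company B",
    technical_ratings={"T1": "Good", "T2": "Acceptable", "T3": "Good"},
    proposal_risk={"T1": "Low", "T2": "Moderate", "T3": "Low"},
    strengths=["Mature system design"],
    weaknesses=["Thin logistics staffing plan"],
    past_performance_confidence="Satisfactory Confidence",
    proposed_cost=102.0,
    most_probable_cost=103.5,
)
print(assessment.bidder, assessment.most_probable_cost)
```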

COMPETITIVE RANGE

After an initial integrated assessment of each proposal, the government decides whether to award a contract without further discussions with the bidders (see Comparative Analysis of Proposals below). If, on the basis of the initial assessment, the government believes one bidder is the clear winner and that further discussions with the other bidders would be unlikely to change its decision, it can award a contract immediately. Otherwise, the contracting officer establishes a competitive range, which consists of a rank ordering of all proposals. If your proposal is judged to be outside the competitive range, you can be eliminated from the competition at that point.

Elimination can result from deficiencies that, in the government’s judgment, would require substantial revisions to your proposal to correct. Or the government might simply have more qualified proposals than it can reasonably and efficiently evaluate, in which case it eliminates those at the bottom of the competitive range. In either case, the contracting officer will notify you if you are eliminated.

The remaining bidders are then permitted to clarify any ambiguous parts (uncertainties) of their proposals and address any significant weaknesses, omissions, or deficiencies identified by the government. Uncertainties and deficiencies can be derived from any evaluation factor—technical, past performance, cost, and contractual terms and conditions.

Typically, each bidder is provided a list of questions or issues to which it must respond in writing. These questions are typically referred to as evaluation notices (ENs), information for negotiation (IFN), or information for clarification (IFC).

Each bidder in the competitive range is given an opportunity to provide written responses to specific government questions. Once these “discussions” are complete, bidders are allowed to submit amended proposal sections and adjust their proposed costs based on changes to their proposal. This is accomplished by issuing a request for a final proposal revision. Once all the new proposal information is received, each proposal is evaluated a second time using the same evaluation criteria as before.

COMPARATIVE ANALYSIS OF PROPOSALS

Members of the SSEB do not select the winner. They evaluate proposals against the evaluation criteria and standards but do not compare proposals. Each evaluation team summarizes its results for the technical and past performance factors and passes them to the SSAC, or its equivalent for non-major acquisitions. Cost is then added to create an integrated assessment for each bidder.

The advisory council also applies factor weightings to determine the actual weighted rating of technical factors. Most evaluation teams do not know the specific weights given to subfactors or factors except for procurements that use a point system. They might know that one subfactor is more important than another, but they do not usually know the actual weighting.
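To see how weightings can change the outcome, here is a minimal sketch of a weighted technical score. The subfactor scores and weights are hypothetical; bidders rarely learn the actual weights, which is precisely why strength in the most important areas matters.

```python
# Hypothetical weighted scoring of three technical subfactors (SF1-SF3).
# Raw scores (0-100) and weights are invented for illustration.

scores = {"Company A": {"SF1": 90, "SF2": 70, "SF3": 60},
          "Company B": {"SF1": 70, "SF2": 80, "SF3": 85}}

weights = {"SF1": 0.5, "SF2": 0.3, "SF3": 0.2}  # must sum to 1.0

for company, subfactors in scores.items():
    weighted = sum(weights[sf] * score for sf, score in subfactors.items())
    print(f"{company}: weighted technical score = {weighted:.1f}")

# Company A: 90*0.5 + 70*0.3 + 60*0.2 = 78.0
# Company B: 70*0.5 + 80*0.3 + 85*0.2 = 76.0
# With equal weights, A scores 73.3 and B scores 78.3 -- so the weighting,
# which bidders usually never see, can flip the ranking.
```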

Figure 3-7 provides a sample display of technical evaluation results for three companies. The technical factor consists of three evaluation subfactors (T1, T2, and T3) of equal importance. The technical score is shown in the upper right corner, past performance score in the lower left corner, and proposal risk in the lower right corner. Cost is the bid cost, with cost performance risk shown in the lower left corner.

The evaluation of each proposal is summarized, along with associated strengths and weaknesses for each technical and past performance subfactor. The SSAC combines, weighs, and discusses the evaluation results of each bidder’s proposal. Trade-offs are made among technical merits, risk, cost, and strengths and weaknesses. As shown by the example in Figure 3-7, trying to determine a winner can be very difficult and involves some subjective judgment.

As shown in the example, Company C bid a low-risk technical approach and proposed the lowest price, but its logistics support approach is marginal and its cost performance risk is moderate. Company A proposed a superior system design, but its approach was evaluated as high risk and it bid the highest price. Its past performance risk is generally low, as is its cost performance risk. Company B proposed an acceptable technical approach at an intermediate price, but its overall proposal risk was rated high. Strengths and weaknesses, not shown in the example, would be discussed by the source selection evaluation team, along with any contractual considerations, such as whether one bidder asked for a deviation or waiver of required contract terms.

Figure 3-7. Sample Summary Evaluation Sheet

Sometimes just one poor evaluation score can cost you the award. The single marginal (yellow) rating might keep Company C from winning. Had it earned an acceptable rating in logistics, Company C would have offered an acceptable technical solution with low risk and the best price, making it the likely winner. Part of the trade-off process is to determine whether the superior system design proposed by Company A is worth the higher risk and price. This is where the strengths and weaknesses noted by the evaluation team become critically important.

Once the advisory council completes its analysis, it presents its results to the source selection authority. The SSAC summarizes the pros and cons of each proposal and makes a source selection recommendation (if the source selection authority has previously requested one).

FINAL SOURCE SELECTION

The SSA makes the final decision. The advisory council presents its analysis of the bids, including the trade-offs among different evaluation criteria, and answers questions.

The SSA’s decision is based on this comparative assessment of proposals against all source selection evaluation criteria. Nonetheless, the source selection decision represents the SSA’s independent judgment.

The SSA weighs the evaluation results and makes the final source selection decision, which is documented in the source selection decision document (SSDD). The SSDD summarizes the rationale for the decision, including the rationale for any business judgments and trade-offs made by the SSA and the benefits associated with any additional costs.

The contract is then awarded to the winner, and losers are notified. Debriefings are provided to bidders that request them, and the source selection process is complete.

A lack of understanding about how the government selects winners and losers is remarkably prevalent among companies that compete for government business. The consequences of such ignorance are calamitous. Contracts are lost and bid and proposal resources wasted because proposal teams fail to understand how their proposals will be evaluated.

Capture the competitive edge by becoming a savvy bidder. Prepare your technical proposal with an eye toward scoring technical strengths, avoiding weaknesses, and achieving a low proposal risk rating. Become familiar with the source selection process your customer uses. Stay abreast of current trends in government acquisitions and streamlining initiatives. Visit your customer’s contracts department. Ask contracting officers what you can do in your proposal to make their source selection job easier.

Know the rules of the game if you want to be a successful player. Exploit this largely untapped source of information to gain competitive advantage and win federal contracts.