We all trust the performance ratio test – but should we?

As published in PV-Magazine in April 2021

The performance ratio test is at the core of the handover from EPC to owner. Yet sometimes, even when best practice is applied – and without particularly demanding guaranteed values to be achieved – these tests fail good projects. This can lead to costly delays and wasted effort spent trying to find issues that might not exist. Everoze Partner Dario Brivio reviews the likelihood of this happening and considers ways to increase confidence in the precision of such tests, based on recent independent analysis of real-world projects.

Everoze - PR test methodologies

For an EPC to successfully achieve provisional acceptance certification (PAC), they must demonstrate, among other things, that the project’s performance ratio (PR) meets or exceeds a guaranteed value defined in the EPC contract. To do so, the EPC contractor performs a PR test according to the rules set out in the contract, often with the owner witnessing the test. Both the EPC and the owner are keen to achieve PAC as quickly as possible, and the test often takes place under pressured contractual circumstances. For the EPC, a successful PR test means they have met important contractual expectations and liabilities. For the owner, it means the project moves into the operational phase and starts generating revenue in line with the investment assumptions.

If the project fails to meet these contractual requirements, the EPC contractor will need to spend time checking and, if needed, making good the works. Eventually they will repeat the test, prolonging the construction timeline and increasing the probability of becoming liable for liquidated damages for delay. As one might expect, PR tests are incredibly important and sit at the core of the interface between the EPC and the owner.

As a technical adviser, we have sometimes been surprised when projects we consider “good” – that is, well conceived, equipped with good-quality components, and executed to high-quality standards – end up failing the PR test. This has led us to wonder whether we can really trust these tests, and if not, whether we should be particularly skeptical or cautious about them. The “good projects” referred to in the remainder of this article are solar plants we have previously assessed and inspected and which, based on our accumulated experience, we would have expected to pass the PR test consistently.
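For readers less familiar with the calculation itself, a PR test essentially compares the energy a plant actually exported against the energy expected from measured plane-of-array irradiance at standard test conditions (STC). The sketch below illustrates the basic, uncorrected calculation, broadly following the IEC 61724-1 definition; the column names, dataframe layout, and 15-minute sampling interval are assumptions for illustration, not the contractual formula of any particular project.

```python
# A minimal sketch of an uncorrected performance ratio: measured AC energy
# divided by the energy expected from plane-of-array irradiance at STC.
# Column names and the 15-minute sampling interval are assumptions.
import pandas as pd

G_STC_KW_M2 = 1.0  # irradiance at Standard Test Conditions [kW/m2]


def performance_ratio(data: pd.DataFrame, p_stc_kwp: float,
                      interval_h: float = 0.25) -> float:
    """Uncorrected PR over the test period.

    data is assumed to hold, per recording interval:
      'e_ac_kwh'    - exported AC energy [kWh]
      'g_poa_kw_m2' - average plane-of-array irradiance [kW/m2]
    p_stc_kwp is the installed DC capacity at STC [kWp].
    """
    e_measured = data["e_ac_kwh"].sum()
    # Reference yield: in-plane irradiation relative to STC irradiance [h]
    reference_yield_h = (data["g_poa_kw_m2"] * interval_h).sum() / G_STC_KW_M2
    e_expected = p_stc_kwp * reference_yield_h
    return e_measured / e_expected
```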

Failure rates

We decided to conduct an independent analysis to answer two questions. How likely is it that “good projects” might fail the PR test? And what is the optimum combination of correction factors and test duration to ensure that the test is truly representative of the quality of the project, with a 95% level of confidence?

To answer these questions, Everoze took one year of high-quality operational data from 10 projects that had previously been assessed in detail and are performing well. The projects are located in diverse climates (U.K., France, and the Netherlands), enabling an assessment of the impact of different calculation methods under different conditions. The aim of the analysis is to understand how the probability that a plant passes or fails the test varies with the test procedure. The study used real-world operational data to remove the uncertainties typical of methodologies based on software simulation; instead, the guaranteed PR values were determined by reducing the operational PR values by a set percentage (3%, 4%, and 5%), in line with typical contractual practice.

Four test durations were analyzed: periods of four, seven, 11, and 16 days. Considering a moving test window, a total of 362 four-day tests can theoretically be performed in a single year, while 350 16-day tests can be extrapolated from the same 365-day data period. However, as the data was cleaned in advance to remove unavailability – and other periods not considered representative of normal “good project” operation – the total number of days available for the testing period was reduced to 348.

The test was considered passed (a true positive) if the measured and corrected PR (PAC PR) was above the guaranteed PR. The number of passed tests is then divided by the total number of tests run to obtain the percentage of true positives. The remaining percentage (false negatives) shows how likely it is that “good projects” fail the PR test. By changing the combination of test duration and correction factors applied to the calculation of the PR, we can identify the test formula that delivers a 95% level of confidence, providing comfort that the test is representative of the quality of a project.
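As a rough illustration of this moving-window approach, the sketch below counts how many N-day windows of cleaned daily PR values exceed a guaranteed PR derived by reducing the annual figure by a fixed margin. It is not the exact methodology used in the study: the helper names are hypothetical, and a contractual test would typically weight the window PR by irradiation rather than taking a plain mean of daily values.

```python
# Illustrative moving-window pass-rate calculation, assuming cleaned daily
# PR values are already available (unavailability and non-representative
# days removed). A plain mean of daily PRs is used for simplicity.
import numpy as np


def pass_rate(daily_pr: np.ndarray, window_days: int,
              guarantee_margin: float) -> float:
    """Fraction of moving window_days-long tests exceeding the guaranteed PR.

    daily_pr         - cleaned daily PR values (roughly 348 in the study)
    window_days      - test duration in days (4, 7, 11 or 16 here)
    guarantee_margin - reduction applied to the annual PR (0.03, 0.04, 0.05)
    """
    guaranteed_pr = daily_pr.mean() * (1.0 - guarantee_margin)
    n_tests = len(daily_pr) - window_days + 1
    passed = sum(
        daily_pr[start:start + window_days].mean() >= guaranteed_pr
        for start in range(n_tests)
    )
    return passed / n_tests
```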

Correction factors

Our first results show that monthly correction factors and temperature correction greatly improve the reliability of the test. The analysis shows that, when no correction factors are used, even “good” projects have only a 77% probability of passing the test. Interestingly, applying the temperature correction alone reduces the precision of the test (fewer projects pass). This is only true for assets with a lower PR in winter due to shading, which is the case for the majority of the assets considered in the study. The reason is that in winter a significant proportion of the energy is produced below the 25°C module temperature, leading to a further decrease in the PR when it is corrected to 25°C. Using monthly PR correction factors instead increases the probability of a “good” project passing the test to 94%.
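To make the direction of the temperature correction concrete, the sketch below adjusts the expected-energy term to a 25°C cell temperature using a generic module power temperature coefficient. The column names and the coefficient value are assumptions for illustration, not parameters of the projects in the study.

```python
# Sketch of a temperature-corrected PR (corrected to a 25 degC cell
# temperature), illustrating why cold-weather data can lower the corrected
# PR. Column names and the temperature coefficient are assumptions.
import pandas as pd

G_STC_KW_M2 = 1.0  # irradiance at STC [kW/m2]
T_STC_C = 25.0     # cell temperature at STC [degC]
GAMMA = -0.0035    # assumed module power temperature coefficient [1/degC]


def temp_corrected_pr(data: pd.DataFrame, p_stc_kwp: float,
                      interval_h: float = 0.25) -> float:
    """PR with the expected-energy term adjusted to a 25 degC cell temperature."""
    e_measured = data["e_ac_kwh"].sum()
    # Below 25 degC the factor is >1 (modules outperform STC), which inflates
    # the expected energy and therefore reduces the corrected PR.
    temp_factor = 1.0 + GAMMA * (data["t_cell_c"] - T_STC_C)
    e_expected = (p_stc_kwp * (data["g_poa_kw_m2"] / G_STC_KW_M2)
                  * temp_factor * interval_h).sum()
    return e_measured / e_expected
```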

Everoze - PR test duration and pass probabilities

Good projects

Our data shows that one in 10 good projects will fail a four-day test. Taking reasonable commercial assumptions (a 4% reduction in the measured yearly PR to derive the guaranteed PR – the 4% guarantee scenario, chart above right) and adopting the monthly correction factor (MCF) methodology, about one in 10 “good projects” fails a four-day test. This is a significant proportion, leading to costly delays from rerunning the test and wasted effort searching for a problem that may not even exist. The pass probability increases with the duration of the test, with 11 days providing 95% confidence that “good projects” will pass. On average, confidence increases by 0.5% per day of test duration.
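Using the pass_rate() sketch from earlier, the sweep over durations and guarantee margins behind charts like the one above could be reproduced roughly as follows. The daily PR series here is a synthetic placeholder, not data from the projects analysed.

```python
# Illustrative sweep over test durations and guarantee margins, reusing the
# pass_rate() sketch above; the daily PR series is synthetic placeholder data.
import numpy as np

rng = np.random.default_rng(0)
daily_pr = rng.normal(loc=0.80, scale=0.03, size=348)  # placeholder values

for margin in (0.03, 0.04, 0.05):
    for days in (4, 7, 11, 16):
        rate = pass_rate(daily_pr, days, margin)
        print(f"{margin:.0%} guarantee, {days:2d}-day test: "
              f"{rate:.1%} of windows pass")
```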

Best practice

But what if the contractual conditions are tougher and there is only a 3% reduction of the estimated PR to determine the guaranteed PR (the 3% guarantee scenario, above right)? In this scenario, even with a 16-day test duration, one in 10 good projects fails the test. Best practice cannot remove the risk of false negatives: particular meteorological conditions sometimes deviate from the averages considered in the monthly correction, introducing biases even into long-duration tests. Based on this analysis, Everoze recommends that developers and owners carefully consider the PR test formula within their EPC contracts. Adopting monthly correction factors and temperature correction does not always deliver the expected outcome, and this should be factored into the execution timeline to allow float for repeating the test. Everoze appreciates that a longer test duration may impact the project’s commercial value, but we believe that if this is correctly planned into development timelines, there should be limited impact on returns, to the benefit of greater certainty in passing the test.