Every percentage point of avoidable loss on a 100 kW commercial system costs real money. At $0.12/kWh over a 25-year project life, recovering 1% of annual yield adds roughly $3,000–$4,000 in revenue. Multiply that across a portfolio of 20 rooftop systems and the number becomes a business case for engineering rigor, not just good practice.
Solar system losses are the energy gaps between a PV array’s nameplate DC capacity and its real-world AC output, typically totaling 14–25%. The 15 main loss factors fall into four groups: optical (shading, soiling, IAM), module (LID, temperature, degradation), electrical (mismatch, ohmic, clipping), and availability (downtime, snow, transformer). These are the same categories used by PVsyst, PVWatts, and Solargis — the three simulation platforms that underpin most bankable energy assessments worldwide. Using solar design software that explicitly models all 15 factors is the surest way to close the gap between prediction and actual yield.
Most energy losses are not random. They follow predictable physics, respond to design choices, and can be quantified before a single module ships. The problem is that many design workflows apply a single lumped derate factor and move on. This guide breaks the lumped factor apart, assigns field-validated numbers to each component, and explains the specific lever that controls each one.
TL;DR
Solar system losses typically total 14–25% of nameplate DC capacity. Five factors — module temperature, soiling, near shading, electrical mismatch, and inverter clipping — drive roughly 80% of the derate in most climates. Fix those five first. This guide breaks down all 15 with field-validated numbers and concrete design fixes.
In this guide:
- How the 15 loss factors are categorized by family and relative weight
- The Pareto breakdown — which 5 factors drive 80% of total derate
- Per-factor definition, typical magnitude, root cause, design fix, and modeling approach
- A side-by-side comparison of PVWatts, PVsyst, and Solargis default assumptions
- Performance ratio benchmarks by system type
- Climate-specific loss profiles for hot-arid, temperate, cold-snowy, and tropical sites
How Solar System Losses Are Categorized
The 15 loss factors divide cleanly into four families. Understanding which family a loss belongs to tells you whether the design, the equipment spec, or the O&M strategy is the right intervention point.
| Family | Loss factors | Typical total contribution |
|---|---|---|
| Optical | Horizon/irradiance losses, near shading, soiling, IAM, spectral mismatch | 6–35% depending on site and soiling regime |
| Module | LID, LeTID, module temperature, nameplate tolerance, long-term degradation | 8–25% depending on technology and climate |
| Electrical | DC ohmic, module-level mismatch, inverter conversion, inverter clipping, AC ohmic | 4–12% depending on design quality and ILR |
| Availability | Snow, system downtime, transformer losses | 1–15% depending on latitude and fleet management |
Optical losses are the most site-dependent. A clean rooftop in the German Rhine valley has a fundamentally different soiling and shading profile than a ground-mount in Rajasthan. Module losses are more technology-dependent than site-dependent — swapping from p-type PERC to n-type TOPCon eliminates LID and reduces LeTID regardless of where the system sits. Electrical losses are almost entirely design-dependent: wire sizing, string topology, and inverter selection are the levers. Availability losses split between design (snow tilt, monitoring) and O&M quality.
A complete solar system loss analysis always starts with a site-specific irradiance baseline and works through all four families. The output — expressed as performance ratio (PR) — is the single number that summarizes how well a system converts available resource into delivered AC energy. PR is actual AC yield divided by the reference yield: the energy the array would deliver at STC efficiency given the same plane-of-array irradiation. A PR of 0.82 means 18% of potential yield was lost to some combination of the 15 factors below.
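The PR arithmetic takes only a few lines. A minimal sketch follows; the 100 kW system figures are hypothetical, chosen to land on a PR of 0.82.

```python
def performance_ratio(actual_ac_kwh: float,
                      dc_nameplate_kw: float,
                      poa_irradiation_kwh_m2: float) -> float:
    """Performance ratio: actual AC yield divided by the reference yield,
    i.e. the energy the array would deliver at STC efficiency for the
    same plane-of-array irradiation (STC reference: 1 kW/m2)."""
    reference_yield_kwh = dc_nameplate_kw * poa_irradiation_kwh_m2
    return actual_ac_kwh / reference_yield_kwh

# Hypothetical 100 kW system: 1,800 kWh/m2 annual POA irradiation,
# 147,600 kWh delivered AC energy
pr = performance_ratio(147_600, 100, 1_800)  # -> 0.82
```

The same ratio computed monthly, rather than annually, is the standard first diagnostic for separating seasonal losses (snow, soiling) from constant ones.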
The four-family framework also clarifies where different simulation tools diverge. PVWatts applies a single adjustable derate multiplier per factor. PVsyst builds a physical cascade from POA irradiance through each loss layer to AC output. Solargis uses satellite-derived irradiance with built-in terrain correction and applies separate loss stacks. All three use the same underlying physics; the differences are in default assumptions and the depth of user control.
The Pareto of PV Losses: Which 5 Drive 80% of Your Derate
Not all 15 factors deserve equal design attention. The table below ranks the five largest loss contributors in most climates, with typical ranges sourced from PVsyst documentation, NREL field studies, and Solargis white papers.
| Rank | Loss factor | Typical range | Family | Primary design lever |
|---|---|---|---|---|
| 1 | Module temperature | 6–18% | Module | Module technology (temp coefficient), mounting gap, ventilation |
| 2 | Soiling | 1–25% | Optical | Cleaning frequency, tilt angle, site selection |
| 3 | Near shading | 0–15% | Optical | Layout, MLPE, string design |
| 4 | Electrical mismatch | 0.5–5% | Electrical | String topology, MLPE, module binning |
| 5 | Inverter clipping | 0.3–6% | Electrical | ILR selection, irradiance distribution analysis |
Temperature and soiling together account for 7–43% of total system loss depending on climate. That range is wide because both are highly site-specific. A temperate coastal system with regular rainfall might see 2% soiling and 8% thermal loss. A desert ground-mount in Arizona might see 15% soiling and 16% thermal loss. Getting these two numbers right is more valuable than optimizing any of the remaining 10 factors.
Near shading warrants a separate call-out because the design lever is non-obvious. The loss is not just the shaded percentage of the array — it is the disproportionate impact that partial shading has on unshaded modules in the same string. Using solar shadow analysis software to model per-module shading across a full TMY before finalizing string layout can shift this number from 5%+ to under 2% without changing the array footprint.
Mismatch and clipping are the two electrical losses most often ignored in residential design. Both respond directly to design decisions made at the proposal stage: string topology and inverter loading ratio. Neither requires an expensive field fix if caught at the model stage.
The remaining 10 factors — IAM, spectral mismatch, nameplate tolerance, LID, LeTID, DC ohmic, long-term degradation, inverter conversion, AC ohmic, and availability — each contribute 0.5–4% individually. Combined they add up, but none individually moves the needle as much as the top 5. Address those first, then refine the remainder. For a deeper look at solar resources and how they feed into the loss stack, see the irradiance entry in the SurgePV glossary.
1. Pre-PV Irradiance and Horizon Shading
Definition: Energy lost before light reaches the array because distant terrain, trees, or structures block part of the sky dome, reducing the global horizontal or plane-of-array irradiance available at the site.
Horizon shading is a site-level constraint that no amount of module selection or string design can overcome. The Solargis documentation and the NREL System Advisor Model both treat it as an irradiance reduction applied before the array model begins. Typical losses are 0.5–3% for open-field sites and suburban rooftops, rising to 5–10% in narrow valleys, steep north-facing slopes, or urban canyons where the sky view factor is significantly reduced (Solargis Knowledge Base; NREL System Advisor Model).
The main controller is the site’s horizon profile — the elevation angle of obstructions as a function of azimuth. A flat prairie site may have a horizon profile of under 1° in all directions. A residential site in a Denver suburb backed by a ridgeline to the west can lose 15–30% of afternoon irradiance in winter months when the sun tracks low.
To minimize this loss, evaluate the horizon profile before committing to the site. For ground-mounts, even a modest relocation of 50–100 m can shift a problematic southern horizon. For rooftops, horizon losses are usually fixed by the building’s location, but their magnitude should still be quantified and reported in the energy model rather than absorbed into a generic soiling or shading factor.
In PVsyst, the horizon is defined by importing a horizon line from a CSV or by drawing it manually in the horizon editor. The tool then calculates irradiance reduction per time step. PVWatts does not have a dedicated horizon input; users typically reduce the global irradiance input manually or use SAM’s more detailed interface. SurgePV uses satellite-derived terrain data to approximate the horizon profile, which is adequate for most residential and C&I sites. For sites in complex terrain, a field-measured horizon survey with a fisheye lens or dedicated instrument provides the most accurate input. The resulting plane-of-array irradiance baseline feeds every downstream loss calculation, so errors here propagate through the full energy model.
2. Near Shading from Objects and Inter-Row
Definition: Energy lost when objects physically close to the array — chimneys, dormers, parapets, HVAC units, adjacent rows, or trees — cast shadows on specific modules at specific times of day.
Near shading is one of the most design-controllable losses on the list, yet it is also one of the most commonly under-modeled. Utility-scale ground-mounts in open terrain typically see 0–3% annual near-shading loss, almost entirely from inter-row self-shading. Residential rooftops with chimneys, dormers, or vent stacks can see 5–15% depending on how the strings are laid out relative to the shading objects (Sandia PV array shading studies).
The physics of near shading are disproportionate when modules are wired in series strings. A shadow covering 10% of one module’s area can cut the entire string output by 30–60% due to bypass diode behavior. This is the mismatch amplification effect, and it is why the design choice of module-level power electronics (MLPE) — optimizers or microinverters — has such a large impact. Field data from Tigo Energy shows that partial-shade mismatch drops from 5%+ on string inverter systems to roughly 1% with MLPE deployed on affected strings.
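The amplification effect can be illustrated with a deliberately simplified string model. This sketch ignores bypass diodes and voltage shifts; the module currents and operating voltage are hypothetical round numbers, not data from any specific product.

```python
def string_power_no_mlpe(module_currents_a, module_voltage_v=40.0):
    """Series string without MLPE: every module is forced to carry the
    current of the weakest module (bypass diodes ignored for clarity)."""
    return min(module_currents_a) * module_voltage_v * len(module_currents_a)

def string_power_with_mlpe(module_currents_a, module_voltage_v=40.0):
    """Per-module MPPT: each module delivers its own maximum power."""
    return sum(i * module_voltage_v for i in module_currents_a)

# 10-module string, one module at 60% current from a partial shadow
currents = [10.0] * 9 + [6.0]
p_string = string_power_no_mlpe(currents)   # 2,400 W
p_mlpe = string_power_with_mlpe(currents)   # 3,840 W
loss = 1 - p_string / p_mlpe                # 0.375: one shaded module
                                            # costs 37.5% of string output
```

Real bypass-diode behavior softens this worst case by sacrificing only the shaded substring, but the qualitative result holds: one shaded module drags down nine unshaded ones.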
MLPE Impact on Shading Loss
With module-level power electronics, partial-shade mismatch drops from 5%+ to approximately 1% on affected strings, per Tigo Energy field data. The economics of MLPE installation often pay back within 2–3 years on shaded rooftops.
To minimize near-shading loss: First, model shade impact per module per hour across a full TMY before finalizing layout. Second, orient strings so that shaded modules are not in series with unshaded modules. Third, specify MLPE only on the strings actually affected by shading — deploying it system-wide when shading is confined to one roof face wastes budget.
In PVsyst, the near-shading model uses a 3D scene with defined shading objects and calculates electrical loss with a detailed string model or the “module strings” electrical effect option. PVWatts applies a single annual shading fraction without string-level modeling. SurgePV runs a physics-based per-module, per-hour shading simulation that feeds directly into the electrical mismatch calculation, with near shading and inter-row separated as distinct line items. See the full explanation at automated shading analysis.
3. Soiling
Definition: Energy lost due to the accumulation of dust, pollen, bird droppings, and other particulates on the module surface, which reduces transmittance of light to the solar cells.
The US average soiling loss is approximately 5% annually, per NREL field data. That average masks wide geographic variation: sites in high-rainfall temperate climates (Pacific Northwest, Germany, UK) typically see 1–2%, while sites in arid regions with low rainfall (US Southwest, Middle East, northern India) can see 5–25% without active cleaning (NREL Soiling Loss Map; Kimber et al.).
Soiling rate is controlled by three factors: local particle loading in the air (dust, pollen, pollution), rainfall frequency and intensity, and module tilt. Steeper tilt angles self-clean better because rainfall runoff carries particulates off the surface. Modules tilted at 5° or less in arid climates accumulate soiling rapidly and may require manual cleaning every 4–8 weeks to stay within acceptable loss limits.
NREL cleaning studies quantify the impact of cleaning frequency directly. Annual cleaning (1 event per year) reduces average soiling loss to approximately 1.5%. Two cleanings per year reduce it to approximately 1.3%. Three cleanings per year reduce it further to approximately 1.2%. The diminishing returns above two cleanings per year are important for O&M cost modeling: the incremental revenue from a third cleaning is often less than its labor cost except in high-irradiance, high-value markets (NREL soiling studies; PVsyst defaults).
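The diminishing-returns point lends itself to a quick break-even sketch. The system size, energy price, and per-visit cleaning cost below are assumptions for illustration; the soiling percentages are the NREL figures quoted above.

```python
def cleaning_revenue_gain(annual_yield_kwh, price_per_kwh,
                          loss_before, loss_after):
    """Incremental annual revenue from reducing average soiling loss."""
    return annual_yield_kwh * price_per_kwh * (loss_before - loss_after)

# Assumed 100 kW system: 150,000 kWh/yr at $0.12/kWh; soiling losses
# from the NREL figures above (5% uncleaned, then 1.5% / 1.3% / 1.2%)
yield_kwh, price = 150_000, 0.12
first  = cleaning_revenue_gain(yield_kwh, price, 0.050, 0.015)  # ~$630/yr
second = cleaning_revenue_gain(yield_kwh, price, 0.015, 0.013)  # ~$36/yr
third  = cleaning_revenue_gain(yield_kwh, price, 0.013, 0.012)  # ~$18/yr
# At an assumed ~$300 per cleaning visit, the first cleaning pays for
# itself several times over; the second and third do not.
```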
In PVsyst, soiling is entered as a monthly soiling loss profile (percentage per month), which allows seasonal variation — heavier pollen in spring, heavier dust in dry summers. PVWatts uses a single annual fraction. SurgePV allows the user to enter a monthly soiling profile tied to the site’s climate zone. For sites without measured soiling data, the NREL Soiling Loss Map provides US county-level median soiling rates as a starting point.
4. Snow
Definition: Energy lost when snow covers module surfaces, blocking light from reaching the cells. Unlike most losses, snow loss is both discontinuous and recoverable: once snow slides or melts, output returns to baseline.
Annual snowfall-related losses range from 1–5% for sites in the US Midwest to 5–12% in high-latitude or high-altitude sites (Canada, Scandinavia, Alpine regions). Monthly losses in December–February can reach 10–30% at affected sites (NREL Marion/Riley snow studies).
PVWatts Default for Snow
PVWatts’ default derate factor is 0% for snow. Always adjust this for sites above 40°N latitude or at elevations above 1,500 m where annual snowfall exceeds 50 cm. Ignoring snow loss at a Minnesota site typically understates annual loss by 3–7%.
Snow loss is controlled by tilt angle, surface material, and mounting configuration. Modules tilted at 30° or steeper shed snow faster than low-tilt systems. Anti-soiling coatings with low surface friction also accelerate shedding. Carport or ground-mounted systems with heated edge rails can be specified for critical applications.
In PVsyst, snow loss is modeled with a monthly snow loss profile and a threshold irradiance below which the module is considered covered. The model also accounts for partial clearing. PVWatts requires manual input of a snow loss fraction, which most users leave at 0 — the single most common modeling error for northern US sites. SurgePV uses climate zone data to flag sites above 40°N for snow loss and provides a default monthly profile based on NREL snowfall data.
5. IAM (Incidence Angle Modifier) and Reflection
Definition: Energy lost because at oblique angles of incidence, a greater fraction of incoming light is reflected off the module glass surface rather than transmitted to the solar cells.
At normal incidence (0° angle of incidence, the STC measurement condition), IAM loss is zero by definition — the modifier is referenced to the module's 0° response. As the sun moves to lower angles — early morning, late afternoon, winter months in temperate climates — reflection increases. Annualized, IAM losses for a fixed-tilt system are typically 3–4.5%, per IEC 61853-2 and the ASHRAE/De Soto model. Tracking systems have lower IAM losses because they continuously orient the module to minimize the angle of incidence (IEC 61853-2:2016; De Soto, Klein and Beckman, 2006).
IAM loss is primarily controlled by the anti-reflective (AR) coating on the module glass. High-quality AR coatings based on porous silica can reduce reflection losses by 1–2 percentage points across all angles. Bifacial modules with AR coatings on both faces also benefit from rear-side IAM improvements for diffuse light contributions.
Module glass texture and cell encapsulant also play secondary roles. The IAM characteristic is published by module manufacturers as part of IEC 61853-2 test data, though many manufacturers still provide only the ASHRAE b0 coefficient rather than a full angle-by-angle profile.
In PVsyst, IAM is modeled using either the ASHRAE model (single coefficient b0), the user-defined curve from measured data, or the physical model based on glass refractive index. PVWatts applies a built-in IAM correction that is not user-adjustable. SurgePV applies the ASHRAE model using the b0 coefficient from the module database; for modules with published IEC 61853-2 data, the measured profile is used when available.
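The ASHRAE form is compact enough to sketch directly. The b0 = 0.05 default below is a typical c-Si assumption, not a value from any specific datasheet.

```python
import math

def iam_ashrae(aoi_deg: float, b0: float = 0.05) -> float:
    """ASHRAE incidence angle modifier: IAM = 1 - b0 * (1/cos(aoi) - 1).
    Clamped to [0, 1]; at grazing angles the transmitted fraction is ~0."""
    if aoi_deg >= 90:
        return 0.0
    iam = 1 - b0 * (1 / math.cos(math.radians(aoi_deg)) - 1)
    return max(iam, 0.0)

iam_ashrae(0)    # 1.0  (normal incidence: no additional reflection loss)
iam_ashrae(60)   # 0.95 (1/cos 60 = 2, so IAM = 1 - 0.05 * 1 = 0.95)
```

Applying this modifier to each time step's beam irradiance, weighted by the sun-position distribution, reproduces the 3–4.5% annualized range cited above for fixed tilt.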
6. Spectral Mismatch
Definition: Energy lost (or gained) because the spectral distribution of real-world sunlight differs from the AM1.5 standard spectrum used to define STC power ratings, and because different PV technologies respond differently to those spectral shifts.
For crystalline silicon modules, spectral mismatch loss is typically ±1–2% annually. c-Si tends to lose under blue-shifted spectra (low air mass at high-altitude clear-sky sites, and cloudy or diffuse conditions) and gain modestly under red-shifted spectra (high air mass in early morning, late afternoon, and hazy conditions). Thin-film technologies — particularly CdTe — are more spectrally sensitive: mismatch can reach ±3–4% annually, per IEC 61853-3 and NREL spectral correction methodology.
Pro Tip: CdTe in Diffuse Climates
CdTe modules outperform c-Si in overcast and diffuse-light conditions because their spectral response aligns better with the blue-shifted spectrum of cloudy skies. In markets like the UK, Germany, or the US Pacific Northwest, this can mean a real 1–2% yield advantage over c-Si for the same nameplate rating — a genuine factor in module selection, not a marketing claim.
Spectral mismatch is controlled by module technology selection and, to a lesser degree, site climate. Installers designing for high-diffuse climates who are comparing c-Si to CdTe should factor spectral performance into the yield model rather than assuming identical generation.
In PVsyst, the spectral correction is applied using the Sandia module model (which includes spectral correction factors as a function of air mass and precipitable water) or through the user-defined spectral correction profile. PVWatts does not apply a spectral correction. SurgePV applies spectral correction for supported module technologies using climate zone data, with the correction factor drawn from the module’s Sandia database parameters where available.
7. Module Nameplate Tolerance
Definition: Energy lost when modules are shipped below their rated wattage due to manufacturing variation within the tolerance band stated on the datasheet.
Modern bifacial and TOPCon modules are typically shipped with 0% negative tolerance — manufacturers guarantee at or above nameplate, and premium binning means many modules arrive 1–2% above rated. Older generation c-Si modules with ±2.5% tolerance produced some fraction of units at nameplate minus 2.5%, which could add up to a meaningful fleet-level loss when entire containers trended low (DNV PV Module Reliability Scorecard).
This is one of the few loss factors that is almost entirely controlled by procurement specification. Specifying 0/+3% tolerance (positive only) guarantees no nameplate shortfall. Most Tier 1 manufacturers offer positive-only tolerance as a standard or premium option. The incremental cost is small relative to the 25-year revenue impact.
For modeling, PVsyst applies a module quality loss factor that defaults to 0% for positive-only tolerance and up to 2% for ±2.5% tolerance if the user believes the fleet will trend toward the lower bound. PVWatts applies 1% as a default nameplate loss. SurgePV uses 0% as the default for modern modules with positive-only tolerance and allows the user to adjust for legacy equipment. Specifying the tolerance correctly matters most for projects using second-hand or remanufactured modules, where shipment quality is less predictable.
8. LID and LeTID
Definition: LID (Light-Induced Degradation) is a first-hours power loss caused by boron-oxygen defects in p-type c-Si, activated when the module is first illuminated. LeTID (Light and Elevated Temperature Induced Degradation) is a slower, deeper degradation in PERC modules driven by defect activation at elevated cell operating temperatures over the first 1–2 summers.
LID affects p-type monocrystalline modules at approximately 1–1.5% and multicrystalline at approximately 0.5%. N-type technologies (TOPCon, HJT) have no boron-oxygen pairs and therefore zero LID. LeTID affects PERC modules at 1–6% before stabilization, with the magnitude depending on cell thickness, firing temperature, and hydrogen passivation quality. Post-regeneration PERC cells — treated with a targeted illumination-and-temperature cycle — recover most of this loss, with residual LeTID under 1% (NREL, "Understanding LID of c-Si Solar Cells"; Fraunhofer ISE LeTID research).
| Technology | LID | LeTID |
|---|---|---|
| Mono c-Si p-type (standard) | 1–1.5% | Not applicable |
| Multi c-Si p-type | ~0.5% | Not applicable |
| PERC (p-type) | 1–1.5% | 1–6% (pre-regeneration) |
| TOPCon (n-type) | 0% | 0% |
| HJT (n-type) | 0% | 0% |
| CdTe | 0% | 0% |
The design fix is straightforward: specify n-type modules where LID and LeTID are a meaningful concern. For projects in hot climates where PERC modules will spend significant time above 50°C cell temperature, LeTID can represent a 2–4% yield loss in years 1–3 that is not captured in standard degradation curves. The module degradation glossary entry has a full breakdown of stabilization curves for major PERC manufacturers.
In PVsyst, LID is entered as an additional loss fraction in the module quality section. LeTID is modeled separately as a time-variable degradation function. PVWatts applies 1.5% LID as part of its bundled derate. SurgePV applies LID from the module specification database and allows the user to input LeTID correction for PERC modules, with a default based on manufacturer stabilization data.
9. Module Temperature
Definition: Energy lost because PV cells convert light to electricity less efficiently as their operating temperature rises above the 25°C STC reference temperature.
The Pmax temperature coefficient ranges from about −0.26%/°C (HJT) to −0.45%/°C (some older c-Si modules). Standard PERC modules are typically −0.35% to −0.40%/°C. The annualized loss is 6–12% in temperate climates and 10–18% in hot climates, making this the single largest loss factor on the list for the majority of global installations (PVsyst; King-Boyson-Kratochvil thermal model).
Cell Temperature Formula
T_cell = T_ambient + (NOCT − 20) × (G / 800). Where NOCT is Nominal Operating Cell Temperature (°C), G is irradiance (W/m²), and 800 W/m² is the NOCT reference irradiance. A module with NOCT 44°C at 35°C ambient and 900 W/m² operates at 35 + (44 − 20) × (900/800) = 62°C — 37°C above STC reference.
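The formula above, combined with the Pmax temperature coefficient, gives the thermal derate at any operating point. A sketch, assuming a typical PERC coefficient of −0.38%/°C and the NOCT 44°C module from the example:

```python
def cell_temperature(t_ambient_c, irradiance_w_m2, noct_c=44.0):
    """NOCT cell temperature model: T_cell = T_amb + (NOCT - 20) * (G / 800).
    NOCT reference conditions: 20 C ambient, 800 W/m2."""
    return t_ambient_c + (noct_c - 20.0) * (irradiance_w_m2 / 800.0)

def thermal_power_factor(t_cell_c, gamma_pct_per_c=-0.38, t_stc_c=25.0):
    """Relative power vs STC from the Pmax temperature coefficient
    (gamma = -0.38 %/C is an assumed typical PERC value)."""
    return 1 + (gamma_pct_per_c / 100.0) * (t_cell_c - t_stc_c)

t_cell = cell_temperature(35.0, 900.0)   # 62.0 C, matching the example above
factor = thermal_power_factor(t_cell)    # ~0.859 -> ~14% instantaneous loss
```

Summing this factor over every hour of a TMY, weighted by irradiance, is how the annualized 6–18% figures in the table are produced.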
In Phoenix or Dubai, module temperature losses alone can exceed 15% annually on PERC string systems. Switching to HJT (−0.26%/°C) from PERC (−0.38%/°C) on a 500 kW system in those climates represents roughly 1.5–2% additional annual yield — a 10-year revenue impact that justifies the premium module cost in most cases.
Temperature loss is controlled by: technology selection (HJT lowest, older BSF mono highest), mounting configuration (sufficient ventilation gap reduces NOCT by 5–10°C for rooftop systems), and site selection (altitude gives modest air-cooling benefit). Dark roofing membranes that absorb additional heat beneath the array can add 3–5°C of module temperature on still days.
In PVsyst, the thermal model uses U-constant (heat loss coefficient) and U-wind (wind-dependent) parameters specific to the mounting configuration. Default values are provided for open-rack, close-mounted rooftop, and BIPV. PVWatts uses a fixed temperature correction based on the King-Boyson-Kratochvil model with climate-adjusted parameters. SurgePV applies the King-Boyson-Kratochvil model using hourly temperature and wind data from the satellite-derived TMY, with mounting configuration adjustments available for rooftop vs. open-rack vs. BIPV.
10. Module-Level Electrical Mismatch
Definition: Energy lost when modules in a string or array produce different currents, forcing higher-output modules to operate below their maximum power point to match the weakest module in the series circuit.
String-level mismatch from manufacturing variation alone is typically 0.5–2% annually. Aurora Solar cites 2% as a standard design default. When partial shading is present and no MLPE is installed, string mismatch can reach 5%+ on affected strings (Aurora Solar default; Tigo Energy mismatch studies).
The primary root cause is current mismatch: when modules in a string produce different currents — due to shade, soiling on individual panels, manufacturing tolerance, or physical damage — the string current is limited to the lowest-producing module. Bypass diodes activate to route current around deeply shaded modules, but they produce staircase I-V curve behavior that means the string operates at a suboptimal voltage.
Good solar design software addresses this at the string topology level: orient strings north-south on rooftops to minimize shade variation within strings, keep shaded and unshaded modules in separate strings where possible, and model the electrical effect of the specific shading pattern rather than applying a generic mismatch factor. Specifying MLPE (DC optimizers or microinverters) on shaded strings effectively removes mismatch loss from those strings by giving each module its own MPPT.
For unshaded uniform arrays, mismatch loss is controlled by module binning — grouping modules of similar current output into the same string. Premium projects specify tight current binning (±0.5%) from the factory. This is standard practice for large C&I projects and is increasingly offered without surcharge by Tier 1 manufacturers.
In PVsyst, mismatch is modeled as a user-input loss fraction (default 2%) applied uniformly, or as a detailed string-level calculation using the electrical effect shading model. SurgePV calculates mismatch from the per-module shading results and the string configuration, separating shade-driven mismatch from manufacturing mismatch.
11. DC Ohmic and Wiring Losses
Definition: Energy lost as heat when DC current flows through resistive conductors — string cables, combiner cables, connectors, and fuses — between the modules and the inverter input.
Well-designed systems target DC ohmic losses of 1–1.5% of system output. PVsyst defaults and NEC 690.8 guidance both point to 1.5% as the acceptable design ceiling for string systems. Poorly designed systems with undersized home-run cables or excessively long string runs can reach 3% or more (PVsyst defaults; NEC 690.8; NREL design guides).
Ohmic loss is purely a design parameter. It is calculated as I²R, so it scales with the square of current. Higher-voltage strings produce lower current for the same power, which is why modern utility systems operate at 1,500 V DC rather than 1,000 V — at the same power, the current falls to two-thirds, cutting ohmic losses in home-run cables to roughly 44% of their 1,000 V value.
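The voltage scaling follows directly from I²R. In this sketch the 0.1 Ω home-run resistance and the 100 kW block size are arbitrary illustrative values:

```python
def ohmic_loss_w(power_w: float, voltage_v: float,
                 resistance_ohm: float) -> float:
    """I^2 * R heat loss in a DC conductor at a given operating voltage."""
    current_a = power_w / voltage_v
    return current_a ** 2 * resistance_ohm

# Same 100 kW of strings through the same 0.1-ohm home-run path
loss_1000v = ohmic_loss_w(100_000, 1_000, 0.1)  # 1,000 W
loss_1500v = ohmic_loss_w(100_000, 1_500, 0.1)  # ~444 W
# (1000/1500)^2 = 0.444: the 1,500 V loss is ~44% of the 1,000 V loss
```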
The design fix is to size conductors so that voltage drop from any module to the inverter input does not exceed 2% under worst-case (ISC) conditions. PVsyst allows the user to define cable cross-section and length for each circuit segment, then calculates the resulting ohmic loss. PVWatts applies 2% as a default wiring loss. SurgePV applies standard cable sizing assumptions based on string current and run length, with a user input for home-run cable cross-section on C&I designs. The DC line losses glossary entry covers conductor sizing methodology in detail.
Connector resistance is a secondary but non-trivial contributor. MC4 connectors add approximately 0.01–0.03 Ω per pair when new; degraded, corroded, or mismatched-brand connectors can add 0.1–0.5 Ω, producing local hotspots and measurable power loss. Specifying single-brand connectors throughout and including connector inspection in the commissioning checklist is a no-cost design fix.
12. Long-Term Module Degradation
Definition: The gradual annual decline in module power output over the system lifetime due to UV-induced discoloration of encapsulant, delamination, oxidation of silver contacts, and other slow material degradation mechanisms.
The median degradation rate for crystalline silicon modules is 0.5–0.7% per year, per NREL's landmark analysis by Jordan and Kurtz ("Photovoltaic Degradation Rates: An Analytical Review," 2013), updated in 2022 with data from over 11,000 systems. Modern TOPCon and HJT modules from Tier 1 manufacturers are converging toward 0.4%/yr or lower in accelerated aging tests (Jordan & Kurtz, NREL, 2013; DNV PV Module Reliability Scorecard).
| Year | Output at 0.5%/yr | Output at 0.7%/yr |
|---|---|---|
| 1 | 99.5% | 99.3% |
| 10 | 95.1% | 93.2% |
| 25 | 88.2% | 83.9% |
The 4.3-percentage-point spread between 0.5%/yr and 0.7%/yr compounds significantly over 25 years. For a 100 kW system generating 150,000 kWh/year at year 1, the cumulative difference in generation between the two scenarios is approximately 88,000 kWh over 25 years — roughly $10,600 at $0.12/kWh. This is why specifying a guaranteed maximum degradation rate in the module purchase agreement matters, not just the nominal warranty figure.
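The compounding in the table can be reproduced in a few lines. The 150,000 kWh year-1 figure is the example system above; the convention, matching the table, is that degradation is already applied in year 1:

```python
def remaining_output(rate_per_year: float, year: int) -> float:
    """Fraction of nameplate output remaining after `year` years of
    compounding degradation (degradation applied from year 1 onward)."""
    return (1 - rate_per_year) ** year

def cumulative_kwh(year1_kwh: float, rate: float, years: int = 25) -> float:
    """Total generation over the project life with compounding degradation."""
    return sum(year1_kwh * (1 - rate) ** t for t in range(1, years + 1))

remaining_output(0.005, 25)   # ~0.882 -> 88.2% at year 25, as in the table
remaining_output(0.007, 25)   # ~0.839 -> 83.9% at year 25
gap = cumulative_kwh(150_000, 0.005) - cumulative_kwh(150_000, 0.007)
# ~88,500 kWh cumulative difference over 25 years
```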
Degradation is primarily controlled by module technology and encapsulant chemistry. Modules with EVA encapsulant in hot, humid climates (Southeast Asia, coastal Brazil) tend to degrade faster than those with POE encapsulant. High UV exposure accelerates UV-induced yellowing. Module mounting orientation matters minimally for degradation but affects soiling accumulation, which can be misattributed to degradation in performance analysis.
In PVsyst, degradation is applied as a linear annual loss factor. PVWatts does not model year-by-year degradation — it outputs a single year’s production which the user must derate separately for a multi-year cash flow. SurgePV applies annual degradation across the full project lifetime in the generation and financial tool, allowing the user to model the impact of different module-tier degradation rates on NPV and IRR. See the annual degradation rate glossary entry for methodology.
13. Inverter Conversion
Definition: Energy lost as heat during DC-to-AC conversion in the inverter, expressed as the difference between DC input power and AC output power at each operating point.
Modern string and central inverters achieve peak conversion efficiency of 98–98.6% at their optimal operating point, per the CEC weighted efficiency database. CEC weighted efficiency — calculated across a distribution of operating points that reflects typical solar irradiance distribution — is typically 97–97.5% for Tier 1 string inverters. The net annual conversion loss is approximately 2.5–3% (CEC weighted efficiency database).
Inverter efficiency varies with load level. At 25% of rated power (early morning, late afternoon), efficiency drops to 94–96% for most designs. At 50% rated power, efficiency is 97–98%. At 100% rated power, some inverters drop below their peak efficiency due to thermal limits. This load-dependent behavior is why CEC weighted efficiency is a better predictor of annual loss than peak efficiency.
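Given efficiency measurements at the six standard CEC load points, the weighted figure is a simple dot product. The example inverter curve below is hypothetical, not any specific product's datasheet:

```python
# CEC weighting factors: assumed fraction of annual energy at each load level
CEC_WEIGHTS = {0.10: 0.04, 0.20: 0.05, 0.30: 0.12,
               0.50: 0.21, 0.75: 0.53, 1.00: 0.05}

def cec_weighted_efficiency(eff_at_load: dict) -> float:
    """CEC weighted efficiency from efficiency at the six CEC load points."""
    return sum(CEC_WEIGHTS[load] * eff_at_load[load] for load in CEC_WEIGHTS)

# Hypothetical string inverter curve (efficiency as fractions, not percent):
# lower at partial load, peaking near 50% of rated power
curve = {0.10: 0.950, 0.20: 0.965, 0.30: 0.975,
         0.50: 0.978, 0.75: 0.976, 1.00: 0.972}
cec_weighted_efficiency(curve)  # ~0.974
```

Note that the 75% load point carries over half the total weight, which is why mid-load efficiency matters far more than the peak figure on the datasheet.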
The primary design lever here is inverter selection — specifically, choosing inverters with high weighted efficiency across the actual operating point distribution for the site. Hot climates that cause frequent inverter derating due to high ambient temperature effectively reduce weighted efficiency below the nameplate CEC rating. Specifying inverters with adequate thermal headroom for the installation environment prevents this secondary efficiency reduction.
In PVsyst, inverter efficiency is modeled from the manufacturer-supplied efficiency curve (power vs. efficiency) with optional temperature derating. PVWatts applies a single inverter efficiency input that defaults to 96% weighted. SurgePV uses the CEC weighted efficiency value from the inverter database and applies temperature derating for sites above 35°C annual average ambient.
14. Inverter Clipping (DC Oversizing)
Definition: Energy lost when the DC array output exceeds the inverter’s maximum AC power capacity, causing the inverter to shift the array’s operating voltage off the maximum power point and discard the excess DC power.
Clipping loss depends on the inverter loading ratio (ILR), which is the ratio of DC array nameplate to inverter AC nameplate. At ILR 1.20, clipping is typically 0.3–1% annually for most US and European climates. At ILR 1.30, it rises to 1–3%. At ILR 1.40 or above, clipping can reach 3–6% (NREL, Klise et al., “Optimizing Solar PV System Performance”; Solargis; PVsyst).
| ILR | Clipping loss | Effect on LCOE (illustrative) |
|---|---|---|
| 1.10 | under 0.3% | Inverter oversized; higher $/W cost |
| 1.20 | 0.3–1% | Typical US optimal range |
| 1.30 | 1–3% | Acceptable if LCOE-justified |
| 1.40 | 3–6% | Requires explicit LCOE analysis |
The economic logic of oversizing is sound: inverters cost significantly less per watt than modules, so running a larger array through the same inverter reduces the weighted cost of the combined system. Clipping removes the peak production that would otherwise occur on the brightest days of the year — days where energy may also be least valuable if wholesale prices are low at midday. The optimal ILR is found by modeling LCOE across a range of ILRs using the site’s TMY irradiance distribution.
In PVsyst, clipping is modeled precisely from the inverter efficiency curve and the DC power distribution — each time step where DC power exceeds inverter rated input is counted as clipped. The loss is reported separately in the loss cascade. PVWatts does not explicitly model clipping; it caps DC output at inverter capacity but does not report the clipped energy. SurgePV models clipping from the hourly DC power distribution and the inverter specification, reporting it as a separate line item in the loss waterfall. See the inverter clipping and inverter loading ratio (ILR) glossary entries for methodology.
15. AC Ohmic, Transformer, and Availability
Definition: Three availability-and-grid-connection losses: AC cable ohmic resistance from inverter to point of interconnection (~1%), low-voltage to medium-voltage transformer copper and iron losses (~1–2%), and system downtime due to inverter faults, grid outages, maintenance, and monitoring response time (~0.5–3%).
AC cable losses are typically 1% for well-designed systems and are calculated the same way as DC ohmic losses — I²R across the AC conductor run length and cross-section. LV/MV transformer losses consist of load-dependent copper losses and constant no-load iron losses; combined efficiency is typically 98–99% per IEC 60076, giving a loss of 1–2% (PVWatts; IEC 60076; NREL fleet availability).
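The I²R arithmetic for a three-phase AC run can be sketched directly; the conductor resistance, run length, and voltage below are illustrative assumptions, not values from the text.

```python
import math

def ac_cable_loss_fraction(p_kw, v_ll, length_m, r_ohm_per_km, pf=1.0):
    """Three-phase AC cable loss as a fraction of transmitted power.
    p_kw: inverter AC output; v_ll: line-to-line voltage (V);
    r_ohm_per_km: per-conductor resistance; length_m: one-way run length."""
    i = p_kw * 1000 / (math.sqrt(3) * v_ll * pf)   # line current (A)
    r = r_ohm_per_km * length_m / 1000             # per-conductor resistance (ohm)
    p_loss_w = 3 * i ** 2 * r                      # I^2R summed over three phases
    return p_loss_w / (p_kw * 1000)

# e.g. 100 kW at 400 V over an 80 m run of 0.125 ohm/km conductor
print(round(ac_cable_loss_fraction(100, 400, 80, 0.125), 4))  # → 0.0062
```

At these assumed values the loss lands around 0.6%, consistent with the ~1% figure for well-designed systems; halving the conductor cross-section would roughly double it.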
PVWatts Availability Default
PVWatts applies a 3% default availability loss. Well-monitored systems with rapid fault response achieve 0.5–1% annual availability loss. The difference between 3% and 1% is worth roughly $6,000–$8,000 in 25-year revenue on a 100 kW system at $0.12/kWh. Remote monitoring with automated alert routing is the most cost-effective way to close this gap.
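That revenue figure is easy to check. The sketch below assumes a specific yield of 1,300 kWh/kW/yr — an assumption for illustration, not a value stated above — with the tariff and lifetime from the text.

```python
# Revenue impact of availability loss: 3% (PVWatts default) vs 1% (well-monitored).
system_kw = 100          # system size
specific_yield = 1300    # kWh/kW/yr -- assumed, varies by site
tariff = 0.12            # $/kWh
years = 25

annual_kwh = system_kw * specific_yield
delta = 0.03 - 0.01      # availability loss difference
revenue_gap = annual_kwh * delta * tariff * years
print(round(revenue_gap))  # → 7800
```

A sunnier site (higher specific yield) pushes the gap toward the top of the quoted $6,000–$8,000 range; a cloudier one pulls it below.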
Availability loss is controlled primarily by monitoring quality and O&M response time. Systems with string-level monitoring and 24-hour fault alerting achieve availability losses of 0.5–1% because faults are detected and dispatched within hours. Systems without monitoring or with slow O&M response can lose 2–3% annually as inverter faults go undetected for days or weeks.
In PVsyst, availability is applied as a user-input annual loss fraction or a monthly profile. Transformer losses are entered separately in the AC circuit definition. PVWatts applies a single annual availability factor (default 3%). SurgePV applies the PVWatts default of 3% as a conservative starting point, with user inputs for transformer efficiency and availability percentage based on the O&M plan. Projects with contracted monitoring and rapid dispatch can justify reducing availability input to 1%, which should be documented in the energy model assumptions.
Total System Loss Stack-Up: PVWatts, PVsyst, and Solargis
The table below shows all 15 factors with typical default values across the three major simulation platforms. Use this as a cross-check when validating energy models or comparing proposals from different designers.
| Loss factor | PVWatts default | PVsyst typical | Solargis typical |
|---|---|---|---|
| Horizon/irradiance shading | 0% (manual input) | 0.5–2% | 0–1% terrain-corrected |
| Near shading | 3% (bundled) | 0–7% modeled | 1–3% |
| Soiling | 2% | 1–5% monthly profile | 2–5% regional data |
| Snow | 0% | 0–5% monthly | 0–5% seasonal |
| IAM | Built-in (non-adj.) | 3–4.5% physical model | 3–4% |
| Spectral mismatch | 0% | ±1–2% (c-Si) | ±1% |
| Nameplate tolerance | 1% | 0–2% | 0–1% |
| LID | 1.5% (bundled) | 0.5–1.5% | 0.5–1.5% |
| LeTID | 0% | 0–3% (user input) | Not standard |
| Module temperature | Climate-adjusted | 6–18% modeled | 6–18% modeled |
| Module mismatch | 2% | 0.5–2% | 1–2% |
| DC ohmic | 2% | 1–2% | 1–2% |
| Long-term degradation | Not modeled | 0.5–0.7%/yr | 0.5–0.7%/yr |
| Inverter conversion | 4% (inv. eff. 96%) | 2.5–3% | 2.5–3% |
| AC ohmic + transformer + availability | 3% | 1.5–4% | 1.5–3% |
PVWatts applies its loss factors multiplicatively, not additively. The stated 14% default (14.08%, precisely) is calculated as the product of (1 − loss1) × (1 − loss2) × … across all factors, whose individual percentages sum to 15%. Field performance data from IEA-PVPS Task 13 shows that real-world systems average 17–22% total loss in additive terms — consistent with PVsyst and Solargis detailed models when site-specific soiling, thermal, and availability inputs are used. The gap between the PVWatts 14% and the actual 17–22% is almost entirely explained by simplified defaults for temperature, soiling, and availability.
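The difference between additive and multiplicative stacking is quick to verify against the PVWatts default components:

```python
from math import prod

# PVWatts default system-loss components, in percent
losses = {"soiling": 2, "shading": 3, "mismatch": 2, "wiring": 2,
          "connections": 0.5, "LID": 1.5, "nameplate": 1, "availability": 3}

additive = sum(losses.values())  # naive sum of the components
multiplicative = (1 - prod(1 - v / 100 for v in losses.values())) * 100

print(f"additive: {additive}%  multiplicative: {multiplicative:.2f}%")
# additive: 15.0%  multiplicative: 14.08%
```

The multiplicative total is always smaller than the sum because each factor acts on energy already reduced by the factors before it.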
When comparing energy models from competing proposals, the most important cross-check is to verify that temperature loss, soiling, and availability are entered as site-specific values rather than PVWatts defaults. A proposal claiming 22% more generation than a site-calibrated model almost always traces back to one of those three factors.
Performance Ratio Benchmarks by System Type
Performance ratio is the cleanest single metric for comparing loss stacks across systems in different climates. Because PR is normalized by available irradiance, a German system and an Arizona system can be meaningfully compared on PR even though their absolute yields differ by 60%.
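The normalization is a one-line calculation, following the IEC 61724 definition; the system size, irradiance, and yield below are illustrative inputs, not benchmark data.

```python
def performance_ratio(e_ac_kwh, p_dc_stc_kw, poa_kwh_m2):
    """PR per IEC 61724: actual AC yield divided by the yield the array
    would produce at STC efficiency under the same plane-of-array
    irradiance. With G_STC = 1 kW/m2, reference yield = P_STC * POA."""
    reference_yield_kwh = p_dc_stc_kw * poa_kwh_m2 / 1.0
    return e_ac_kwh / reference_yield_kwh

# e.g. a 100 kW array under 1,900 kWh/m2 annual POA producing 155,800 kWh AC
print(round(performance_ratio(155_800, 100, 1_900), 2))  # → 0.82
```

Because POA irradiance appears in the denominator, PR strips out the resource difference between sites — which is exactly what makes the cross-climate comparison below meaningful.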
| System type | PR range | Key drivers |
|---|---|---|
| Residential rooftop | 0.78–0.85 | Near shading, temperature, inverter efficiency, soiling |
| C&I rooftop | 0.78–0.83 | Soiling, temperature, availability, AC ohmic |
| Utility fixed-tilt | 0.80–0.85 | Temperature, soiling, inter-row shading, availability |
| Utility single-axis tracker | 0.78–0.83 | Temperature, soiling, tracker availability, clipping |
Source: IEA-PVPS Task 13 — Performance and Reliability of Photovoltaic Systems
Trackers have a counter-intuitive PR range that overlaps with or falls slightly below fixed-tilt. This is because trackers produce more energy in absolute terms (15–25% more yield) but also experience more temperature loss (modules face the sun more directly at midday, driving higher cell temperatures) and introduce tracker availability as a new loss factor. The PR denominator — POA irradiance — also increases with tracking, so the ratio does not always favor trackers even when absolute yield does.
Red flags: a PR below 0.75 in a temperate climate almost always indicates a commissioning error, an undetected shading problem, underperforming inverters, or degraded modules. PR above 0.87 in any real-world system typically indicates that the modeled irradiance is too low (understating the available resource makes PR appear higher) rather than genuine outperformance. Both flags warrant investigation before accepting the model.
The performance ratio benchmarks glossary entry has the full IEA-PVPS Task 13 dataset by country. To translate PR benchmarks directly into revenue projections, use the generation and financial tool to model yield against site-specific irradiance.
Climate-Specific Loss Profiles
Applying a loss template from the wrong climate zone is one of the most common and costly energy modeling errors. The four profiles below give starting-point loss assumptions for the four major climate categories.
Hot-Arid (US Southwest, India, Middle East, Australia)
| Loss factor | Typical range |
|---|---|
| Module temperature | 12–18% |
| Soiling | 5–25% |
| IAM | 2.5–3.5% (high irradiance reduces relative IAM) |
| LID/LeTID (PERC) | 1.5–3% first year |
| Availability | 1–2% |
| Typical PR | 0.76–0.81 |
High cell temperatures and soiling dominate this profile. The IAM loss is proportionally lower in percentage terms because the denominator (available irradiance) is high. Cleaning intervals of 4–8 weeks are often required to keep soiling within budgeted loss targets.
Temperate (Germany, UK, France, US Northeast)
| Loss factor | Typical range |
|---|---|
| Module temperature | 6–8% |
| Soiling | 1–2% |
| Snow | 1–5% |
| Spectral mismatch | ±1% |
| Near shading | 3–8% (urban rooftops) |
| Typical PR | 0.80–0.85 |
Temperature and soiling are both low. Near shading on urban rooftops becomes the dominant variable. Annual rainfall self-cleans most temperate systems to 1–2% soiling with minimal intervention.
Cold-Snowy (Scandinavia, Canada, US Upper Midwest)
| Loss factor | Typical range |
|---|---|
| Snow | 5–12% |
| Module temperature | 4–6% (cold ambient reduces thermal loss) |
| IAM | 3.5–4.5% (low winter sun angles) |
| Availability | 1–2% (winter fault access harder) |
| Typical PR | 0.78–0.83 |
Snow dominates the winter months. Temperature losses are lower than temperate climates because cold ambient air actually cools modules below 25°C at low irradiance levels, producing a positive thermal contribution. IAM losses are higher because winter sun angles are very low, increasing reflection.
Tropical (Brazil, Southeast Asia)
| Loss factor | Typical range |
|---|---|
| Module temperature | 8–12% |
| Soiling | 2–4% (frequent rain self-cleans) |
| Spectral | +1–2% (CdTe advantage in diffuse) |
| Availability | 1–2% |
| Typical PR | 0.78–0.82 |
High humidity and frequent cloud cover moderate both soiling and peak temperatures somewhat compared to hot-arid climates. The spectral profile of tropical diffuse light favors CdTe over c-Si in terms of energy yield per watt rated.
Pro Tip: Hot-Arid Climate Modeling
The biggest modeling mistake in hot-arid markets is applying a temperate loss template. Thermal loss in Phoenix, Riyadh, or Ahmedabad runs 12–18% vs. 6–8% in Germany. That 6–10 percentage point difference directly maps to a 7–12% generation overstatement — enough to make an otherwise marginal project appear bankable.
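The thermal gap between those climates can be sanity-checked with the standard NOCT cell-temperature approximation. The coefficient and ambient conditions below are assumed round numbers for illustration, not site data.

```python
def cell_temperature(t_ambient_c, irradiance_w_m2, noct_c=45.0):
    """NOCT approximation: T_cell = T_amb + (NOCT - 20) / 800 * G."""
    return t_ambient_c + (noct_c - 20.0) / 800.0 * irradiance_w_m2

def thermal_loss_pct(t_cell_c, gamma_pct_per_c=-0.35):
    """Power loss vs STC (25 C) from the Pmax temperature coefficient."""
    return -gamma_pct_per_c * (t_cell_c - 25.0)

# Hot-arid midday (Phoenix-style): 40 C ambient, 1000 W/m2
t_hot = cell_temperature(40, 1000)            # 71.25 C
print(round(thermal_loss_pct(t_hot), 1))      # → 16.2

# Temperate midday (Germany-style): 20 C ambient, 700 W/m2
t_mild = cell_temperature(20, 700)            # 41.9 C
print(round(thermal_loss_pct(t_mild), 1))     # → 5.9
```

The instantaneous midday values bracket the annualized 12–18% vs. 6–8% ranges above, since midday hours carry most of the energy weight.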
How SurgePV Models All 15 Loss Factors in One Workspace
The following is a workflow description, not a feature list. The goal is to show how the 15 factors connect across the design-to-proposal process.
Step 1 — 3D rooftop model in solar designing. The designer uploads a satellite image or CAD file and builds a 3D obstruction model: chimneys, parapets, HVAC units, adjacent buildings, ridgelines. This obstruction geometry feeds directly into the shadow engine. Horizon shading and near-shading are defined at this stage. Getting the 3D model right is the foundation for accurate optical loss modeling downstream.
Step 2 — Shadow analysis. SurgePV runs a physics-based shading simulation for every module, every hour, across a full TMY. Near shading and inter-row shading are calculated separately. The output is a per-module hourly irradiance matrix that captures the exact shading signature of the layout. String mismatch from shading is calculated from the per-module results using the actual string topology defined by the designer.
Step 3 — Generation and financial tool. The hourly irradiance matrix from the shadow analysis is loaded into the generation engine. Satellite-derived TMY irradiance with terrain correction is available as an alternative starting point for sites where a measured irradiance dataset is not available. The user inputs the soiling profile, degradation rate, availability percentage, and DC/AC ohmic assumptions. IAM, temperature, and spectral corrections are applied from the module specification database. Inverter conversion and clipping are modeled from the inverter efficiency curve and the DC power distribution. The tool outputs annual yield, PR, and a loss waterfall chart with all 15 factors as separate line items.
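A loss waterfall of the kind Step 3 describes can be sketched as sequential multiplicative steps, where each factor acts on the energy remaining after the previous one. The factor names and percentages below are illustrative placeholders, not SurgePV defaults or outputs.

```python
# Sequential loss waterfall: each line item is the energy removed at
# that step, so the items do not simply sum to the total derate.
factors = [("near shading", 0.03), ("soiling", 0.02), ("IAM", 0.03),
           ("temperature", 0.08), ("mismatch", 0.01), ("DC ohmic", 0.015),
           ("inverter", 0.027), ("clipping", 0.008), ("availability", 0.02)]

energy = 100.0  # start at 100% of nameplate-equivalent DC energy
waterfall = []
for name, loss in factors:
    step = energy * loss              # energy removed by this factor
    waterfall.append((name, round(step, 2)))
    energy -= step

print(f"delivered: {energy:.1f}% of nameplate")  # → delivered: 78.2% of nameplate
```

Reporting the per-step items (the `waterfall` list) rather than a single derate is what makes the chart auditable: each factor’s contribution is visible at the point in the chain where it applies.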
Step 4 — Export to solar proposals. The yield model, PR, and loss assumptions flow into the client report. The loss waterfall chart is included as a production-quality visual. Clients can see exactly what drives the projected yield — a more credible and transparent deliverable than a single production number.
Clara AI flags values that appear climatically anomalous — for example, a soiling input of 2% on a site in Rajasthan, or an availability input of 0.5% on a system without remote monitoring.
One honest caveat: SurgePV does not substitute for a full PVsyst bankable energy assessment on utility-scale projects where lender due diligence requires a third-party P50/P90 analysis with independent meteorological review. For residential and C&I design-to-proposal workflows — which represent the majority of installer volume — the 15-factor stack is fully modeled in the solar software platform in one workspace without requiring a separate simulation license.
Conclusion
Solar system losses are not a fixed constant — they are a consequence of design choices, equipment specifications, and O&M planning. The 15 factors described in this guide respond to decisions made before and during installation, not after.
Three concrete next actions:
- Run your next design with explicit inputs for all 15 factors. Do not accept a single lumped derate. Assign individual percentages to temperature, soiling, shading, mismatch, and availability based on the site’s climate and the system’s monitoring plan. The difference between a calibrated model and PVWatts defaults is typically 3–8% in annual generation — enough to change the financial model outcome.
- Compare your actual system PR against the benchmarks in this guide. A PR below 0.75 in a temperate climate or below 0.76 in a hot climate is a diagnostic signal, not a normal outcome. Pull monitoring data, isolate the underperforming circuit, and trace the loss to its source.
- Book a demo to see the 15-factor loss waterfall alongside the financial model. The value of a detailed loss stack is not the accuracy alone — it is the ability to show clients exactly what drives the projected yield, which makes proposals more credible and reduces post-installation disputes.
Using solar design software that models the full loss stack from shading to AC delivery is the most direct way to close the gap between nameplate capacity and real-world revenue.
See the Full Loss Stack in Your Next Design
SurgePV models all 15 loss factors — shadow analysis, thermal derate, clipping, degradation — and outputs a loss waterfall chart alongside the financial model.
Book a Demo
No commitment required · 20 minutes · Live project walkthrough
Frequently Asked Questions
What is the typical total loss for a solar PV system?
Most grid-tied systems lose 14–25% of nameplate DC capacity. PVWatts uses a 14% default; field data from IEA-PVPS Task 13 shows real-world systems average 17–22%. The gap is usually temperature, soiling, and availability losses that simplified tools understate.
What does PVWatts’ 14% default derate factor include?
PVWatts bundles soiling (2%), shading (3%), mismatch (2%), wiring (2%), connections (0.5%), LID (1.5%), nameplate tolerance (1%), and availability (3%). Snow and spectral losses are not in the default. The total is computed multiplicatively, not additively.
What is the difference between LID and LeTID in solar panels?
LID is a first-hours power loss from boron-oxygen defects in p-type c-Si, typically 1–1.5%. LeTID is a slower, deeper loss in PERC modules triggered at elevated operating temperatures over the first summer, ranging 1–6% in unstabilized cells. Both are eliminated by specifying n-type modules (TOPCon, HJT).
How does soiling affect solar panel output?
Soiling reduces module transmittance. The US average is around 5% annually per NREL field data, but arid sites with low rainfall can reach 5–25%. Cleaning frequency has measurable impact: NREL data shows annual cleaning reduces soiling loss from 1.9% to 1.5%.
What is inverter clipping and when does it become a problem?
Clipping occurs when the DC array produces more power than the inverter can convert to AC, so the excess is lost. It becomes economically significant above ILR 1.30, where clipping losses typically exceed 1–3%. Optimal ILR for most US markets is 1.20–1.30 when modeled against LCOE.
How does module temperature affect solar system output?
Solar modules lose power as temperature rises above 25°C. The Pmax temperature coefficient for c-Si modules is −0.30% to −0.45% per °C. Annualized loss is 6–12% in temperate climates and 10–18% in hot climates. HJT modules have the lowest coefficient (−0.26%/°C).
What performance ratio should I expect from my solar system?
Residential rooftop systems typically achieve PR 0.78–0.85; C&I rooftop 0.78–0.83; utility fixed-tilt 0.80–0.85; utility tracker 0.78–0.83. PR below 0.75 in a temperate climate usually indicates a shading, wiring, or commissioning problem. Source: IEA-PVPS Task 13.