
Memberships

Solar Operations Excellence

170 members • Free

6 contributions to Solar Operations Excellence
Top tips (reverse tip-top) to narrow your LCOE – from an operations perspective
I’d like to open a practical discussion focused on what truly reduces LCOE over the lifetime of a solar asset. No buzzwords. No glossy presentations. Just the operational decisions that consistently make or break performance. The logic is simple: the more optimized your production costs are, the stronger your IRR. Revenue strategy and trading merit a separate topic. Here, I’m interested specifically in the operational layer: Where do we gain the most efficiency? Where do we silently lose it?

And maybe the more uncomfortable truths: Where do EPC shortcuts cost you for the next 25 years? What does “bankable performance ratio” even mean in practice? Which suppliers survive the warranty period and which… don't.

I’m looking to gather what actually works in the field, across regions, climates, EPC models, and asset sizes. So, from your experience: What are the concrete practices that consistently reduce LCOE? And where do operators commonly leave money on the table? Let’s go real :)
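To make the "optimized production costs vs. IRR" logic concrete, here is a minimal LCOE sketch: lifetime-discounted costs divided by lifetime-discounted energy. Every number below (capex, O&M, yield, rates) is purely illustrative, not data from any real plant.

```python
# Minimal LCOE sketch: discounted lifetime cost / discounted lifetime energy.
# All inputs are hypothetical, for illustration only.

def lcoe(capex, annual_opex, annual_energy_kwh, degradation, discount_rate, years):
    """Levelized cost of energy in $/kWh (simple real-discounting form)."""
    costs = capex
    energy = 0.0
    for t in range(1, years + 1):
        df = (1 + discount_rate) ** t
        costs += annual_opex / df
        energy += annual_energy_kwh * (1 - degradation) ** (t - 1) / df
    return costs / energy

# Hypothetical 10 MW plant: $8M capex, $120k/yr O&M, 18 GWh/yr first-year
# yield, 0.5%/yr degradation, 6% discount rate, 25-year life.
print(round(lcoe(8e6, 120e3, 18e6, 0.005, 0.06, 25), 4))  # $/kWh
```

Plugging in your own O&M and degradation assumptions makes it easy to see which lever (opex reduction vs. availability/yield protection) moves LCOE more for a given asset.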
1 like • 25d
@Călin Sas and @David Moser: this is the only real evidence I have seen showing direct impact of EPC quality on O&M: https://solargrade.io/articles/2025-solargrade-pv-health-report/
1 like • 24d
@Călin Sas I did not notice that David Penalva is here too! 😀
PV + BESS design and monitoring
Hello everyone, I think this is a long shot, but does anyone in this group work with DC-coupled PV + BESS? We are interested in the PV + BESS design aspect, but also the operations and monitoring. For example:
- What tools does the industry use for designing such systems? Is it a siloed process (e.g., PVsyst for PV and maybe an Excel spreadsheet for the BESS) or are there any integrated tools out there?
- What about the monitoring? Is that still siloed?
- Separate KPIs?
Thanks, Marios
2 likes • 25d
@Fredy Canizares thank you for sharing this! It sounds like a lot of DC/AC and AC/DC conversions and wiring, so I wonder what the main reasons are to opt for such a design as opposed to DC-coupled. Is it lower complexity and/or higher reliability? It surely comes with lower yield and maybe higher costs? I assume curtailed energy can be stored even in an AC-coupled system? Thanks again!
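The conversion-count question can be made concrete with a back-of-envelope efficiency chain for energy that flows PV → battery → grid. The stage efficiencies below are illustrative assumptions, not vendor data, and real designs differ (e.g., hybrid inverters, clipping recapture), but the structure of the comparison holds.

```python
# Back-of-envelope path efficiency for stored PV energy, AC- vs DC-coupled.
# Stage efficiencies are illustrative assumptions, not vendor figures.

def chain(*stages):
    """Multiply per-stage efficiencies into an overall path efficiency."""
    eff = 1.0
    for s in stages:
        eff *= s
    return eff

INV = 0.98    # assumed inverter efficiency (one DC->AC conversion)
PCS = 0.975   # assumed battery PCS efficiency (one conversion)
DCDC = 0.985  # assumed DC/DC converter efficiency
BATT = 0.96   # assumed battery round-trip (DC-to-DC) efficiency

# AC-coupled: PV DC -> AC (PV inverter), AC -> DC (PCS), battery, DC -> AC (PCS)
ac_path = chain(INV, PCS, BATT, PCS)
# DC-coupled: PV DC -> battery bus (DC/DC), battery, DC -> AC (shared inverter)
dc_path = chain(DCDC, BATT, INV)

print(f"AC-coupled stored-energy path: {ac_path:.1%}")
print(f"DC-coupled stored-energy path: {dc_path:.1%}")
```

Under these assumed numbers the DC-coupled path retains a few percentage points more of the stored energy, which is consistent with the intuition that AC coupling trades yield for modularity and retrofit simplicity.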
3 likes • 24d
@Fredy Canizares: These are brilliant insights, thank you very much for taking the time to explain!
What's your idea?
What would you change about the solar industry, if you were given the chance?
2 likes • 25d
To stop recycling 10+ year-old contracts and move on to performance guarantees (assuming stakeholders are involved from the beginning of the project).
Is overrating of solar modules a problem for O&M?
Overrating means that the actual power output of a solar module is lower than its rated value. Measurements by SecondSol and Fraunhofer CSP show deviations of up to three watts below the nominal power – officially mentioned in the power production tolerance, but in practice often systematically undercut due to price pressure. Have you already observed this issue in the field?
3 likes • 27d
@Stefan Wippich , we also saw deviations of up to 5%, see Figure 3 here: onlinelibrary.wiley.com/doi/full/10.1002/pip.3615 These modules were purchased from the open market so they might not be representative of what is installed in utility scale. Large module procurement contracts typically include provisions for adjusting the rating following tests by a 3rd party lab.
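The contract provisions mentioned above usually come down to a simple check of flash-test power against nameplate minus the stated tolerance. Here is a sketch of that check; the module rating, tolerance, and measured value are all hypothetical.

```python
# Sketch: flag a flash-test result that falls below nameplate minus the
# stated power tolerance. Module numbers are hypothetical.

def check_rating(nameplate_w, measured_w, tolerance_w=0.0):
    """Return (deviation in % of nameplate, True if tolerance is breached)."""
    deviation_pct = 100.0 * (measured_w - nameplate_w) / nameplate_w
    breach = measured_w < nameplate_w - tolerance_w
    return deviation_pct, breach

# Hypothetical 550 W module with a 0/+5 W tolerance, flash-tested at 541 W
dev, breach = check_rating(550.0, 541.0, tolerance_w=0.0)
print(f"deviation: {dev:.2f}%  breach: {breach}")
```

In practice the measured value itself carries lab uncertainty (typically a couple of percent), so procurement contracts often only trigger adjustments when the shortfall exceeds both the tolerance and the measurement uncertainty.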
Does the Monitoring Match the Modeling?
Over the last couple of weeks, I’ve been diving deep into AlsoEnergy’s documentation and auditing the expected output of our systems against the PVsyst models we generated to prove they’re financially viable assets. My findings so far have shown significant inconsistencies in the way that sites have been set up (not Also’s fault; we’ve handled this internally) compared to the models, which has created no small source of frustration and confusion for our Asset Management, Engineering, and Financial departments.

A quick breakdown:
- The PVsyst model uses data from product manufacturers (solar modules, inverters, etc.) along with historical weather data and shading maps to provide a reasonably accurate projection of how a system will perform.
- AlsoEnergy sites get set up by adding the same equipment, along with the weather sensors installed at the site, and the platform creates estimates in real time based on weather data and equipment specs.
- AlsoEnergy has several different options for modeling systems, each requiring different information to be accurate, or substituting various assumptions.

Essentially, this seems like double work: attempting to recreate the same model the system was built on, in a different platform, to act as the baseline from which the weather-adjusted estimates are generated. But there’s an important distinction: the PVsyst model is also weather-adjusted, but it uses historical weather data instead of momentary/current data.

Where the disconnect happens: This is where many portfolios run into trouble. PVsyst is static: it captures design intent at a single point in time. AlsoEnergy (and other monitoring platforms) are dynamic: they reflect as-built reality and constantly changing site conditions.
If the digital twin inside the monitoring platform doesn’t perfectly mirror the configuration and assumptions in PVSyst—things like stringing, inverter clipping limits, module binning, soiling losses, or albedo—it can lead to two parallel truths about performance that never fully align.
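A toy illustration of how those "two parallel truths" arise: even small mismatches in per-category loss assumptions between the design model and the monitoring twin compound multiplicatively into a persistent expected-energy gap. All loss percentages below are made up.

```python
# Toy example: small loss-assumption mismatches between a design model
# (PVsyst-style) and a monitoring digital twin compound into a persistent
# expected-energy gap. All percentages are illustrative, not real data.

design_losses = {
    "soiling": 0.02,
    "clipping": 0.01,
    "module_binning": 0.003,
    "albedo_gain": -0.005,  # negative loss = gain in the design model
}
monitor_losses = {
    "soiling": 0.03,        # twin assumes heavier soiling
    "clipping": 0.015,      # twin assumes more clipping
    "module_binning": 0.0,  # twin omits binning entirely
    "albedo_gain": 0.0,     # twin omits the albedo gain
}

def derate(losses):
    """Combine fractional losses multiplicatively into one derate factor."""
    f = 1.0
    for loss in losses.values():
        f *= (1.0 - loss)
    return f

gap = derate(monitor_losses) / derate(design_losses) - 1.0
print(f"expected-energy gap, twin vs design: {gap:+.2%}")
```

Here four modest mismatches produce a gap of roughly minus two percent, which is exactly the kind of structural bias that no amount of downstream KPI debugging will reconcile.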
1 like • Nov 7
@Joshua Hamsa, I really enjoyed this information, thank you. I posted about it 2 days ago on LinkedIn: https://www.linkedin.com/feed/update/urn:li:activity:7391636790425743360/ We need to come up with a standardized reporting approach. IEC 61724 talks about predicted and expected energy. The problem is that the expected energy is supposed to be calculated by re-running PVsyst with actual weather data. We all know that PVsyst was not designed for operations. Because of that, you are forced to run a "proxy" model. After discussing with ~20 PVMAC stakeholders (software providers, owners, IPPs, O&M providers), we grouped their approaches into 3 categories, and this is where things can get confusing. We are working on a process and these "definitions" are not finalized, but here they are to give you an overview:

1) Independent Proxy (IndProxy): a model (another software, or any pvlib model that you do not necessarily fit on data) that you simulate independently of PVsyst. You use the as-built information, derate assumptions, actual weather, etc., and you create a model that has nothing to do with PVsyst: the information or most inputs could be the same, but it is not fitted on PVsyst.
2) Calibrated Proxy (CalProxy): a model (e.g., PVUSA or PVWatts) that you fit on your PVsyst time series to extract some coefficients. Then you re-run it with actual weather to achieve weather adjustment.
3) FieldProxy: an independent model (e.g., based on ML) that is fitted on field time series, has no information about PVsyst, and captures actual performance.

Each approach has its pros and cons. For example, with a FieldProxy you will not be able to diagnose degradation, because you fit a model on field data that already includes degradation. And with a CalProxy you might not do well with nonlinearities. On top of these, we need to be clear about what is included in the expected yield model.
Where is the model bias (e.g., model error or hourly-to-subhourly bias)? Do we account for issues that are not due to plant underperformance (e.g., curtailment, grid unavailability, etc.)? This is easier to explain with a picture, so if you are interested, please check the first 3-5 slides of this presentation:
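Of the three categories, the CalProxy is the easiest to sketch in code: fit PVUSA-style coefficients to a design-model power series, then re-run the fitted model with actual weather. The PVUSA functional form P = POA * (a + b*POA + c*WS + d*Tamb) is the standard one; everything else below (the synthetic series standing in for PVsyst output, the weather ranges) is made up for illustration.

```python
# Minimal CalProxy sketch: fit PVUSA coefficients to a PVsyst-like power
# series, then evaluate the fit at actual weather for weather adjustment.
# All data are synthetic stand-ins, not real PVsyst output.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# "Design" weather (what the design simulation used)
poa = rng.uniform(100, 1000, n)   # plane-of-array irradiance, W/m^2
tamb = rng.uniform(0, 35, n)      # ambient temperature, degC
ws = rng.uniform(0, 8, n)         # wind speed, m/s

# Stand-in for the design-model hourly AC power series (PVUSA-like by construction)
p_design = poa * (12.0 - 0.002 * poa + 0.02 * ws - 0.05 * tamb)

# PVUSA regression: P = POA * (a + b*POA + c*WS + d*Tamb), linear in (a, b, c, d)
X = np.column_stack([poa, poa**2, poa * ws, poa * tamb])
coef, *_ = np.linalg.lstsq(X, p_design, rcond=None)

# Weather adjustment: evaluate the fitted coefficients at *actual* weather
poa_a, tamb_a, ws_a = rng.uniform(100, 1000, n), rng.uniform(0, 35, n), rng.uniform(0, 8, n)
X_actual = np.column_stack([poa_a, poa_a**2, poa_a * ws_a, poa_a * tamb_a])
p_expected = X_actual @ coef  # weather-adjusted expected power
```

Because the synthetic series is noise-free and exactly PVUSA-shaped, the fit recovers the coefficients essentially exactly; with a real PVsyst series the residuals of this step are one source of the model bias mentioned above.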
3 likes • 30d
@Stephen Lynch I totally agree with what you say. I would expect that performance engineers would not rely on PVsyst though, otherwise it is an apples-to-oranges comparison. This is where proxy models (or recalibrated models, as you say) come in. Both models (PVsyst and recalibrated) are useful, but for different purposes, and there are different metrics depending on the purpose:
1) Financial purposes: use the baseline energy performance index (BEPI), which is the ratio of predicted energy (e.g., PVsyst) to measured energy.
2) Performance purposes: use the EPI, which is the ratio of expected energy (i.e., whatever the recalibrated model produces; see my post above that breaks those into 3 categories) to measured energy.
We have standards (e.g., IEC 61724), but they are subject to interpretation, so this is why we need to work together to develop best practices and minimize confusion and calculation uncertainties.
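The two indices reduce to simple ratios once the three energy series are defined. This sketch follows the ratios exactly as stated in the comment (predicted-to-measured and expected-to-measured); the monthly energy values are hypothetical.

```python
# Sketch of the two indices, with the ratios as stated in the comment.
# Energy values (kWh) are hypothetical.

def bepi(predicted_kwh, measured_kwh):
    """Baseline energy performance index: predicted (e.g., PVsyst) / measured."""
    return predicted_kwh / measured_kwh

def epi(expected_kwh, measured_kwh):
    """Energy performance index: expected (weather-adjusted proxy) / measured."""
    return expected_kwh / measured_kwh

measured = 9_500.0    # hypothetical monthly metered energy
predicted = 10_000.0  # design-model (e.g., PVsyst P50) energy for the same month
expected = 9_700.0    # proxy model re-run with actual weather

print(f"BEPI = {bepi(predicted, measured):.3f}")  # financial view vs design basis
print(f"EPI  = {epi(expected, measured):.3f}")    # performance view vs actual weather
```

The gap between the two ratios is informative in itself: here most of the shortfall against the design basis is explained by weather, and only the remainder points to plant underperformance.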
Marios Theristis
@marios-theristis-3162
Leading the PV Performance Modeling and PV O&M Analytics programs at Sandia National Laboratories

Joined Nov 6, 2025
United States