Over the last couple of weeks, I’ve been diving deep into AlsoEnergy’s documentation and auditing the expected output of our systems against the PVSyst models we generated to prove they were financially viable assets. My findings so far show significant inconsistencies between how sites have been set up (not Also’s fault; we’ve handled this internally) and the models, which has been no small source of frustration and confusion for our Asset Management, Engineering, and Finance departments.
A quick breakdown:
- The PVSyst model uses data from product manufacturers (solar modules, inverters, etc.) along with historical weather data and shading maps to provide a reasonably accurate projection of how a system will perform.
- AlsoEnergy sites are set up by adding the same equipment, along with the weather sensors installed on site, and the platform generates expected-output estimates in real time from live weather data and the equipment specs.
- AlsoEnergy offers several different options for modeling systems, each requiring different information to be accurate, or falling back on assumptions where that information is missing.
Essentially, this amounts to double work: recreating, on a different platform, the same model the system was built on so it can serve as the baseline from which the weather-adjusted estimates are generated. But there’s an important distinction: the PVSyst model is also weather-adjusted; it just uses historical weather data instead of momentary, on-site measurements.
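To make that distinction concrete, here is a rough sketch with made-up numbers, using a single PVWatts-style derate in place of a full loss stack. The same expected-energy calculation is fed two different weather inputs: a historical (TMY-style) dataset, the way PVSyst works, and on-site sensor readings, the way the monitoring platform works.

```python
# Simplified expected-energy sketch (illustrative only; real models include
# transposition, shading, soiling, clipping, and many more loss terms).

def expected_power_kw(poa_wm2, cell_temp_c, dc_capacity_kw=1000.0,
                      gamma_pdc=-0.0035, system_derate=0.86):
    """Expected AC power from plane-of-array irradiance and cell temperature."""
    temp_factor = 1 + gamma_pdc * (cell_temp_c - 25.0)   # temperature derate
    return dc_capacity_kw * (poa_wm2 / 1000.0) * temp_factor * system_derate

# Hypothetical hourly samples: (irradiance W/m^2, cell temperature C)
tmy_hours = [(0, 10), (450, 28), (820, 41), (610, 38)]      # historical weather (design model)
measured_hours = [(0, 9), (380, 26), (905, 44), (655, 40)]  # on-site sensors (monitoring)

tmy_kwh = sum(expected_power_kw(g, t) for g, t in tmy_hours)
measured_kwh = sum(expected_power_kw(g, t) for g, t in measured_hours)

print(f"Baseline from historical weather: {tmy_kwh:.0f} kWh")
print(f"Baseline from measured weather:   {measured_kwh:.0f} kWh")
# Same plant, same equation: two different "weather-adjusted" expectations.
```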
Where the Disconnect Happens:
This is where many portfolios run into trouble. PVSyst is static—it captures design intent at a single point in time. Monitoring platforms like AlsoEnergy are dynamic—they reflect as-built reality and constantly changing site conditions.
If the digital twin inside the monitoring platform doesn’t perfectly mirror the configuration and assumptions in PVSyst—things like stringing, inverter clipping limits, module binning, soiling losses, or albedo—it can lead to two parallel truths about performance that never fully align.
One says: “The plant is underperforming.”
The other says: “It’s performing exactly as modeled.”
Both can be right—depending on which model you’re comparing against.
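A toy example with made-up numbers shows how easily the two verdicts diverge. The only difference between the baselines below is the assumed loss factor, yet the same metered day clears a 95% threshold against one and trips an underperformance flag against the other:

```python
# Hypothetical numbers: one measured day judged against two baselines whose
# only difference is the assumed loss stack.

measured_kwh = 4_900                  # metered production for the day
poa_insolation_kwh_m2 = 6.1           # plane-of-array insolation for the day
dc_capacity_kw = 1_000.0

baselines = {
    "PVSyst (design) baseline": 0.820,      # heavier loss stack: soiling, mismatch, wiring, ...
    "Monitoring-platform baseline": 0.865,  # lighter loss assumptions in the site setup
}

for name, loss_factor in baselines.items():
    expected_kwh = dc_capacity_kw * poa_insolation_kwh_m2 * loss_factor
    ratio = measured_kwh / expected_kwh
    verdict = "performing as modeled" if ratio >= 0.95 else "underperformance flag"
    print(f"{name}: expected {expected_kwh:,.0f} kWh, ratio {ratio:.2f} -> {verdict}")
```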
Why This Matters:
For operators, this discrepancy muddies communication between departments. Asset managers base revenue projections on the financial model. Engineers validate against PVSyst. Operators and technicians see live performance from the monitoring platform. If those three worlds don’t agree, the result is unnecessary noise—misallocated investigations, mis-tagged downtime, and even false underperformance flags.
Bridging the Gap:
There’s no single fix, but a few principles can help close the loop between modeling and monitoring:
- Align the assumptions early. Make sure the PVSyst loss factors, tilt/azimuth, and module counts are reflected exactly in the monitoring platform setup (see the reconciliation sketch after this list).
- Treat PVSyst as “design intent” and AlsoEnergy as “as built.” Use commissioning and punch-list data to reconcile differences—module swaps, string re-routes, inverter replacements, etc.—so the monitoring baseline reflects the true physical plant.
- Maintain a versioned digital twin. Update the monitoring model whenever the system is modified. A 0.5° tilt change or new inverter firmware can shift expected output enough to matter.
- Normalize expectations with consistent weather data. For true performance benchmarking, use the same weather dataset across both platforms—either by importing measured weather into PVSyst or exporting modeled irradiance into the monitoring system.
- Communicate across teams. Engineering, Asset Management, and Finance should all share a unified “expected performance” source of truth, even if that means documenting the translation between models.
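As a starting point for that alignment, even a simple script that diffs the key assumptions between the two platforms can surface drift before it turns into disputed performance numbers. The field names, values, and tolerances below are purely illustrative and not any vendor’s actual schema:

```python
# Hypothetical reconciliation check: compare the design model's key assumptions
# against the monitoring platform's site configuration and flag any drift.

pvsyst_params = {
    "tilt_deg": 25.0,
    "azimuth_deg": 180.0,
    "module_count": 2940,
    "soiling_loss_pct": 2.0,
    "inverter_ac_limit_kw": 800.0,
}

monitoring_params = {
    "tilt_deg": 25.5,            # as-built tilt drifted from design
    "azimuth_deg": 180.0,
    "module_count": 2928,        # punch-list module swap never carried over
    "soiling_loss_pct": 1.0,
    "inverter_ac_limit_kw": 800.0,
}

# How much difference we tolerate before flagging a mismatch
tolerances = {"tilt_deg": 0.25, "azimuth_deg": 1.0, "module_count": 0,
              "soiling_loss_pct": 0.5, "inverter_ac_limit_kw": 0.0}

for key, design_value in pvsyst_params.items():
    as_built = monitoring_params.get(key)
    if as_built is None or abs(as_built - design_value) > tolerances[key]:
        print(f"MISMATCH {key}: design={design_value}, monitoring={as_built}")
```

Run during commissioning and again after every change order, a check like this keeps the monitoring baseline honest about the physical plant instead of the original drawings.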
Final Thought:
At the end of the day, monitoring should match modeling—but only if we ensure the two speak the same language. As the industry matures, the gap between “modeled energy” and “measured energy” will keep shrinking, not just through better tools, but through tighter collaboration between the people who use them.
If we can make our models not just accurate but aligned, we’ll spend less time explaining the numbers—and more time improving them.