Guest Blogs

Measuring Passive House Energy Performance

Monitoring data from two Passive House projects show the gap between predicted results and actual energy use

[Lead image] Passive House projects are on the rise. The number of projects certified in North America by the Passive House Institute US and by Germany's Passivhaus Institut are both growing, but PHIUS projects are gaining at a faster clip, according to data from the Pembina Institute. (All images: Passive House Institute US.)

Image captions:
Image #2: The cumulative square footage of Passive House projects in North America.
Image #3: The cumulative number of Passive House residential units in North America.
Image #4: Most of the bars in this graph (all except the blue bar) refer to a Passive House building in New York City. The last four bars compare monitored energy use to modeled energy use (as predicted by three different modeling runs). The blue bar shows the modeled energy use (using WUFI Passive software) of an imaginary building of the same size as the New York City building, one that meets the minimum requirements set by ASHRAE 90.1.
Image #5: Most of the bars in this graph (all except the blue bar) refer to a Passive House building in Hillsboro, Oregon. The last three bars compare monitored energy use to modeled energy use (as predicted by two different modeling programs). The blue bar shows the modeled energy use (using WUFI Passive software) of an imaginary building of the same size as the Hillsboro building, one that meets the minimum requirements set by ASHRAE 90.1.
Image #6: Three of the lines in this graph (all except the dotted blue line) refer to the Passive House building in New York City. The dotted blue line refers to the modeled energy use (using WUFI Passive software) of an imaginary building of the same size, one that meets the minimum requirements set by ASHRAE 90.1. The numbers in parentheses refer to the year (2015 or 2016). Curiously, the creator of the graph did not place the months in chronological order.
Image #7: Three of the lines in this graph (all except the dotted blue line) refer to the Passive House building in Hillsboro, Oregon. The dotted blue line refers to the modeled energy use (using WUFI Passive software) of an imaginary building of the same size, one that meets the minimum requirements set by ASHRAE 90.1.

After a period of growth in the ’70s and ’80s, and a brief hiatus in the ’90s, passive building principles and metrics are making an impressive comeback in North America. Passive principles were developed 40-plus years ago by pioneers including William Shurcliff, Rob Dumont, and Joe Lstiburek — to mention just a few. Today, these principles are broadly seen as critical for a renewable energy future.

Market transformation is in progress, and policymakers across the country have taken notice. According to data obtained by the Pembina Institute, passive building certifications in North America have grown at an exponential rate over the last few years.

At the core of this development has been the Passive House Institute US (PHIUS). In 2002, I built the first house in the U.S. to follow the German Passivhaus standard as a design guideline. The house was built in Urbana, Illinois. We built more homes as affordable housing, designed training curricula, and created the CPHC (Certified Passive House Consultant) and builder training programs. We’ve also facilitated sharing experiences through our annual conferences. These efforts helped to build a robust community of professionals that is now the driving force behind the growth of Passive House building.

The German Passivhaus Institut (PHI) briefly partnered with PHIUS to offer their certifications in the U.S., but the organizations parted ways in 2011 over disagreements about the appropriate passive standards approach. Starting in 2012, PHIUS significantly changed its certification protocols in order to assure climate concerns were addressed; to incorporate accepted industry practices recommended by the U.S. Department of Energy; and to introduce a third-party verification of the quality of implementation (working in concert with RESNET).

In North America today, two very different approaches to passive standards are used in the market, with the vast majority of units certifying through PHIUS. Beginning in 2014, PHIUS led a research effort to develop the PHIUS+ 2015 Standard. Based on data from projects across North America, the PHIUS+ 2015 standard was developed specifically for North America's varied climate zones. Since that launch, PHIUS+ certifications have grown exponentially, indicating that the program succeeded in removing barriers that had limited adoption.

A smaller number of projects and units are being certified to the European Passivhaus standard promoted by PHI. This certification method also is experiencing growth but at a much slower rate (see the graph at the top of the column, and Images #2 and #3 below).

A performance-based standard

Passive building is a performance-based standard. With the first measured data coming in from multifamily buildings, we were eager to know how well we (the community, the certifiers, and the tools) were doing. We wanted to become even better at assuring the quality of passive building construction, better at predicting and ensuring actual performance, and better at making sure our buildings maintain those levels of performance over time.

With a market that is starting to go gangbusters, reliable measured data is highly valuable from many different perspectives. For one, if we could show that passive standards can guarantee that performance will not deviate more than ±10% from what was modeled (I am probably being lax here; I'd like to see more like ±5%), it might actually convince our critics and modeling naysayers that there is something to the methodology and that the modeling is worth it.

And what potential this could have for the financing of high-performance, energy-efficient homes and buildings! Fannie Mae and Freddie Mac will currently underwrite only 75% of predicted (modeled) energy savings, which suggests that they expect models to be inaccurate (and in fairness, a lot of building energy models haven't been very accurate). And lastly, measured results might finally answer a question long debated among passive house folk: Which of the two very different certification approaches to the standard is more accurate in predicting actual performance? Wouldn't that be something useful to know?
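As a rough illustration of those two thresholds, here is a minimal sketch (in Python, with entirely hypothetical numbers) of a ±10% performance-tolerance check and the 75% credit a lender might apply to modeled savings.

```python
# Illustrative sketch only: a +/-10% performance-tolerance check and a lender's
# 75% credit on modeled savings. All numbers below are hypothetical.

def within_tolerance(measured_eui: float, modeled_eui: float, tol: float = 0.10) -> bool:
    """True if measured energy use is within +/-tol of the modeled value."""
    return abs(measured_eui - modeled_eui) / modeled_eui <= tol

def underwritable_savings(modeled_savings_kwh: float, credit: float = 0.75) -> float:
    """Portion of modeled savings a lender credits (75% in the example above)."""
    return modeled_savings_kwh * credit

# Hypothetical case: the model predicts 22.0 kBtu/sf*yr; the meters show 23.5.
print(within_tolerance(23.5, 22.0))       # True (deviation is about 6.8%)
print(underwritable_savings(10_000.0))    # 7500.0 kWh of modeled savings credited
```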

The data is coming in

A handful of our first certified passive multifamily projects are now complete and have been occupied for one year or more, and measured data is available. This is by no means a representative proof-of-concept study (although such a study is underway: a project funded by the PHIUS Industry Advisory Council will monitor 30 projects of all typologies in all relevant North American climate zones). But the data sets do offer good first insights as to where things are headed: How well do our tools work to assure performance in the field? Where are they not working? And where is there room for improvement?

The main questions guiding analysis are:

(1) How do passive building standards compare to an ASHRAE 90.1-2010 code baseline? How much better than code are the modeled predictions?

(2) Does the predicted (modeled) performance align in terms of total site energy use intensity (EUI, used here for normalization; see the sketch after this list)? And how do the two passive building standards, models, and protocols used in the market today (PHIUS+2015 with WUFI Passive, and PHI with PHPP) compare or differ?

(3) Does the measured performance align with the predicted performance in terms of total site EUI? And again, how do the two passive building standards, models, and protocols used in the market today (PHIUS+2015 with WUFI Passive, and PHI with PHPP) compare or differ?

(4) How significant are the performance gaps, if any?

(5) If there are performance gaps, how do we explain them and what can we do to improve the issues to close that gap in the future?
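For readers unfamiliar with the normalization used in questions (2) and (3), here is a minimal sketch of how total site EUI is typically computed. The conversion factors are standard; the building inputs are hypothetical, not data from either project.

```python
# Minimal sketch of total site energy use intensity (EUI), the normalization
# referenced in questions (2) and (3). Building inputs are hypothetical.

KBTU_PER_KWH = 3.412     # site electricity
KBTU_PER_THERM = 100.0   # natural gas

def site_eui_kbtu_per_sf(annual_kwh: float, annual_therms: float,
                         floor_area_sf: float) -> float:
    """Total annual site energy across fuels, divided by conditioned floor area."""
    total_kbtu = annual_kwh * KBTU_PER_KWH + annual_therms * KBTU_PER_THERM
    return total_kbtu / floor_area_sf

# Hypothetical multifamily building: 250,000 kWh, 5,000 therms, 45,000 sf.
print(round(site_eui_kbtu_per_sf(250_000, 5_000, 45_000), 1))  # about 30.1 kBtu/sf*yr
```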

For this blog we'll limit the discussion to two examples. The projects are located in New York City and Hillsboro, Oregon (Climate Zones 4 and 4 Marine). For each project we graphed what the two passive models had predicted the building would consume, what has actually been monitored, and, for comparison, the predicted (modeled) results if the project had been modeled and designed to meet ASHRAE 90.1-2010. (This last comparison can be confusing: the ASHRAE 90.1 results should be compared only to the other modeled data, not to the monitored data. A building designed to meet ASHRAE 90.1, once built and monitored, would presumably also show a performance gap; we just don't know what that gap would be for this particular project. Note that the software used to model the imaginary building, the one complying with the minimum requirements of ASHRAE 90.1, was WUFI Passive.)

Results are varied

The two passive models predicted surprisingly different total site energy savings relative to the imaginary building complying with the minimum requirements of ASHRAE 90.1. WUFI Passive, using PHIUS+2015 modeling protocols, predicted roughly a 30% improvement over the ASHRAE 90.1-2010 model. The Passive House Planning Package (PHPP), using PHI modeling protocols, predicted an improvement of 50% to 60%. That is a difference in predicted improvement of almost a factor of two between the two models.
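To make the arithmetic behind that factor-of-two difference concrete, here is a small sketch. The EUI values are illustrative only, not the project's actual figures; they simply show how two different modeled EUIs, measured against the same baseline model, produce roughly 30% versus roughly 55% predicted savings.

```python
# Sketch of the "percent improvement over the ASHRAE 90.1 model" arithmetic.
# All EUI values are illustrative, not the project's actual figures.

def improvement_over_baseline(model_eui: float, baseline_eui: float) -> float:
    """Predicted savings as a fraction of the baseline model's site EUI."""
    return 1.0 - model_eui / baseline_eui

baseline_eui = 33.0   # hypothetical ASHRAE 90.1-2010 model, kBtu/sf*yr
phius_eui = 22.0      # hypothetical WUFI Passive / PHIUS+2015 prediction
phi_eui = 14.0        # hypothetical PHPP / PHI prediction

print(f"PHIUS model: {improvement_over_baseline(phius_eui, baseline_eui):.0%}")  # ~33% better
print(f"PHI model:   {improvement_over_baseline(phi_eui, baseline_eui):.0%}")    # ~58% better
```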

At first glance one might think that the PHI standard is simply that much more stringent, but keep in mind that the building specifications for the modeled Passive House buildings are exactly the same. The difference can only be due either to differences in the algorithms that calculate the energy savings or to the modeling assumptions that each calculation protocol prescribes. (An independent NYSERDA study recently compared the PHIUS and PHI standards to ASHRAE 90.1-2010. The study is about to be published; a preview was presented at this year's NYPH conference in New York City. Its results were very similar to those we were seeing in our study.)

The first example project (the one in New York City) consumed significantly more energy than either of the passive models had predicted — one-third more than PHIUS had predicted and more than twice what PHI had predicted — but it still performed 17% better overall than the imaginary building meeting the minimum requirements of ASHRAE 90.1. (See Image #4 below.)
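Here is the gap arithmetic behind those percentages, as a minimal sketch. The WUFI Passive prediction (20.8 kBtu/sf·yr) and the measured value (27.7) are the figures quoted in comment #16 below; the PHI and ASHRAE 90.1 values are illustrative back-calculations, not published project data.

```python
# Gap arithmetic for the New York City project. The PHIUS prediction and measured
# value are quoted in comment #16 below; the PHI and ASHRAE 90.1 figures are
# illustrative back-calculations only.

def gap_vs_model(measured: float, modeled: float) -> float:
    """Fractional overshoot (or undershoot) of measured use relative to a model."""
    return measured / modeled - 1.0

measured_eui = 27.7        # kBtu/sf*yr, from utility bills
phius_model_eui = 20.8     # WUFI Passive / PHIUS+2015 prediction
phi_model_eui = 13.0       # illustrative PHPP / PHI prediction
baseline_model_eui = 33.4  # illustrative ASHRAE 90.1-2010 model

print(f"vs PHIUS model: {gap_vs_model(measured_eui, phius_model_eui):+.0%}")    # about +33%
print(f"vs PHI model:   {gap_vs_model(measured_eui, phi_model_eui):+.0%}")      # more than +100%
print(f"vs 90.1 model:  {gap_vs_model(measured_eui, baseline_model_eui):+.0%}") # about -17%
```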

This project did not go through full certification; only the design was pre-certified. This points to the fact that onsite quality assurance during construction, along with thorough commissioning and testing of all systems at completion, is indispensable to actual performance. In this particular case, the team was already experienced in building very efficient projects, but not to this level of performance. The team might have underestimated the importance of the additional passive-building-specific quality assurance checkpoints onsite and omitted them. This project also did not have a monitoring system, so utility bills were all that was available to assess performance.

Monitoring and feedback systems for passive-level buildings are emerging as a very useful, and perhaps necessary, tool: they help building operators and managers run their buildings appropriately, and they help identify where something might have gone wrong so it can be corrected once the building is in operation.

Further investigation is needed to explain the performance gap on this project. It could be that manufacturers overstate equipment performance, and those optimistic values are then used in the model. Some newer technologies are still fairly uncommon and may have been installed incorrectly. Other quality-assurance issues, such as a lack of onsite verification, missing or inaccurate blower-door tests, higher-than-designed ventilation rates, and a lack of final commissioning, could also play a role. Such outstanding concerns seem to be at the root of the performance gap.

Second project results

The second project (the one in Hillsboro, Oregon) was the best performer, beating the PHIUS+2015 predictions by 2% while it still consumed about 30% more than PHI predicted. (See Image #5 below).

The PHIUS+ model is just a bit conservative, slightly overpredicting the actual measured energy use. This is the result we like to see! But here again, circumstances matter, and success was likely: this team was the best prepared of these three pioneer teams, with a very experienced CPHC, an experienced builder, a committed developer, and an integrated project model that involved all team players.

Passive principles were not just added onto business as usual. The PHIUS+ onsite verification program had evolved by the time this project was under construction, and the project followed all onsite checkpoints required by the certification protocol. The team also specified a rather sophisticated monitoring system with occupant feedback. This system allows for very detailed data and performance analysis as well as continued tuning of the building's operations. (See Image #4 below.)

What it means

In conclusion:

(1) We at PHIUS may not be able to guarantee passive performance to be within 10% of the modeled results just yet, but we are working on it.

(2) Interestingly, occupants are better than their reputation; they were not major contributors to the performance gaps.

(3) Both passive models predict significantly better performance for the Passive House building than an imaginary building meeting the minimum requirements of ASHRAE 90.1, but the predictions differ by almost a factor of two between the two certification protocols. PHI claims almost twice the amount of savings as PHIUS for an identical building. The PHIUS+2015 modeling protocol is significantly closer to measured reality than the PHI modeling protocol (by about 20-30%).

(4) The PHIUS+2015 algorithms and assumptions appear accurate in predicting the performance of the envelope and the passive measures. In the two cases where projects underperformed, it was possible to trace the performance gap back to mechanical issues and/or quality assurance issues during the construction process. To identify potential reasons for the gaps, we adjusted the initial PHIUS+ model for plausible problems: for example, for the higher ventilation rate the building has reportedly been running at, and for suspected lower system efficiencies (a simplified ventilation sketch follows this list). Those plausible explanations and model changes brought the model much closer to the measured results (see the adjusted PHIUS+ model, shown as the red bar in Image #4 below).

(5) Quality assurance and verification are critical to success. The experience of the team members (CPHC, builder, and verifier) and the accuracy and trustworthiness of the tools they have available to them to back-check their assumptions against reality are indispensable to achieving passive levels of performance.
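As a rough illustration of the ventilation adjustment mentioned in conclusion (4), here is a simplified sketch using the common 1.08 × CFM × ΔT rule of thumb for sensible ventilation load. All inputs (airflows, recovery effectiveness, temperatures, hours) are hypothetical, chosen only to show how a higher-than-designed ventilation rate combined with lower heat-recovery efficiency pushes the modeled heating energy upward.

```python
# Simplified sketch of the ventilation adjustment described in conclusion (4):
# extra sensible heating load from a higher ventilation rate and lower
# heat-recovery effectiveness. All inputs are hypothetical.

def ventilation_heating_kbtu(cfm: float, hrv_effectiveness: float,
                             avg_delta_t_f: float, heating_hours: float) -> float:
    """Seasonal sensible ventilation load (kBtu) remaining after heat recovery."""
    btu_per_hour = 1.08 * cfm * avg_delta_t_f * (1.0 - hrv_effectiveness)
    return btu_per_hour * heating_hours / 1000.0

as_designed = ventilation_heating_kbtu(cfm=600, hrv_effectiveness=0.80,
                                       avg_delta_t_f=25.0, heating_hours=4000)
as_operated = ventilation_heating_kbtu(cfm=900, hrv_effectiveness=0.65,
                                       avg_delta_t_f=25.0, heating_hours=4000)
print(round(as_designed), round(as_operated))  # roughly 12,960 vs 34,020 kBtu per season
```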

I thank all of the multifamily teams for sharing their data with us. This marks another important step in the passive building evolution: measured-data feedback loops are central to this phase of our committed effort to improve and hone our tools so that we can confidently close the remaining gaps!

Coming up soon: In September at the 12th Annual North American Passive House Conference in Seattle, James Ortega of the PHIUS certification staff and Marc Rosenbaum will team up in a key session to further investigate and explain the impact of systems design and manufacturers' claimed efficiencies on the accuracy of our modeled predictions.

Katrin Klingenberg is the executive director of PHIUS.

26 Comments

  1. GBA Editor
    Martin Holladay | | #1

    How much energy is saved by the Passive House approach?
    Katrin,
    I'm grateful for the data you share in this blog. It helps answer a question I posed in a recent blog ("Does a Passive House Use 90 Percent Less Energy?").

    In that article, I wrote, "I look forward to more data on this issue, ... so that we can all get a good handle on a basic question: How much lower are the energy bills of Americans who live in a Passive House than Americans who live in a new code-minimum house?"

    According to the data you have shared, the multifamily building in New York City used about 17% less energy than a code-minimum building of the same size, while the multifamily building in Hillsboro, Oregon used about 28% less energy than a code-minimum building of the same size.

    So my ballpark estimate -- "I think that it’s fair to say that a Passive House residence in the U.S. will use between 24% and 54% less energy than a new code-minimum house" -- may have overestimated the energy savings that can be expected by following the Passive House approach.

  2. RedDenver | | #2

    Be careful of small sample size
    We should be careful not to draw too many or too strong of conclusions based on only 2 test cases.

  3. STEPHEN SHEEHY | | #3

    Sample size
    Given that buildings must undergo significant modelling and a lengthy certification process, why don't PHIUS and PHI also require that every project report actual energy data? Even total energy use, converted if necessary to kWh/year, would be much more meaningful than data from two (or twenty) projects. Actual energy use matters. Projected energy use doesn't.

    Data from every project would reduce any tendency to bias in selection criteria.

  4. user-723121 | | #4

    PH Energy Savings
    Thanks, Kat

    A larger sample size would be needed to form any opinion as to the energy savings of Passive House over standard code-built homes. The PH energy savings of the 2 examples listed seem too small to me. I would venture to guess our house (attic R-100, walls R-15, 3 ACH50) in the Twin Cities uses half the energy for heating and cooling on a square-foot basis compared to neighboring homes. This was possible with attic air sealing, adding R-60 attic insulation, and basement insulation and air sealing. Also we have summer shading with trees and good winter passive gain.

    The multifamily data does not equate very well to a stand alone house if walls are shared.

  5. GBA Editor
    Martin Holladay | | #5

    Response to Doug McEvers
    Doug,
    You may well be right that the amount of energy your home uses for heating and cooling is 50% of that of neighboring homes.

    My analysis (and Klingenberg's) considered all residential energy use, not just heating and cooling; moreover, the comparison was to a new code-minimum house, not "neighboring homes" (i.e., older homes that are close by).

  6. jameshowison | | #6

    Comparing Modeled code vs Monitored Passive?
    Question in response to Martin's question: I think your question is based on a comparison between the modeled code buildings and the monitored passive house buildings? Do you think that's actually a useful comparison? Is there any reason to think that code built multi-family don't also perform much worse than their models?

    One consistent issue would be quality of implementation or rather the criticality of implementation details, presumably implementation is key for both code and passive houses, but likely the relative penalty for difficulties in implementation is higher for passive?

    I look forward to the comparison that you ask for, Martin (energy bills in code minimum vs energy bills in PHIUS), but comparing modeled vs monitored doesn't get you that.

    --James

  7. GBA Editor
    Martin Holladay | | #7

    Response to James Howison
    James,
    For years, I have been making the argument that, contrary to the claims of some Passive House advocates, these buildings do not use 90% less energy than a code-minimum building.

    Every time that I bolster my case with the best available data, a Passive House advocate says, "Your analysis isn't good enough."

    I agree that it's possible that the WUFI Passive modeling used to estimate the energy use of a code-minimum multifamily building might be wildly off. However, Klingenberg makes the argument in the article on this page that the WUFI Passive modeling is fairly accurate.

    I think that Passive House advocates are running out of wiggle room. But if someone wants to build two side-by-side buildings, one built to meet minimum code standards, and the other to the Passive House standard, and monitor the energy use of the two buildings, I would sing the study's praises and would eagerly publish the study's findings.

  8. Expert Member
    MALCOLM TAYLOR | | #8

    Monitored energy use
    The problem with monitored energy use comparisons is occupant behaviour. This should favour Passive House projects, as it's probably a good assumption that their occupants are more conscious of, and work harder to limit, their energy consumption.

  9. STEPHEN SHEEHY | | #9

    Occupant behavior.
    If we monitor every certified structure, we'll end up with average occupant behavior, which is what we should want. Malcolm is probably correct that the average PH occupant is more concerned with energy use than the overall average, but we'll still get an accurate check on how modelling compared with reality.
    Does anyone seriously claim that passive houses use 90% less energy than a code minimum house? Once you get heating and cooling taken care of, what is there in a passive house that reduces energy used in heating water, cooking, watching tv, doing laundry, and everything else?

  10. rockies63 | | #10

    Costs
    In all the calculating and energy modeling done by clients, builders and manufacturers in order to achieve a Passive House designation hardly anyone mentions the money that is required in order to purchase, install and operate the highly efficient systems and products in these homes.

    The article mentions that for both projects the "best" people were hired to design, energy model and then build the houses but that mistakes must have been made during construction resulting in poorer energy performance than was expected.

    If the best people are involved throughout the project and the building only achieves 17% better performance than standard construction, then at what point does the money invested in a Passive House become an unnecessary waste? Do you really need to spend $10,000 to $20,000 on special windows, furnaces, air handlers and insulation packages just to save a bit more energy?

  11. ethan_TFGStudio | | #11

    I won't wade into the
    I won't wade into the 'payback' question other than to assert that quality and comfort have value too. Do you question the value of buying a warm comfortable winter coat for 20% more when a thin coat might be adequate? That being said, I think there is one fallacy in the article worth pointing out. I see no evidence that certification of the NYC project would solve the problem. It seems that there is not enough evidence to pinpoint the reason for failure.

  12. user-723121 | | #12

    Building Envelope Efficiency
    The point I am trying to make is that modest envelope improvements can yield significant reductions in energy use. The casual reader of this article may come away with the idea that Passive House is not energy efficient; in fact, Passive House is very energy efficient from a heating and cooling standpoint. I was in a completed PH and one under construction at the 2007 Passive House Conference. I saw the attention to detail in the house under construction and it was superb. We visited the Smith House on a chilly November morning, and when the attendees crowded inside the temperature rose quickly and dramatically; water was soon offered. Gary Nelson with his infrared camera did show a slight bit of thermal bridging through the 14" TJI's in the walls, nothing major, but it highlighted the capability of modern diagnostic equipment.

    A rough rule of thumb in a cold climate is the energy use for heating in BTU's per square foot per degree day will be about the same as the ACH50.

  13. Expert Member
    MALCOLM TAYLOR | | #13

    Comfort
    With the recent tightening of building code requirements for both air-sealing and insulation, the difference in "comfort" between new code compliant houses and Passive house ones has narrowed to the point it may not be relevant anymore.

  14. user-1109130 | | #14

    Conclusions
    I really appreciate the data shared in this post. It seems that one of the conclusions drawn is that actual performance doesn’t match the modeled performance for these two PH projects. It’s curious to conclude therefore that actual performance for the Code versions would match the modeled performance based on Katrin’s comment that the modeling is fairly accurate. While anecdotal, I consistently see multi-family projects all over my city with significant thermal bridging, no attention to air-sealing, poorly installed insulation, and never a blower door test. Code requirements are often (usually) quite different from the reality of construction.

    The claims of 90% savings for PH thrown around for years have always made me cringe. We need more data and less hyperbole. Still I would caution folks from drawing a conclusion from this tiny sample pool, and from applying this data to the performance of a single family residence.

    I published an analysis two years ago on my blog comparing the modeled performance to the actual performance of my own Passive House (http://insituarchitecture.net/blog/2015/08/08/is-it-working). It’s just one house and I intend to do a more current analysis, but one year’s data for my own house demonstrated that it exceeded the modeled performance.

    As an Architect and CPHC, I consider myself an advocate for high performance building and the energy modeling is just another tool. Most of my recent projects, while modeled in PHPP, have stepped away from meeting some or all of the PH criteria. I look forward to evaluating the actual performance of these projects in addition to the perceived comfort; like everyone I’m seeking that sweet spot.

  15. rockies63 | | #15

    So is the problem the Energy Software or the Builders?
    I'm all for making houses as energy efficient as possible, and building scientists have been consistently providing better information over the decades in order to do that. Govt's have also tightened up regulations over the years and started construction programs such as the R-2000 houses, net-zero houses, etc. Manufacturers have also been making better and better windows, furnaces and other products to make homes more comfortable and energy efficient.

    The latest movement aimed at decreasing energy usage is the Passive House movement. What concerns me with these two sample houses is that both the builders and designers were obviously striving to implement the standards and practices of the Passive House movement and yet they both failed.

    I assume that everyone involved with the projects knew that the houses were going to be tested and the results reported. The energy modeling was done and it said that a certain level of energy savings could be expected. I can only assume that the builders used the best products available. The "implied flaw" then must be that the builders didn't take enough care to implement the design details or that the designers didn't know what the design details should be.

    Aren't there standard examples of house construction details provided by the Passive House Institute? Not every design situation can be anticipated, of course, but surely if something unexpected comes up there must be a way to find out how to do it right.

    If the best people with the best products can't build the house so that it meets the expected level of energy savings then what hope does the average builder have to achieve Passive House standards? Maybe the Passive House standards are too excessive. Maybe "good enough" is good enough.

  16. GBA Editor
    Martin Holladay | | #16

    Response to Jeff Stern and Scott Wilson
    Jeff and Scott,
    I took away a different conclusion. The energy modeling was pretty good.

    At the New York City project, the modeling software that Klingenberg favors (WUFI Passive) predicted 20.8 kBTU/sf*yr. Actual use was 27.7. In other words, the occupants used 33% more energy than modeled -- quite a bit, but a result that's not too unusual (because of the vagaries of occupant behavior).

    At the Hillsboro project, the modeling software that Klingenberg favors (WUFI Passive) predicted 22.3 kBTU/sf*yr. Actual use was 21.8. That's 2% less than predicted -- in other words, spot on.

    I think that Katrin Klingenberg's main point is that the software she favors, WUFI Passive, does a better job of prediction than the software favored by Wolfgang Feist (PHPP).

    My comment zeroed in on a point that Klingenberg wasn't focusing on: the fact that the Passive House approach saves less energy (compared to a code-minimum building) than many people think.

    To the extent that the Passive House approach saves energy -- and it does -- it is to a large degree due to the quality control aspects of the program. It is extremely valuable to have oversight and testing to make sure that specifications are followed. That's lacking at most code-minimum buildings (and, for reasons that remain unclear, may have been partially lacking at the multifamily project in New York City discussed by Klingenberg). If we could all find a way to ensure that builders actually complied with code requirements, we'd discover that code-minimum buildings are pretty good.

  17. JC72 | | #17

    @ Scott
    Here's the relevant text Martin was alluding to with regards to the NYC property.

    "This project did not go through full certification and only had the design pre-certified. This points to the fact that onsite quality assurance during construction, and thorough commissioning and testing of all systems at completion are indispensable to actual performance. In this particular case, the team was already experienced in building very efficient projects but not to this level of performance. The team might have underestimated the importance of additional passive building specific quality assurance check points onsite and omitted them. This project also did not have a monitoring system and all that was available to assess performance were utility bills.."

  18. rockies63 | | #18

    If the Builders Don't Know How to do it
    It would seem then that the entire process of designing a house and then evaluating it using WUFI Passive software is basically pointless unless the builder is extremely experienced with the Passive House methods of construction.

    To John Clark, you state that "The team might have underestimated the importance of additional passive building specific quality assurance check points onsite and omitted them." Wasn't the entire point of evaluating the design using Passive House standards meant to assure a certain level of quality? I mean, if you're going to go through all the trouble of having the design "pre-certified" then you probably expect the builders to follow through. If the builders were unable (or unwilling) to implement the Passive House building specific quality assurance checkpoints then why bother spending the money on pre-certification?

    I find in the rush towards "Passive House ideals" that most people have become obsessed with computer simulations, energy modeling and mathematical calculations but nobody seems to really know how to build the building. Unless the builders are properly educated there is no point to the Passive House movement.

  19. Expert Member
    MALCOLM TAYLOR | | #19

    Fault
    I agree with Ethan. There isn't enough evidence to start blaming the builders for the poor performance of these buildings. There are a number of equally credible explanations that could lay the blame elsewhere.

  20. JC72 | | #20

    @Scott
    Just an FYI...I had pasted the relevant text from the article. Those aren't my words, but ya I see your point. I can only surmise that the owner didn't want to spend the additional money for someone to follow behind the builder checking off the boxes. Of course this assumes that the author of the study is correct in that the builder didn't perform their own QA work.

    Sometimes getting certified for XYZ is simply about bragging rights or ticking off some box required of a public policy wonk. It's all very peacock-ish (word?) if you know what I mean.

  21. Bronwyn Barry | | #21

    Convenient numbers
    As the author confirms, neither of these projects were certified via either PHIUS or PHI. I doubt ASHRAE 90.1 modelers were able to verify or update their model either so the information in this blog should be read with caution. It looks like the PHI numbers Klingenberg uses here are old, preliminary PHPP modeling data that do not accurately reflect the completed building, as is required for PHI certification. Since neither of these buildings were intended or built to meet PHI's standards, it’s not surprising that the predicted numbers don’t align with their measured performance. Representing them as such is naive at best and potentially slanderous at worst.

    Klingenberg's statement that "PHI claims almost twice the amount of savings as PHIUS for an identical building" further extends the fabrication. PHI has made no such claims on either of these buildings.

    I doubt the owners of these buildings intended that their data be used to cast aspersions on either ASHRAE or PHI’s standards. Simply comparing the Wufi Passive model and the measured performance of these buildings would have been more than sufficient. I’d be surprised if Fraunhofer cares to be associated with this much additional and unnecessary bluster. It would be great if PHIUS could finally start promoting their standard without the constant need to reference and compare itself to PHI’s. Right now, it appears that PHIUS+ cannot stand alone.

    I sincerely hope Marc Rosenbaum shies away from being associated with such a partially plausible data set, unless it is severely edited to remove the distracting and unnecessary noise.

  22. Bronwyn Barry | | #22

    Alternate perspective on the PHIUS certification numbers
    For a more holistic perspective on the number of Passive House projects being certified in North America and globally, I recommend this article: https://www.linkedin.com/pulse/passive-house-standard-quietly-surges-ahead-passive-house-california.

  23. GBA Editor
    Martin Holladay | | #23

    Response to Bronwyn Barry
    Bronwyn,
    I'm so glad that I don't have a dog in this fight (the PHI-PHIUS fight).

    It's never-ending.

  24. Bronwyn Barry | | #24

    Hell hath no fury
    It really is sad, isn't it? In the six years since Katrin was decertified, I have yet to see ONE PHIUS presentation where she or her staff didn't take a poke at PHI. You'd swear there were no other building standards out there - or something...

  25. girwin | | #25

    A guess, not a conclusion!
    Absent more granular information, my guess as to explaining the discrepancy between each model, and the relatively "poor" improvement over code is as follows:

    Since these buildings are multi-family, their energy use is less sensitive to shell performance than single-family dwellings; more so as they become larger. As such, the heating demand reductions brought by Passive House vs. code are less significant than lighting, appliances and plug loads. Since there are numerous families in these buildings, it is probably a question of how closely the models matched occupant behavior more than how closely the models match actual shell performance, internal heat gains notwithstanding. In my experience, PHPP tends to assume far less energy use from occupant behavior than is typical in the US. WUFI's assumptions are, I believe, based on statistical analysis of actual American energy use, so they would tend to track reality in the US better. In a multi-family situation, absent an active education and incentive program, individual energy use is likely to vary widely, and to stray from idealistic "Passive House behavior." The relative "flatness" of the monthly energy data compared to code (particularly in the Oregon building) would suggest that the differences lie more in baseline energy use than in seasonal demands. In the New York building, the variations in the "summer bump" suggest differing internal heat gain levels affecting cooling demand.

    I would also add that many of my Passive House projects (typically single family) HAVE shown impressive energy reductions over code, on the order of 80-90%. Here's one for example (http://midorihaus.com). Note that it is occupied by two very engaged and diligent owners! In their case, actual energy use is lower than what PHPP predicts.

  26. girwin | | #26

    Midori Haus Energy Use
    Here's a direct link to the energy use info for the project I mentioned above. http://midorihaus.com/?s=energy

