MPG for homes? Not yet.

If I had a penny for every time I heard someone use the “MPG for homes” analogy in the energy-rating world, I’d have gleaming copper gutters on my house. It’s a powerful comparison that I’ve used myself, since most people are familiar with and trust (Volkswagen aside) the EPA’s fuel economy ratings for automobiles. If you’re anything like me, then you rely on MPG ratings to make buying decisions, which is one of the reasons the Volkswagen manipulation scandal was so disturbing. Our trust was shaken.

If we want to create an MPG for homes, it needs to be trustworthy. This is not easily accomplished, as accurately assessing the energy efficiency of a home is surprisingly difficult. Traditionally, we have relied on asset (i.e. home structure) ratings like RESNET’s Home Energy Rating System (HERS) and the DOE’s Home Energy Score (HES). These scores use complex energy modelling software and detailed in-person assessments of homes to generate scores that reflect efficiency. As these systems rely on human input and models—neither of which are 100% accurate—they aren’t perfect, but they are the best we currently have, at least for assessing structural efficiency.

Unfortunately, the effort and costs associated with in-person assessments have prevented these traditional asset ratings from scaling rapidly. In response, a number of organizations have tried to develop inexpensive and scalable scores derived from public record data. The ability to assess homes at scale using this data was the goal of research initiated by Kate Goldstein and Michael Blasnik, but their intention was to identify homes that had a high probability of being good efficiency retrofit candidates. This idea is now being applied to the real estate market.

The highly respected Rocky Mountain Institute (RMI) recently released a report titled “An MPG for Homes” which explores the use of market forces to promote residential efficiency. I recommend reading it. The report highlights recent developments in the use of public record scores on some of the largest real estate portals in the United States. Trulia (owned by The Zillow Group) and Redfin now showcase public record scores developed by Utility Score and Tendril respectively, and Utility Scores will soon be accessible on Zillow, making public record scores available for millions of homes. The hope is that sharing these scores will encourage homeowners to make purchasing decisions based on efficiency, just like the MPG rating for cars.

How do they work?

The two largest players in this space—Utility Score and Tendril¹—use energy models and data based primarily on public records (e.g. tax assessment and real estate data) to estimate energy consumption and costs for a given home. These estimates are then used to generate a score between 1 and 100, where 100 represents homes with the lowest energy costs.
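Neither company publishes its exact formula, so the sketch below only illustrates the general shape of this approach: model a home’s annual energy cost from public record data, then rank that estimate against comparable homes on a 1–100 scale. The `cost_to_score` helper and the sample cost distribution are illustrative assumptions, not either vendor’s actual method.

```python
# Rough sketch of a public-record score: rank a home's model-estimated
# annual energy cost against comparable homes, where 100 = lowest cost.
from bisect import bisect_left

def cost_to_score(estimated_cost, regional_costs):
    """Map an estimated annual energy cost to a 1-100 score.

    `regional_costs` is a sorted list of model-estimated annual costs
    for comparable homes; a lower cost yields a higher score.
    """
    rank = bisect_left(regional_costs, estimated_cost)  # homes cheaper than this one
    percentile = rank / len(regional_costs)
    return max(1, round(100 * (1 - percentile)))

# Illustrative regional cost distribution (dollars per year).
costs = sorted([900, 1200, 1500, 1800, 2100, 2400, 2700, 3000, 3300, 3600])
print(cost_to_score(1000, costs))  # near the cheap end -> high score (90)
print(cost_to_score(3500, costs))  # near the expensive end -> low score (10)
```

The ranking step is the easy part; as discussed below, the hard part is that the cost estimate feeding it is built on sparse and often unreliable inputs.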

Utility Score on a real estate listing

Public record scores do not require an on-site visit and are calculated programmatically, which drastically reduces cost and increases scale. Utility Score alone has calculated scores for 84 million single-family homes—over 70% of all occupied homes in the U.S. This is far beyond the approximately 1.8 million homes scored with HERS and HES to date.

The RMI report states that “the availability of this data will have an immediate positive impact for consumers,” and cites a study that finds 81% of people who expect to buy a new home in the next 2 years believe that energy efficiency “would cause them to choose one home over another.” This research implies that sharing efficiency scores can have a real impact, but what happens if the scores shared aren’t accurate?

The problem

I’ve analyzed public record scores for thousands of homes, and found only the weakest of correlations between these scores and established asset assessments like HERS and HES. Although my analysis has not focused on Utility Score or Tendril, I’m certain they are only slightly more accurate than throwing darts for any given home.²

How can I be so sure? Because the inputs to the energy models are lacking for most homes. The public record data used to generate these scores is not always reliable, and there simply isn’t enough detail in the data—even if it were clean—to generate a score with consistent “MPG” accuracy. Inadequate data necessitates assumptions in the models that may align with regional trends, but results in poor marksmanship for individual homes. For example, without good data we have to guess at the levels of insulation in a home based on its construction year, location, or other factors. We’re able to make an educated guess (Tendril calls these “smart defaults”) by looking at what similar homes in that region typically have, but we don’t usually know how much insulation a particular home has (or how well it was installed). Since insulation is a key driver of energy consumption, just this one data point can sink the accuracy of a score. We then repeat this process for HVAC efficiency, air leakiness, window types—you get the idea. Mix in some incorrect building size and age data, and it becomes clear that we’re not likely to see accurate scores for most homes.
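The “smart default” idea described above can be sketched as a simple lookup keyed on region and construction vintage. The function, keys, and R-values below are illustrative placeholders, not values from Tendril’s or Utility Score’s actual models.

```python
# Illustrative "smart default": when a home's actual insulation level is
# unknown, fall back to what similar homes of that vintage and region
# typically have. R-values here are placeholders, not vendor data.
WALL_R_DEFAULTS = {
    # (region, built-before-year): assumed wall insulation R-value
    ("northeast", 1950): 0,    # many pre-war homes have uninsulated walls
    ("northeast", 1990): 11,
    ("northeast", 9999): 19,
}

def default_wall_r(region, year_built, actual_r=None):
    """Return the known R-value if we have it; otherwise a regional default."""
    if actual_r is not None:
        return actual_r  # real data always beats a default
    for (reg, cutoff), r_value in sorted(WALL_R_DEFAULTS.items()):
        if reg == region and year_built < cutoff:
            return r_value
    return 13  # generic fallback when the region is not in the table

print(default_wall_r("northeast", 1920))               # vintage default: 0
print(default_wall_r("northeast", 1920, actual_r=13))  # measured value wins: 13
```

The failure mode is visible right in the sketch: a 1920 home whose walls were actually dense-packed with cellulose still gets scored as if it were uninsulated, because the model never sees `actual_r`.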

It’s not the models used by these organizations that are the issue, it’s the lack of accurate information we have on specific homes. These scores can provide reasonable approximations of energy efficiency for homes that match their assumptions, and this may occur for a significant percentage of homes. The underlying research behind the model defaults combined with the glacial pace of efficiency improvements in the housing stock help assure this. If home size and year constructed in the public record data are accurate and most other model assumptions align with reality, a public record score can hit the bullseye. Unfortunately, this is not likely to happen consistently enough to maintain consumer trust—especially when some individual scores miss the dartboard entirely.

Does it matter?

Like RMI, I want to see energy data on real estate listings. The idea is that disclosing energy information will lead to demand for efficient homes, which will encourage investments in home efficiency in a virtuous cycle. This isn’t just theory; academic researchers in the Netherlands found that homes that disclosed good energy scores sold for a 3.6% premium over similar homes with poor scores. As RMI states in their report, “homebuyers can use this information to compare homes and as a data point for negotiation with sellers.” It’s unclear what the long-term impact of sharing inaccurate scores might be, but there are three possible outcomes for individual homes today:

  1. A home receives a better score than it merits. Here a home is rewarded for fictional efficiency. The seller wins, as the house looks good on paper (or pixels), while the buyer potentially pays a premium for efficiency that doesn’t exist.
  2. A home receives a worse score than it merits. In this scenario, a home is not recognized for its efficiency and may sell for a discount because of this. The homebuyer gets a deal, while the seller may lose out.
  3. Finally, in some cases—when we have accurate data and most model assumptions match reality—a public record score will accurately reflect the efficiency of the home. Everybody wins, but this happens rarely enough that it becomes noise instead of feedback that drives demand for efficient buildings.

Homebuyers might trust the scores for some time (regardless of any disclaimers about accuracy), but may eventually devalue or ignore the scores altogether. It could prove difficult to win back consumer trust in the future when reliable scores are widely available.

It’s possible that disclosure of inaccurate scores on real estate sites will motivate owners of efficient homes to pay for a comprehensive asset rating like HERS or HES, which can be used to set the record straight. The end result might seem positive (more homes with reputable ratings), but what will be the effect of forcing homeowners to pay to correct a metric that was made public without their consent?


The idea of widely expanding the availability of home efficiency information on real estate sites is a good one, but the data we share should be reasonably accurate for all homes. Here are some alternatives to the public record model that sacrifice scaling speed for accuracy:

  • Make existing asset scores like HERS and HES visible in the real estate market. The Home Energy Labeling Information Exchange (HELIX) project aims to do just this by creating a central database that can store and provide certified asset ratings to local Multiple Listing Services (MLS) and real estate websites like Zillow and Trulia. This project will complement and streamline the work already being done by MLS across the country to expose home efficiency information.


HELIX inputs/outputs

  • Increase the number of homes scored through local ordinances and legislation, like the bill proposed in Massachusetts that would require homeowners to disclose an approved asset score to potential buyers. The city of Portland, Oregon, has passed a similar ordinance, which requires disclosure of Home Energy Scores beginning January 1, 2018.
  • Make sure every home that gets an energy audit has a reputable asset rating calculated as part of the assessment. It’s a missed opportunity to go into a home and gather nearly all the data needed to generate a score without actually producing one. This is a provision in the Massachusetts bill mentioned above.
  • Share operational data (i.e. consumption and cost info from utility bills) and/or scores produced with this data instead of asset ratings. Disclosure of utility bill information to prospective homebuyers is already required in Alaska, Hawaii, and Montgomery County, Maryland, and has been done successfully in New York State and Chicago since 1987. This approach could scale rapidly if we had utility company cooperation, which admittedly is no small thing.

Although scores based on public record data are not accurate enough to provide a true MPG for homes, they do have value. For example, they can be used to estimate average residential energy consumption and costs for a particular region, identify homes that are likely to be good candidates for efficiency upgrades, or estimate the regional impact of weatherization efforts. These scores can be useful for planning and cost/benefit analysis, but not for influencing individual real estate transactions—at least not with the data we have available today.

RMI is right that we need to disclose energy information to homebuyers. Residential efficiency is not improving quickly enough, and we must leverage markets to create demand for efficient homes. It’s possible that public record scores will disrupt the market in a positive way, but there is real risk that we will do more harm than good by sharing scores that aren’t ready for mass consumption.


¹ Update on December 11, 2017: Per RMI, Tendril has announced they are shutting down their public record energy scoring services.

² I have compared HERS and Utility Scores for a small sample of efficient homes (i.e. where HERS index < 100). Although not a conclusive test, the results do support the dart analogy. Utility Scores are available for free at


OPEN Scores on the Rise

A quick update on my March blog post about benchmarking the performance of our recently insulated apartment. We moved out at the end of June, but not before calculating one last OPEN score. Here’s the updated graph of our monthly scores and energy consumption:


As you can see, the cellulose insulation installed at the end of January led to improved OPEN scores through the spring, leveling out around 70—a nice jump from the 63 we averaged prior to the insulation work. Had we stayed in the apartment, our scores would have resumed their upward trajectory once we returned to heating season in the fall. Although we won’t be able to quantify the full impact of the cellulose due to the move, we know it was significant; our home is already operating more efficiently than 70% of homes in the U.S. I suspect our OPEN score would have plateaued around 75 if we had completed a full year in the apartment post-cellulose.

We used just 2,570 kWh of electricity and 629 therms of natural gas (for heat and hot water) over the last 12 months of our occupancy, for a total cost of $1,424. The owner of the apartment is now using its efficiency as marketing for prospective tenants. The next occupants will save money and we (collectively) will avoid ~1 metric ton of CO2e emissions (roughly equivalent to driving a car 2,400 miles) each year for the life of the home.
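As a sanity check on the driving equivalence, the EPA’s commonly cited estimate for a typical passenger vehicle is roughly 404 grams of CO2 per mile; that factor is an outside assumption, not a number from this post.

```python
# Sanity-checking "~1 metric ton of CO2e is roughly 2,400 miles of driving"
# using the EPA's commonly cited per-mile emissions estimate (an assumed
# external figure, not one from the post).
GRAMS_CO2_PER_MILE = 404  # EPA estimate for a typical passenger vehicle

avoided_kg = 1000  # ~1 metric ton CO2e avoided per year, per the post
equivalent_miles = avoided_kg * 1000 / GRAMS_CO2_PER_MILE
print(round(equivalent_miles))  # ~2,475 miles -- consistent with "roughly 2,400"
```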

Not bad for a few hundred dollars and a bit of coordination work.

OPENing Our Home

Our apartment in Cambridge Massachusetts is similar to many in the region—just under 1,000 ft² of space in a three-family home built in the early 20th century. It’s a wonderful building with charm and character, but literally no insulation. It’s been a cold and generally uncomfortable space during much of the winter. Fortunately for us, that changed this past January when we had cellulose insulation blown into the exterior walls through the Mass Save program. We had over $3,300 worth of insulation work done and paid just a fraction of the cost; if your state offers free or discounted energy audits and efficiency work, you should seriously consider it.

I’ve been tracking our energy consumption since we moved in just under 2 years ago, and calculating OPEN (operational energy) efficiency scores each month since our first year. Our average score has been a 63 (out of 100), which is above average but not great considering the steps we have taken to reduce our electricity use—killing vampire loads, using LED bulbs, shutting down our wireless router when not in use, etc. These steps certainly improved our score (and saved us money), but it’s just not possible to have an efficient home—even an apartment—in the northeast without insulation. Energy required for space heating dominates here; it accounts for approximately 68% of the energy used in our apartment, with 22% used for hot water and 12% for electricity (nationally these averages are 41.5%, 17.7% and 40.8% respectively).
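Those shares follow from straightforward unit conversion. A minimal check, using standard site-energy factors (3,412 Btu per kWh, 100,000 Btu per therm) and the apartment’s annual consumption figures reported elsewhere in these posts (2,570 kWh of electricity, 629 therms of gas):

```python
# Converting electricity and gas use to a common unit (Btu) to see how the
# electricity share of total site energy works out for this apartment.
BTU_PER_KWH = 3412
BTU_PER_THERM = 100_000

electric_btu = 2570 * BTU_PER_KWH  # ~8.8 MMBtu
gas_btu = 629 * BTU_PER_THERM      # ~62.9 MMBtu (space heat + hot water)
total_btu = electric_btu + gas_btu

electric_share = electric_btu / total_btu
print(f"{electric_share:.0%}")  # ~12%, matching the electricity share above
```

The remaining ~88% is natural gas; splitting it between space heating and hot water requires submetering or modeling, which is why those two shares are approximate.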

My wife and I agree that the apartment feels more comfortable since the insulation was blown in, but I’ve been looking forward to seeing the impact the insulation work would have on our OPEN score. I want to be able to quantify the improvement rather than relying solely on our (likely biased) qualitative assessment. After receiving our first utility bills since the work was done and calculating a score, this is what we found:


After just a single winter month, our OPEN score jumped from an average of 63 to a 67! As you can see from the graph, our score over the last year (dark blue line) has been fairly consistent, bouncing between a low of 61 and a high of 64 prior to the insulation. It will be interesting to see how the score progresses over the next few months, but it’s already clear that there is a quantifiable improvement in our home’s energy performance. It’s gratifying to see that the money and time spent on the insulation work is having a definitive impact.

Calculate OPEN scores for your home here:

Author’s note: The OPEN score and energy consumption graph in this post was updated on March 29th 2017 to correct for a data entry error. Text referring to incorrect scores has been updated as well.

Is Above Average Good Enough For You?

Did you ever receive a grade of “B” in a high school math class? How would you have felt if you had? Would you have changed your behavior, maybe studying more (or less)? The answer is likely different for each of us; some might have felt happiness or relief, others disappointment and a determination to work harder. The grade would have affected each of us differently.

Might this be true for home energy scores as well?

Here at Resynergy Systems, we’re not only interested in sharing OPEN operational efficiency ratings with homeowners to help them understand how their homes are performing, we’re also interested in understanding the effect of sharing a rating with a homeowner—does the score itself impact behavior?

We had a theory—based on research from MIT—that sharing an operational rating during a home energy assessment would influence a homeowner’s decision to invest in efficiency upgrades. In order to test this theory, we decided to conduct an online experiment based on self-reported intention.

There were some surprises in our findings, but the results largely supported our hypothesis that homeowners who received an operational rating would be more likely to invest in energy efficiency.

What did we do?
Our main hypothesis was that homeowners who receive a less than stellar operational score for their home would be more likely to invest in efficiency upgrades (e.g. air sealing or insulation work) than those who do not receive a score. We also thought this effect might be stronger for homeowners who were concerned about their energy consumption for environmental or cost reasons. We wanted to conduct a randomized control trial to confirm or reject our theory.

We discussed our ideas with Trisha Shrum, an environmental economist from Harvard (and co-founder of DearTomorrow) who provided feedback and guidance on structuring the research. She also pointed us to Amazon’s Mechanical Turk, an online marketplace used by academics and businesses for research and virtual work services. “MTurk” is a global network of web-connected workers, who review and can decide to accept Human Intelligence Tasks (HITs)—virtual work requests—for pay. In our case, we paid Turk workers a small amount of money to participate in a survey we constructed for this research.

The survey walked each participant through a hypothetical home energy assessment (i.e. audit), with a real-time randomization process that allowed a subset of participants to receive an OPEN score for their home as part of the assessment. We settled on sharing a conservative, above average score of 64 (out of 100, where 100 indicates most efficient) for the study, which we presented visually:

OPEN rating comparison graphic

Only U.S. residents were allowed to participate, and our goal was 400 responses. We had 401 responses in total, with 16 that were incomplete or failed to meet our data quality standards. This left us with unique and clean responses from 385 participants in 47 different states (if you are curious, we missed Idaho, Nebraska and South Dakota). Just under half (48%) of our participants received the OPEN score during their virtual assessment (all receiving the same score of 64, as shown above). These respondents were also asked a series of questions to help us understand how they felt about the score. The remaining participants (52%) did not receive a score and were not aware that the other participants had received one. All respondents were asked the same set of questions about their willingness to invest in efficiency after the audit, and all were presented with the same efficiency upgrade options with deeply subsidized costs: leave the home as is ($0), air seal the home ($100) or insulate and air seal the home ($500). We then asked a series of questions about energy use attention and concern, along with a handful of demographic questions.

OPEN rating research structure
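The assignment step can be sketched as an independent coin flip per participant. This is a generic illustration of real-time randomization; the seeding scheme and 50/50 target below are assumptions, not the survey platform’s actual mechanism (the study ended up with a 48/52 split).

```python
# Sketch of real-time random assignment: each participant is independently
# assigned to receive the OPEN score (treatment) or not (control).
import random

def assign(participant_id, seed=42):
    """Deterministically assign one participant to treatment or control."""
    rng = random.Random(f"{seed}-{participant_id}")  # str seeds are reproducible
    return "score" if rng.random() < 0.5 else "no_score"

groups = [assign(i) for i in range(385)]  # 385 clean responses in the study
print(groups.count("score"), groups.count("no_score"))
```

Seeding per participant keeps the assignment reproducible for auditing while still being effectively random across the sample.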

What did we find?
We first looked at the entire population of respondents. As expected, we found that homeowners receiving an OPEN score were more likely to say they would invest in air sealing or insulation work, with a modest increase of 4.2%. Although we could not confirm our hypothesis as this result was not statistically significant (p=0.433), it was promising.

We then analyzed three subgroups within the overall population based on their responses to our survey questions. These groups represent respondents who indicated they:

  • Pay close attention to utility bills (the “attention” group)
  • Are very concerned about reducing energy use (the “concern” group)
  • Feel utility bills are expensive (the “expensive” group)

We believe these subgroups represent homeowners that are concerned about their energy consumption for economic or environmental reasons. When we investigated the effect of a score on these groups, we found that participants in each were over 9% more likely to say they would invest in efficiency when they received an OPEN score. However, the results were only statistically significant (α level .05) in one group: those that identified as paying close attention to utility bills.

A little analysis revealed that the “attention” group encompasses most of the other two subgroups. Of the 250 respondents falling into the “attention” group, 83% also identified as being very concerned about reducing energy use at home, felt their utility bills were expensive, or both. (This is slightly higher than the 75% of respondents in the “expensive” group that belong to either or both of the other two groups.) We believe it is these homeowner concerns that are driving the attention paid to utility bills.

Breakdown of respondents concerned about their energy use

The effect of an OPEN score on the “attention” group was statistically significant and compelling. This group represents 65% of our total sample population, and these homeowners were 11.8% more likely to say they would invest in efficiency upgrades when receiving a score (p=0.046).

Group       % more likely to invest when OPEN score received   p-value   Size (n)   % in Attention group
Attention   11.8%                                              0.046     250        100.0%
Expensive   11.0%                                              0.085     250        72.4%
Concern      9.4%                                              0.169     126        91.2%
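For context, p-values like those above are typically produced by a two-proportion z-test comparing “would invest” rates between scored and unscored participants. The sketch below uses made-up counts for illustration, not the study’s raw data.

```python
# Two-sided, pooled two-proportion z-test: did the treated group (received
# a score) say "would invest" at a different rate than the control group?
from math import sqrt, erf

def two_proportion_z_test(x1, n1, x2, n2):
    """Return (z, two-sided p-value) for the difference of two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))  # standard normal CDF at |z|
    return z, 2 * (1 - phi)                  # two-sided p-value

# Illustrative counts: 60/100 treated vs. 45/100 control say they'd invest.
z, p = two_proportion_z_test(60, 100, 45, 100)
print(f"z = {z:.2f}, p = {p:.3f}")
```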

These results provide preliminary evidence that receiving an OPEN score may motivate homeowners to invest in efficiency—especially if they pay close attention to their utility bills (i.e. are concerned about energy consumption) to begin with.

The survey provided other important insights:

  • 72% of respondents wanted to know how their energy use compared to other homes. This suggests that people are interested in how their energy use compares to others’ but do not currently have that information.
  • 82% of respondents indicated they were completely comfortable (57%) or somewhat comfortable (25%) sharing utility information with an energy auditor. This contradicts the belief that homeowners are concerned about privacy and reluctant to share energy data.
  • 90% of respondents who received a score indicated they would take action to improve their OPEN rating if they received a score of 25 or less. Lower scores likely provide a stronger incentive for homeowners to reduce consumption and invest in energy efficiency.

Does it matter?
These findings demonstrate that simply sharing an efficiency score with a homeowner may encourage investment in energy efficiency. Although our experiment was based on an operational score, it is not unreasonable to think we would see similar results from an asset score if shared in a context meaningful to homeowners. One benefit of an operational score, however, is that it is a reflection of a homeowner’s actual energy use and not simply the result of energy modelling of the physical structure.

These results may be especially significant for residential efficiency programs, as the people most affected by the OPEN score indicated they pay close attention to their bills; these are likely the homeowners requesting energy audits and taking advantage of efficiency program offerings. Efficiency programs should consider targeting this population, as sharing an operational score with this group may have substantial benefits with relatively little cost.

Our findings also point to the idea that an OPEN score does not have the same meaning for all people. Although this needs further study (and data), our research suggests that receiving an OPEN score may not have the same effect on homeowners who are unconcerned about energy consumption. In fact, people who indicated they do not pay close attention to their utility bills and received the OPEN score of 64 were 10.9% less likely to say they would invest in efficiency in this study. These results were not statistically significant (p=0.3), but in the same way that a “B” grade has different significance for individual students, it is possible that less-concerned homeowners view a score of 64 as a positive rather than negative. Again, this needs further study, but these results are interesting and unexpected.

Our initial research results are promising, and we plan to continue our study by providing a randomized set of homeowners their actual OPEN ratings, based on their own energy use, during home energy assessments. We expect to find that homeowners receiving an unsatisfactory energy rating will be more likely to spend real money on efficiency improvements to their home, and that the determination of unsatisfactory is subjective. The actual score received will strongly influence the likelihood of taking action, but the magnitude of the effect will depend on the mindset of the person receiving it.

The idea that sharing an operational efficiency rating may influence homeowner behavior has implications beyond home energy assessments. Other related research questions we want to answer:

  • Will sharing OPEN ratings as part of outreach efforts motivate homeowners to request home energy assessments?
  • What is the effect of sharing an OPEN rating with a homebuyer or renter as part of a real estate transaction?

More to come.


Author’s note: If you have questions, ideas for further research or would like to collaborate with us on research in the field, please contact us.

In Defense of a Home Energy Label

The Boston Herald recently published my Letter to the Editor, written in response to their editorial disparaging Massachusetts legislation that would require a home energy audit and disclosure of an energy label (i.e. rating) prior to a sale. I disagree with the editorial, but the Herald earned my respect for their willingness to publish an opposing view. They edited my letter slightly, and since I prefer the original (I’m completely unbiased of course), here it is:

The Herald editorial on July 3rd was a disappointment. It misleadingly criticizes Senate legislation that would require homeowners to disclose energy assessment results to prospective homebuyers. The claim made by the Herald that buyers will “simply filter out” low-performing properties has no basis in fact, and contradicts the claim made in the same editorial that homebuyers already “have an idea” if a home is efficient before buying.

Disclosing an energy label—an intuitive score based on the physical characteristics of a home—provides important information to homebuyers and leverages the market to improve efficiency. Few of us would buy a car without knowing its MPG rating, yet we spend far more money on our homes without knowing how much energy they will use. Much like Energy Star ratings for appliances, home energy labels help consumers make better informed decisions, which leads to more efficient homes. And research by the University of North Carolina and the Institute for Market Transformation (IMT) found that owners of energy efficient homes are 30% less likely to default on their mortgages.

We must reduce residential energy consumption if we have any hope of meeting our legally binding Global Warming Solutions Act (GWSA) targets. Requiring energy audits and home energy labels is a step in the right direction. Hyperbolic editorials and labeling citizens as “climate zealots” helps not at all.

Volkswagens, Homes and the Limitations of MPG Assessments

The recent Volkswagen emissions scandal is a tragedy on many levels, and it has highlighted some of the flaws in the systems we use to evaluate the efficiency of our cars and trucks. In particular, the use of standardized laboratory tests to measure emissions and fuel economy has proven troublesome. Sadly, it’s not difficult to imagine other automobile manufacturers taking similar actions or engineering their vehicles to perform exceptionally when new in order to excel in laboratory tests. If the goal of the system is to produce cars that score well in testing scenarios, that’s what will be produced.

Many experts in the residential energy efficiency space like to use the MPG analogy to make the case for one-time standardized building efficiency tests. I expect the analogy has lost a bit of luster with the VW action, but the idea is that these standardized tests can be used as an “asset” assessment of homes in order to create MPG-like ratings that can be shared with homeowners, homebuyers and renters.

I don’t mind these analogies (I’ve made them myself), and I fully support the use of asset assessments – they are extremely important in helping us understand structural efficiency and providing information that can be used to differentiate homes (and new automobiles when everyone is playing by the rules).

What’s missing from these assessments, however, is the actual performance of the home or vehicle over time. What happens to efficiency (and emissions) when parts in our car start to wear, we don’t maintain the heating and cooling equipment in our home, or different occupants and drivers enter the equation?

Generally speaking, we don’t know the answer to these questions, and this is a problem. If our goal is to reduce emissions and energy consumption, how can we test a car or home once and assume it’s efficient as long as it’s functional? We can’t, which is why we need to evaluate the performance of our homes and vehicles throughout their lifetimes.

The Unexpected Benefits of Going Solar (or, A Hymn to the Autumnal Equinox)

It’s not difficult to find commentary on the many benefits of installing solar panels on your roof. Social, environmental, financial – we’ve got you covered. If you need further incentive to go solar, there are additional benefits that are quite important if less well known.

My good friend Jeff recently had a 5.75 kW solar photovoltaic (PV) system installed on his house as part of a Power Purchase Agreement (PPA) with Sungevity. His motives were both environmental and financial – he’s glad to help the environment by reducing his carbon footprint, but probably wouldn’t have put panels on his roof if it didn’t also make financial sense. Like many of us, he’s idealistic but practical.

The benefits to Jeff and his family have far exceeded dollars and carbon however, and these benefits have come primarily in the form of increased awareness. As soon as the panels went live (an event surrounded by much anticipation in his household), Jeff became acutely aware of just how much electricity he consumed in his home. He now understands how many kWh he uses on an average day at different times of year and how drastically his usage spikes during warm summer days when the air conditioning is working hard to keep his house comfortable.

Those warm and sunny summer days are a mixed blessing. Although he generates the most electricity from his panels on clear summer days, it’s not usually enough to overcome the energy consumed by the central air conditioner. He still expects to generate more energy than he consumes on an annual basis, but it’s not just electricity generated in the summer that will get him there. He knows this, is beginning to understand which appliances use the most energy, and has started taking action to minimize his energy use. Prior to the panels, he paid his utility bills and didn’t worry much about consumption since the bills were reasonable. Now it’s become a bit of a game – can he generate more energy than he consumes? How can he reduce his consumption to get him there?

Jeff and his family also now appreciate the patterns of the weather and the sun in a way they never had before. He knows how to calculate solar noon, where solar south is, and how the sun travels over his property. He understands how the length of our days (and the angle of the sun) expand and contract around the solstices. He has become intimately connected to the sun – in much the same way humans were connected to that great star before the advent of artificial lighting and cheap energy.

I like to think that Jeff’s newfound knowledge is not much different from that of farmers and our not-so-distant ancestors – people who knew the sun was life and the study of its patterns life-affirming. Most of us living in modern societies are disconnected from our energy consumption and the simple fact that life is dependent on the sun. Technology has made this knowledge unnecessary, but not unimportant.

The benefits of putting solar on your roof are many. Do it for the environmental reasons. Do it because it makes financial sense. Do it because it makes you feel good. Whatever your reasons, don’t be surprised if you find yourself with a newfound awareness of the energy consumed in your home and your relationship with the sun.

Energy Efficiency Research Confirms: Measure What You Manage

A group of academics is stirring up trouble in the world of energy efficiency. Researchers at the University of Chicago and the University of California, Berkeley recently released a controversial working paper that highlights the apparent cost-ineffectiveness of the federal Weatherization Assistance Program (WAP) in Michigan. The research found that energy efficiency upgrades in homes that participated in the program were poor financial investments—costing roughly twice the value of energy saved over time. This finding contradicts the long-held belief that efficiency is one of our best financial investments.

The research has been soundly critiqued by ACEEE, NRDC, and Martin Holladay of Green Building Advisor, and discussed in detail by the Illinois Institute of Technology. Some criticisms point to the research itself; others rightly focus on the broad assertions made from the study (by the press and the researchers themselves). Many of the criticisms are fair, but I believe we can still learn from this work.

What Exactly Did the Study Find?
The study’s core findings are these:

  1. Homes that participated in the WAP program reduced energy consumption by an average of 10–20%. This is important information, as the impact from weatherization efforts is not often quantified rigorously. A 10–20% reduction in energy consumption is not trivial, and is consistent with a previous study by researchers at Lawrence Berkeley National Laboratory.
  2. The costs of the weatherization interventions were approximately double the value of the energy saved over time. The researchers determined that weatherization in this particular program was a poor investment even when carbon-reduction costs were considered.
  3. There was no evidence of the often discussed “rebound effect” for homeowners that participated in the program; creating more efficient envelopes and heating systems did not lead home occupants to turn up the heat and negate savings.
  4. It was difficult to get homeowners to participate in the weatherization program, even though it was free. The baseline participation rate in the study was roughly 1%, and even homeowners receiving encouragement in the form of house visits and phone calls participated at a paltry 6% rate.
  5. The energy models that underlie the energy savings projections for the program (in this case NEAT, the National Energy Audit Tool) significantly over-estimated energy savings associated with efficiency interventions. The savings projected by the models were approximately 2.5 times the savings actually observed.
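The cost-effectiveness arithmetic behind finding 2 is easy to reproduce. A minimal sketch, with purely illustrative numbers (none drawn from the study): compare the upfront cost of weatherization to the present value of the annual energy savings over the life of the measures.

```python
def present_value(annual_savings, rate, years):
    """Present value of a constant annual savings stream."""
    return sum(annual_savings / (1 + rate) ** t for t in range(1, years + 1))

# Illustrative numbers (assumed for this sketch, not taken from the study):
cost = 5000.0                         # upfront weatherization cost, $
pv = present_value(250.0, 0.03, 16)   # $250/yr saved, 3% discount, 16-yr life
ratio = pv / cost                     # < 1 means costs exceed the value saved
```

With numbers like these the savings-to-cost ratio lands well below 1 – the same qualitative conclusion the researchers reached, though their inputs came from measured billing data rather than assumptions.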

It’s important to remember that these findings are for a single federally funded program focusing specifically on low-income households. The study examined a small slice of a much larger and more complex system. It would be foolish indeed to assume that all efficiency efforts have poor returns and are not worth the initial investment—especially when we have research to the contrary from industry experts, government and academia. Please refer to the critiques referenced above for more on this theme.

With that said, we can learn from this research. As someone who has studied energy efficiency and works in the industry, my first instinct was to criticize the paper and simply dismiss it. Fortunately my second instinct was to read the study in detail, with as open a mind as I could manage. What I discovered is that there is value in this work, although perhaps not exactly what the authors intended.

We Need to Measure and Verify Actual Performance
The research highlights the need for us to objectively measure the performance of our buildings and the impact of efficiency improvements. It’s not enough to weatherize a home and congratulate ourselves on a job well done. We need to measure the actual energy consumed over time to ensure homes are performing as expected. We can’t do this with modeled energy consumption or projections of energy use. If our goal is to reduce energy consumption, we need to measure it.
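Measuring actual performance can start very simply. Here is a hedged sketch of the idea, assuming a year of monthly meter readings before and after a retrofit (a real program evaluation would also weather-normalize, for example by regressing on heating degree days):

```python
def measured_savings(pre_monthly_kwh, post_monthly_kwh):
    """Fractional savings from metered data: average monthly use before
    vs. after a retrofit, comparing the same calendar months as a crude
    weather control."""
    pre = sum(pre_monthly_kwh) / len(pre_monthly_kwh)
    post = sum(post_monthly_kwh) / len(post_monthly_kwh)
    return (pre - post) / pre

# Illustrative readings (kWh), January through December:
pre = [1200, 1100, 950, 800, 700, 650, 700, 750, 800, 900, 1000, 1150]
post = [kwh * 0.85 for kwh in pre]          # a hypothetical 15% reduction
savings_fraction = measured_savings(pre, post)
```

Crude as it is, this comparison uses observed consumption – which is exactly what modeled projections cannot give us.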

And speaking of models, we would do well to remember that they are necessary simplifications of the real world and never perfect. Model inaccuracy is nothing new. Systems expert Donella Meadows wrote in Limits to Growth: The 30-Year Update that “All models, including the ones in our heads, are a little right, much too simple, and mostly wrong.” This particular study found that NEAT models overestimated natural gas consumption by 25%, and several other studies have found that models designed to predict energy consumption based on the physical characteristics of the home are often inaccurate. Models are useful tools, but we should not assume they can replace the measurement and benchmarking of actual home performance.
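Program evaluators often express this gap as a “realization rate”: measured savings divided by modeled savings. The arithmetic below is just an illustration of the metric, using the study's reported 2.5x over-projection of savings.

```python
def realization_rate(projected, actual):
    """Measured savings divided by modeled savings; a value below 1.0
    means the model over-predicted."""
    return actual / projected

# Projections roughly 2.5x observed savings imply a realization rate of 0.4,
# i.e. only 40% of the modeled savings actually showed up in the meter data.
rate = realization_rate(2.5, 1.0)
```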

Subsidizing Efficiency Doesn’t Work
The researchers in this study randomly assigned homes to a group that was encouraged to participate in the WAP program. Encouragement consisted of over 32,000 phone calls and nearly 7,000 home visits. Even with this encouragement, participation in this free program was only 6%, at a cost of more than $1,000 per weatherized household. We see similar statistics in my home state of Massachusetts, where the Mass Save program allows residents to sign up for free energy audits and steeply subsidized insulation and weatherization work. Statewide participation rates in 2013 and 2014 were approximately 2.9% and 3.1% respectively. It’s reasonable to assume that low participation rates are at least partially responsible for proposed state legislation that would require an energy audit for every home sold in the state. I fully support this legislation as I do not believe subsidizing energy efficiency is working quickly enough.

Why is it that we can’t give away free efficiency upgrades? The upgrades aren’t bad investments, whatever the economists suggest (efficiency is without question a good investment if it’s free); it’s simply that energy prices aren’t high enough to motivate most of us to take action. The price of energy in the U.S. is an incomplete form of feedback – one that doesn’t include the social or environmental costs of fossil fuel extraction. We take action based on the information we have. If energy is affordable, why spend effort or money to conserve?

We need to provide clear and timely feedback to consumers and leverage the market to increase demand for efficiency. In order to do this, subsidies on fossil fuels need to be removed. A price on carbon must be implemented. We should measure and benchmark the performance of homes in near real-time. And every home in this country should have an efficiency rating to help home buyers and renters objectively assess the efficiency of different homes. Any combination of these actions would drastically increase demand for efficiency.

Let’s Work Together
The research presented by the economists in this paper is not perfect, but is valuable. We can learn from it, just as the academics can learn from some of the many criticisms of the paper if they so choose. Let’s hope that efficiency experts and academics can work together in the future to improve energy efficiency research. Personally, I want to know where we can improve and where we should focus our efforts—even if it makes me a little uncomfortable to hear it.