New EPA SILs: Part 2 - Technical Basis
As reported in the last issue of NSR Law Blog, on April 17, 2018, the EPA released its new “Guidance on Significant Impact Levels for Ozone and Fine Particles in the Prevention of Significant Deterioration Permitting Program.”
The new guidance presents an updated method of determining significant impact levels (SILs). This issue of NSR Law Blog looks at that approach in more detail for what it says both about these SILs and about the future direction of the NSR programs.
ANALYSIS
As stated in the guidance, EPA lays out the technical basis for its new approach in a 51-page technical addendum, “Technical Basis for the EPA’s Development of the Significant Impact Thresholds for PM2.5 and Ozone,”
released on the same day. This is supplemented by a “Peer Review Report for the Technical Basis for the EPA’s Development of Significant Impact Thresholds for PM2.5 and Ozone,” and represents a significant effort on the part of EPA and its staff to present a technical and scientific justification for the SILs. As EPA states in the Introduction, “a ‘significant impact’ (in quotes) refers to a level of air quality change that can be used in the permit analysis of the ambient impacts from a facility to determine if it ‘causes or contributes’ to a violation of the applicable national ambient air quality standard or PSD increment.”
EPA begins by laying out its fundamental concept, as follows:
In order to understand the nature of air quality, the EPA statistically estimates the distribution of pollutants contributing to ambient air quality and the variation in that air quality. The statistical methods and analysis detailed in this report focus on using the conceptual framework of statistical significance to calculate levels of change in air quality concentrations that have a “significant impact” or an “insignificant impact” on air quality degradation. Statistical significance is a well-established concept with a basis in commonly accepted scientific and mathematical theory. …
The EPA has decided that an “insignificant impact” level of change in ambient air quality can be characterized by the observed variability of ambient air quality levels. … The EPA’s technical approach, referred to as the “Air Quality Variability” approach, relies upon the fact that there is inherent variability in the observed ambient data, which is in part due to the intrinsic variability of the emissions and meteorology controlling transport and formation of pollutants, and uses statistical theory and methods to model that intrinsic variability in order to facilitate identification of a level of change in DVs [design values, based on the last year or 3-year period, depending on NAAQS] that is acceptably similar to the original DV, thereby representing a change in air quality that is not significant.
Technical Basis at 5–6. The bulk of the analysis then focuses on the proper confidence interval to select, using a statistical approach called “bootstrapping.” Bootstrapping was selected because it can be used (1) when the underlying distribution of the sample statistic is unknown, and (2) when the derivation of the corresponding estimates is computationally infeasible or intractable.
Based on an analysis of ozone and PM2.5 data collected from 2000 to 2016 (17 years), EPA generated a large number of resampled datasets for DVs at each monitor. These DVs were then used to determine confidence intervals (CIs) that provide “a measure of the inherent variability in air quality at the monitor location,” which may be driven by either meteorological or emissions conditions. Based upon a full assessment of the CIs developed in this dataset, EPA ultimately settled on a CI of 50% as representing the appropriate “bounds of a change in air quality that can be considered an ‘insignificant impact’ for the purposes of meeting requirements under the PSD program.” Technical Basis at 7.
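The mechanics can be illustrated with a short sketch. The following is a minimal Python illustration of a percentile bootstrap of the kind the Technical Basis describes; the 20,000 resamples and the 50% interval come from the documents, while the sample data and function names are hypothetical.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    def bootstrap_ci(values, statistic, n_resamples=20_000, ci=0.50):
        # Resample with replacement, recompute the statistic each time,
        # and take the central `ci` interval of the resulting distribution
        # (a 50% CI spans the 25th to 75th percentiles).
        estimates = np.empty(n_resamples)
        for i in range(n_resamples):
            resample = rng.choice(values, size=len(values), replace=True)
            estimates[i] = statistic(resample)
        half = 100 * ci / 2
        return np.percentile(estimates, [50 - half, 50 + half])

    # Hypothetical daily concentrations at a single monitor (illustrative only)
    daily = rng.normal(loc=9.0, scale=2.0, size=365)
    print(bootstrap_ci(daily, np.mean))  # 50% CI on the annual mean

The appeal of this method, as EPA notes, is that it requires no assumption about the shape of the underlying distribution; the observed data stand in for it.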
The balance of the Technical Basis document presents an overview of the US sampling system, an overview of the statistical approach, causes of variability, and similar concerns. The datasets were developed as follows:
PM2.5 annual. All data resampled by quarter, quarterly means computed, annual means derived from the quarterly means, and the DV set as the average of the three annual means. All values were rounded to the nearest tenth of a µg/m³.
PM2.5 24-hour. All data resampled by quarter, the resampled data ranked, and the 98th percentile value selected based on the number of daily measurements for the year. The DVs were then computed as the average of the three annual 98th percentile values and rounded to the nearest µg/m³.
Ozone. All available data were used, even if outside the ozone season. The MDA8 values were ranked to find the 4th-highest value. The DVs were then computed as the average of the three annual 4th-highest MDA8 values, which were truncated to the nearest ppb.
This process was repeated 20,000 times. From the 20,000 estimates, the mean, median, standard deviation, maximum, minimum, and the 25%, 50%, 68%, 75%, and 95% CIs were computed and retained. A simplified sketch of the ozone version of this procedure appears below.
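The sketch below illustrates the ozone procedure just described. The resampling rules and the 20,000 repetitions come from the Technical Basis; the data and helper names are hypothetical.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    def ozone_dv(mda8_by_year):
        # Design value: 3-year average of each year's 4th-highest MDA8,
        # truncated to whole ppb per the rule summarized above.
        fourth_highest = [np.sort(year)[-4] for year in mda8_by_year]
        return int(np.mean(fourth_highest))  # int() truncates

    def resampled_dvs(mda8_by_year, n_resamples=20_000):
        # Resample each year's MDA8 values with replacement and recompute
        # the DV, building the distribution from which the CIs are taken.
        dvs = np.empty(n_resamples)
        for i in range(n_resamples):
            boot = [rng.choice(year, size=len(year), replace=True)
                    for year in mda8_by_year]
            dvs[i] = ozone_dv(boot)
        return dvs

    # Three hypothetical years of daily MDA8 values in ppb (illustrative only)
    years = [rng.normal(loc=55.0, scale=8.0, size=365) for _ in range(3)]
    dvs = resampled_dvs(years)
    print(np.percentile(dvs, [25, 75]))  # bounds of the 50% CI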
Ozone Results
The results from the resampling agreed well with the original results. There is an increase in the absolute variability of the results with an increase in the baseline DVs, but there is no apparent trend in the relative variability. In EPA’s view, this indicates that a central tendency value for the relative variability in the DV for the ozone NAAQS is stable across levels of ozone concentration, and that a representative value can be multiplied by the level of that NAAQS to obtain a concentration that may “appropriately” characterize variability for sites with air quality that “just complies” with the NAAQS. EPA summarized the results of this analysis in a summary table in the Technical Basis document.
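To make the multiplication step concrete, consider a hypothetical calculation; the relative variability figure below is an assumption for illustration, not a value drawn from the Technical Basis.

    # Hypothetical: assume the 50% CI corresponds to a relative
    # variability of about 1.4% at sites "just complying" with the
    # 70 ppb 8-hour ozone NAAQS (the 1.4% figure is assumed).
    naaqs_ppb = 70.0
    relative_variability = 0.014   # assumed for illustration only
    sil_ppb = naaqs_ppb * relative_variability
    print(round(sil_ppb, 1))       # roughly 1.0 ppb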
PM2.5 Results
The PM2.5 data were more variable, with the 24-hour values greater than the annual values, which was not unexpected given the higher variability associated with a 98th percentile value. The dataset was also more affected by outlier values in the original dataset, which tended to skew the values to the right. Despite these issues, the mean and median bootstrap values almost perfectly matched the baseline DVs. As with ozone, EPA concluded that the relative variability was relatively insensitive to the baseline concentration. EPA summarized the results of this analysis in a summary table in the Technical Basis document.
EPA then undertook a spatial and temporal variability analysis. This analysis showed no “large scale” variations (e.g., regional), but did find that there could be some local-scale issues. The peer review document suggested some refinements, which EPA conducted, but EPA ultimately concluded that even with those refinements there does not appear to be significant regional variation. Based on this conclusion, EPA used the 50% CI from the entire US ambient monitoring network to calculate SIL values.
COMMENTARY
EPA’s Guidance and Technical Basis documents set forth a much stronger basis for the derivation and use of SILs than we have seen up to this time. The initial SILs were based on certain assumptions about the impacts of sources derived from the cruder models and modeling networks of the time. While they have held up reasonably well over the years, EPA’s new approach, which looks to inherent variability and asks when statistics can tell us that an observed change is “statistically significant,” represents an enhanced level of technical analysis and sophistication.
The work that EPA put into showing that the relative variability is substantially unaffected by the baseline concentrations is of particular benefit. It strongly suggests that there is a certain level of “noise” in the data, that this level of noise is relatively consistent given current monitoring and statistical techniques, and that data observed within that “noise” threshold are probably not indicative of meaningful change. EPA’s analysis thus results in an approach that closely agrees with the de minimis exception goal articulated by the D.C. Circuit in Alabama Power v. Costle.
The only cavil with the approach is that, if all changes are on the “high” side of the equation, over time one might expect to see the mean and median slowly trend upward. However, given the technical robustness of EPA’s approach, any such increase should be visible in monitoring data and can be corrected by other Clean Air Act tools. Accordingly, it should not be a basis for rejecting EPA’s approach out of hand.