Development of the SCEC Master Model

- A SCEC Highlight -

Written for Group A by Edward (Ned) Field

Although there has been some confusion over the years as to what the master model was supposed to be, the original proposal was actually quite specific:

    "... the goal of SCEC is to integrate research findings from various disciplines in earthquake-related science to develop a prototype probabilistic seismic hazard model (master model) for southern California"

    "... the master model represents a constantly updated scientific representation of seismogenic structures and earthquake processes ... developed into forms applicable to earthquake hazard mitigation in the public and private sectors."

    "In essence, it will be the solid earth dynamics equivalent of global atmospheric and ocean circulation models, and will reflect a consensus distilled from the most current scientific thinking."

    "Through appropriate interaction and feedback, the requirements of the master model will guide data acquisition and interpretation."

As such, construction of the PSHA master model has been the focus of Group A, and the principal manifestation has been in the form of the Phase X documents. (Click on the following image to read more).

Figure_01_SCEC1_MM.jpg

 

PSHA has two components: (1) the source model (a specification of where and when earthquakes are likely to occur); and (2) the ground motion model (attenuation relationship).
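How these two components combine can be sketched in a few lines. The following is a minimal, purely illustrative Python example (the source rates, median PGA values, and sigma are hypothetical, not real southern California numbers): the source model supplies annual occurrence rates, the ground motion model supplies the probability of exceeding a given shaking level, and summing their product gives an annual exceedance rate.

```python
import math

def prob_exceed(a, median, sigma_ln):
    """P(PGA > a) under a lognormal ground-motion model
    with the given median (in g) and log-standard deviation."""
    z = (math.log(a) - math.log(median)) / sigma_ln
    # complementary CDF of the standard normal
    return 0.5 * math.erfc(z / math.sqrt(2))

# Hypothetical source model: (annual rate, median PGA at the site in g).
# Values are illustrative only.
sources = [(0.01, 0.30), (0.05, 0.10), (0.002, 0.60)]

SIGMA_LN = 0.6  # typical aleatory scatter of an attenuation relationship

def annual_exceedance_rate(a):
    # Combine source model (rates) with ground-motion model (exceedance probs)
    return sum(rate * prob_exceed(a, med, SIGMA_LN) for rate, med in sources)

rate = annual_exceedance_rate(0.2)   # annual rate of exceeding 0.2 g
p50 = 1 - math.exp(-rate * 50)       # Poisson probability in 50 years
```

The 50-year exceedance probability in the last line follows from the usual Poisson assumption.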

The Phase II report represented the first large-scale effort to integrate a broad range of information into a single seismic-hazard source model (see the Earthquake Forecasting highlight talk by Jackson, or http://www.scec.org/research/RELM for background info on Phase II).

With lessons learned from that and subsequent studies (especially related to the implied earthquake deficit problem), Phase IV is returning to this issue of PSHA source model development.

(I'm going to skip Phase III for now and return to it later).

The Phase IV effort, previously referred to as Working Group 2000 (WG2K), is now known as RELM (a working group for the development of Regional Earthquake Likelihood Models).

Note that the name does not specify a publication date, an organization, or a particular region.

The RELM effort reflects a desire and need for increased interaction between SCEC and the USGS (the USGS logo has been elevated from a member institution to a co-institution).

RELM will also inevitably form a bridge between SCEC 1 & 2.

But most importantly here, RELM embodies a fundamentally different approach from that taken by previous working groups - namely, developing and testing a RANGE of viable models. (Click on the following image to read more).

Figure_02_RELM_overview.jpg

 

We are encouraging unfettered creativity in constructing the models, including speculative ones, as long as they can be published in a peer-reviewed journal.

The models thus far slated for analysis (see RELMmodels.html) include, e.g.:

  • those based on geology alone
  • those based only on seismicity (with large upper-magnitude limits)
  • perhaps some stress-transfer type models

Evaluating a range of models is in line with SCEC's role of developing and testing the ingredients - leaving the final consensus building to those whose mandate it is to generate official hazard maps. In the parlance of PSHA, we're exploring potential branches of the logic tree.
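In logic-tree terms, each alternative source model becomes a weighted branch, and the hazard estimate is their weighted combination. A minimal sketch (the branch names, weights, and earthquake rates below are hypothetical illustrations, not RELM results):

```python
# Weighted logic-tree combination of alternative source models.
# Each entry: branch name -> (branch weight, annual rate of M >= 6.5).
# All values are hypothetical.
branches = {
    "geology_only":    (0.4, 0.012),
    "seismicity_only": (0.4, 0.020),
    "stress_transfer": (0.2, 0.015),
}

# Branch weights must sum to one for a valid logic tree.
assert abs(sum(w for w, _ in branches.values()) - 1.0) < 1e-9

# Weighted-mean rate across the explored branches.
mean_rate = sum(w * r for w, r in branches.values())
```

Keeping the branches explicit, rather than collapsing them prematurely, is exactly what lets a later consensus-building body choose and reweight them.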

There are several related efforts needed to achieve the goals of RELM - read RELMproducts.html

Many different groups (e.g., academia, USGS, CDMG, consultants) have traditionally been involved in each of these, and we are striving to combine forces to avoid duplication of effort, as well as to achieve consistency and standardization.

It's going to take some time, which is why RELM will likely extend into SCEC-2, but I think it's more important to do it right than to do it quickly.

We want to carefully establish standards that will live on well into the future.

Although we want to think big on all of these (set standards flexible enough to expand and grow with new data and ideas in the future), we need to prioritize development so that immediate needs are addressed first. We don't want ongoing initiatives to stall out due to overambition. Thus, we are striving to have:

  • a version 0 for each within months (something put together that can be reviewed by the community);
  • a version 1 in about a year (a reviewed version available online);
  • subsequent versions that live on indefinitely.

More detailed info on RELM can be found at: http://www.scec.org/research/RELM

 

 

The other working-group effort toward building the master model is Phase III.

Figure_01_SCEC1_MM.jpg

The goal of the SCEC Phase III Report was to investigate the extent to which site effects can be accounted for in probabilistic seismic hazard analyses of southern California. (see http://www.scec.org/phase3)

The report is composed of a collection of 14 papers, all of which are now in press as a special, separately bound issue of BSSA that should be published by the end of the year.

There are a lot of very interesting findings.

The top three are listed here:

(1) The most detailed geological maps are not warranted for microzonation; the Wills et al. map is.

(This shouldn't necessarily discourage detailed mapping efforts, as it's possible we've been led astray by data limitations.)

 

(2) Basin depth is a significant factor, even for PGA (but may be a proxy).

(This implies up to a factor-of-two difference between the edge and the deepest parts of the LA basin, other things being equal - very significant for engineering design, where a difference of more than 10% raises eyebrows. In fact, for PGA basin depth is more important than detailed geology - a finding unique to Phase III, and one that will certainly influence building-code revisions (current codes do not consider basin depth).)

 

(3) Uncertainty (sigma) remains high after all site corrections.

 

(This is a very important and relevant conclusion with respect to the master model and the future of seismic hazard analysis. It has to do with the intrinsic variability of site amplification with respect to source location.)

 

Figure_08_Olsen_simulations.jpg, adapted from Kim Olsen's Phase III paper, nicely exemplifies this intrinsic variability of site amplification with respect to source location.

What's shown are site amplification maps from 3D finite difference simulations for nine different earthquakes.

Note the significantly different amplification patterns among the earthquake scenarios - even for identical ruptures on the SAF that differ only in having rupture start at opposite ends of the segment.

This means that the attenuation-relationship approach of representing ground motion is reaching a point of diminishing returns.

Yes, some important improvements can and should be made, but it's doubtful that we will ever dramatically reduce sigma - a fact at odds with the prevailing perspective in the engineering community.

Our best hope for more accurate estimates of earthquake ground motion is via waveform modeling (as in Figure_08_Olsen_simulations.jpg). However, such capabilities are in their infancy, especially in terms of having reliable methodologies that can be applied routinely in engineering practice.

This led to discussion at last year's (1999) SCEC meeting about whether Figure_09_1999_MM.jpg is now a more appropriate master model:

The two approaches to hazard analysis are shown: PSHA vs waveform modeling.

They're complementary, and we need both. PSHA gives the composite hazard and thereby identifies, via disaggregation, the most hazardous scenario(s) for a given location. Synthetic waveforms could then be generated for these events for use in nonlinear structural analysis in engineering design (sometimes called "performance-based design", though I'm told by some that this is not the correct usage, since PSHA can also be used in performance-based design). We can envision such a clickable, community master model sometime in the future.
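Disaggregation itself is simple in outline: rank scenarios by their fractional contribution to the total exceedance rate at a site. A hypothetical sketch (the scenario names and rates below are illustrative only, not real hazard contributions):

```python
# Hypothetical disaggregation: annual rate contribution of each scenario
# to exceeding a target ground motion at one site. Values are illustrative.
contributions = {
    "SAF_rupture":  0.008,
    "blind_thrust": 0.004,
    "background":   0.002,
}

total = sum(contributions.values())

# Fractional contribution of each scenario to the composite hazard.
fractions = {name: rate / total for name, rate in contributions.items()}

# The dominant scenario is the one to target with synthetic waveforms.
dominant = max(contributions, key=contributions.get)
```

The dominant scenario identified this way is the natural candidate for generating synthetic waveforms for nonlinear structural analysis.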

Engineers are waiting for reliable waveforms before moving to nonlinear dynamic analyses, whereas seismologists are waiting for the demand before developing the modeling capabilities.

SCEC-2 could break this gridlock and thereby drive a revolution in seismic hazard analysis.

I can provide more arguments in favor of this approach on request.

Using Figure_09_1999_MM.jpg as the new master model may strike some as too applied.

 

Figure10_2000_MM.jpg is my latest attempt to construct an all-inclusive paradigm.

Boiling it down to the common interest of scientists and engineers alike: the Ultimate Goal is to understand deformation over time and space (e_ij(x,t)), particularly during large earthquakes.
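Assuming the notation e_ij(x,t) refers to the standard infinitesimal strain tensor of continuum mechanics, it is given in terms of the displacement field u_i(x,t) by:

```latex
\epsilon_{ij}(\mathbf{x},t) \;=\; \frac{1}{2}\left(
  \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}
\right)
```

That is, the symmetric part of the displacement gradient - the quantity both earthquake source models and engineering demand calculations ultimately constrain.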

This is a utopian vision - we may never get there, but we'll do good things in trying for it anyway.

It necessitates a practical approach:

  • Collect data
  • Construct VARIETY of physically based models
  • Translate these into Hazard Models
  • Educate and Reach out (NOT SHOWN)

As in RELM, don't force consensus through compromise; encourage out-of-the-box thinking.

This includes everyone (even those not interested in hazard analysis).

However - hazard analysis is where the rubber hits the road.

Again, in SCEC-2 (and after RELM) we should shift working-group level emphasis toward waveform modeling and thereby help drive a revolution in hazard analysis.

That's not to say that we should abandon PSHA - it's needed too, and there are remaining issues to resolve (some of which can be addressed by waveform modeling). However, the level of resources devoted to PSHA in SCEC-2 (and after RELM) should be on a par with those devoted to waveform modeling in SCEC-1 (again, a shift in emphasis).

Should we call this a "Master Plan" rather than a "Master Model"?