Seismology
Seismogram synthesis
These simple geometric arguments can be extended to the computation of amplitudes provided that there are no sharp discontinuities in the velocity as a function of depth. More exact representations of the amplitudes and wave shapes that solve the full-wave equation to varying extents can be constructed with the aid of powerful computers; these methods are collectively known as seismogram synthesis, and the seismograms thus computed are known as synthetics. Synthetics can be computed for elastic or dissipative media that vary in one, two, or three dimensions.
The fundamental rationale for computing synthetics is rooted in the notion of the seismogram as an observable entity, and the distributions of Lamé parameters and density within the Earth as unknowns. In other words, it is the province of seismological imaging to estimate the three-dimensional variation of elasticity and density from observations of seismic-wave propagation. Beyond elucidation of basic structure, the variation of elasticity and density is a proxy for the variation of the chemical and thermal properties within the Earth. These in turn provide constraints on the long-term evolution of the Earth as a physical system.
Seismogram synthesis is an example of a forward problem; given a mathematical representation of the Earth and a model of the seismic source, attempts are made to compute synthetic seismograms (or some observables like travel time). For every forward problem, there is a concomitant inverse problem. The general statement of the inverse problem is: Given a set of observations of some measurable seismic disturbance at the Earth's surface, what can be said about that part of the Earth through which the disturbance passed and the characteristics of the source of the disturbance? The source and structural inverse problems are coupled in that something must be known about the source before the seismogram can be inverted for structure, and vice versa. This is not a problem in crustal or exploration imaging, where the properties of the artificial source are for the most part known and controlled. It is a problem when natural sources like earthquakes are used, and specific steps must be taken either to isolate source and structure inversions or to do them jointly.
The development of inversion methods is a major area of research in seismology. The kinematic predictions of ray theory provide a useful starting point for a discussion of seismogram interpretation and inversion.
Crustal imaging
In a typical experiment for crustal imaging, a source of seismic energy is discharged on the surface, and instruments record the disturbance at numerous locations. Many different types of sources have been devised, from simple explosives to mechanical vibrators and devices known as airguns that discharge a “shot” of compressed air. The details of the source–receiver geometry vary with the type of experiment and its objective, but the work always involves collecting a large number of recordings at increasing distance from the source. Figure 4a shows a seismogram, known as a T(x) record, resulting from one such experiment conducted at sea where an airgun source was fired and recordings were made by devices towed several meters below the sea surface from a research vessel. The strength of an arriving disturbance can be judged approximately by how dark the record appears.
Fig. 4 Crustal imaging. (a) A T(x) seismogram recorded off the continental margin of East Greenland as part of a two-ship experiment designed to study the structure of the crust. (b) The T(x) seismogram data are transformed to τ(p) for analysis and for beginning the travel-time inversion. The part of the τ(p) data corresponding to the turning rays is organized into the strong band of energy in the seismogram on its lower right side. (c) By inversion and iterative forward modeling, an estimate of the change in compressional-wave velocity with depth is derived. (d) Rays that propagate through the structure derived from analysis of the seismogram take complex paths.
This seismogram is complex, exhibiting a number of distinct arrivals with a variety of shapes and having amplitudes that change with distance, though they do not simply lose amplitude with increasing distance. Although this seismogram clearly does not resemble the structure of the Earth in any sensible way and is therefore not what would normally be thought of as an image, it can be analyzed to recover estimates of those physical properties of the Earth that govern seismic-wave propagation. Some simple deductions can be made by inspection. For instance, in a flat Earth the travel-time equation is hyperbolic in distance. At distance ranges up to about 15 km (9 mi), the observed T(x) seismogram in Fig. 4a indeed exhibits a number of segments which have a roughly hyperbolic shape. This can be taken immediately to indicate that the Earth here probably contains at least a few simple layers of uniform velocity. Some segments, particularly those developed at ranges greater than about 20 km (12 mi), do not show a hyperbolic form. In fact, their curvature is in the opposite sense to that expected for a hyperbolic reflection form. This indicates that the arrivals have propagated through a structure in which there is a continuous increase in velocity with depth, and are therefore turning or diving rays as discussed above.
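The hyperbolic form mentioned above can be checked with a short numerical sketch. For a single flat layer of thickness h and uniform velocity v (hypothetical values below), the two-way reflection time is T(x) = sqrt(x² + 4h²)/v, so T² plotted against x² is a straight line of slope 1/v²:

```python
import math

def reflection_time(x_km, h_km, v_kms):
    """Two-way reflection time from a flat interface at depth h in a
    uniform layer of velocity v: T(x) = sqrt(x^2 + 4*h^2) / v."""
    return math.sqrt(x_km ** 2 + 4.0 * h_km ** 2) / v_kms

# Hypothetical layer: 2 km thick, 3 km/s.
h, v = 2.0, 3.0
xs = [0.0, 3.0, 6.0, 9.0]                      # source-receiver offsets, km
t2 = [reflection_time(x, h, v) ** 2 for x in xs]
slopes = [(t2[i + 1] - t2[i]) / (xs[i + 1] ** 2 - xs[i] ** 2) for i in range(3)]
# every slope equals 1/v^2, the signature of a hyperbola in T(x)
```

This T²-versus-x² test is one quick way to judge whether an observed arrival is a simple layer reflection.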
The formal procedure of travel-time inversion involves deriving the turning-point depth for each ray parameter by using a formula for zmax that is obtained from Eq. (5) by an integral transform known as the Abel transform. This approach has been used extensively by seismologists, often very successfully. Methods have been developed that allow observational uncertainties to be included in the procedure so that the inversion calculates all of the possible υ(z) functions allowed by the data and their associated uncertainties, thereby giving an envelope within which the real Earth must lie. The approach has several difficulties principally associated with the nonlinear form of the X(p) equation. Although the desired quantity, υ(z), is single-valued (the Earth has only one value of velocity at any given depth), X(p) can be multivalued. X(p) also has singularities that are caused when the quantity under the square root goes to zero. Thus, care must be taken not to use rays that do not turn but are reflected at interfaces. In practice, separating out only those arrivals needed for the inversion can be very difficult. Furthermore, X(p) data are not actually observed but must be derived from observations. Methodologies that can overcome these problems attempt to linearize the equation with respect to small variations in X(p) and obtain an inversion that includes solution bounds derived from data uncertainties by linear programming.
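A minimal numerical sketch of this Abel-transform (Herglotz–Wiechert) inversion can be given for a medium with an assumed linear velocity gradient, v(z) = v0 + gz, for which X(p) has a closed form against which the recovered turning depth can be checked; all parameter values are hypothetical:

```python
import math

# Hypothetical 1-D Earth: v(z) = v0 + g*z, for which a turning ray has the
# closed-form range X(p) = 2*sqrt(1 - p^2*v0^2) / (p*g).
v0, g = 4.0, 0.5                       # km/s and 1/s (assumed values)

def X_of_p(p):
    return 2.0 * math.sqrt(max(0.0, 1.0 - (p * v0) ** 2)) / (p * g)

def turning_depth(p1, n=4000):
    """Herglotz-Wiechert (Abel-transform) travel-time inversion:
    z_max(p1) = (1/pi) * integral from 0 to X(p1) of arccosh(p(x)/p1) dx,
    where p(x) is the ray parameter of the ray emerging at range x."""
    X1 = X_of_p(p1)
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * X1 / n         # midpoint quadrature in range x
        lo, hi = p1, 1.0 / v0          # X(p) decreases as p increases
        for _ in range(60):            # invert X(p) = x by bisection
            mid = 0.5 * (lo + hi)
            if X_of_p(mid) > x:
                lo = mid
            else:
                hi = mid
        total += math.acosh(0.5 * (lo + hi) / p1)
    return total * X1 / (n * math.pi)

# The ray whose velocity 1/p1 is reached at 5 km depth should be found
# to turn at 5 km:
z = turning_depth(1.0 / (v0 + g * 5.0))
```

The recovered depth agrees with the known turning depth (1/p1 − v0)/g to a small fraction of a kilometer, illustrating why the method works well when X(p) is single-valued.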
Many of these difficulties have been overcome by a coordinate transformation, shown in Eq. (10).

τ(p) = T − pX   (10)
Fig. 5 Forward modeling; single and double arrows indicate the two types of reflected arrivals from the top of the axial magma chamber beneath the East Pacific Rise. (a) A T(x) seismogram obtained on the spreading center of the East Pacific Rise, used to obtain a velocity depth function by inversion and travel-time modeling. (b) A full synthetic seismogram, calculated and compared to the observed data as a way of resolving more detail in the structure than could be obtained from the travel times. In both parts, particular seismic phases [turning rays (T) and reflected rays (R)] are indicated; the vertical axis is a reduced time scale. (From E. Vera et al., The structure of 0.0 to 0.2 m.y. old oceanic crust at 9°N on the East Pacific Rise from expanded spread profiles, J. Geophys. Res., 95:15529–15556, 1990)
The term τ(p) is known as the intercept time, and it can be thought of as the time that a vertically traveling ray takes to propagate upward from the turning point of a ray with ray parameter p. Although Eq. (10) still represents a nonlinear relationship between data τ(p) and desired Earth model υ(z), the square root no longer presents a problem, and this leads to considerable stabilization of the inversion. The observed seismogram can be easily transformed to τ(p) by a process known as slant stacking, in which a computer search is made for all arrivals having a particular horizontal slowness; those data are summed; and using the appropriate X and T data, τ(p) is computed (Fig. 4b). Most importantly, like the desired υ(z) function, these τ(p) data are single-valued, so that deriving the υ(z) function amounts to a point-by-point mapping from τ(p) to υ(z).
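Slant stacking can be illustrated with a synthetic gather containing a single plane-wave arrival; the sketch below uses hypothetical sampling parameters chosen so that the arrival falls exactly on time samples:

```python
import numpy as np

# Hypothetical T(x) gather holding one linear arrival t = tau0 + p0*x
# (a plane wave with horizontal slowness p0).
dt, dx = 0.005, 0.05            # s and km (assumed sampling)
nt, nx = 500, 101
tau0, p0 = 0.4, 0.2             # intercept (s) and slowness (s/km)
gather = np.zeros((nx, nt))
for ix in range(nx):
    it = int((tau0 + p0 * ix * dx) / dt + 0.5)
    gather[ix, it] = 1.0        # a spike on each trace

def slant_stack(data, p, dt, dx):
    """One trace of the tau(p) transform: shift each trace by p*x and sum,
    i.e. sum amplitudes along the line t = tau + p*x for every tau."""
    nx, nt = data.shape
    out = np.zeros(nt)
    for ix in range(nx):
        shift = int(p * ix * dx / dt + 0.5)
        if shift < nt:
            out[: nt - shift] += data[ix, shift:]
    return out

# Energy focuses at tau = tau0 only when the trial slowness matches p0:
trace_right = slant_stack(gather, 0.2, dt, dx)   # sharp peak at tau0
trace_wrong = slant_stack(gather, 0.0, dt, dx)   # energy smeared in tau
```

When the trial slowness matches the arrival, all the trace amplitudes add in phase at a single intercept time; otherwise they scatter, which is how the transform organizes the data into τ(p).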
Much of this theory has been known for quite some time, but the τ(p) approach to inversion and analysis of seismic data did not see extensive application until relatively recently. One reason is that the poor quality and sparseness of many early seismic experimental data did not allow them to be treated with τ(p) methods. Data often comprised just a dozen or so seismograms obtained by using explosive charges and a few recording stations, the records from which were made in analog form, usually on paper. The example seismogram (Fig. 4a) is typical of modern marine seismic data in comprising several hundred equally spaced traces. The seismic source consists of an array of airguns. Seismic arrivals are recorded on another vessel towing a hydrophone array more than 2 km (1.2 mi) long containing several thousand individual hydrophones. Recording on this ship is synchronized to the shooting on the other vessel. Ship-to-ship ranges are measured electronically and written onto the same system that records the seismic data together with other information such as shot times. This experimental method leads to the dense sampling of the seismic wavefield that is required for the correct computer transformation of the observed data into τ(p). The process is extremely demanding of computer time; it has come into common use only since relatively inexpensive, fast computers have made high computational power available to most researchers.
Fig. 6 Imaging by seismic tomography. (a) The type of ray pattern that can be used to conduct a tomographic inversion of travel-time data. (b) Travel-time partial derivatives that are closely related to the ray density and hence to the ability of the method to resolve structure.
Figure 4c shows the result of analysis of the seismogram of Fig. 4a. It shows a profile of compressional-wave velocity against depth in the Earth at a location on the continental margin off East Greenland. The upper part of the crust, where sedimentary layers are present, is particularly complex, showing uniform-velocity layers, layers with a gradational increase in velocity with depth, and one layer (at a depth of 3 km or 1.9 mi) in which the velocity is less than that of the layer above. This υ(z) profile was constructed by the combination of several methodologies, including travel-time inversion from the τ(p) data in Fig. 4b and forward modeling of travel times and seismic amplitudes. Figure 4d depicts the rays that would propagate through such a structure; it is possible to recognize reflected arrivals that bounce off the interfaces between layers, and diving rays that smoothly turn in layers in which the velocity increases smoothly with depth.
Information from seismic waveforms
Real seismic disturbances have a finite time duration and a well-defined shape (see insert in Fig. 1). In passing through the Earth, any seismic disturbance changes shape in a variety of ways. The amount of energy reflected and transmitted at an interface depends on the ray incident angle (or equivalently on its ray parameter) and the ratio of physical properties across the boundary. It is sensitive both to the medium's compressional-wave and shear-wave velocities and to the ratio of densities. This is because, as well as splitting the arrival into reflected and refracted waves, boundaries act to convert waves from one mode of propagation to another. Thus, some of the compressional-wave energy incident on a boundary is always partitioned into shear energy.
Fig. 7 Schematic representation of a standard multichannel seismic profiling experiment and the postcruise data processing. The first three rows show the ship towing an array of airguns and a recording streamer. The airguns fire at fixed intervals, sending acoustic energy (the heavy lines) into the water and the oceanic crust. The first (top) pulse represents the reflection of acoustic energy from the interface between the water and the sediment, and the second (bottom) pulse represents the reflection from the sediment–basement interface. The panels to the right show the T(x) seismograms recorded by each of the three hydrophones [channel offset (x) 1, 2, and 3] for shots 1, 2, and 3, with time increasing downward. The steps in the postcruise signal processing are denoted by (1)–(4) in the corner of the T(x) plots. Since the geometry of the shots and the recording streamer are maintained as the ship steams along, the T(x) seismograms of the three shots can be summed or “gathered” to reduce the effect of noise [plot (1)]. In (2), the common-depth-point gather is corrected for the effects of velocity moveout by fitting short sections of hyperbolas to the pulses. The results for each channel can then be gathered again to reduce noise further, as in (3). This is done continuously as the ship steams along, eventually producing an image of the interfaces, as in (4). (After J. Mutter, Seismic images of plate boundaries, Sci. Amer., pp. 66–69, February 1986)
Formulas for reflection and transmission coefficients are substantially more complex than those for the angles, and they are derived from considerations of traction across the boundary. At some incident angles the reflected energy becomes particularly strong. One point at which this happens is when the refracted wave travels horizontally below the interface, giving rise to a propagation mode known as a head wave, and essentially all the incident energy is reflected. This phenomenon is analogous to total internal reflection in optics. On the seismogram shown in Fig. 4a, there are several places where amplitudes become strong, and some can be related to critical-angle reflection, as this phenomenon is known in seismology.
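These relations can be made concrete with a minimal sketch using Snell's law and the normal-incidence acoustic reflection coefficient; this is a deliberate simplification of the full angle-dependent (Zoeppritz) expressions alluded to above, and the property values are hypothetical:

```python
import math

def critical_angle_deg(v1, v2):
    """Incidence angle at which the refracted wave travels horizontally
    below the interface: sin(theta_c) = v1/v2 (Snell's law, v2 > v1)."""
    return math.degrees(math.asin(v1 / v2))

def normal_incidence_R(rho1, v1, rho2, v2):
    """Acoustic reflection coefficient at normal incidence, expressed in
    terms of the acoustic impedances Z = rho*v on either side of the
    boundary. (The full angle-dependent formulas are far more complex,
    as noted in the text.)"""
    z1, z2 = rho1 * v1, rho2 * v2
    return (z2 - z1) / (z2 + z1)

# Hypothetical sediment-over-basement contrast:
theta_c = critical_angle_deg(2.0, 4.0)                 # 30 degrees
r = normal_incidence_R(2000.0, 2.0, 2700.0, 4.0)       # about 0.46
```

Note that the normal-incidence coefficient depends only on the impedance contrast, which is one reason reflection amplitudes constrain the product of velocity and density rather than either property separately.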
Other strong regions correspond to places where energy is focused by the effect of changing subsurface gradients, which cause many diving rays to be returned to the surface near the same regions. Figure 4d was drawn by tracing a series of rays through the structure in Fig. 4c, with each ray incrementing in ray parameter by a fixed 0.005 s/km (0.008 s/mi). There are several regions where the density of rays returned to the surface is relatively sparse, while in other regions the density is very high (around 6 km or 3.7 mi, for instance). This pattern of amplitude variations can be used to refine models obtained by travel-time inversion and modeling to provide estimates of properties such as density, shear velocity, and attenuation (the amount of energy dissipated into heat by internal friction); however, it is often difficult to separate velocity and density uniquely, and estimates of acoustic impedance, the product of velocity and density, are more commonly derived.
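The focusing effect of a velocity gradient can be sketched numerically by evaluating the range integral for a fan of rays with a fixed ray-parameter increment; the linear-gradient model and its parameters below are hypothetical, chosen because the result can be checked against a closed form:

```python
import math

# Hypothetical medium with a linear gradient, v(z) = v0 + g*z.
v0, g = 3.0, 0.8                      # km/s and 1/s (assumed values)

def X_range(p, n=2000):
    """Surface-to-surface range of a turning ray,
       X(p) = 2 * integral from 0 to zmax of  p*v / sqrt(1 - p^2*v^2) dz,
    evaluated with the substitution z = zmax - s^2, which removes the
    square-root singularity at the turning point."""
    zmax = (1.0 / p - v0) / g
    smax = math.sqrt(zmax)
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * smax / n
        vz = v0 + g * (zmax - s * s)
        total += (p * vz / math.sqrt(1.0 - (p * vz) ** 2)) * 2.0 * s
    return 2.0 * total * smax / n

# A fan of rays with a fixed ray-parameter increment of 0.005 s/km
# (as in Fig. 4d) emerges with uneven surface spacing -- the geometric
# origin of the amplitude focusing described in the text:
ps = [0.30 - 0.005 * k for k in range(10)]
ranges = [X_range(p) for p in ps]
gaps = [ranges[k + 1] - ranges[k] for k in range(9)]
```

Even in this smooth model the emergence-point spacing varies by several percent across the fan; sharper gradient changes produce much stronger bunching of rays and hence stronger amplitude anomalies.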
Waveform information can be employed either by forward modeling or by using the T(x) seismogram (Fig. 4a) directly in the inversion procedure. In the forward methodology a synthetic seismogram is calculated on a computer; a model structure, usually derived from travel-time inversion, is used as a starting point, and the effect of propagation of a real seismic source through the structure is computed. The synthetic is then compared to the observed data, misfits noted, and adjustments made to the model until a satisfactory fit is obtained (Fig. 5). Computation of synthetic seismograms is enormously demanding of computer time, and the various methods in use all apply some computational shortcuts that make the problem tractable. As with inversion of travel-time data from τ(p), the advent of modern computers has made the computation of synthetics practical, though only supercomputers allow this to be done routinely.
These very fast machines may also allow waveform data to be incorporated more directly into the inversion of seismic data. Clearly the desired aim is to make use of all the information that the entire seismogram contains to yield estimates of all the physical properties that contribute to the observed waveform of the seismogram. The problem is a highly nonlinear one, and can be regarded as an optimization problem often addressed with a nonlinear least-squares procedure in which computed and observed data are systematically compared and adjusted until a fit is obtained that satisfies some optimization criterion. The computational demands of such a procedure are such that the use of supercomputers is essential. The automatic inversion of the complete seismogram to recover all the physical property information that affects the seismic waveform is one of the most challenging areas of research in crustal seismology.
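The essence of such a procedure can be shown with a toy problem in which the model space has a single parameter; everything here (wavelet, geometry, velocities) is hypothetical:

```python
import math

def pulse(t, t0, width=0.1):
    """A Gaussian wavelet centered at time t0, standing in for a real
    source wavelet."""
    return math.exp(-((t - t0) / width) ** 2)

def synthetic(v, x=6.0, nt=400, dt=0.005):
    """Toy synthetic seismogram: a single arrival whose travel time
    t0 = x/v depends on one model parameter, the medium velocity v."""
    return [pulse(i * dt, x / v) for i in range(nt)]

def misfit(obs, syn):
    """Least-squares (L2) misfit between observed and synthetic traces."""
    return sum((o - s) ** 2 for o, s in zip(obs, syn))

# 'Observed' data generated from a true velocity of 3.2 km/s:
observed = synthetic(3.2)

# Waveform inversion reduced to its essence: search the one-parameter
# model space for the velocity that minimizes the misfit.
best_v = min((misfit(observed, synthetic(v)), v)
             for v in [2.0 + 0.01 * k for k in range(201)])[1]
```

A real waveform inversion performs the same comparison over thousands of parameters with gradient-based updates rather than a grid search, which is where the supercomputer demand arises.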
Two- and three-dimensional imaging
A volume of the crust can be directly imaged by seismic tomography. In crustal tomography, active sources are used (explosives on land, airguns at sea) so that the source location and shape are already known. Experiments can be constructed in which sources and receivers are distributed in such a way that many rays pass through a particular volume, and the tomographic inversion can then produce relatively high-resolution images of velocity perturbations in the crust (Fig. 6). Crustal tomography uses transmitted rays like those that pass from a surface source through the crust to receivers on the same surface.
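The structure of such an inversion can be sketched on a toy grid with straight rays; the anomaly, cell size, and ray set are all hypothetical, and the underdetermined system is solved in the least-squares (minimum-norm) sense:

```python
import numpy as np

# Toy travel-time tomography on a 4x4 grid of 1-km slowness cells.
# Each straight ray's time is the sum over cells of
# (path length in cell) * (cell slowness).
n = 4
s0 = 0.25                              # reference slowness, s/km
true_s = np.full((n, n), s0)
true_s[1, 2] = 0.30                    # hypothetical slow anomaly

rows, times = [], []
for i in range(n):
    r = np.zeros((n, n)); r[i, :] = 1.0        # horizontal ray along row i
    rows.append(r.ravel()); times.append(float((r * true_s).sum()))
    c = np.zeros((n, n)); c[:, i] = 1.0        # vertical ray down column i
    rows.append(c.ravel()); times.append(float((c * true_s).sum()))
d = np.zeros((n, n)); np.fill_diagonal(d, np.sqrt(2.0))   # diagonal ray
rows.append(d.ravel()); times.append(float((d * true_s).sum()))
a = np.fliplr(d).copy()                                   # anti-diagonal ray
rows.append(a.ravel()); times.append(float((a * true_s).sum()))

G, t = np.array(rows), np.array(times)
# invert for the perturbation about the reference model:
resid = t - G @ np.full(n * n, s0)
ds, *_ = np.linalg.lstsq(G, resid, rcond=None)
anomaly = ds.reshape(n, n)
# the largest recovered perturbation falls in the true anomaly cell (1, 2)
```

With only ten rays the anomaly is smeared along the ray paths that cross it, which is exactly the resolution issue that the ray-density (partial-derivative) maps of Fig. 6b are designed to expose.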
Fig. 8 Seismic profiling. (a) Stacked reflection profile obtained by using the common depth point (CDP) profiling method described in Fig. 7. The record represents the acoustic response of the Earth to near-vertical-incidence seismic energy along a traverse. It is analogous to the seismogram in Fig. 4a, which represents the response of the Earth at one location to seismic energy at a wide range of angles. (b) Reflection seismic image produced by migrating the data from a; migration of reflection seismograms can be treated as the equivalent of inversion of refraction seismograms.
Fig. 9 Results of several different seismic investigations of the spreading center of the East Pacific Rise. (a) Reflection seismic image. (b) Velocity structure obtained by tomographic inversion. (c) Compilation of several velocity depth profiles obtained from inversion, travel-time, and synthetic seismogram modeling of refraction seismic data.
Because most applications of tomography make use of rays that are refracted in their passage through a structure, they provide representations of the crust expressed in terms of smoothly varying contours of velocity, or velocity anomaly with respect to some reference. Many interfaces within the crust are associated with relatively small perturbations in velocity and occur on relatively small spatial scales, making their imaging by tomographic techniques essentially impossible. The finely layered strata of a sedimentary basin, for instance, cannot be imaged by such an approach.
The most successful approach that has been devised to address the detailed imaging of crustal structure involves the use of reflected arrivals and is obtained by a profiling technique. In the reflection profile technique, energy sources and hydrophone (or geophone) arrays are the same as those used to create seismograms used for travel-time inversion; but in the profiling experiment, both are towed from the same vessel or moved along the ground together.
To produce an image, the field recordings are first regrouped (gathered), corrected for the effect of varying source–receiver distance, and summed (stacked). The regrouping is made so that arrivals that are summed have come from the same reflection points, and the time corrections are made assuming that all the arrivals obey hyperbolic travel times (see Fig. 7) and Eq. (4). Although the latter assumption is not completely correct, it is an acceptable approximation for the relatively small source–receiver offsets (the distance between the shot position and the receiver) used in this type of experiment. After summation, the resultant seismogram has a very high signal-to-noise ratio and appears as if it were obtained in an experiment in which source and receiver were fixed at zero separation and moved along the profile line. Such a record is shown in Fig. 8.
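The gather-correct-stack sequence can be sketched for a single flat reflector; the sampling, velocity, and offsets below are hypothetical:

```python
import math
import numpy as np

# Hypothetical CDP gather: one flat reflector at two-way time t0 = 1.0 s,
# stacking velocity 2.5 km/s, 16 offsets from 0 to 1.5 km.
dt, t0, v = 0.004, 1.0, 2.5
nt = 400
offsets = [0.1 * k for k in range(16)]
gather = np.zeros((len(offsets), nt))
for ix, x in enumerate(offsets):
    k = math.sqrt(t0 ** 2 + (x / v) ** 2) / dt   # hyperbolic moveout, Eq. (4)
    i, w = int(k), k - int(k)
    gather[ix, i] += 1.0 - w                     # spike split between samples
    gather[ix, i + 1] += w

def nmo_stack(gather, offsets, v, dt):
    """Normal-moveout correction and stack: each output time t0 reads the
    input trace at t = sqrt(t0^2 + x^2/v^2) (linear interpolation), which
    flattens the hyperbola so that the traces sum in phase."""
    nx, nt = gather.shape
    out = np.zeros(nt)
    for ix, x in enumerate(offsets):
        for it0 in range(nt):
            q = math.sqrt((it0 * dt) ** 2 + (x / v) ** 2) / dt
            j, f = int(q), q - int(q)
            if j + 1 < nt:
                out[it0] += (1.0 - f) * gather[ix, j] + f * gather[ix, j + 1]
    return out

stacked = nmo_stack(gather, offsets, v, dt)
# the corrected arrivals align and sum coherently at t0 = 1.0 s (sample 250)
```

If the trial stacking velocity is wrong, the correction leaves residual moveout and the stack loses coherence, which is the basis of velocity analysis in this method.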
Mathematically, once a recording of the wavefield at the surface has been obtained, and since propagation is known to obey the wave equation, the equation can be used to move the wavefield back to its point of origin; that is, it can be extrapolated back down into the Earth to the place where it began to propagate upward. Having done this, and applying some condition that allows the strength of the reflection to be determined, it becomes possible to recover an essentially undistorted image of the structure. A conceptual similarity to travel-time inversion can be recognized in this methodology. In travel-time inversion, formulas are used for X(p) or τ(p) to determine the turning-point depth of a ray. This could be restated by saying that the calculation extrapolates back down the ray from the surface to its turning point. Reflection imaging uses the wave equation to extrapolate the entire reflected wavefield back to its points of reflection, and can be considered the inversion methodology appropriate for the reflected wavefield.
Several wavefield extrapolation methods have been developed, all of which involve several stages of manipulation of the data, and all are very demanding of computer time. Figure 8 shows one example in which data obtained in shallow water over a sedimentary basin containing complex deformation structures have been imaged to reveal a variety of geologically interpretable features. The velocity field in the structure must be known very well (and often is not) for the imaging to be successful. Most extrapolation methods involve some form of approximation to the full-wave equation to make the computations tractable. Most, for instance, allow only mild lateral variations in velocity. To do the job in a complete sense requires use of the full-wave equation and operating on prestack data.
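The idea of collapsing recorded energy back to its point of origin can be illustrated with a diffraction-summation (Kirchhoff-style) scheme, which is conceptually simpler than the wavefield-extrapolation methods described above; the constant-velocity medium and scatterer below are hypothetical:

```python
import math
import numpy as np

# Hypothetical zero-offset section: the diffraction hyperbola written by a
# single point scatterer beneath trace 40 at 0.3 km depth in a
# constant-velocity medium.
v, dt, dx = 2.0, 0.004, 0.025        # km/s, s, km (assumed values)
nx, nt, ixs, z0 = 81, 500, 40, 0.3
section = np.zeros((nx, nt))
for ix in range(nx):
    t = 2.0 * math.hypot(z0, (ix - ixs) * dx) / v   # two-way diffraction time
    section[ix, int(t / dt + 0.5)] = 1.0

def kirchhoff_migrate(section, v, dt, dx, depths):
    """Diffraction-summation migration: for every candidate image point,
    sum the recorded amplitudes along that point's diffraction hyperbola.
    The sum is large only where a scatterer actually exists."""
    nx, nt = section.shape
    image = np.zeros((nx, len(depths)))
    for ix in range(nx):
        for iz, z in enumerate(depths):
            total = 0.0
            for jx in range(nx):
                t = 2.0 * math.hypot(z, (jx - ix) * dx) / v
                it = int(t / dt + 0.5)
                if it < nt:
                    total += section[jx, it]
            image[ix, iz] = total
    return image

img = kirchhoff_migrate(section, v, dt, dx, [0.1, 0.2, 0.3, 0.4])
# the image peaks at trace 40 and depth 0.3 km, the true scatterer position
```

Production wavefield-extrapolation migration achieves the same collapse by downward-continuing the wavefield with the wave equation itself, which handles lateral velocity variation far better than this constant-velocity summation.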
Advances, typically led by industry requirements, have seen surveys conducted to provide three-dimensional images, usually for petroleum reservoir evaluation. Survey lines are conducted on orthogonal grids in which the line spacing is as little as 50 m (160 ft) in both directions. Large exploration vessels carry two long hydrophone arrays set apart by 50 m (160 ft) on large booms or deployed on paravanes. Correct imaging of structure from such recordings requires a fully three-dimensional wavefield extrapolation procedure, and this has been developed successfully by using the power of supercomputers. Once the migrated image has been made, it can be manipulated in a smaller computer to observe structure in ways that cannot be achieved by conventional means. Horizontal slices can be made to show buried surfaces, diagonal cuts can be made to examine fault structure, and the whole image can be rotated to view the volume from any chosen direction.
The correct three-dimensional imaging of the Earth, either the whole Earth by tomographic means or crustal structure by tomography and reflection seismic imaging, represents a frontier research area in seismology. See also: Geophysical exploration; Group velocity; Phase velocity; Seismic exploration for oil and gas; Wave equation
Discoveries
Although the theory of plate tectonics allowed Earth scientists to recognize that the global mid-ocean ridge system represents the locus of formation of oceanic lithosphere, it gave little direct insight into the processes operating at these spreading centers. At a few scattered locations throughout the world, small slivers of oceanic crust known as ophiolites have been thrust into exposure onto continents as a result of collisional tectonics. Their study by geologists led to the proposition that large, shallow, relatively steady-state magma chambers are responsible for producing the crust at all but the slowest-spreading ridges. Efforts directed toward seismic imaging of spreading centers since 1980 produced some unexpected results. See also: Mid-Oceanic Ridge; Ophiolite
Reflection imaging of the fast-spreading center at the East Pacific Rise, combined with two-ship refraction seismic measurements in 1985 and later tomographic imaging, showed that a magma body indeed exists beneath the axis but that it is small and discontinuous (see Fig. 9). The magma body is so small, in fact, that the term chamber seems barely applicable. It is typically not more than 2–3 km (1.2–1.9 mi) wide, and the inferred region of melt with greatly reduced velocities (judged from the thickness of layers determined from the results of inversion and modeling of refraction seismic data) may be only a few hundred meters thick. This region is embedded in a broader zone of reduced velocities that may represent a region containing traces of melt but is more likely to be solid rock in which the temperature has been raised by proximity to the region of melt. The base of the crust, the Mohorovičić discontinuity (Moho), is a strong reflector formed very near the rise axis. See also: Moho (Mohorovičić discontinuity)
Fig. 10 Reflection seismic images of deep continental crust offshore Britain showing a variety of reflecting interfaces in and beneath the crust. (From D. H. Mathews and M. J. Cheadle, Deep reflections from the Caledonides and Variscides west of Britain and comparison with the Himalayas, in M. Banazang and L. Brown, eds., Reflection Seismology: A Global Perspective, Geodynamics Ser., vol. 13, 1986)
Fig. 11 Regional and teleseismic seismograms. The top three traces (seismometer installation dbm2) are the up–down, north–south, and east–west components of the regional seismogram from a 1-ton chemical explosion in the Adirondack Mountains of New York State (recorded at a distance of 107 km or 66 mi for 2 min). The regional phases Pg and Sg are indicated. These waves are guided by the crust over long distances and thus are very complicated. The bottom three traces (seismometer installation 0262) are the three components of the teleseismic seismogram produced by an earthquake in Iran (June 20, 1990; recorded at Palisades, New York, at a distance of 84°, or more than 9300 km or 5780 mi from the epicenter for 2 h). In addition to the P and S arrivals that have propagated through the Earth's mantle, the large-amplitude low-frequency arrivals later in the record are surface waves that have propagated in the Earth's outer layers. Detailed information is given in the text.
Images of continental crust and its underlying mantle have proven to be equally provocative (Fig. 10). Reflection profiling conducted on land in many regions of the world and on the shallow continental seas around Britain has shown that the deep crust is often characterized by an unreflective upper region and a band of strongly laminated reflections forming the lower third or so of the crust above a Moho that is quite variable in nature. The best available images show distinct events from within the mantle section itself. Mantle reflections may result from deep shear zones with reflectivity enhanced by deeply penetrating fluids. As exploration of the upper mantle has continued, it has become clear that structure in the mantle can be imaged by reflection methods to very great depths—perhaps even to the base of the lithosphere. It is possible that reflection methods will eventually be used to investigate the structure of the lithosphere as a whole, thereby complementing studies based on a different class of seismic methods that already have been developed. See also: Lithosphere
Global and regional seismograms
The basic unit of observation in global and regional seismology is a seismogram, but unlike their counterparts in crustal reflection and refraction work, most seismometers used for larger-scale structural studies are geographically isolated from their neighbors. Thus the observational techniques and most methods of analysis used in global seismology have evolved quite differently from those of crustal seismology. Generally speaking, much more of each seismogram must be retained for analysis. The spatial resolution gap left by this sparse station coverage is being closed slowly by the development of new portable instrumentation suitable for recording natural sources. Once these instruments are available in quantity, seismologists will be able to record seismic energy at closely spaced sites that will illuminate in much finer detail structures deep within the Earth.
Seismograms are classified according to the distance of the seismometer from the source epicenter. Those recorded within about 50–100 km (30–60 mi) of a large source are generally complex not only because of intervening structure but because the “dimensions” of the source are close to the propagation distance and different areas of stress release on the fault write essentially different seismograms. The term near-field is given to these seismograms to signify that the propagation distance for the seismic energy is less than a few source dimensions. Understandably, these seismograms are most useful for examining the details of the earthquake rupture. Beyond the near field to distances just past 1000 km (620 mi), the seismograms are dominated by energy propagating in the crust and uppermost mantle. These so-called regional seismograms are complicated, because the crust is an efficient propagator of high-frequency energy that is easily scattered; but there are still discernible arrivals. These seismograms are used to examine the velocity structure and other characteristics of relatively large blocks of crust. The domain beyond 1000 km (620 mi) is called teleseismic. Seismograms written at teleseismic distances are characterized by discrete and easily recognized body phases and surface-wave arrivals. These are relatively uncontaminated by crustal structure, and instead they are more sensitive to structure in the mantle and core.
Figure 11 shows both regional and teleseismic seismograms; since seismic-wave motion is vector motion, three components (up-down, north-south, and east-west) are needed to completely record the incoming wavefield. For the regional seismogram, the first arrival (Pg) is the direct P wave, which dives through the crust; the second arrival (Sg), beginning some 15 s after the Pg, is the crustal S phase. Even though explosions are inefficient generators of shear waves, it is common on regional seismograms to see substantial shear energy arising from near-source conversions of compressional to shear motion. The arrival time of the Pg phase can be picked easily, but the arrival time of Sg is somewhat more obscure. This is because some of the shear energy in the direct S is converted into compressional energy by scatterers in the crust near the receiver. This converted energy, traveling at compressional-wave speeds, must arrive before the direct S. The extended, very noisy wavetrains following the direct P and S are called codas; they represent energy scattered by small heterogeneities elsewhere in the crust. It is difficult to analyze these codas deterministically, and statistical procedures are often used. One result from the analysis of these and other seismograms is that the continental crust is strongly and three-dimensionally heterogeneous and scatters seismic energy very efficiently.
The teleseismic record is 60 times longer than the regional seismogram and has been truncated only for plotting purposes. Plotted at this compressed time scale, it looks quite similar to the regional seismogram, but in fact there are important differences. The direct P and direct S waves are clearly evident; the S waves are larger on the horizontal components and larger than the P waves, a common feature for natural sources such as earthquakes. The very large-amplitude, long-period arrivals starting some 20 min after the S are the surface waves. These waves are generally the most destructive in a major earthquake because of their large amplitude and extended duration. Between the direct P and the direct S are compressional body waves which have shallower take-off angles and have bounced once (PP) or twice (PPP) from the surface before arriving at Palisades (the analogy here, though inverted, is like skipping a rock over water). The high level of energy between the S wave and the first surface wave is due to other multiply surface-reflected and -converted body waves, which are very sensitive to the structure of the upper mantle.
For purposes of analysis, the travel times of major phases have been inverted tomographically for large-scale structure. The International Seismological Centre (ISC) in England has been collecting arrival-time picks from station operators around the world since the turn of the twentieth century. Nearly one thousand stations report arrival-time picks consistently enough that their accuracy can be judged without viewing the original seismograms. More than 2 million of these P-wave arrival times have been inverted tomographically for the structure of the lower mantle, the core, and the core–mantle boundary. These inversions show that the core–mantle boundary and the mantle just above it are very heterogeneous. Tomographic inversions of special P phases that transit the boundary show that the boundary may have topography as well. Estimates of the amplitude of this topography range from a few hundred meters to several kilometers, and further work is needed to narrow this range.
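The logic of such a travel-time inversion can be sketched in miniature. The toy setup below is not the actual ISC procedure: travel-time residuals d are taken to relate linearly to slowness perturbations m in a set of cells through a matrix G of ray-path lengths, and a damped least-squares solve recovers the perturbations. The ray geometry, noise level, and damping value are all invented for illustration.

```python
# Toy travel-time tomography: d = G m, where G[i, j] is the length (km) of
# ray i within cell j, m holds slowness perturbations (s/km), and d holds
# travel-time residuals (s). All numbers here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

n_rays, n_cells = 200, 16
G = rng.uniform(0.0, 50.0, size=(n_rays, n_cells))    # hypothetical ray lengths
m_true = rng.normal(0.0, 1e-4, size=n_cells)          # "true" slowness perturbations
d = G @ m_true + rng.normal(0.0, 1e-3, size=n_rays)   # residuals with pick noise

# Damped least squares, a standard workhorse for such inversions:
# minimize |G m - d|^2 + damping * |m|^2
damping = 1e-2
m_est = np.linalg.solve(G.T @ G + damping * np.eye(n_cells), G.T @ d)

print("rms recovery error:", float(np.sqrt(np.mean((m_est - m_true) ** 2))))
```

With many more rays than cells, the perturbations are recovered well despite the pick noise; real inversions differ mainly in scale (millions of picks, thousands of cells) and in the sparsity of G.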
Direct P phases dive deeply into the mantle and are not very sensitive to upper-mantle structure unless the earthquakes are within 25° or so of the seismometer. Multiple-bounce body phases are more sensitive to upper-mantle structure, but these must be picked carefully from the teleseismic records. This has been done for several thousand seismograms recorded in the United States and Europe, and the resulting models give very finely detailed views of the structure of the mid- to upper mantle. These methods are sensitive enough to resolve the remnants of old subducted slabs in the lower mantle and the large keels of high-velocity material which seem to underlie most old continents.
Fig. 12 Example of waveform inversion applied to fundamental and higher-mode surface waves crossing the western Pacific. The solid lines are data, and the broken lines represent synthetic seismograms computed for a model of the Earth obtained by waveform inversion. The numbers and letters to the left of each trace correspond respectively to earthquakes and stations used in the inversion. The distance between the epicenter and the recording station in degrees is given to the right of each trace. The seismograms are plotted as a function of reduced time, in which the time axis is translated by an amount proportional to the epicentral distance divided by a chosen velocity (Δ/3.8). This causes arrivals having the same velocity to line up vertically. The fits of the synthetic to the observed are very nearly “wiggle-for-wiggle” at nearly all distances, indicating the power of the waveform inversion technique.
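The reduced-time plotting described in the caption is simple to express in code. The sketch below assumes the reducing velocity of 3.8 km/s from the caption and a mean conversion of 111.19 km per degree of epicentral distance; the function name reduced_time is our own.

```python
# Reduced-time transformation: shift each trace's time axis by delta / v_red
# so that arrivals traveling at v_red line up vertically across distances.
KM_PER_DEG = 111.19  # mean kilometers per degree of epicentral distance

def reduced_time(t_seconds, delta_deg, v_red_km_s=3.8):
    """Translate absolute time t to reduced time t - delta / v_red."""
    return t_seconds - (delta_deg * KM_PER_DEG) / v_red_km_s

# An arrival moving at exactly 3.8 km/s has reduced time zero at any distance:
for delta in (20.0, 40.0, 60.0):
    t_arrival = delta * KM_PER_DEG / 3.8
    print(round(reduced_time(t_arrival, delta), 6))  # 0.0 at every distance
```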
Going beyond tomographic methods requires analyzing more of the seismogram, for it is in the details of the wave shapes that information about major structures and the boundaries between them can be found. For example, surface waves disperse (different periods travel at different velocities) with wave speeds that depend on the fine details of the crust and upper mantle. An example of this dispersion can be seen in the Iranian seismogram (Fig. 11). Surface waves with longer periods arrive first and thus travel faster than surface waves at shorter periods. Measuring surface-wave dispersion is difficult but feasible, particularly for shallow earthquakes such as the Iranian event where the surface-wave arrivals dominate the seismogram. Seismologists began to measure dispersion in the late 1950s; some of the first applications of computers to seismological problems were the calculation of dispersion curves for oceanic and continental upper mantle and crust.
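The effect of dispersion on arrival order can be illustrated numerically. The group-velocity curve below is hypothetical, not a fitted Earth model, but it rises with period as mantle Rayleigh-wave curves do, so longer periods arrive first at a fixed distance:

```python
# Surface-wave dispersion sketch: arrival time = distance / U(T), where the
# group velocity U increases with period T. The curve is illustrative only.
def group_velocity(period_s):
    """Hypothetical group-velocity curve in km/s (not a real Earth model)."""
    return 3.0 + 0.01 * period_s   # U(20 s) = 3.2 km/s, U(100 s) = 4.0 km/s

distance_km = 4000.0
for T in (20.0, 50.0, 100.0):
    arrival = distance_km / group_velocity(T)
    print(f"T = {T:5.1f} s  arrives at {arrival:7.1f} s")
```

Running this shows the 100-s energy arriving well ahead of the 20-s energy, exactly the ordering seen in the Iranian seismogram.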
Measuring dispersion and other wave properties has evolved into a class of inversion procedures in which the entire seismogram is inverted directly for Earth structure. Such waveform inversion methods became feasible when computers became powerful enough to allow synthesis of most of the seismogram (see Fig. 12). After many years of operation, even the sparse global digital arrays collect enough data to contemplate tomographic inversions. The most successful of these experiments have combined the seismogram-matching techniques of waveform inversion with a generalization of the tomographic approach to obtain models of the three-dimensional variation of upper mantle structure. The vertical integration shown in Fig. 13 is a useful way of describing the geographic (two-dimensional) variation of a three-dimensional structure. Dark areas are fast relative to an average Earth, while light areas are slow. The darkest regions correspond to the oldest continental crust, known as cratons, while the lightest areas correspond to regions of active rifting either at mid-ocean ridges or at incipient oceans such as the Red Sea. See also: Waveform
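The essence of waveform inversion can be caricatured as a one-parameter search: generate synthetics for trial models and keep the model whose synthetic best matches the observation in a least-squares sense. Everything below (the Gaussian-pulse "synthetic", the single velocity parameter) is a deliberately simplified stand-in for the real machinery:

```python
# One-parameter waveform inversion sketch: grid-search the model (a single
# velocity) that minimizes the L2 misfit between synthetic and "observed" traces.
import numpy as np

t = np.linspace(0.0, 10.0, 500)  # time axis, seconds
distance = 30.0                  # hypothetical propagation distance, km

def synthetic(velocity):
    # Stand-in "synthetic seismogram": a Gaussian pulse arriving at distance/velocity
    return np.exp(-((t - distance / velocity) ** 2) / 0.1)

observed = synthetic(4.2)  # pretend the data were generated by the true velocity

trial = np.linspace(3.0, 5.0, 201)                            # candidate models
misfit = [np.sum((synthetic(v) - observed) ** 2) for v in trial]
best = trial[int(np.argmin(misfit))]
print(round(float(best), 2))  # recovers the true velocity, 4.2
```

Real waveform inversions search over thousands of structural parameters and use full synthetic seismograms, but the misfit-minimization logic is the same.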
Imaging the seismic source
The other half of the imaging problem in global seismology is constructing models of the seismic source. The simplest description of a so-called normal earthquake source requires two orthogonal force couples oriented in space. One force couple occurs on opposing sides of the rupture or fault surface and can be understood as the stress on either side of the fault induced by the earthquake rupture. The other couple is normal to the first and is required to conserve angular momentum. Implicit in this representation is the assumption that the earthquake is a point in both space and time (the point-source approximation), so that this type of source representation is known as a point-source double couple. Although there are some minor complications, the orientation of this double couple, and hence the orientation of the fault plane and the direction of rupture, can be inferred by analyzing the polarity of the very first P and S waves recorded worldwide. Thus this representation is known as a first-motion mechanism. It is not very difficult to make these observations provided that the instrument polarities are known, and the ensuing “beach-ball” diagrams are commonplace in the literature.
First-motion representations of seismic sources are the result of measurements made on the very first P or S waves arriving at an instrument; therefore they represent only the very beginning of the rupture on the fault plane. This is not a problem if the rupture is approximately a point source, but in practice this is true only if the earthquake is quite small or exceptionally simple. An alternative is to examine only longer-period seismic phases, including surface waves, to obtain an estimate of the average point source that smooths over the spatial and temporal complexities of a large rupture. This so-called centroid-moment-tensor representation is an accurate description of the average properties of a source; it is routinely computed for events with magnitudes greater than about 5.5. Because an estimate of a centroid moment tensor is derived from much more of the seismogram than the first arrivals, it gives a better estimate of the energy content of the earthquake. This estimate, known as the seismic moment, represents the total stress reduction resulting from the earthquake; it is the basis for the moment magnitude MW. This value is equivalent to the body-wave magnitude (mb) or surface-wave magnitude (MS) at low magnitudes, but it is much more accurate for magnitudes above about 7.5. The largest earthquake ever recorded, a 1960 event in Chile, is estimated to have had an MW of about 9.5. For comparison, the 1906 San Francisco earthquake had an estimated MW of 7.9, and the 1964 Good Friday earthquake in Alaska had an MW of 9.2.
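The connection between seismic moment and MW can be made concrete with the standard Hanks–Kanamori relation, MW = (2/3)(log10 M0 − 9.1), with the moment M0 in newton-meters; the moment used below for the 1960 Chile event is a rough illustrative figure, not a quoted catalog value:

```python
# Moment magnitude from seismic moment via the Hanks-Kanamori relation.
import math

def moment_magnitude(m0_newton_meters):
    """MW = (2/3) * (log10(M0) - 9.1), with M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

print(round(moment_magnitude(2.0e23), 1))  # ~9.5, the 1960 Chile earthquake
```

Because MW depends on the logarithm of M0, each unit of magnitude corresponds to a factor of about 32 in moment, which is why MW saturates far less readily than mb or MS for great earthquakes.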
While first-motion and centroid-moment-tensor representations are useful for general comparisons among earthquakes, there is still more information about the earthquake rupture process to be gleaned from seismograms. Analysis techniques and instruments have been developed which demonstrate that some earthquakes, especially larger ones, are not adequately described by first-motion or averaged centroid-moment-tensor representations and require a more complex parametrization of the source process. These additional parameters may arise from relaxation of the point-source approximation in space or time, or perhaps even of the double-couple restriction. In the former case, the earthquake is said to be finite, meaning that there is a spatial or temporal scale defining the rupture. The finiteness may also be manifested by multiple events occurring a few tens of seconds and a few tens of kilometers apart. In the latter case, the earthquake may have non-double-couple components that could be the result of explosive or implosive chemical phase changes, landslides, or volcanic eruptions. The size of the explosive non-double-couple component is one way of discriminating an earthquake from a nuclear explosion, for example.
One indication of source finiteness is an amplification of seismic signals in the direction of source rupture. Normally, seismic amplitudes vary with azimuth from a double-couple source in a predictable quadrupolar or bipolar pattern (the beach ball illustrates this best). For some earthquakes, however, the seismic waves that leave the source region at certain azimuths are strongly amplified. This has been ascribed to propagating ruptures which "unzip" the fault along a fairly uniform direction. The best estimate is that most faults rupture at about two-thirds of the shear-wave velocity, but some faults may rupture even more slowly. Both unilateral and bilateral ruptures have been observed, and an important theoretical area in seismology is the examination of more complex ruptures and the prediction of their effects on observed seismic signals.
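A minimal sketch of this directivity effect uses the classic Doppler-like amplification factor 1/(1 − (vr/c) cos θ), where vr is the rupture velocity, c the shear-wave speed, and θ the azimuth measured from the rupture direction. With vr = (2/3)c, as suggested above, forward-directed waves are amplified several-fold relative to backward-directed ones:

```python
# Directivity amplification for a unilaterally propagating rupture.
import math

def directivity_factor(theta_rad, vr_over_c=2.0 / 3.0):
    """Doppler-like amplification 1 / (1 - (vr/c) * cos(theta))."""
    return 1.0 / (1.0 - vr_over_c * math.cos(theta_rad))

print(round(directivity_factor(0.0), 2))      # forward of the rupture: 3.0
print(round(directivity_factor(math.pi), 2))  # behind the rupture:     0.6
```

The five-fold forward-to-backward contrast in this simple model is the kind of azimuthal asymmetry that signals a propagating, finite rupture rather than a point source.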
Another indication of source finiteness is the fact that some large events comprise smaller subevents distributed in space and time and contributing to the total rupture and seismic moment. The position and individual rupture characteristics of these subevents can be mapped with remarkable precision, given data of exceptional bandwidth and good geographical distribution. An outstanding problem is whether the location of these subevents is related to stress heterogeneities within the fault zone. These stress heterogeneities are known as barriers or asperities, depending on whether they stop or initiate rupture. The mapping of stress heterogeneities from seismological data is an active area of research in source seismology. See also: Seismographic instrumentation