Salam and Hi,
Welcome to my blog entitled Seismic Data Processing Lab.
I'm Mohamad Asyraf bin Mohammad, an undergraduate in Petroleum Geoscience at Universiti Teknologi Petronas, currently taking 'QAB 4083 Seismic Data Processing' as one of my courses.
We are using MATLAB to perform the seismic data processing lab modules. The lab tasks, however, are only available up to Lab 5 in this blog because of a technical error encountered while performing the lab activities.
Please find the module for each lab on the right-hand side of this page to explore the modules throughout the semester.
Thank you.
Seismic Data Processing Lab
Sunday, 21 August 2016
Lab 05: Sorting, Velocity Analysis and Stacking
At this stage, we are ready to prepare the data for revealing a true image of the subsurface. This involves compressing the seismic data to infer a first approximation of that image, which includes:
1. Sorting the shot-gathered data into common mid-point (CMP) gathers.
2. Picking appropriate stacking velocities and applying the corresponding normal-move-out (NMO) corrections.
3. Stacking all CMPs and then concatenating the stacked CMP traces together.
Surface seismic reflection surveys are commonly acquired using the common mid-point (CMP) method. Other names for this method include common reflection point (CRP) and common depth point (CDP). Traces reflected from the same mid-point form a CMP gather, and the number of traces in a gather is called the fold of that gather.
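As a quick worked illustration (assuming a regular end-on land geometry, which is an assumption on my part and not stated for this data set), the nominal fold can be estimated as fold = (number of recording channels × receiver interval) / (2 × shot interval); for example, 24 channels at a 50 m group interval with a 100 m shot interval would give 24 × 50 / (2 × 100) = 6-fold coverage.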
Since the seismic data processing is performed in CMP-offset mode, we need a way to sort the traces between shot and CMP modes. In MATLAB, the function ssort.m is used to sort the traces accordingly. The fold, or number of traces per CMP, is then obtained with another MATLAB function, extracting_cmp_fold_num.m, as shown in Figure 1 below.
Figure 1: The fold (number of traces per CMP) versus the CMP numbers
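For reference, the sorting and fold-extraction calls might look like the short sketch below. The exact argument lists of ssort.m and extracting_cmp_fold_num.m are my assumptions (modelled on the other lab m-files such as extracting_shots.m), so the actual lab script may differ.
% hedged sketch only: assumed signatures for the CMP-sorting m-files
[Dsort, Hsort] = ssort(D, H);                           % sort the shot-ordered traces into CMP order (assumed call)
[cmp_num, cmp_fold] = extracting_cmp_fold_num(Hsort);   % CMP numbers and the fold per CMP (assumed call)
plot(cmp_num, cmp_fold);                                % reproduces the fold-versus-CMP curve of Figure 1
xlabel('CMP number'); ylabel('Fold');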
From Figure 1, one can see that the data start at CMP 203 with a single trace; the fold then increases gradually until it reaches the maximum of 18 at CMP 235, and finally drops back to a single trace at CMP 266. Figure 2 below shows some examples of CMP gathers, illustrating the different folds at different CMP locations.
Figure 2: CMP gathers 208 (left), 234 (middle) and 250 (right) of the CMP-sorted seismic data
The primary aim of velocity analysis is to determine the seismic velocities of the layers in the subsurface. At this stage most of the work involves stacking velocities, which are then used for the NMO correction of the seismic data. In practice, stacking velocities are picked by running a velocity analysis on a given CMP gather, where the aim is to flatten out the hyperbolas on the gather; the most common techniques are the velocity spectrum and the constant velocity stack. Examples of velocity picking are shown in Figure 3 below.
Figure 3: Velocity analysis for CMP gathers 220, 240 and 250
After velocity analysis, the NMO correction should be applied next. However, as noted in the manual, there is a technical error in the NMO correction function in MATLAB: the looping syntax appears not to terminate after the 11th iteration of the stacking step in the CMP part. Figure 4 below illustrates the main problem faced while trying to complete the NMO correction.
Figure 4: Error in the NMO correction function
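For reference, the correction that the lab function is meant to perform is based on the hyperbolic relation t(x) = sqrt(t0^2 + x^2 / v^2). The lines below are only a minimal, generic sketch of that idea, assuming one CMP gather Dcmp (samples by traces), its offsets, the sample interval dt and, for simplicity, a single stacking velocity v_stk; they are not the lab's own NMO m-file, whose looping failed as described above.
% generic NMO-correction sketch (illustration only, not the lab's NMO function)
[nt, ntr] = size(Dcmp);                          % samples by traces of one CMP gather
t0 = (0:nt-1)' * dt;                             % zero-offset two-way times
Dnmo = zeros(nt, ntr);
for k = 1:ntr
    tx = sqrt(t0.^2 + (offset(k) / v_stk)^2);    % travel time at this offset for each t0
    Dnmo(:, k) = interp1(t0, Dcmp(:, k), tx, 'linear', 0);  % map each sample back to its zero-offset time
end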
Lab 04: Seismic Deconvolution
Seismic deconvolution comes after the frequency filtering of the seismic data via BPFs. Frequency filtering smooths the seismic data; however, it also reduces the vertical resolution of the data because part of the original, wider frequency band is lost.
This is where seismic deconvolution comes in, with the aim of increasing the vertical resolution of the seismic data by compressing the source wavelet (known as spiking deconvolution). Seismic deconvolution can also be used for noise attenuation procedures such as multiple attenuation, and since we are dealing with land seismic data, enhancing the vertical resolution of the data set is required. Before performing spiking deconvolution on the seismic data set, a few parameters need to be considered:
1. Autocorrelation window (w): sets the part of the seismic trace from which the elements of the autocorrelation matrix in the normal equations are selected.
2. Filter length (N): sets the length of the spiking filter h(n).
3. Percent pre-whitening (ε): sets the amount of white random noise added to the autocorrelation matrix to stabilize the solution of the normal equations.
To perform spiking deconvolution, two MATLAB functions are involved: spiking_decon.m and auto_correlation_map.m. Both functions are based on the equations below; a hedged sketch of how they might be called is given after the list describing Figure 1.
(1) The earth response e(n) can safely be approximated as a white random series of impulses. In this case, the amplitude spectrum of e(n) is constant, that is
|E(f)| ≈ constant = c (Equation 4.2)
(2) Using Equation 4.1, the amplitude spectrum of the convolutional model of the trace,
|S(f)| = |W(f)| · |E(f)| (Equation 4.1)
and substituting Equation 4.2 yields
|S(f)| ≈ c · |W(f)| (Equation 4.3)
(3) Equation 4.3 means that the amplitude spectrum of the seismic trace is a scaled version of the amplitude spectrum of the source wavelet.
Figure 1: MATLAB script used for the spiking deconvolution
1. Display of all the parameters needed to apply the spiking deconvolution to the seismic data; shot gathers 4 to 6 were chosen for the deconvolution test.
2. Parameters for showing the results before the spiking deconvolution (Figure 2a).
3. Parameters for showing the auto-correlograms of shot gathers 4 to 6 (Figure 2b).
4. Parameters for showing the results of the spiking deconvolution of the seismic data (Figure 2c).
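As mentioned earlier, a hedged sketch of how the two m-files might be called is given here. The parameter values and the argument orders of spiking_decon.m and auto_correlation_map.m are assumptions for illustration only (as is the variable Dshots holding the extracted gathers 4 to 6), not the exact lines of the script in Figure 1.
% illustration only: assumed parameter values and call signatures
w  = 0.4;                                        % autocorrelation window in seconds (assumed value)
N  = 80;                                         % spiking-filter length in samples (assumed value)
mu = 0.1;                                        % percent pre-whitening (assumed value)
auto_correlation_map(Dshots, dt);                % auto-correlograms of the chosen gathers (cf. Figure 2b)
Ddecon = spiking_decon(Dshots, dt, w, N, mu);    % deconvolved gathers (cf. Figure 2c)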
Figure 2: (a) Shot gathers 4, 5 and 6 before applying spiking deconvolution, (b) auto-correlograms of shot gathers 4, 5 and 6, and (c) shot gathers 4, 5 and 6 after applying spiking deconvolution
Figure 3: PSD of the average trace of shot gathers 4, 5 and 6 before and after spiking deconvolution
Figure 3 shows the overall data in the spectral domain via the power spectral density (PSD) of the average traces. One can conclude that the data have become more spiky after applying the spiking deconvolution, compared with before deconvolution. All in all, spiking deconvolution offers a way to obtain acceptable resolution without the noise that would come with keeping the full, wide frequency band.
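A generic way to reproduce such a PSD comparison is sketched below; it is not necessarily the m-file that produced Figure 3, and it assumes Dshots and Ddecon hold the gathers before and after deconvolution.
% sketch: PSD of the average trace before and after spiking deconvolution
avg_before = mean(Dshots, 2);                    % average trace of the raw gathers
avg_after  = mean(Ddecon, 2);                    % average trace of the deconvolved gathers
fs = 1/dt;                                       % sampling frequency in Hz
[P1, f] = periodogram(avg_before, [], [], fs);
[P2, ~] = periodogram(avg_after,  [], [], fs);
plot(f, 10*log10(P1), f, 10*log10(P2));
xlabel('Frequency (Hz)'); ylabel('PSD (dB)'); legend('before decon', 'after decon');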
Last but not least, to further enhance the spiking deconvolution results, an instantaneous AGC with a window length of 0.5 s needs to be applied to the deconvolved data; the result is shown in Figure 4. Note that this AGC is applied to compensate for the amplitudes lost after applying the deconvolution.
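Reusing the AGCgain.m call that appears later in Lab 2, this step could be sketched as follows, assuming Ddecon holds the deconvolved gathers:
% instantaneous AGC on the deconvolved data (sketch)
agc_gate = 0.5;                                  % 0.5 s window, as stated above
T = 1;                                           % T = 1 selects instantaneous AGC in AGCgain.m
Ddecon_agc = AGCgain(Ddecon, dt, agc_gate, T);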
Figure 4: Shot gathers 4, 5 and 6 after applying spiking deconvolution and instantaneous AGC
Lab 03: Seismic Noise Attenuation
Seismic data are heavily contaminated by noise and unwanted energy coming from different kinds of sources. These noises can be classified into two main categories:
1. Random noise (incoherent noise)
2. Coherent noise
However, it is impossible to eliminate all the noise; instead, our goal is to improve the signal-to-noise ratio (SNR) as much as possible.
Noise
Random Noise
Disturbances in seismic data which lack phase coherency between adjacent traces are considered random noise. This energy is usually not related to the source that generates the seismic signals. Some examples of random noise in land records are near-surface scatterers, wind, rain and instrument noise. Random noise can be attenuated fairly easily in several different ways, such as frequency filtering, deconvolution, wavelet denoising, filtering using the Gabor representation, stacking and many other methods.
Coherent Noise
Coherent noise is the opposite of random noise: it is energy that comes from the source itself and is considered undesirable energy arriving along with the primary signals. Such energy shows consistent phase from trace to trace. Examples of coherent noise in land records are multiple reflections (multiples), surface waves such as ground roll, air waves, dynamite ghosts and others. Improper removal of coherent noise can affect nearly all processing techniques and complicates the interpretation of geological structures.
Ground Roll Noise
Ground rolls are surface waves travelling along the ground surface; they generally have low velocity, low frequency and high amplitudes. On time-distance (t-x) curves they appear as straight lines with low velocities and zero intercepts for an inline source. There may be several modes of ground roll in the record because of their dispersive nature. They are attenuated using source and receiver arrays in the field and by various processing methods such as frequency filtering and f-k filtering.
Spectrum Analysis and Filtering of Seismic Data
The filtering process is an important step before proceeding with the other seismic data processing steps. Since our land seismic data set contains ground roll noise, we can simply apply linear frequency filters such as band-pass filters (BPFs) to attenuate its effect. BPFs in particular enhance the overall gain of each seismic shot gather and increase the SNR by attenuating low- and high-frequency noise in the records, including the ground roll noise.
There exist different means for such analysis: the frequency content of 1D time signals, or 2D spectra such as the frequency-space (f-x) and frequency-wavenumber (f-k) spectra; each can be used to gain a meaningful interpretation and, therefore, to choose a suitable filtering technique. We can obtain the f-x and f-k magnitude spectra in dB using the MATLAB m-files fx.m and fk.m, respectively; a hedged sketch of these calls is given below.
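The argument lists below are assumptions on my part (modelled on the other lab m-files), so treat this as a sketch rather than the exact calls:
% hedged sketch: assumed argument lists for the spectral-analysis m-files
fx(Dshot, dt);                                   % f-x magnitude spectrum of the gather in dB (assumed call)
fk(Dshot, dt, dx);                               % f-k magnitude spectrum in dB (assumed call)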
Then we can design a Finite Impulse Response (FIR) band-pass digital filter and apply it to each seismic trace in shot gather number 8 using the bpf_fir.m m-file as follows, where we keep the spectrum between 15 and 60 Hz and leave out the rest:
N = 100;                                          % FIR filter order
cut_off = [15, 60];                               % pass-band corner frequencies in Hz
[Dbpf, Dbpf_f] = bpf_fir(Dshot, dt, N, cut_off);
Lab 02: Seismic Data QC
Basically, after the seismic data have been acquired and recorded, a few quality control (QC) steps are necessary before carrying on with the remaining seismic data processing, so that we ultimately gain an accurate seismic image of the subsurface structures. The QC process involves step-by-step procedures to condition the data and prepare it for further QC and processing:
1. De-multiplexing: The data is transferred from the recording mode, where each record contains the same time sample from all receivers, to the trace mode, where each record contains all time samples from one receiver. De-multiplexing is usually performed in the field.
2. Reformatting: The data is converted from one seismic digital format (SEG-Y, for example) to another format that the processing software (MATLAB in our case) can use easily throughout the processing.
3. Setup of field geometry: The field geometry is written into the data (trace headers) in order to associate each trace with its respective shot, offset, channel and CMP.
4. Trace editing: Bad traces or parts of traces are muted or deleted from the data during this step. Problems regarding polarity are also fixed.
5. Gain application: Amplitude corrections are applied to compensate for losses of amplitude due to spherical divergence and absorption of the seismic waves.
6. Application of field statics: In land surveys, elevation statics are applied to bring the sources and receivers to a single planar reference level known as the datum. This step can be delayed until the static correction process, where better near-surface velocities might be available.
The data provided for this lab session had already gone through steps 1-3, so only step 4 onward needs to be executed. We are still using the east Texas seismic data set from Lab 1, which contains some anomalously high noisy amplitudes. The figure below shows one way of muting at shot gather number 16, where a recorded trace must be muted because of a bad geophone, as happens in land data. The code below was used to execute the muting:
[I, j] = find(D == max(max(D)));   % sample (I) and trace (j) index of the maximum, anomalous amplitude
D(:, j) = 0;                       % mute (zero) the whole bad trace
Figure: Seismic data shot gather number 16 before muting
Figure: Seismic data shot gather number 16 after muting
Correction of Amplitude Losses
Raw seismic data are usually characterized by a noticeable decrease in the amplitudes of the recorded traces with time. The losses include the geometric divergence effect, as waves spread out from the source, the conversion of seismic energy into heat, and other factors such as transmission losses.
Amplitude correction, or gain, must be applied to the seismic data at various stages. The data-independent scheme corrects the amplitudes using a common scaling function applied to all traces, such as gain through multiplication by a power of time:
g(t) = f(t) · t^α
where f(t) is the seismic trace amplitude to be corrected, t is the independent variable (time), and α is the power of time which controls the change in the amplitude of f(t). Another commonly used function is the exponential gain correction:
g(t) = f(t) · e^(θt)
where θ is the parameter which controls the exponential function. We can apply amplitude correction using the function iac.m. In this example, we display the amplitude correction at shot gather number 8 by executing the commands below.
shot_num = 8;                                     % shot gather to display
p = 0;
[Dshot, dt, dx, t, offset] = extracting_shots(D, H, shot_num, p);
pow = 2;                                          % power of time (alpha) used in the gain
T = 0;
Dg = iac(Dshot, t, pow, T);
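For intuition, the power-of-time gain could also be written out by hand. The line below is only a minimal sketch of g(t) = f(t) · t^α, not the actual iac.m implementation, which may normalize or window the result differently.
% minimal hand-written power-of-time gain (illustration only)
Dg_manual = Dshot .* repmat(t(:).^pow, 1, size(Dshot, 2));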
Figure: Seismic data shot gather number 8 before amplitude gain correction
Figure: Seismic data shot gather number 8 after amplitude gain correction
The gain increase after the correction can be analyzed further by plotting the amplitude envelope in dB of the average trace and of a single trace, before and after correction. The figures below show the plots generated with the function seis_env_dB.m as follows:
tnum = 33;                                        % single trace to analyze
seis_env_dB(Dshot, Dg, t, tnum);                  % envelope of trace 33 before and after gain
seis_env_dB(Dshot, Dg, t);                        % envelope of the average trace before and after gain
Figure: Amplitude envelope gain in dB for the average trace
Figure: Amplitude envelope gain in dB for trace 33
The data-dependent scheme, on the other hand, relies on multiplying each time sample by a scalar derived from a window of data around that sample. Such a technique is known as automatic gain control (AGC). Well-known AGC techniques include RMS amplitude AGC and instantaneous AGC. The following scripts are used for instantaneous AGC and RMS AGC, respectively:
Instantaneous AGC:
agc_gate = 0.5;                                   % AGC window length in seconds
T = 1;                                            % T = 1 selects instantaneous AGC
Dg = AGCgain(Dshot, dt, agc_gate, T);
RMS AGC:
agc_gate = 0.5;                                   % AGC window length in seconds
T = 2;                                            % T = 2 selects RMS amplitude AGC
Dg = AGCgain(Dshot, dt, agc_gate, T);
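For intuition, an RMS-type AGC simply divides each sample by the RMS amplitude of a window centred on it. The loop below is a generic sketch of that idea, not the lab's AGCgain.m implementation.
% generic RMS-AGC sketch (illustration only, not AGCgain.m)
L = max(1, round(agc_gate / dt));                 % window length in samples (0.5 s here)
win = ones(L, 1) / L;                             % moving-average kernel
Dg_manual = zeros(size(Dshot));
for k = 1:size(Dshot, 2)
    trc = Dshot(:, k);
    rms_amp = sqrt(conv(trc.^2, win, 'same'));    % windowed RMS amplitude around each sample
    rms_amp(rms_amp == 0) = eps;                  % avoid dividing by zero on muted samples
    Dg_manual(:, k) = trc ./ rms_amp;
end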
Figure: Seismic data shot gather number 8 after applying AGC using RMS
Figure: Seismic data shot gather number 8 after applying AGC using instantaneous AGC
Figure: Amplitude envelope gain in dB for the average trace and for trace 33 (RMS AGC)
Figure: Amplitude envelope gain in dB for the average trace and for trace 33 (instantaneous AGC)