How Does Regularly-Indexed CS Sampling Work?
Compressive Sensing (CS) is a well-known signal processing technique for acquiring and accurately reconstructing highly undersampled signals. This innovative approach was put on firm mathematical footing in the mid-2000s by influential mathematicians including David Donoho, Emmanuel Candès, and Terence Tao and subsequently applied to fields including MRI, Radio Astronomy, Electron Microscopy, and Internet Network Routing. In 2008, Hennenfent & Herrmann laid significant groundwork for the theory and application of Compressive Sensing to seismic acquisition (a.k.a. Compressive Seismic), and we use one of their examples here. This article will explain the basic theoretical basis for Compressive Seismic and how our patented regularly-indexed acquisition design (U.S. Patents 10,156,648 & 10,317,542) produces optimized reconstruction results.
What makes Sensing Compressive?
Every approach to Compressive Sensing utilizes the same fundamental idea: most signals are highly compressible, that is, they can be well-approximated using simple “sparse” representations after applying some mathematical transform. Take, for example, the following fully sampled wave signal:
After applying a Fourier transform to extract the signal’s spatial frequencies, we see that this seemingly complex waveform is actually composed of three simple sinusoids. Because the large original dataset has been reduced to a much smaller equivalent one, we say it has been compressed. Once we have these sinusoids, we can easily convert them back to the original signal by applying the inverse Fourier transform.
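To make this concrete, here is a small NumPy sketch. The three-sinusoid signal is a hypothetical stand-in for the waveform in the figure (the frequencies and amplitudes are illustrative, not taken from the actual example):

```python
import numpy as np

# A hypothetical signal built from three sinusoids, standing in for
# the fully sampled waveform shown above.
n = 512
t = np.arange(n) / n
signal = (1.0 * np.sin(2 * np.pi * 5 * t)
          + 0.6 * np.sin(2 * np.pi * 17 * t)
          + 0.3 * np.sin(2 * np.pi * 42 * t))

# Fourier transform: 512 time samples collapse to just three
# significant coefficients -- the "compressed" representation.
spectrum = np.fft.rfft(signal)
significant = np.sum(np.abs(spectrum) > 1e-6 * np.abs(spectrum).max())
print(significant)  # 3

# The inverse transform converts the sinusoids back to the original.
recovered = np.fft.irfft(spectrum, n)
print(np.allclose(recovered, signal))  # True
```

512 samples in, 3 coefficients out: that ratio is the compressibility that compressive sensing exploits.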
The innovative idea behind compressive sensing is that, rather than acquiring the fully-sampled wave signal on the left, we record only what is needed to produce the simple, compressed representation on the right. For seismic data, our aim is typically to capture each of the key plane waves that make up a seismic signal. This process allows us to acquire the same or highly similar information using far fewer recorded input samples, which is extremely useful whenever the act of recording these signals may be limited or expensive.
Using this approach, we can produce high-quality seismic images with significantly fewer sources and receivers, which is especially useful for expensive acquisitions with frequent obstacles. Even better, we can also use this method to boost the resolution and/or fold of a seismic survey by redistributing a preexisting set number of sources and receivers onto a finer acquisition grid.
Conventional Decimation vs. Regularly-Indexed CS Sampling
In the previous section, we explained that compressive sensing tries to record only what is needed to produce the sparse frequency representation (or a sparse representation in some other transform domain) of the data. The sparsity of seismic data is well known for a variety of transforms (e.g., curvelet, Tau-P), and many approaches use this sparsity for interpolation or noise removal (e.g., SVD-based Eigenimage Filtering). However, our goal with Compressive Seismic is to take advantage of this fact so we can use fewer samples to obtain the same data.
Conventional Decimation -
If we naively try to undersample the wave signal from Figure 1 using a conventional subsampling process, we get the following:
Notice how undersampling the data below the Nyquist limit introduces fictitious frequencies into the data (highlighted red for clarity). This junk noise is called aliasing, and it can be extremely difficult to remove, especially in this case because the aliasing is stronger than much of the original signal. This is a particularly large problem for sparse surveys, but any regularly-gridded seismic survey will have some level of aliasing above a certain spatial frequency.
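Continuing the hypothetical three-sinusoid example from earlier, the sketch below shows this aliasing numerically. Keeping every 8th sample drops the Nyquist limit to 32 cycles, so the 42-cycle component folds back to a fictitious 22-cycle frequency:

```python
import numpy as np

# The same hypothetical three-sinusoid signal as before.
n = 512
t = np.arange(n) / n
signal = (1.0 * np.sin(2 * np.pi * 5 * t)
          + 0.6 * np.sin(2 * np.pi * 17 * t)
          + 0.3 * np.sin(2 * np.pi * 42 * t))

# Conventional decimation: keep every 8th sample (64 samples total).
decimated = signal[::8]

# The decimated Nyquist limit is 32 cycles, so the 42-cycle component
# cannot be represented: it folds back ("aliases") to 64 - 42 = 22.
spectrum = np.abs(np.fft.rfft(decimated))
peaks = sorted(np.nonzero(spectrum > 1.0)[0])
print(peaks)  # [5, 17, 22] -- a fictitious 22-cycle peak appears
```

The aliased 22-cycle peak is indistinguishable from a real event in the decimated data, which is exactly why it is so hard to remove after the fact.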
This aliasing is a fundamentally unavoidable side-effect of undersampling the data. There is no way to prevent aliasing from entering the acquired data without adding more samples. However, with intelligent sampling, we can find a clever way to remove it.
Regularly-Indexed CS Sampling -
Our patented Compressive Seismic approach can reduce or eliminate the effect of coherent aliasing by sampling the data irregularly while retaining the data’s original regular subgrid. Let’s take a look at how it works below:
This specialized spacing spreads the aliased energy across a wide range of spatial frequencies. In total, there is the same amount of aliased energy as in Figure 2, but because this energy is now incoherent, it can be easily filtered out by a sparsity-promoting solver. After filtering this noise, we can apply the inverse Fourier transform to obtain the original data as if it had been fully sampled. If we had sampled the data using a conventional grid, as in Figure 2, it would be impossible to filter out the aliasing and recover the true signal!
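A simplified numerical sketch of this idea is shown below: samples are jittered within blocks of the fine grid but always stay on that grid. This is only an illustrative stand-in for the patented design (a single sinusoid is used for clarity, and the block-jitter scheme is our simplification):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512
t = np.arange(n) / n
signal = np.sin(2 * np.pi * 42 * t)  # one plane-wave component for clarity

# Conventional decimation: every 8th sample of the fine grid.
regular_mask = np.zeros(n)
regular_mask[::8] = 1.0

# Regularly-indexed-style sampling (simplified sketch): one sample per
# block of 8, jittered within the block, so every kept sample still
# lies on the original fine 512-point grid.
kept = np.arange(0, n, 8) + rng.integers(0, 8, size=n // 8)
jitter_mask = np.zeros(n)
jitter_mask[kept] = 1.0

# Zero-filled spectra on the fine grid.
regular_spec = np.abs(np.fft.rfft(signal * regular_mask))
jittered_spec = np.abs(np.fft.rfft(signal * jitter_mask))

# Regular decimation folds the energy into coherent alias spikes that
# are exactly as strong as the true 42-cycle peak...
print(regular_spec[22] >= 0.999 * regular_spec[42])  # True

# ...while jittered sampling smears the same alias energy into
# low-level incoherent noise, so the true peak stands out clearly.
print(jittered_spec[42] == jittered_spec.max())  # True
```

Both masks keep exactly 64 of the 512 samples; only the placement differs, and that placement is what makes the aliasing removable.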
At In-Depth, we primarily use a sparsity-promoting reconstruction method called EPOCS, which strongly preserves coherent signals and removes incoherent aliasing (also applicable to deblending!). If you’d like to read more about the mathematical details of our EPOCS solver, look at this SEG publication. For data examples, take a look at our suite of technical publications available under the “Publications” tab at the top of the page.
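EPOCS itself is proprietary, but the core sparsity-promoting idea can be sketched with a generic POCS-style iterative-thresholding loop. The code below is an illustrative stand-in, not In-Depth's actual solver, and the signal and sampling scheme are the same hypothetical setup used earlier:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
t = np.arange(n) / n
signal = (1.0 * np.sin(2 * np.pi * 5 * t)
          + 0.6 * np.sin(2 * np.pi * 17 * t)
          + 0.3 * np.sin(2 * np.pi * 42 * t))

# Jittered sampling on the fine grid: one sample per block of four.
kept = np.arange(0, n, 4) + rng.integers(0, 4, size=n // 4)
mask = np.zeros(n, dtype=bool)
mask[kept] = True

# Generic POCS-style reconstruction: alternate between zeroing weak
# Fourier coefficients (promoting sparsity, which rejects incoherent
# alias noise) and re-inserting the recorded samples (enforcing data
# consistency). The threshold relaxes each pass so progressively
# weaker true events are admitted.
x = np.where(mask, signal, 0.0)
for i in range(200):
    spec = np.fft.rfft(x)
    thresh = np.abs(spec).max() * 0.9 ** i
    spec[np.abs(spec) < thresh] = 0.0
    x = np.fft.irfft(spec, n)
    x[mask] = signal[mask]  # keep the recorded samples exact

rel_err = np.linalg.norm(x - signal) / np.linalg.norm(signal)
print(rel_err < 0.05)  # True -- accurate recovery from 1/4 of the samples
```

The decaying threshold is the key design choice: strong events are locked in first, their alias noise is subtracted as the estimate improves, and weaker events then emerge above the shrinking noise floor.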
Conclusion
Our unique Regularly-Indexed CS Sampling approach uses irregular spacing on a regular subgrid to enhance the antialiasing power of sparsity while retaining the benefits of a regular grid for future processing steps. Because seismic data is sparse in many transform domains, we can leverage this sparsity to reduce how many samples are needed to fully characterize the seismic signals. Here at In-Depth, we use our patented 2D Regularly-Indexed CS-Acquisition Solver to solve for an optimal acquisition given your survey and obstacle constraints, guaranteeing that the acquired survey is well-suited for EPOCS reconstruction.
Fairfield Geotechnologies recently published a parallel-processed field data comparison in the Permian between a conventional dataset and an equivalent CS dataset with 70% sampling. The final datasets were nearly identical, with the CS-acquired dataset having slightly higher S/N and well-tie correlation despite having fewer input samples!
For another demonstration of the power of our patented Regularly-Indexed CS applied to land field data, check out the following example:
For a description of our onshore Compressive Seismic Acquisition workflow, read our previous blog post here. For more information about Compressive Seismic Acquisition, see our explainer or other blog posts on the topic. We offer CS Acquisition Survey Design services as well as advanced reconstruction and processing. If you are interested in shooting a CS survey or processing CS data, contact our team and ask us how we can help!