Compression methods
- Overview of the compression methods available in Inti.jl
- Details and limitations of the various compression methods
- Guidelines on how to choose a compression method
Inti.jl wraps several external libraries providing acceleration routines for integral operators. In general, acceleration routines have the signature `assemble_*(iop, args...; kwargs...)`, and take an `IntegralOperator` as a first argument. They return a new object that represents a compressed version of the operator. The following methods are available:
- `assemble_matrix`: create a dense `Matrix` representation of the integral operator. Not really a compression method, but useful for debugging and small problems.
- `assemble_hmatrix`: assemble a hierarchical matrix representation of the operator using the `HMatrices` library.
- `assemble_fmm`: return a `LinearMap` object that represents the operator using the fast multipole method. This method is powered by the `FMM2D`, `FMMLIB2D`, and `FMM3D` libraries, and is only available for certain kernels.
Acceleration methods do not correct for singular or nearly-singular interactions. When the underlying kernel is singular, a correction is usually necessary in order to obtain accurate results (see the section on correction methods for more details).
To illustrate the use of compression methods, we will use the following problem as an example. Note that for such a small problem, compression methods are likely not necessary, but they are useful for larger problems.
```julia
using Inti
using LinearAlgebra
# define the quadrature
geo = Inti.GeometricEntity("ellipsoid")
Ω = Inti.Domain(geo)
Γ = Inti.boundary(Ω)
Q = Inti.Quadrature(Γ; meshsize = 0.4, qorder = 5)
# create the operator
op = Inti.Helmholtz(; dim = 3, k = 2π)
K = Inti.SingleLayerKernel(op)
Sop = Inti.IntegralOperator(K, Q, Q)
x = rand(eltype(Sop), length(Q))
rtol = 1e-8
```
rtol = 1e-8
In what follows we compress `Sop` using the different methods available.
Dense matrix
`Inti.assemble_matrix` — Function

```julia
assemble_matrix(iop::IntegralOperator; threads = true)
```

Assemble a dense matrix representation of an `IntegralOperator`.

Typically used for small problems, the dense matrix representation converts the `IntegralOperator` into a `Matrix` object. The underlying type of the `Matrix` is determined by the `eltype` of the `IntegralOperator`, and depends on the inferred type of the kernel. Here is how `assemble_matrix` can be used:
```julia
Smat = Inti.assemble_matrix(Sop; threads = true)
er = norm(Sop * x - Smat * x, Inf) / norm(Sop * x, Inf)
println("Forward map error: $er")
```

```
Forward map error: 6.037452551574307e-15
```
Since the returned object is a plain Julia `Matrix`, it can be used with any of the linear algebra routines available in Julia (e.g. `\`, `lu`, `qr`, `*`, etc.).
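For instance, the dense matrix can be factored once and reused for several right-hand sides. Here is a minimal sketch using a small stand-in matrix (any square `Matrix`, such as `Smat` above, works the same way):

```julia
using LinearAlgebra

# Stand-in for a small dense operator matrix and a right-hand side.
A = [4.0 1.0; 1.0 3.0]
b = [1.0, 2.0]

# One-shot solve, and a reusable LU factorization for repeated right-hand sides.
x1 = A \ b
F = lu(A)
x2 = F \ b
```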
Hierarchical matrix
`Inti.assemble_hmatrix` — Function

```julia
assemble_hmatrix(iop[; atol, rank, rtol, eta])
```

Assemble an H-matrix representation of the discretized integral operator `iop` using the `HMatrices.jl` library.

See the documentation of `HMatrices` for more details on usage and other keyword arguments.
The hierarchical matrix is a compressed representation of the underlying operator; as such, it takes a tolerance parameter that determines the relative error of the compression. Here is an example of how to use the `assemble_hmatrix` method to compress the previous problem:
```julia
using HMatrices
Shmat = Inti.assemble_hmatrix(Sop; rtol = 1e-8)
er = norm(Smat * x - Shmat * x, Inf) / norm(Smat * x, Inf)
println("Forward map error: $er")
```

```
Forward map error: 2.0400023872607663e-8
```
Note that `HMatrices` are said to be kernel-independent, meaning that they efficiently compress a wide range of integral operators provided the kernel satisfies a certain asymptotic smoothness criterion (see e.g. [3, 4]).

The `HMatrix` object can be used to solve linear systems, either iteratively through e.g. GMRES, or directly using an `LU` factorization.
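For concreteness, here is a sketch of a direct solve continuing the example above. The right-hand side is manufactured so the exact solution is known, and the tolerance keyword passed to `lu` is an assumption here; check the HMatrices documentation for the exact signature:

```julia
b = Smat * x                 # manufactured data: the exact solution is `x`
F = lu(Shmat; atol = 1e-8)   # hierarchical LU factorization (HMatrices.jl)
y = F \ b                    # direct solve with the compressed factorization
```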
Fast multipole method
`Inti.assemble_fmm` — Function

```julia
assemble_fmm(iop; atol)
```

Set up a 2D or 3D FMM for evaluating the discretized integral operator `iop`. In 2D the `FMM2D` or `FMMLIB2D` library is used (whichever was most recently loaded), while in 3D `FMM3D` is used.

FMMLIB2D does not check whether the targets and sources coincide: it will return `Inf` values if `iop.target !== iop.source` but there is a point `x ∈ iop.target` such that `x ∈ iop.source`.
The fast multipole method (FMM) is an acceleration technique based on an analytic multipole expansion of the kernel in the integral operator [5, 6]. It provides a very memory-efficient and fast way to evaluate certain types of integral operators. Here is how `assemble_fmm` can be used:
```julia
using FMM3D
Sfmm = Inti.assemble_fmm(Sop; rtol = 1e-8)
er = norm(Sop * x - Sfmm * x, Inf) / norm(Sop * x, Inf)
println("Forward map error: $er")
```

```
Forward map error: 1.0535441704352412e-10
```
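Since `assemble_fmm` returns a `LinearMap`, the operator can only be applied to vectors; solving a linear system therefore requires an iterative method. Here is a sketch using `gmres` from the IterativeSolvers.jl package (an assumption; any Krylov solver that accepts a matrix-free operator works), continuing the example above:

```julia
using IterativeSolvers

b = Sfmm * x                       # manufactured right-hand side
y = gmres(Sfmm, b; reltol = 1e-8)  # matrix-free Krylov solve through the FMM
```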
Tips on choosing a compression method
The choice of compression method depends on the problem at hand, as well as on the available hardware. Here is a rough guide on how to choose a compression method:
- For small problems (say, less than 5k degrees of freedom), use the dense matrix representation. It is the simplest and most straightforward method, and does not require any additional packages. It is also the most accurate, since it does not introduce any approximation errors.
- If the integral operator is supported by `assemble_fmm`, and if an iterative solver is acceptable, use it. The FMM is a very efficient method for certain types of kernels, and can handle problems with up to a few million degrees of freedom on a laptop.
- If the kernel is not supported by `assemble_fmm`, if iterative solvers are not an option, or if the system needs to be solved for many right-hand sides, use the `assemble_hmatrix` method. It is a very general method that can handle a wide range of kernels, and although assembling the `HMatrix` can be time- and memory-consuming (the complexity is still log-linear in the DOFs for many kernels of interest, but the constants can be large), the resulting `HMatrix` object is very efficient to use. For example, the forward map is usually significantly faster than the one obtained through `assemble_fmm`.