pylops_distributed.optimization.cg.cgls

pylops_distributed.optimization.cg.cgls(A, y, x=None, niter=10, damp=0.0, tol=0.0001, compute=False, client=None)

Conjugate gradient least squares

Solve an overdetermined system of equations given an operator A and data y using conjugate gradient iterations.

Parameters:
A : pylops_distributed.LinearOperator

Operator to invert of size \([N \times M]\)

y : dask.array

Data of size \([N \times 1]\)

x : dask.array, optional

Initial guess

niter : int, optional

Number of iterations

damp : float, optional

Damping coefficient

tol : float, optional

Tolerance on residual norm

compute : bool, optional

Whether to compute intermediate results at the end of every iteration

client : dask.distributed.client.Client, optional

Dask client. If provided and compute=False, intermediate results are persisted at the end of each iteration. This is the preferred approach, as it avoids repeating computations over the task graph.

Returns:
x : dask.array

Estimated model of size \([M \times 1]\)

iit : int

Number of executed iterations
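
As a quick illustration of the call signature, the example below inverts a small dense operator. It is a minimal sketch that assumes pylops_distributed.MatrixMult wraps a dask array analogously to pylops.MatrixMult; operator size and chunking are chosen purely for illustration:

import numpy as np
import dask.array as da
import pylops_distributed
from pylops_distributed.optimization.cg import cgls

# Tall, overdetermined system: N data points, M unknowns (N > M)
N, M = 200, 50
A = pylops_distributed.MatrixMult(
    da.from_array(np.random.normal(0., 1., (N, M)), chunks=(N, M))
)
x_true = da.ones(M, chunks=M)
y = A.matvec(x_true)

# Damped CGLS; compute=True evaluates intermediate results at every
# iteration, which also enables early stopping based on tol
x_est, iit = cgls(A, y, niter=50, damp=1e-4, tol=1e-6, compute=True)
print(iit)  # number of iterations actually executed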

Notes

Minimize the following functional using conjugate gradient iterations:

\[J = || \mathbf{y} - \mathbf{Ax} ||^2 + \epsilon || \mathbf{x} ||^2\]

where \(\epsilon\) is the damping coefficient.
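
Equivalently (a standard reformulation of damped least squares, not specific to this implementation), the same functional is minimized by ordinary least squares applied to an augmented system:

\[\begin{bmatrix} \mathbf{y} \\ \mathbf{0} \end{bmatrix} = \begin{bmatrix} \mathbf{A} \\ \sqrt{\epsilon}\,\mathbf{I} \end{bmatrix} \mathbf{x}\]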

Note that early stopping based on tol is activated only when client is provided or compute=True. The former approach is preferred as it avoids repeating computations along the task graph.
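
For completeness, the sketch below shows the client-based pattern. It assumes a small local dask.distributed cluster and reuses the operator A and data y from the example above:

from dask.distributed import Client

client = Client(processes=False)  # local cluster for illustration

# With a client, intermediate results are persisted at each iteration,
# so the residual norm can be checked against tol without re-traversing
# the task graph; the solver may stop before reaching niter
x_est, iit = cgls(A, y, niter=100, damp=1e-4, tol=1e-6, client=client)

client.close()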