pylops_distributed.optimization.cg.cg

pylops_distributed.optimization.cg.cg(A, y, x=None, niter=10, tol=1e-05, compute=False, client=None)[source]

Conjugate gradient

Solve a system of equations given the square (symmetric, positive-definite) operator A and data y using conjugate gradient iterations.

Parameters:
A : pylops_distributed.LinearOperator

Operator to invert of size \([N \times N]\)

y : dask.array

Data of size \([N \times 1]\)

x : dask.array, optional

Initial guess

niter : int, optional

Number of iterations

tol : float, optional

Tolerance on residual norm

compute : bool, optional

Compute intermediate results at the end of every iteration

client : dask.distributed.client.Client, optional

Dask client. If provided and compute=False, the intermediate results of each iteration are persisted. This is the preferred approach, as it avoids repeating computations along the compute tree.

Returns:
x : dask.array

Estimated model

iter : int

Number of executed iterations

Notes

Solve the following problem using conjugate gradient iterations:

\[\mathbf{y} = \mathbf{Ax}\]

Note that early stopping based on tol is activated only when client is provided or compute=True. The former approach is preferred, as it avoids repeating computations along the compute tree.
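The iterations performed by this solver can be sketched in plain NumPy. The function below is a minimal illustration of classic conjugate gradient with the same signature style as documented above (A square and symmetric positive definite); it is not the distributed implementation, which operates on dask arrays and optionally persists intermediates through the client.

```python
import numpy as np

def cg_dense(A, y, x=None, niter=10, tol=1e-5):
    """Plain-NumPy conjugate gradient sketch (A symmetric positive definite)."""
    x = np.zeros_like(y) if x is None else x.copy()
    r = y - A @ x                    # initial residual
    d = r.copy()                     # initial search direction
    kold = r @ r
    it = 0
    for it in range(1, niter + 1):
        Ad = A @ d
        a = kold / (d @ Ad)          # step length
        x += a * d
        r -= a * Ad
        k = r @ r
        if np.sqrt(k) < tol:         # early stopping on residual norm
            break
        d = r + (k / kold) * d       # conjugate direction update
        kold = k
    return x, it

# Usage on a small SPD system
A = np.array([[4., 1.], [1., 3.]])
y = np.array([1., 2.])
xest, nit = cg_dense(A, y, niter=100)
```

For an n-by-n SPD matrix, exact arithmetic CG converges in at most n iterations, which is why the 2-by-2 example above terminates almost immediately.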