# API

CMBLensing.MAP_jointMethod
MAP_joint(ds::DataSet; kwargs...)

Compute the maximum a posteriori (i.e. "MAP") estimate of the joint posterior, $\mathcal{P}(f,\phi,\theta\,|\,d)$, or compute a quasi-sample.

Keyword arguments:

• ϕstart — Starting point of the maximizer (default: $\phi=0$)
• nsteps — The number of iterations for the maximizer.
• lbfgs_rank — The maximum rank of the LBFGS approximation to the Hessian (default: 5)
• conjgrad_kwargs — Passed to the inner call to [conjugate_gradient](@ref)
• progress — whether to show progress bar
• Nϕ — Noise to use in the initial approximation to the Hessian. Can also give Nϕ=:qe to use the quadratic estimate noise (default: :qe)
• quasi_sample — false (default) to compute the MAP, true to iterate quasi-samples, or an integer to compute a fixed-seed quasi-sample.

Returns a tuple (f, ϕ, tr) where f is the best-fit (or quasi-sample) field, ϕ is the lensing potential, and tr contains info about the run.
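
A minimal usage sketch (assuming ds is a DataSet, e.g. from load_sim; the keyword values here are purely illustrative):

```julia
# illustrative only: `ds` is assumed to come from load_sim or similar
f, ϕ, tr = MAP_joint(ds; nsteps=30, progress=true)

# a fixed-seed quasi-sample instead of the MAP
f_qs, ϕ_qs, _ = MAP_joint(ds; quasi_sample=1)
```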

CMBLensing.MAP_margMethod
MAP_marg(ds; kwargs...)

Compute the maximum a posteriori (i.e. "MAP") estimate of the marginal posterior, $\mathcal{P}(\phi,\theta\,|\,d)$.

CMBLensing.argmaxf_lnPMethod
argmaxf_lnP(ϕ,                ds::DataSet; kwargs...)
argmaxf_lnP(ϕ, θ::NamedTuple, ds::DataSet; kwargs...)
argmaxf_lnP(Lϕ,               ds::DataSet; kwargs...)

Computes either the Wiener filter at fixed $\phi$, or a sample from this slice along the posterior.

Keyword arguments:

• which — :wf, :sample, or :fluctuation to compute 1) the Wiener filter, i.e. the best-fit of $\mathcal{P}(f\,|\,\phi,d)$, 2) a sample from $\mathcal{P}(f\,|\,\phi,d)$, or 3) a sample minus the Wiener filter, i.e. the fluctuation on top of the mean.
• guess — starting guess for f for the conjugate gradient solver
• conjgrad_kwargs — Passed to the inner call to [conjugate_gradient](@ref)
CMBLensing.batchMethod
batch(f::FlatField, D::Int)

Construct a batch-length-D FlatField from an unbatched FlatField f, which will broadcast as if it were D copies of f (without actually making D copies of the data in f).

CMBLensing.batchMethod
batch(fs::FlatField...)
batch(fs::Vector{<:FlatField})
batch(fs::Tuple{<:FlatField})

Turn a length-N array of FlatField's into a single batch-length-N FlatField. For the inverse operation, see unbatch.
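
For example (here fs is a hypothetical vector of unbatched FlatFields):

```julia
# fs is assumed to be a length-4 Vector of unbatched FlatFields
fb  = batch(fs)     # a single batch-length-4 FlatField
fs′ = unbatch(fb)   # recovers a length-4 vector of FlatFields
```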

CMBLensing.batchindexMethod
batchindex(f::FlatField, I)

Get the Ith indexed batch (I can be a slice).

CMBLensing.beamCℓsMethod
beamCℓs(;beamFWHM, ℓmax=8000)

Compute the beam power spectrum, often called $W_\ell$. A map should be multiplied by the square root of this.

CMBLensing.cpuMethod
cpu(xs)

Recursively move an object to CPU memory (i.e. the opposite of cu)

CMBLensing.fixed_white_noiseMethod
fixed_white_noise(rng, F)

Like white_noise but the amplitudes are fixed to unity; only the phases are random. Currently only implemented when F is a Fourier basis. Note that unlike white_noise, fixed white noise generated in the EB and QU Fourier bases is not statistically the same.

CMBLensing.gradhessMethod
gradhess(f)

Compute the gradient $g^i = \nabla^i f$, and the hessian, $H_j^{\,i} = \nabla_j \nabla^i f$.

CMBLensing.lnPMethod
lnP(t, fₜ, ϕₜ,                ds::DataSet)
lnP(t, fₜ, ϕₜ, θ::NamedTuple, ds::DataSet)

Compute the log posterior probability in the joint parameterization as a function of the field, $f_t$, the lensing potential, $\phi_t$, and possibly some cosmological parameters, $\theta$. The subscript $t$ can refer to either a "time", e.g. passing t=0 corresponds to the unlensed parametrization and t=1 to the lensed one, or can be :mix corresponding to the mixed parametrization. In all cases, the arguments fₜ and ϕₜ should then be $f$ and $\phi$ in that particular parametrization.

If any parameters $\theta$ are provided, we also include the determinant terms for covariances which depend on $\theta$. In the mixed parametrization, we also include any Jacobian determinant terms that depend on $\theta$.

The argument ds should be a DataSet and stores the masks, data, etc... needed to construct the posterior.

CMBLensing.load_camb_CℓsMethod
load_camb_Cℓs(;path_prefix, custom_tensor_params=nothing,
unlensed_scalar_postfix, unlensed_tensor_postfix, lensed_scalar_postfix, lenspotential_postfix)

Load some Cℓs from CAMB files.

path_prefix specifies the prefix for the files, which are then expected to have the normal CAMB postfixes: "scalCls.dat", "tensCls.dat", "lensedCls.dat", "lenspotentialCls.dat", unless otherwise specified via the other keyword arguments. custom_tensor_params can be used to call CAMB directly for the unlensed tensors, rather than reading them from a file (since often this file doesn't get saved). The value should be a Dict/NamedTuple which will be passed to a call to camb, e.g. custom_tensor_params=(r=0,) for zero tensors.

CMBLensing.load_chainsMethod
load_chains(filename; burnin=0, burnin_chunks=0, thin=1, join=false, unbatch=true)

Load a single chain or multiple parallel chains which were written to a file by sample_joint.

Keyword arguments:

• burnin — Remove this many samples from the start of each chain, or if negative, keep only this many samples at the end of each chain.
• burnin_chunks — Same as burnin, but in terms of chain "chunks" stored in the chain file, rather than in terms of samples.
• thin — If thin is an integer, thin the chain by this factor. If thin == :hasmaps, return only samples which have maps saved. If thin is a Function, filter the chain by this function (e.g. thin=haskey(:g) on Julia 1.5+)
• unbatch — If true, unbatch the chains if they are batched.
• join — If true, concatenate all the chains together.
• skip_missing_chunks — Skip missing chunks in the chain instead of terminating the chain there.

The object returned by this function is a Chain or Chains object, which simply wraps an Array of Dicts or an Array of Array of Dicts, respectively (each sample is a Dict). The wrapper object has some extra indexing properties for convenience:

• It can be indexed as if it were a single multidimensional object, e.g. chains[1,:,:accept] would return the :accept key of all samples in the first chain.
• Leading colons can be dropped, i.e. chains[:,:,:accept] is the same as chains[:accept].
• If some samples are missing a particular key, missing is returned for those samples instead of an error.
• The recursion goes arbitrarily deep into the objects it finds. E.g., since sampled parameters are stored in a NamedTuple like (Aϕ=1.3,) in the θ key of each sample Dict, you can do chain[:θ,:Aϕ] to get all Aϕ samples as a vector.
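The indexing conventions above can be sketched as (the filename and keyword values are hypothetical):

```julia
chains = load_chains("chains.jld2"; burnin=100, thin=2)
chains[1, :, :accept]   # :accept key of every sample in the first chain
chains[:accept]         # same with leading colons dropped, over all chains
chains[:θ, :Aϕ]         # all Aϕ samples, recursing into each sample's θ NamedTuple
```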
CMBLensing.load_simMethod
load_sim

Create a BaseDataSet object with some simulated data, returning the DataSet and simulated truths. E.g.

@unpack f,ϕ,ds = load_sim(;
θpix  = 2,
Nside = 128,
pol   = :I,
T     = Float32
);

For rectangular maps, Nside expects (Ny, Nx), i.e. (Nrow, Ncol). If Nside isa Int, the code interprets it as a square map.

CMBLensing.mean_std_and_errorsMethod
mean_std_and_errors(samples; N_bootstrap=10000)

Get the mean and standard deviation of a set of correlated samples from a chain where the error on the mean and standard deviation is estimated with bootstrap resampling using the calculated "effective sample size" of the chain.

CMBLensing.mixMethod
mix(f, ϕ,                ds::DataSet)
mix(f, ϕ, θ::NamedTuple, ds::DataSet)

Compute the mixed (f°, ϕ°) from the unlensed field f and lensing potential ϕ, given the definition of the mixing matrices in ds evaluated at parameters θ (or at fiducial values if no θ provided).

CMBLensing.noiseCℓsMethod
noiseCℓs(;μKarcminT, beamFWHM=0, ℓmax=8000, ℓknee=100, αknee=3)

Compute the (:TT,:EE,:BB,:TE) noise power spectra given white noise + 1/f. Polarization noise is scaled by $\sqrt{2}$ relative to μKarcminT. beamFWHM is in arcmin.
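
For example, a usage sketch (the experiment parameters are made up):

```julia
# 5 μK-arcmin temperature noise, 1.3-arcmin beam, 1/f with ℓknee=100
Cℓn = noiseCℓs(μKarcminT=5, beamFWHM=1.3, ℓknee=100, αknee=3)
```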

CMBLensing.pixwinMethod
pixwin(θpix, ℓ)

Returns the pixel window function for square flat-sky pixels of width θpix (in arcmin) evaluated at some ℓs. This is the scaling of the k-modes; the scaling of the power spectrum will be pixwin^2.

CMBLensing.pmap_saveMethod
pmap_save(f, c...; filename, savekey="pmap_results")

Like Distributed.pmap but save results as they come in using FileIO.save to a file filename in a key savekey (ensures partial results are saved even if the pmap errors or Julia is killed before the full operation completes).

CMBLensing.quadratic_estimateMethod
quadratic_estimate(ds::DataSet, which; wiener_filtered=true)
quadratic_estimate((ds1::DataSet,ds2::DataSet), which; wiener_filtered=true)

Compute quadratic estimate of ϕ given data.

The ds or (ds1,ds2) tuple contains the DataSet object(s) which house the data and covariances used in the estimate. Note that only the Fourier-diagonal approximations for the beam, mask, and noise, i.e. ds.B̂, ds.M̂, and ds.Cn̂, are accounted for. To account for the full operators (if they are not actually Fourier-diagonal), you should compute their impact using Monte Carlo.

If a tuple is passed in, the result will come from correlating the data from ds1 with that from ds2, which can be useful for debugging / isolating various noise terms.

An optional keyword argument AL can be passed in case the QE normalization was already computed, in which case it won't be recomputed during the calculation.

Returns a NamedTuple (ϕqe, AL, Nϕ) where ϕqe is the (possibly Wiener-filtered, depending on the wiener_filtered option) quadratic estimate, AL is the normalization (which is already applied to ϕqe and does not need to be applied again), and Nϕ is the analytic N0 noise bias (Nϕ==AL when using unlensed weights; currently Nϕ==AL is always returned, no matter the weights).
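
A usage sketch (assuming which=:EB selects the EB estimator; the destructuring relies on the returned NamedTuple field names):

```julia
(;ϕqe, AL, Nϕ) = quadratic_estimate(ds, :EB; wiener_filtered=true)
```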

CMBLensing.resimulate!Method
resimulate!(ds::DataSet; [f, ϕ, n])

Replace the data in this DataSet in-place with a simulation, potentially given a fixed f, ϕ, or n, if any are provided.

Returns a named tuple of (ds, f, ϕ, n, f̃)

CMBLensing.resimulateMethod
resimulate(ds::DataSet; [f, ϕ, n])

Make a new DataSet with the data replaced by a simulation, potentially given a fixed f, ϕ, or n, if any are provided.

Returns a named tuple of (ds, f, ϕ, n, f̃)

CMBLensing.sample_jointMethod
sample_joint(ds::DataSet; kwargs...)

Sample the joint posterior, $\mathcal{P}(f,\phi,\theta\,|\,d)$.

Keyword arguments:

• nsamps_per_chain — (required) The number of samples per chain
• nchains — Run nchains chains in parallel (default: 1)
• nchunk — Do nchunk steps between parallel chain communication (default: 1)
• nsavemaps — Save maps into chain every nsavemaps steps (default: 1)
• nburnin_always_accept — The first nburnin_always_accept steps, always accept HMC steps independent of integration error (default: 0)
• nburnin_fixθ — For the first nburnin_fixθ steps, fix θ at its starting point (default: 0)
• Nϕ — Noise to use in the HMC mass matrix. Can also give Nϕ=:qe to use the EB quadratic estimate noise (default: :qe)
• chains — nothing to start a new chain; the return value from a previous call to sample_joint to resume those chains; or :resume to resume chains from a file given by filename
• θrange — Range and density to grid sample parameters as a NamedTuple, e.g. (Aϕ=range(0.7,1.3,length=20),).
• θstart — Starting values of parameters as a NamedTuple, e.g. (Aϕ=1.2,), or nothing to randomly sample from θrange
• ϕstart — Starting ϕ, either a Field object, :quasi_sample, or :best_fit
• metadata — Does nothing, but is saved into the chain file
• nhmc — The number of HMC passes per ϕ Gibbs step (default: 1)
• symp_kwargs — an array of NamedTuple kwargs to pass to symplectic_integrate. E.g. [(N=50,ϵ=0.1),(N=25,ϵ=0.01)] would do 50 large steps then 25 smaller steps in each Gibbs pass. If specified, nhmc is ignored.
• wf_kwargs — Keyword arguments to pass to argmaxf_lnP in the Wiener Filter Gibbs step.
• MAP_kwargs — Keyword arguments to pass to MAP_joint when computing the starting point.
CMBLensing.seed_for_storage!Function
seed_for_storage!(storage[, seed])
seed_for_storage!((storage1, storage2, ...)[, seed])

Set the global random seed for the RNG corresponding to the given storage type(s).

CMBLensing.simulateMethod
simulate(Σ; rng=global_rng_for(Σ), seed=nothing)

Draw a simulation from the covariance matrix Σ, i.e. draw a random vector $\xi$ such that the covariance $\langle \xi \xi^\dagger \rangle = \Sigma$.

The random number generator rng will be used and advanced in the process, and is by default the appropriate one depending on whether Σ is backed by Array or CuArray.

The seed argument can also be used to seed the rng.

CMBLensing.symplectic_integrateMethod
symplectic_integrate(x₀, p₀, Λ, U, δUδx, N=50, ϵ=0.1, progress=false)

Do a symplectic integration of the potential energy U (with gradient δUδx) starting from point x₀ with momentum p₀ and mass matrix Λ. The number of steps is N and the step size ϵ.

Returns ΔH, xᵢ, pᵢ corresponding to the change in Hamiltonian, and the final position and momentum. If hist is specified, a trace of the requested variables throughout each step is also returned.

CMBLensing.ud_gradeMethod
ud_grade(f::Field, θnew, mode=:map, deconv_pixwin=true, anti_aliasing=true)

Up- or down-grades field f to new resolution θnew (only in integer steps). Two modes are available specified by the mode argument:

• :map — Up/downgrade by replicating/averaging pixels in map-space
• :fourier — Up/downgrade by extending/truncating the Fourier grid

For :map mode, two additional options are possible. If deconv_pixwin is true, deconvolves the pixel window function from the downgraded map so the spectrum of the new and old maps are the same. If anti_aliasing is true, filters out frequencies above Nyquist prior to down-sampling.
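
For example, under the assumption that f is a 1-arcmin-pixel field:

```julia
f4  = ud_grade(f, 4)            # downgrade to 4-arcmin pixels (map-space averaging)
f4′ = ud_grade(f, 4, :fourier)  # instead truncate the Fourier grid
```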

CMBLensing.unbatchMethod
unbatch(chains::Chains)

Expand each chain in this Chains object by unbatching it.

CMBLensing.unbatchMethod
unbatch(chain::Chain)

Convert a chain of batch-length-D fields to D chains of unbatched fields.

CMBLensing.unbatchMethod
unbatch(f::FlatField)

If f is a batch-length-D field, return length-D vector of each batch component, otherwise just return f. For the inverse operation, see batch.

CMBLensing.unmixMethod
unmix(f°, ϕ°,                ds::DataSet)
unmix(f°, ϕ°, θ::NamedTuple, ds::DataSet)

Compute the unmixed/unlensed (f, ϕ) from the mixed field f° and mixed lensing potential ϕ°, given the definition of the mixing matrices in ds evaluated at parameters θ (or at fiducial values if no θ provided).

CMBLensing.BilinearLensType
BilinearLens(ϕ)

BilinearLens is a lensing operator that computes lensing with bilinear interpolation. The action of the operator, as well as its adjoint, inverse, inverse-adjoint, and gradient w.r.t. ϕ can all be computed. The log-determinant of the operation is non-zero and can't be computed.

Internally, BilinearLens forms a sparse matrix with the interpolation weights, which can be applied and adjoint-ed extremely fast (e.g. at least an order of magnitude faster than LenseFlow). Inverse and inverse-adjoint lensing is somewhat slower as it is implemented with several steps of the preconditioned generalized minimal residual algorithm, taking anti-lensing as the preconditioner.

Warning

Due to this bug in PackageCompiler, currently you have to run using SparseArrays by hand in your Julia session before BilinearLens is available.

CMBLensing.FlatEBFourierType
# main constructor:
FlatEBFourier(
El::AbstractArray, Bl::AbstractArray;
Nside, # required, size of the map in pixels
θpix,  # optional, resolution in arcmin (default: 1)
∂mode, # optional, fourier∂ or map∂ (default: fourier∂)
)

# more low-level:
FlatEBFourier{P}(El::AbstractArray, Bl::AbstractArray) # specify pixelization P explicitly
FlatEBFourier{P,T}(El::AbstractArray, Bl::AbstractArray) # additionally, convert elements to type Complex{T}
FlatEBFourier{P,T,M<:AbstractArray{Complex{T}}}(El::M, Bl::M) # specify everything explicitly

Construct a FlatEBFourier object. The top form of the constructor is most convenient for interactive work, while the others may be more useful for low-level code.

CMBLensing.FlatEBMapType
# main constructor:
FlatEBMap(
Ex::AbstractArray, Bx::AbstractArray;
θpix,  # optional, resolution in arcmin (default: 1)
∂mode, # optional, fourier∂ or map∂ (default: fourier∂)
)

# more low-level:
FlatEBMap{P}(Ex::AbstractArray, Bx::AbstractArray) # specify pixelization P explicitly
FlatEBMap{P,T}(Ex::AbstractArray, Bx::AbstractArray) # additionally, convert elements to type T
FlatEBMap{P,T,M<:AbstractArray{T}}(Ex::M, Bx::M) # specify everything explicitly

Construct a FlatEBMap object. The top form of the constructor is most convenient for interactive work, while the others may be more useful for low-level code.

CMBLensing.FlatFourierType
# main constructor:
FlatFourier(
Il::AbstractArray;
Nside, # required, size of the map in pixels
θpix,  # optional, resolution in arcmin (default: 1)
∂mode, # optional, fourier∂ or map∂ (default: fourier∂)
)

# more low-level:
FlatFourier{P}(Il::AbstractArray) # specify pixelization P explicitly
FlatFourier{P,T}(Il::AbstractArray) # additionally, convert elements to type Complex{T}
FlatFourier{P,T,M<:AbstractArray{Complex{T}}}(Il::M) # specify everything explicitly

Construct a FlatFourier object. The top form of the constructor is most convenient for interactive work, while the others may be more useful for low-level code.

CMBLensing.FlatIEBFourierType
# main constructors:
FlatIEBFourier(
Il::AbstractMatrix, El::AbstractMatrix, Bl::AbstractMatrix;
Nside, # required, size of the map in pixels
θpix,  # optional, resolution in arcmin (default: 1)
∂mode, # optional, fourier∂ or map∂ (default: fourier∂)
)
FlatIEBFourier(I::FlatFourier, E::FlatFourier, B::FlatFourier)

# more low-level:
FlatIEBFourier{P}(Il::AbstractMatrix, El::AbstractMatrix, Bl::AbstractMatrix) # specify pixelization P explicitly
FlatIEBFourier{P,T}(Il::AbstractMatrix, El::AbstractMatrix, Bl::AbstractMatrix) # additionally, convert elements to type Complex{T}
FlatIEBFourier{P,T,M<:AbstractMatrix{Complex{T}}}(Il::M, El::M, Bl::M) # specify everything explicitly

Construct a FlatIEBFourier object. The top form of the constructors is most convenient for interactive work, while the others may be more useful for low-level code.

CMBLensing.FlatIEBMapType
# main constructors:
FlatIEBMap(
Ix::AbstractMatrix, Ex::AbstractMatrix, Bx::AbstractMatrix;
θpix,  # optional, resolution in arcmin (default: 1)
∂mode, # optional, fourier∂ or map∂ (default: fourier∂)
)
FlatIEBMap(I::FlatMap, E::FlatMap, B::FlatMap)

# more low-level:
FlatIEBMap{P}(Ix::AbstractMatrix, Ex::AbstractMatrix, Bx::AbstractMatrix) # specify pixelization P explicitly
FlatIEBMap{P,T}(Ix::AbstractMatrix, Ex::AbstractMatrix, Bx::AbstractMatrix) # additionally, convert elements to type T
FlatIEBMap{P,T,M<:AbstractMatrix{T}}(Ix::M, Ex::M, Bx::M) # specify everything explicitly

Construct a FlatIEBMap object. The top form of the constructors is most convenient for interactive work, while the others may be more useful for low-level code.

CMBLensing.FlatIQUFourierType
# main constructors:
FlatIQUFourier(
Il::AbstractMatrix, Ql::AbstractMatrix, Ul::AbstractMatrix;
Nside, # required, size of the map in pixels
θpix,  # optional, resolution in arcmin (default: 1)
∂mode, # optional, fourier∂ or map∂ (default: fourier∂)
)
FlatIQUFourier(I::FlatFourier, Q::FlatFourier, U::FlatFourier)

# more low-level:
FlatIQUFourier{P}(Il::AbstractMatrix, Ql::AbstractMatrix, Ul::AbstractMatrix) # specify pixelization P explicitly
FlatIQUFourier{P,T}(Il::AbstractMatrix, Ql::AbstractMatrix, Ul::AbstractMatrix) # additionally, convert elements to type Complex{T}
FlatIQUFourier{P,T,M<:AbstractMatrix{Complex{T}}}(Il::M, Ql::M, Ul::M) # specify everything explicitly

Construct a FlatIQUFourier object. The top form of the constructors is most convenient for interactive work, while the others may be more useful for low-level code.

CMBLensing.FlatIQUMapType
# main constructors:
FlatIQUMap(
Ix::AbstractMatrix, Qx::AbstractMatrix, Ux::AbstractMatrix;
θpix,  # optional, resolution in arcmin (default: 1)
∂mode, # optional, fourier∂ or map∂ (default: fourier∂)
)
FlatIQUMap(I::FlatMap, Q::FlatMap, U::FlatMap)

# more low-level:
FlatIQUMap{P}(Ix::AbstractMatrix, Qx::AbstractMatrix, Ux::AbstractMatrix) # specify pixelization P explicitly
FlatIQUMap{P,T}(Ix::AbstractMatrix, Qx::AbstractMatrix, Ux::AbstractMatrix) # additionally, convert elements to type T
FlatIQUMap{P,T,M<:AbstractMatrix{T}}(Ix::M, Qx::M, Ux::M) # specify everything explicitly

Construct a FlatIQUMap object. The top form of the constructors is most convenient for interactive work, while the others may be more useful for low-level code.

CMBLensing.FlatMapType
# main constructor:
FlatMap(
Ix::AbstractArray;
θpix,  # optional, resolution in arcmin (default: 1)
∂mode, # optional, fourier∂ or map∂ (default: fourier∂)
)

# more low-level:
FlatMap{P}(Ix::AbstractArray) # specify pixelization P explicitly
FlatMap{P,T}(Ix::AbstractArray) # additionally, convert elements to type T
FlatMap{P,T,M<:AbstractArray{T}}(Ix::M) # specify everything explicitly

Construct a FlatMap object. The top form of the constructor is most convenient for interactive work, while the others may be more useful for low-level code.
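
For example, using the top-form constructor (the map size and resolution are arbitrary):

```julia
f = FlatMap(rand(128, 128); θpix=2)  # a 128×128 map with 2-arcmin pixels
```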

CMBLensing.FlatQUFourierType
# main constructor:
FlatQUFourier(
Ql::AbstractArray, Ul::AbstractArray;
Nside, # required, size of the map in pixels
θpix,  # optional, resolution in arcmin (default: 1)
∂mode, # optional, fourier∂ or map∂ (default: fourier∂)
)

# more low-level:
FlatQUFourier{P}(Ql::AbstractArray, Ul::AbstractArray) # specify pixelization P explicitly
FlatQUFourier{P,T}(Ql::AbstractArray, Ul::AbstractArray) # additionally, convert elements to type Complex{T}
FlatQUFourier{P,T,M<:AbstractArray{Complex{T}}}(Ql::M, Ul::M) # specify everything explicitly

Construct a FlatQUFourier object. The top form of the constructor is most convenient for interactive work, while the others may be more useful for low-level code.

CMBLensing.FlatQUMapType
# main constructor:
FlatQUMap(
Qx::AbstractArray, Ux::AbstractArray;
θpix,  # optional, resolution in arcmin (default: 1)
∂mode, # optional, fourier∂ or map∂ (default: fourier∂)
)

# more low-level:
FlatQUMap{P}(Qx::AbstractArray, Ux::AbstractArray) # specify pixelization P explicitly
FlatQUMap{P,T}(Qx::AbstractArray, Ux::AbstractArray) # additionally, convert elements to type T
FlatQUMap{P,T,M<:AbstractArray{T}}(Qx::M, Ux::M) # specify everything explicitly

Construct a FlatQUMap object. The top form of the constructor is most convenient for interactive work, while the others may be more useful for low-level code.

CMBLensing.ParamDependentOpType
ParamDependentOp(recompute_function::Function)

Creates an operator which depends on some parameters $\theta$ and can be evaluated at various values of these parameters.

recompute_function should be a function which accepts keyword arguments for $\theta$ and returns the operator. Each keyword must have a default value; the operator will act as if evaluated at these defaults unless it is explicitly evaluated at other parameters.

Example:

Cϕ₀ = Diagonal(...) # some fixed Diagonal operator
Cϕ = ParamDependentOp((;Aϕ=1)->Aϕ*Cϕ₀) # create ParamDependentOp

Cϕ(Aϕ=1.1) * ϕ   # Cϕ(Aϕ=1.1) is equal to 1.1*Cϕ₀
Cϕ * ϕ           # Cϕ alone will act like Cϕ(Aϕ=1) because that was the default above

Note: if you are doing parallel work, global variables referred to in the recompute_function need to be distributed to all workers. A more robust solution is to avoid globals entirely and instead ensure all variables are "closed" over (and hence will automatically get distributed). This will happen by default if defining the ParamDependentOp inside any function, or can be forced at the global scope by wrapping everything in a let-block, e.g.:

Cϕ = let Cϕ₀=Cϕ₀
ParamDependentOp((;Aϕ=1)->Aϕ*Cϕ₀)
end

After executing the code above, Cϕ is now ready to be (auto-)shipped to any workers and will work regardless of what global variables are defined on these workers.

CMBLensing.RK4SolverMethod
RK4Solver(F!::Function, y₀, t₀, t₁, nsteps)

Solve for $y(t_1)$ with 4th order Runge-Kutta assuming $dy/dt = F(t,y)$ and $y(t_0)$ = $y_0$.

Arguments:

• F! — a function F!(v,t,y) which sets v = F(t,y)
CMBLensing.@namedtupleMacro

Pack some variables into a NamedTuple. E.g.:

> x = 3
> y = 4
> @namedtuple(x, y, z=5)
(x=3,y=4,z=5)
CMBLensing.BinRescaledOpMethod
BinRescaledOp(C₀, Cbins, θname::Symbol)

Create a ParamDependentOp which has a parameter named θname which is an array that controls the amplitude of bandpowers in bins given by Cbins.

For example, BinRescaledOp(C₀, [Cbin1, Cbin2], :A) creates the operator:

ParamDependentOp( (;A=[1,1], _...) -> C₀ + (A[1]-1) * Cbin1 + (A[2]-1) * Cbin2 )

where C₀, Cbin1, and Cbin2 should be some LinOps. Note Cbins are directly the power which is added, rather than a mask.

The resulting operator is differentiable in θname.

CMBLensing.LinearInterpolationMethod
itp = LinearInterpolation(xdat::AbstractVector, ydat::AbstractVector; extrapolation_bc=NaN)
itp(x) # interpolate at x

A simple 1D linear interpolation code which is fully Zygote differentiable in either xdat, ydat, or the evaluation point x.

CMBLensing.batch_promote!Method
batch_promote!(to, f)

Promote f to the same batch size as to by replication. If both are already the same batch size, no copy is made and f is returned. If promotion needs to happen, the answer is stored in-place in to and returned.

CMBLensing.batchmapMethod
batchmap(f, args...)

map function f over args, unbatching them first if they are batched, and then batching the result.

CMBLensing.conjugate_gradientFunction
conjugate_gradient(M, A, b, x=M\b; nsteps=length(b), tol=sqrt(eps()), progress=false, callback=nothing, hist=nothing, histmod=1)

Compute x=A\b (where A is positive definite) by conjugate gradient. M is the preconditioner and should be M≈A, and M\x should be fast.

The solver will stop either after nsteps iterations or when dot(r,r)<tol (where r=A*x-b is the residual at that step), whichever occurs first.

Info from the iterations of the solver can be returned if hist is specified. hist can be one or a tuple of:

• :i — current iteration number
• :x — current solution
• :r — current residual r=A*x-b
• :res — the norm of r
• :t — the time elapsed (in seconds) since the start of the algorithm

histmod can be used to include every N-th iteration only in hist.
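
A usage sketch (the return shape when hist is given is assumed to be the solution followed by the history):

```julia
# M is a preconditioner with M ≈ A and a fast M \ x
x, hist = conjugate_gradient(M, A, b; nsteps=100, tol=1e-6, hist=(:i,:res))
```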

CMBLensing.fftsymsMethod

Arguments m and n refer to the sizes of an m×n matrix (call it A) that is the output of a real FFT (thus m=n÷2+1)

Returns a tuple of (ireal, iimag, negks) where these are

• ireal — m×n mask corresponding to the unique real entries of A
• iimag — m×n mask corresponding to the unique imaginary entries of A
• negks — m×n matrix giving the index into A where the negative k-vector is, s.t. A[i,j] = A[negks[i,j]]'
CMBLensing.get_term_memoizerMethod

All of the terms in the quadratic estimate and normalization expressions look like

C * l[i] * l̂[j] * l̂[k] * ...

where C is some field or diagonal covariance. For example, there's a term in the EB estimator that looks like:

(CE * ((CẼ+Cn) \ d[:E])) * l[i] * l̂[j] * l̂[k]

(where note that l̂[j] and l̂[k] are unit vectors, but l[i] is not). The function get_term_memoizer returns a function term which could be called in the following way to compute this term:

term(CE * ((CẼ+Cn) \ d[:E]), [i], j, k)

(note that the fact that l[i] is not a unit vector is specified by putting the [i] index in brackets).

Additionally, all of these terms are symmetric in their indices, i.e. in (i,j,k) in this case. The term function is smart about this, and is memoized so that each unique set of indices is only computed once. This leads to a pretty drastic speedup for terms with many indices like those that arise in the EE and EB normalizations, and lets us write code which is both clear and fast without having to think too hard about these symmetries.

CMBLensing.gmresMethod
gmres(A, b; maxiter, Pl=I)

Solve A \ b with maxiter iterations of the generalized minimal residual algorithm. Pl is a left-preconditioner which should approximate inv(A).

Note: the implementation is memory inefficient and uses O(n * maxiter) memory, where (n,n) = size(A) (which may not be a big deal for small maxiter), but it is totally generic and works with CPU or GPU and dense or sparse matrices, unlike IterativeSolvers' gmres.

CMBLensing.grid_and_sampleMethod
grid_and_sample(lnP::Function, range; progress=false, nsamples=1)

Interpolate the log pdf lnP with support on range, and return the integrated log pdf as well as nsamples samples (drawn via inverse transform sampling).

Either lnP should accept a NamedTuple argument and range should be a NamedTuple mapping those same names to range objects specifying where to evaluate lnP, e.g.:

grid_and_sample(nt->-(nt.x^2+nt.y^2)/2, (x=range(-3,3,length=100),y=range(-3,3,length=100)))

or lnP should accept a single scalar argument and range should be directly the range for this variable:

grid_and_sample(x->-x^2/2, range(-3,3,length=100))

The return value is (lnP, samples, Px) where lnP is an interpolated/smoothed log PDF which can be evaluated anywhere within the original range, Px are sampled points of the original PDF, and samples is a NamedTuple giving the Monte-Carlo samples of each of the parameters.

(Note: only 1D sampling is currently implemented, but 2D like in the example above is planned)

CMBLensing.paren_errorsMethod
paren_errors(μ, σ; N_in_paren=2)

Get a string representation of μ ± σ in "parenthesis" format, e.g. 1.234 ± 0.012 becomes 1.234(12).

CMBLensing.safe_pyimportMethod
safe_pyimport(s)

Like pyimport, but if s fails to import, instead of an error right away, the error will be thrown the first time the user tries to access the contents of the module.

CMBLensing.ΣMethod
Σ(ϕ::Field,  ds; [conjgrad_kwargs])
Σ(Lϕ,        ds; [conjgrad_kwargs])

An operator for the data covariance, Cn + PMBLCfL'B'M'P', which can be applied and inverted. conjgrad_kwargs are passed to the underlying call to conjugate_gradient.

LinearAlgebra.logdetMethod
logdet(L::LinOp, θ)

If L depends on θ, evaluates logdet(L(θ)) offset by its fiducial value at L(). Otherwise, returns 0.

CMBLensing.BatchedRealType
BatchedReal(::Vector{<:Real}) <: Real

Holds a vector of real numbers and broadcasts algebraic operations over them, as well as broadcasting with batched FlatFields, but is itself a Real.

CMBLensing.@!Macro

Rewrites @! x = f(args...) to x = f!(x,args...)

Special cases for * and \ forward to mul! and ldiv!, respectively.
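
For example:

```julia
@! x = A * y   # rewritten to x = mul!(x, A, y)
@! x = A \ y   # rewritten to x = ldiv!(x, A, y)
```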

CMBLensing.@dictMacro

Pack some variables in a dictionary

> x = 3
> y = 4
> @dict x y z=>5
Dict(:x=>3,:y=>4,:z=>5)
CMBLensing.@invokelatestMacro
@invokelatest expr...

Rewrites all non-broadcasted function calls anywhere within an expression to use Base.invokelatest. This means functions can be called that have a newer world age, at the price of making things non-inferrable.

CMBLensing.@ondemandMacro
@ondemand(Package.function)(args...; kwargs...)
@ondemand(Package.Submodule.function)(args...; kwargs...)

Just like calling Package.function or Package.Submodule.function, but Package will be loaded on-demand if it is not already loaded. The call is no longer inferrable.

CMBLensing.@substMacro
@subst sum(x*\$(y+1) for x=1:2)

becomes

let tmp=(y+1)
sum(x*tmp for x=1:2)
end

to aid in writing clear/succinct code that doesn't recompute things unnecessarily.

CMBLensing.@sym_memoMacro
# symmetric in any of its final arguments except for bar:
@sym_memo foo(bar, @sym(args...)) = <body>
# symmetric in (i,j), but not baz
@sym_memo foo(baz, @sym(i, j)) = <body>

The @sym_memo macro should be applied to a definition of a function which is symmetric in some of its arguments. The arguments in which it is symmetric are specified by wrapping them in @sym, and they must come at the very end. The resulting function will be memoized, and permutations of the arguments which are equal due to symmetry will only be computed once.

CMBLensing.FFTW_NUM_THREADSConstant

The number of threads used by FFTW for CPU FFTs (default is the environment variable FFTW_NUM_THREADS, or, if that is not specified, Sys.CPU_THREADS÷2). This must be set before creating any FlatField objects.