API

Simulation

CMBLensing.load_sim — Function
load_sim(;kwargs...)

The starting point for many typical sessions. Creates a BaseDataSet object with some simulated data, returning the DataSet and simulated truths, which can then be passed to other maximization / sampling functions. E.g.:

@unpack f,ϕ,ds = load_sim(;
    θpix  = 2,
    Nside = 128,
    pol   = :P,
    T     = Float32
)

Keyword arguments:

  • θpix — Angular resolution, in arcmin.
  • Nside — Number of pixels in the map as an (Ny,Nx) tuple, or a single number for square maps.
  • pol — One of :I, :P, or :IP to select intensity, polarization, or both.
  • T = Float32 — Precision, either Float32 or Float64.
  • storage = Array — Set to CuArray to use GPU.
  • Nbatch = nothing — Number of batches of data in this dataset.
  • μKarcminT = 3 — Noise level in temperature in μK-arcmin.
  • ℓknee = 100 — 1/f noise knee.
  • αknee = 3 — 1/f noise slope.
  • beamFWHM = 0 — Beam full-width-half-max in arcmin.
  • pixel_mask_kwargs = (;) — NamedTuple of keyword arguments to pass to make_mask to create the pixel mask.
  • bandpass_mask = LowPass(3000) — Operator which performs Fourier-space masking.
  • fiducial_θ = (;) — NamedTuple of keyword arguments passed to camb() for the fiducial model.
  • seed = nothing — Specific seed for the simulation.
  • L = LenseFlow — Lensing operator.

Returns a named tuple of (;f, f̃, ϕ, n, ds, Cℓ, proj).

CMBLensing.simulate — Function
simulate([rng], Σ)

Draw a simulation from the covariance matrix Σ, i.e. draw a random vector $\xi$ such that the covariance $\langle \xi \xi^\dagger \rangle = \Sigma$.

The random number generator rng will be used and advanced in the process, and defaults to Random.default_rng().
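
For example, a minimal sketch assuming ds is the DataSet returned by load_sim above (whose Cf and Cϕ fields are assumed to hold the signal covariances):

using Random

f = simulate(ds.Cf)                      # a random unlensed field
ϕ = simulate(ds.Cϕ)                      # a random lensing potential
f = simulate(MersenneTwister(0), ds.Cf)  # with an explicit RNG for reproducibility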

Lensing estimation

CMBLensing.MAP_joint — Function
MAP_joint([θ], ds::DataSet, [Ωstart=(ϕ=0,)]; kwargs...)

Compute the maximum a posteriori (i.e. "MAP") estimate of the joint posterior, $\mathcal{P}(f,\phi,\theta\,|\,d)$, or compute a quasi-sample.

Positional arguments:

  • [θ] — Optional θ at which to do maximization.
  • ds::DataSet — The DataSet which defines the posterior.
  • [Ωstart=(ϕ=0,)] — Optional starting point for the non-Gaussian fields to optimize over. The maximizer does a coordinate descent which alternates between updating f (in which the posterior is assumed to be Gaussian) and updating the fields in Ωstart (by default, just ϕ).

Keyword arguments:

  • nsteps — The maximum number of iterations for the maximizer.
  • ϕtol = nothing — If given, stop when ϕ updates reach this tolerance. ϕtol is roughly the relative per-pixel standard deviation between changes to ϕ and draws from the ϕ prior. Values in the range $10^{-2}-10^{-4}$ are reasonable.
  • nburnin_update_hessian = Inf — How many steps to wait before starting to do diagonal updates to the Hessian
  • conjgrad_kwargs = (;) — Passed to the inner call to conjugate_gradient.
  • progress = true — Whether to show the progress bar.
  • quasi_sample = false — false to compute the MAP, true to iterate quasi-samples, or an integer to compute a fixed-seed quasi-sample.
  • history_keys — What quantities to include in the returned history. Can be any subset of (:f, :f°, :ϕ, :∇ϕ_logpdf, :χ², :logpdf).

Returns a tuple (f, ϕ, history) where f is the best-fit (or quasi-sample) field, ϕ is the lensing potential, and history contains the history of steps during the run.
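
For example, a sketch using the ds from load_sim above:

f, ϕ, history = MAP_joint(ds; nsteps=30, ϕtol=1e-3, progress=true)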

CMBLensing.MAP_marg — Function
MAP_marg(ds; kwargs...)

Compute the maximum a posteriori (i.e. "MAP") estimate of the marginal posterior, $\mathcal{P}(\phi,\theta\,|\,d)$.

CMBLensing.sample_joint — Function
sample_joint(ds::DataSet; kwargs...)

Sample the joint posterior, $\mathcal{P}(f,\phi,\theta\,|\,d)$.

Keyword arguments:

  • nsamps_per_chain — The number of samples per chain.
  • nchains = 1 — Number of chains in parallel.
  • nsavemaps = 1 — Number of steps in between saving maps into chain.
  • nburnin_always_accept = 0 — Number of steps at the beginning of the chain to always accept HMC steps regardless of integration error.
  • nburnin_fixθ = 0 — Number of steps at the beginning of the chain before starting to sample θ.
  • Nϕ = :qe — Noise to use in the initial approximation to the Hessian. Can give :qe to use the quadratic estimate noise.
  • chains = nothing — nothing to start a new chain; the return value from a previous call to sample_joint to resume those chains; or :resume to resume chains from a file given by filename.
  • θrange — Range and density to grid sample parameters as a NamedTuple, e.g. (Aϕ=range(0.7,1.3,length=20),).
  • θstart — Starting values of parameters as a NamedTuple, e.g. (Aϕ=1.2,), or nothing to randomly sample from θrange
  • ϕstart — Starting ϕ, either a Field object, :quasi_sample, or :best_fit
  • metadata — Does nothing, but is saved into the chain file
  • nhmc = 1 — Number of HMC passes per ϕ Gibbs step.
  • symp_kwargs = fill((N=25, ϵ=0.01), nhmc) — an array of NamedTuple kwargs to pass to symplectic_integrate. E.g. [(N=50,ϵ=0.1),(N=25,ϵ=0.01)] would do 50 large steps then 25 smaller steps per Gibbs pass. If specified, nhmc is ignored.
  • wf_kwargs — Keyword arguments to pass to argmaxf_logpdf in the Wiener Filter Gibbs step.
  • MAP_kwargs — Keyword arguments to pass to MAP_joint when computing the starting point.
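
For example, a sketch using the ds from load_sim above (the θrange values are purely illustrative):

chains = sample_joint(ds;
    nsamps_per_chain = 100,
    nchains          = 2,
    θrange           = (Aϕ=range(0.7,1.3,length=20),),
    ϕstart           = :quasi_sample,
)
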
CMBLensing.argmaxf_logpdf — Function
argmaxf_logpdf(ds::DataSet, Ω::NamedTuple, [d = ds.d]; kwargs...)

Maximize the logpdf for ds over f, given all the other arguments are held fixed at Ω. E.g.: argmaxf_logpdf(ds, (; ϕ, θ=(Aϕ=1.1,))).

Keyword arguments:

  • fstart — starting guess for f for the conjugate gradient solver
  • conjgrad_kwargs — Passed to the inner call to conjugate_gradient
CMBLensing.quadratic_estimate — Function
quadratic_estimate(ds::DataSet, which; wiener_filtered=true)
quadratic_estimate((ds₁::DataSet, ds₂::DataSet), which; wiener_filtered=true)

Compute the quadratic estimate of ϕ given data.

The ds or (ds₁,ds₂) tuple contains the DataSet object(s) which house the data and covariances used in the estimate. Note that only the Fourier-diagonal approximations for the beam, mask, and noise, i.e. B̂, M̂, and Cn̂, are accounted for. To account for the full operators (if they are not actually Fourier-diagonal), you should compute their impact using Monte Carlo.

If a tuple is passed in, the result will come from correlating the data from ds₁ with that from ds₂.

An optional keyword argument AL can be passed in case the QE normalization was already computed, in which case it won't be recomputed during the calculation.

Returns a named tuple of (;ϕqe, AL, Nϕ) where ϕqe is the (possibly Wiener filtered, depending on the wiener_filtered option) quadratic estimate, AL is the normalization (which is already applied to ϕqe and does not need to be applied again), and Nϕ is the analytic N⁰ noise bias (Nϕ==AL holds for unlensed weights; currently Nϕ==AL is always returned, regardless of the weights).
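
For example, a sketch using the polarization ds from load_sim above, where the :EB argument is assumed to select the EB estimator:

(;ϕqe, AL, Nϕ) = quadratic_estimate(ds, :EB)   # Wiener filtered by default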

Lensing operators

CMBLensing.LenseFlow — Type
LenseFlow(ϕ, [n=7])

LenseFlow is the ODE-based lensing algorithm from Millea, Anderes, & Wandelt, 2019. The number of steps in the ODE solver is controlled by n. The action of the operator, as well as its adjoint, inverse, inverse-adjoint, and gradient of any of these w.r.t. ϕ can all be computed. The log-determinant of the operation is zero, independent of ϕ, in the limit of sufficiently large n.
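
For example, a minimal sketch using the f and ϕ returned by load_sim above:

L  = LenseFlow(ϕ)   # lensing operator with the default 7 ODE steps
f̃  = L * f          # lensed field
f′ = L \ f̃          # inverse lensing, recovering f up to ODE accuracy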

CMBLensing.BilinearLens — Type
BilinearLens(ϕ)

BilinearLens is a lensing operator that computes lensing with bilinear interpolation. The action of the operator, as well as its adjoint, inverse, inverse-adjoint, and gradient w.r.t. ϕ can all be computed. The log-determinant of the operation is non-zero and can't be computed.

Internally, BilinearLens forms a sparse matrix with the interpolation weights, which can be applied and adjoint-ed extremely fast (e.g. at least an order of magnitude faster than LenseFlow). Inverse and inverse-adjoint lensing is somewhat slower since it requires an iterative solve, here performed with the preconditioned generalized minimal residual algorithm.

CMBLensing.Taylens — Type
Taylens(ϕ, order)

Taylens is a lensing operator which lenses a map with a nearest-pixel permute step followed by a power series expansion in the residual displacement, to any order. This is the algorithm from Næss & Louis 2013.

CMBLensing.PowerLens — Type
PowerLens(ϕ, order)

PowerLens is a lensing operator which lenses a map with a power series expansion in $\nabla \phi$ to any order.

\[f(x+\nabla \phi) \approx f(x) + (\nabla f)(\nabla \phi) + \frac{1}{2} (\nabla \nabla f) (\nabla \phi)^2 + \ldots \]

The action of the operator and its adjoint can be computed.

CMBLensing.antilensing — Function
antilensing(L::PowerLens)

Create a PowerLens operator that lenses by -ϕ instead.

Configuration options

CMBLensing.FFTW_NUM_THREADS — Constant

The number of threads used by FFTW for CPU FFTs (default is the environment variable FFTW_NUM_THREADS, or, if that is not specified, Sys.CPU_THREADS÷2). This must be set before creating any FlatField objects.

CMBLensing.FFTW_TIMELIMIT — Constant

Time-limit for FFT planning on CPU (default: 5 seconds). This must be set before creating any FlatField objects.

Other

CMBLensing.Jperm — Method

Jperm(ℓ::Int, n::Int) returns the column number in the J matrix, U², where U is the unitary FFT matrix. The J matrix looks like this:

|1 0 ⋯ 0|
|0 ⋯ 0 1|
|⋮ ⋰   ⋮|
|0 1 ⋯ 0|

CMBLensing.LinearInterpolation — Method
itp = LinearInterpolation(xdat::AbstractVector, ydat::AbstractVector; extrapolation_bc=NaN)
itp(x) # interpolate at x

A simple 1D linear interpolation code which is fully Zygote differentiable in either xdat, ydat, or the evaluation point x.
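
For example:

xdat = [0.0, 1.0, 2.0]
ydat = [0.0, 1.0, 4.0]
itp  = LinearInterpolation(xdat, ydat)
itp(1.5)   # == 2.5
itp(3.0)   # NaN, given the default extrapolation_bc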

CMBLensing.QE_leg — Method
QE_leg(C::Diagonal, inds...)

The quadratic estimate and normalization expressions all consist of terms involving products of two "legs", each of which looks like:

C * l[i] * l̂[j] * l̂[k] * ...

where C is some field or diagonal covariance, l[i] is the Fourier wave-vector in direction i (for i=1:2), and l̂[i] = l[i]/‖l‖. For example, there's a leg in the EB estimator that looks like:

(CE * (CẼ+Cn) \ d[:E]) * l[i] * l̂[j] * l̂[k]

The function QE_leg computes quantities like these, e.g. the above would be given by:

QE_leg((CE * (CẼ+Cn) \ d[:E]), [i], j, k)

(note that specifying the Fourier wave-vector l rather than the unit vector l̂ is done by putting that index in brackets).

Additionally, all of these terms are symmetric in their indices, i.e. in (i,j,k) in this case. The QE_leg function is smart about this, and is memoized so that each unique set of indices is only computed once. This leads to a pretty drastic speedup for terms with many indices like those that arise in the EE and EB normalizations, and lets us write code which is both clear and fast without having to think too hard about these symmetries.

CMBLensing.assign_GPU_workers — Method
assign_GPU_workers(;print_info=true, use_master=false, remove_oversubscribed_workers=false)

Assign each Julia worker process a unique GPU using CUDA.device!. Works with workers which may be distributed across different hosts, and each host can have multiple GPUs.

If a unique GPU cannot be assigned, that worker is removed if remove_oversubscribed_workers is true, otherwise an error is thrown.

use_master controls whether the master process counts as having been assigned a GPU (if false, one of the workers may be assigned the same GPU as the master).

CMBLensing.batch — Method
batch(fs::LambertField...)
batch(fs::Vector{<:LambertField})

Concatenate one or more LambertFields along the "batch" dimension (dimension 4 of the underlying array). For the inverse operation, see unbatch.
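
For example, a sketch where f₁ and f₂ are assumed to be two LambertFields of the same size (e.g. from two simulations):

fb = batch(f₁, f₂)   # a single batch-length-2 LambertField
fs = unbatch(fb)     # recovers [f₁, f₂]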

CMBLensing.beamCℓs — Method
beamCℓs(;beamFWHM, ℓmax=8000)

Compute the beam power spectrum, often called $W_\ell$. A map should be multiplied by the square root of this.

CMBLensing.conjugate_gradient — Function
conjugate_gradient(
    M, A, b, x=M\b; 
    nsteps       = length(b), 
    tol          = sqrt(eps()), 
    progress     = false, 
    callback     = nothing, 
    history_keys = nothing, 
    history_mod  = 1
)

Compute x=A\b (where A is positive definite) by conjugate gradient. M is the preconditioner and should satisfy M ≈ A, with M\x fast to compute.

The solver will stop either after nsteps iterations or when dot(r,r)<tol (where r=A*x-b is the residual at that step), whichever occurs first.

Info from the iterations of the solver can be returned if history_keys is specified. history_keys can be one or a tuple of:

  • :i — current iteration number
  • :x — current solution
  • :r — current residual r=A*x-b
  • :res — the norm of r
  • :t — the time elapsed (in seconds) since the start of the algorithm

history_mod can be used to include only every history_mod-th iteration in the returned history.
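
For example, a minimal sketch on an ordinary positive definite matrix:

using LinearAlgebra

A = Diagonal(1.0:10.0)   # positive definite
b = rand(10)
M = A                    # preconditioner (here exact; in general just M ≈ A)
x = conjugate_gradient(M, A, b; nsteps=50, tol=1e-10)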

CMBLensing.cpu — Method
cpu(x)

Recursively move an object to CPU memory. See also gpu.

CMBLensing.fftsyms — Method

Arguments m and n refer to the sizes of an m×n matrix (call it A) that is the output of a real FFT (thus m=n÷2+1).

Returns a tuple of (ireal, iimag, negks) where these are:

  • ireal — m×n mask corresponding to unique real entries of A
  • iimag — m×n mask corresponding to unique imaginary entries of A
  • negks — m×n matrix giving the index into A where the negative of each k-vector is, s.t. A[i,j] = A[negks[i,j]]'
CMBLensing.get_max_lensing_step — Method

Returns αmax such that 𝕀 + ∇∇(ϕ + α * η) has non-zero discriminant (pixel-by-pixel) for all α values in [0, αmax].

This means ϕ + αmax * η is the maximum step in the η direction which can be added to ϕ and still yield a lensing potential in the weak-lensing regime. This is important because it guarantees the potential can be passed to LenseFlow, which cannot handle the strong-lensing / "shell-crossing" regime.

CMBLensing.gmres — Method
gmres(A, b; maxiter, Pl=I)

Solve A \ b with maxiter iterations of the generalized minimal residual algorithm. Pl is a left-preconditioner which should approximate inv(A).

Note: the implementation is memory inefficient and uses O(n * maxiter) memory, where (n,n) = size(A) (this may not be a big deal for small maxiter), although it is totally generic and works with CPU or GPU and dense or sparse matrices, unlike IterativeSolvers' gmres.

CMBLensing.gpu — Function
gpu(x)

Recursively move an object to GPU memory. Note that, unlike cu(x), this does not change the eltype of any underlying arrays. See also cpu.

CMBLensing.gradhess — Method
gradhess(f)

Compute the gradient $g^i = \nabla^i f$, and the hessian, $H_j^{\,i} = \nabla_j \nabla^i f$.

CMBLensing.grid_and_sample — Method
grid_and_sample(lnP::Callable, range::NamedTuple; progress=false, nsamples=1)

Interpolate the log pdf lnP with support on range, and return the integrated log pdf as well as nsamples samples (drawn via inverse transform sampling).

lnP should either accept a NamedTuple argument and range should be a NamedTuple mapping those same names to range objects specifying where to evaluate lnP, e.g.:

grid_and_sample(nt->-(nt.x^2+nt.y^2)/2, (x=range(-3,3,length=100),y=range(-3,3,length=100)))

or lnP should accept a single scalar argument and range should be directly the range for this variable:

grid_and_sample(x->-x^2/2, range(-3,3,length=100))

The return value is (lnP, samples, Px) where lnP is an interpolated/smoothed log PDF which can be evaluated anywhere within the original range, Px are sampled points of the original PDF, and samples is a NamedTuple giving the Monte-Carlo samples of each of the parameters.

(Note: only 1D sampling is currently implemented, but 2D like in the example above is planned)

CMBLensing.kde — Method
kde(samples::AbstractVector; [boundary=(min,max), normalize="integral" or "max"])
kde(samples::AbstractMatrix; [boundary=[(min1,max1),(min2,max2)], normalize="integral" or "max", smooth_scale_2D])

Return a Kernel Density Estimate for a set of 1D or 2D samples. The return object is a function which can be evaluated anywhere to compute the PDF. If provided, boundary specifies a hard upper/lower bound for the 1 or 2 parameters, normalize specifies whether to normalize the PDF to unit integral or unit maximum, and smooth_scale_2D specifies how much smoothing to do for the 2D case.

Based on Python GetDist, which must be installed.
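
For example, a sketch for 1D Gaussian samples:

samples = randn(10_000)
pdf = kde(samples; normalize="integral")
pdf(0)   # ≈ 0.4, the standard normal density at 0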

CMBLensing.load_camb_Cℓs — Method
load_camb_Cℓs(;path_prefix, custom_tensor_params=nothing, 
    unlensed_scalar_postfix, unlensed_tensor_postfix, lensed_scalar_postfix, lenspotential_postfix)

Load some Cℓs from CAMB files.

path_prefix specifies the prefix for the files, which are then expected to have the normal CAMB postfixes: scalCls.dat, tensCls.dat, lensedCls.dat, lenspotentialCls.dat, unless otherwise specified via the other keyword arguments. custom_tensor_params can be used to call CAMB directly for the unlensed tensors, rather than reading them from a file (since often this file doesn't get saved). The value should be a Dict/NamedTuple which will be passed to a call to camb, e.g. custom_tensor_params=(r=0,) for zero tensors.

CMBLensing.load_chains — Method
load_chains(filename; burnin=0, burnin_chunks=0, thin=1, join=false, unbatch=true)

Load a single chain or multiple parallel chains which were written to a file by sample_joint.

Keyword arguments:

  • burnin — Remove this many samples from the start of each chain, or if negative, keep only this many samples at the end of each chain.
  • burnin_chunks — Same as burnin, but in terms of chain "chunks" stored in the chain file, rather than in terms of samples.
  • thin — If thin is an integer, thin the chain by this factor. If thin == :hasmaps, return only samples which have maps saved. If thin is a Function, filter the chain by this function (e.g. thin=haskey(:g) on Julia 1.5+)
  • unbatch — If true, run unbatch on the chains if they are batched.
  • join — If true, concatenate all the chains together.
  • skip_missing_chunks — Skip missing chunks in the chain instead of terminating the chain there.

The object returned by this function is a Chain or Chains object, which simply wraps an Array of Dicts or an Array of Array of Dicts, respectively (each sample is a Dict). The wrapper object has some extra indexing properties for convenience:

  • It can be indexed as if it were a single multidimensional object, e.g. chains[1,:,:accept] would return the :accept key of all samples in the first chain.
  • Leading colons can be dropped, i.e. chains[:,:,:accept] is the same as chains[:accept].
  • If some samples are missing a particular key, missing is returned for those samples instead of an error.
  • The recursion goes arbitrarily deep into the objects it finds. E.g., since sampled parameters are stored in a NamedTuple like (Aϕ=1.3,) in the θ key of each sample Dict, you can do chain[:θ,:Aϕ] to get all samples as a vector.
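
For example, a sketch where "chains.jld2" is an illustrative filename written by sample_joint:

chains = load_chains("chains.jld2"; burnin=100)
chains[:θ, :Aϕ]         # all Aϕ samples, across all chains
chains[1, :, :accept]   # the :accept key of every sample in the first chain
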
CMBLensing.mean_std_and_errors — Method
mean_std_and_errors(samples; N_bootstrap=10000)

Get the mean and standard deviation of a set of correlated samples from a chain where the error on the mean and standard deviation is estimated with bootstrap resampling using the calculated "effective sample size" of the chain.

CMBLensing.mix — Method
mix(ds::DataSet; f, ϕ, [θ])

Compute the mixed (f°, ϕ°) from the unlensed field f and lensing potential ϕ, given the definition of the mixing matrices in ds evaluated at parameters θ (or at fiducial values if no θ provided).

CMBLensing.noiseCℓs — Method
noiseCℓs(;μKarcminT, beamFWHM=0, ℓmax=8000, ℓknee=100, αknee=3)

Compute the (:TT,:EE,:BB,:TE) noise power spectra given white noise + 1/f. Polarization noise is scaled by $\sqrt{2}$ relative to μKarcminT. beamFWHM is in arcmin.
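
For example, the noise spectra for a 3 μK-arcmin experiment with a 1 arcmin beam:

Cℓn = noiseCℓs(μKarcminT=3, beamFWHM=1, ℓmax=6000)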

CMBLensing.paren_errors — Method
paren_errors(μ, σ; N_in_paren=2)

Get a string representation of μ ± σ in "parenthesis" format, e.g. 1.234 ± 0.012 becomes 1.234(12).

CMBLensing.pixwin — Method
pixwin(θpix, ℓ)

Returns the pixel window function for square flat-sky pixels of width θpix (in arcmin) evaluated at some ℓs. This is the scaling of the k-modes; the scaling of the power spectrum will be pixwin^2.

CMBLensing.project — Method
project(healpix_field::HealpixField => cart_proj::CartesianProj; [method = :bilinear])
project(cart_field::FlatField => healpix_proj::ProjHealpix; [method=:bilinear])

Project a healpix_field to a cartesian projection specified by cart_proj, or project a cart_field back up to sphere on the Healpix pixelization specified by healpix_proj. E.g.

# sphere to cartesian
healpix_field = HealpixMap(rand(12*2048^2))
cart_proj = ProjLambert(Ny=128, Nx=128, θpix=3, T=Float32, rotator=(0,30,0))
f = project(healpix_field => cart_proj)

# and back to sphere
project(f => ProjHealpix(512))

The (Ny, Nx, θpix, rotator) parameters of cart_proj control the size and location of the projected region.

The use of => is to help remember in which order the arguments are specified.

For either projection direction, if the field is a QU or IQU field, polarization angles are rotated to be aligned with the local coordinates (sometimes called "polarization flattening").

The projection interpolates the original map at the positions of the centers of the projected map pixels. method controls how this interpolation is done, and can be one of:

  • :bilinear — Bilinear interpolation (default)
  • :fft — FFT-based interpolation, which uses a non-uniform FFT to evaluate the discrete Fourier series of the field at arbitrary new positions. This is currently implemented only for cartesian to Healpix projection. To make this mode available, you must load the NFFT package first. For GPU fields, you must also load CuNFFT. Projection with method=:fft is both GPU compatible and automatically differentiable.

A pre-computation step can be cached by first doing,

projector = CMBLensing.Projector(healpix_map.proj => cart_proj, method=:fft)
f = project(projector, healpix_map => cart_proj) 

which makes subsequent project calls significantly faster. Note the method argument is specified in the precomputation step.

CMBLensing.rfft2vec — Method

Convert a matrix A which is the output of a real FFT to a real vector, keeping only unique real/imaginary entries of A.

CMBLensing.rfft_degeneracy_fac — Method
rfft_degeneracy_fac(n)

Returns an Array which is 2 if the complex conjugate of the corresponding entry in the half-plane real FFT appears in the full-plane FFT, and is 1 otherwise. n is the length of the first dimension of the full-plane FFT. The following identity holds:

sum(abs2.(fft(x))) == sum(rfft_degeneracy_fac(size(x,1)) .* abs2.(rfft(x)))
CMBLensing.sample_f — Function
sample_f([rng::AbstractRNG], ds::DataSet, Ω::NamedTuple, [d = ds.d]; kwargs...)

Draw a posterior sample of f from the logpdf for ds, given all the other arguments are held fixed at Ω. E.g.: sample_f(ds, (; ϕ, θ=(Aϕ=1.1,))).

Keyword arguments:

  • fstart — starting guess for f for the conjugate gradient solver
  • conjgrad_kwargs — Passed to the inner call to conjugate_gradient
CMBLensing.symplectic_integrate — Method
symplectic_integrate(x₀, p₀, Λ, U, δUδx, N=50, ϵ=0.1, progress=false)

Do a symplectic integration of the potential energy U (with gradient δUδx) starting from point x₀ with momentum p₀ and mass matrix Λ. The number of steps is N and the step size ϵ.

Returns ΔH, xᵢ, pᵢ corresponding to the change in the Hamiltonian and the final position and momentum. If history_keys is specified, a history of the requested variables throughout each step is also returned.
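
For example, a minimal sketch for a 1D harmonic oscillator, assuming scalar x₀, p₀, and Λ are accepted:

U(x)    = x^2 / 2   # potential energy
δUδx(x) = x         # its gradient
ΔH, xᵢ, pᵢ = symplectic_integrate(1.0, 0.0, 1.0, U, δUδx)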

CMBLensing.ud_grade — Method
ud_grade(f::Field, θnew, mode=:map, deconv_pixwin=true, anti_aliasing=true)

Up- or down-grades field f to new resolution θnew (only in integer steps). Two modes are available specified by the mode argument:

  • :map — Up/downgrade by replicating/averaging pixels in map-space
  • :fourier — Up/downgrade by extending/truncating the Fourier grid

For :map mode, two additional options are possible. If deconv_pixwin is true, deconvolves the pixel window function from the downgraded map so the spectrum of the new and old maps are the same. If anti_aliasing is true, filters out frequencies above Nyquist prior to down-sampling.
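
For example, a sketch using the θpix=2 field f from load_sim above:

f₄ = ud_grade(f, 4)   # downgrade to 4 arcmin pixels with the default :map mode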

CMBLensing.unbatch — Method
unbatch(chains::Chains)

Expand each chain in this Chains object by unbatching it.

CMBLensing.unbatch — Method
unbatch(chain::Chain)

Convert a chain of batch-length-D fields to D chains of unbatched fields.

CMBLensing.unbatch — Method
unbatch(f::LambertField)

Return an Array of LambertFields corresponding to each batch index. For the inverse operation, see batch.

CMBLensing.unfold — Method

Convert an M×N matrix (with M=N÷2+1) which is the output of a real FFT to a full N×N one via symmetries.

CMBLensing.unmix — Method
unmix(f°, ϕ°,    ds::DataSet)
unmix(f°, ϕ°, θ, ds::DataSet)

Compute the unmixed/unlensed (f, ϕ) from the mixed field f° and mixed lensing potential ϕ°, given the definition of the mixing matrices in ds evaluated at parameters θ (or at fiducial values if no θ provided).

CMBLensing.vec2rfft — Method

Convert a vector produced by rfft2vec back into a complex matrix.

LinearAlgebra.logdet — Method
logdet(L::FieldOp, θ)

If L depends on θ, evaluates logdet(L(θ)) offset by its fiducial value at L(). Otherwise, returns 0.

CMBLensing.BatchedReal — Type
BatchedReal(::Vector{<:Real}) <: Real

Holds a vector of real numbers and broadcasts algebraic operations over them, as well as broadcasting along the batch dimension of Fields, but is itself a Real.

CMBLensing.ParamDependentOp — Type
ParamDependentOp(recompute_function::Function)

Creates an operator which depends on some parameters $\theta$ and can be evaluated at various values of these parameters.

recompute_function should be a function which accepts keyword arguments for $\theta$ and returns the operator. Each keyword must have a default value; the operator will act as if evaluated at these defaults unless it is explicitly evaluated at other parameters.

Example:

Cϕ₀ = Diagonal(...) # some fixed Diagonal operator
Cϕ = ParamDependentOp((;Aϕ=1)->Aϕ*Cϕ₀) # create ParamDependentOp

Cϕ(Aϕ=1.1) * ϕ   # Cϕ(Aϕ=1.1) is equal to 1.1*Cϕ₀
Cϕ * ϕ           # Cϕ alone will act like Cϕ(Aϕ=1) because that was the default above

Note: if you are doing parallel work, global variables referred to in the recompute_function need to be distributed to all workers. A more robust solution is to avoid globals entirely and instead ensure all variables are "closed" over (and hence will automatically get distributed). This will happen by default if defining the ParamDependentOp inside any function, or can be forced at the global scope by wrapping everything in a let-block, e.g.:

Cϕ = let Cϕ₀=Cϕ₀
    ParamDependentOp((;Aϕ=1)->Aϕ*Cϕ₀)
end

After executing the code above, Cϕ is now ready to be (auto-)shipped to any workers and will work regardless of what global variables are defined on these workers.

CMBLensing.ProjEquiRect — Method
ProjEquiRect(; Ny::Int, Nx::Int, θspan::Tuple, φspan::Tuple,         T=Float32, storage=Array)
ProjEquiRect(; θ::Vector, φ::Vector, θedges::Vector, φedges::Vector, T=Float32, storage=Array)

Construct an EquiRect projection object. The projection can either be specified by:

  • The number of pixels Ny and Nx (corresponding to the θ and φ angular directions, respectively) and the span in radians of the field in these directions, θspan and φspan. The order in which the span tuples are given is irrelevant, either order will refer to the same field. Note, the spans correspond to the field size between outer pixel edges, not from pixel centers. If one wishes to call Cℓ_to_Cov with this projection, φspan must be an integer multiple of 2π, but other functionality will be available if this is not the case.
  • A manual list of pixels centers and pixel edges, θ, φ, θedges, φedges.
CMBLensing.@! — Macro

Rewrites @! x = f(args...) to x = f!(x,args...)

Special cases for * and \ forward to mul! and ldiv!, respectively.

CMBLensing.@auto_adjoint — Macro
@auto_adjoint foo(args...; kwargs...) = body

is equivalent to

_foo(args...; kwargs...) = body
foo(args...; kwargs...) = _foo(args...; kwargs...)
@adjoint foo(args...; kwargs...) = Zygote.pullback(_foo, args...; kwargs...)

That is, it defines the function as well as a Zygote adjoint which takes a gradient explicitly through the body of the function, rather than relying on rules which may be defined for foo. Mainly useful in the case that foo is a common function with existing rules, but which you do not want to be used.

CMBLensing.@cpu! — Macro
@cpu! x y

Equivalent to x = cpu(x), y = cpu(y), etc... for any number of listed variables. See cpu.

CMBLensing.@dict — Macro

Pack some variables in a dictionary

> x = 3
> y = 4
> @dict x y z=>5
Dict(:x=>3,:y=>4,:z=>5)
CMBLensing.@distributed — Macro
CMBLensing.@distributed ds1 ds2 ...

Assuming ds1, ds2, etc... are DataSet objects which are defined in the Main module on all workers, this makes it so that whenever these objects are shipped to a worker as part of a remote call, the data is not actually sent, but rather the worker just refers to their existing local copy. Typical usage:

@everywhere ds = load_sim(seed=1, ...)
CMBLensing.@distributed ds
pmap(1:n) do i
    # do something with ds
end

Note that hash(ds) must yield the same value on all processors, i.e. the macro checks that it really is the same object on all processors. Sometimes setting the same random seed is not enough to ensure this, as there may be tiny numerical differences in the simulated data. In this case you can try:

@everywhere ds.d = $(ds.d)

after loading the dataset to explicitly set the data based on the simulation on the master process.

Additionally, if the dataset object has fields which are custom types, these must have an appropriate Base.hash defined.

CMBLensing.@ismain — Macro
@ismain()

Return true if the current file is being run as a script.

CMBLensing.@ondemand — Macro
@ondemand(Package.function)(args...; kwargs...)
@ondemand(Package.Submodule.function)(args...; kwargs...)

Just like calling Package.function or Package.Submodule.function, but Package will be loaded on-demand if it is not already loaded. The call is no longer inferrable.

CMBLensing.@subst — Macro
@subst sum(x*$(y+1) for x=1:2)

becomes

let tmp=(y+1)
    sum(x*tmp for x=1:2)
end

to aid in writing clear/succinct code that doesn't recompute things unnecessarily.

CMBLensing.@⌛ — Macro
@⌛ [label] code ...
@⌛ [label] function_definition() = ....

Label a section of code to be timed. If a label string is not provided, the first form uses the code itself as a label, the second uses the function name, and it is the body of the function which is timed.

To run the timer and print output, returning the result of the calculation, use

@show⌛ run_code()

Timing uses TimerOutputs.get_defaulttimer().