API
Simulation
CMBLensing.load_sim — Function

    load_sim(;kwargs...)

The starting point for many typical sessions. Creates a `BaseDataSet` object with some simulated data, returning the `DataSet` and simulated truths, which can then be passed to other maximization / sampling functions. E.g.:
    @unpack f,ϕ,ds = load_sim(;
        θpix  = 2,
        Nside = 128,
        pol   = :P,
        T     = Float32
    )
Keyword arguments:

- `θpix` — Angular resolution, in arcmin.
- `Nside` — Number of pixels in the map as an `(Ny,Nx)` tuple, or a single number for square maps.
- `pol` — One of `:I`, `:P`, or `:IP` to select intensity, polarization, or both.
- `T = Float32` — Precision, either `Float32` or `Float64`.
- `storage = Array` — Set to `CuArray` to use GPU.
- `Nbatch = 1` — Number of batches of data in this dataset.
- `μKarcminT = 3` — Noise level in temperature in μK-arcmin.
- `ℓknee = 100` — 1/f noise knee.
- `αknee = 3` — 1/f noise slope.
- `beamFWHM = 0` — Beam full-width-half-max in arcmin.
- `pixel_mask_kwargs = (;)` — NamedTuple of keyword arguments to pass to `make_mask` to create the pixel mask.
- `bandpass_mask = LowPass(3000)` — Operator which performs Fourier-space masking.
- `fiducial_θ = (;)` — NamedTuple of keyword arguments passed to `camb()` for the fiducial model.
- `seed = nothing` — Specific seed for the simulation.
- `L = LenseFlow` — Lensing operator.

Returns a named tuple of `(;f, f̃, ϕ, n, ds, Cℓ, proj)`.
CMBLensing.resimulate — Function

    resimulate(ds::DataSet; [f, ϕ, n, f̃, rng, seed])

Make a new `DataSet` with the data replaced by a simulation. Keyword argument fields will be used instead of new simulations, if they are provided.

Returns a named tuple of `(;ds, f, ϕ, n, f̃)`.
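For example, a minimal usage sketch, assuming `ds` and `ϕ` come from a prior `load_sim` call (the resolution and other settings here are illustrative only):

```julia
using CMBLensing

# simulate a dataset, then replace its data with a new realization
# which re-uses the same ϕ as the original simulation
@unpack f, ϕ, ds = load_sim(θpix=2, Nside=128, pol=:P)
@unpack ds, f, f̃ = resimulate(ds, ϕ=ϕ)
```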
CMBLensing.resimulate! — Function

    resimulate!(ds::DataSet; [f, ϕ, n, f̃, rng, seed])

Replace the data in this `DataSet` in-place with a simulation. Keyword argument fields will be used instead of new simulations, if they are provided.

Returns a named tuple of `(;ds, f, ϕ, n, f̃)`.
Lensing estimation
CMBLensing.MAP_joint — Function

    MAP_joint(ds::DataSet; kwargs...)

Compute the maximum a posteriori (i.e. "MAP") estimate of the joint posterior, $\mathcal{P}(f,\phi,\theta\,|\,d)$, or compute a quasi-sample.
Keyword arguments:
- `nsteps` — The maximum number of iterations for the maximizer.
- `ϕstart = 0` — Starting point of the maximizer.
- `ϕtol = nothing` — If given, stop when `ϕ` updates reach this tolerance. `ϕtol` is roughly the relative per-pixel standard deviation between changes to `ϕ` and draws from the `ϕ` prior. Values in the range $10^{-2}-10^{-4}$ are reasonable.
- `lbfgs_rank = 5` — The maximum rank of the LBFGS approximation to the Hessian.
- `conjgrad_kwargs = (;)` — Passed to the inner call to `conjugate_gradient`.
- `progress = true` — Whether to show the progress bar.
- `Nϕ = :qe` — Noise to use in the initial approximation to the Hessian. Can give `:qe` to use the quadratic estimate noise.
- `quasi_sample = false` — `false` to compute the MAP, `true` to iterate quasi-samples, or an integer to compute a fixed-seed quasi-sample.
- `history_keys` — What quantities to include in the returned `history`. Can be any subset of `(:f, :f°, :ϕ, :∇ϕ_lnP, :χ², :lnP)`.

Returns a tuple `(f, ϕ, history)` where `f` is the best-fit (or quasi-sample) field, `ϕ` is the lensing potential, and `history` contains the history of steps during the run.
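A minimal sketch of computing the joint MAP from simulated data (the `nsteps` value here is purely illustrative):

```julia
using CMBLensing

@unpack ds = load_sim(θpix=2, Nside=128, pol=:P)

# run the maximizer; fMAP is the best-fit field, ϕMAP the best-fit potential
fMAP, ϕMAP, history = MAP_joint(ds, nsteps=20, progress=true)
```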
CMBLensing.MAP_marg — Function

    MAP_marg(ds; kwargs...)

Compute the maximum a posteriori (i.e. "MAP") estimate of the marginal posterior, $\mathcal{P}(\phi,\theta\,|\,d)$.
CMBLensing.sample_joint — Function

    sample_joint(ds::DataSet; kwargs...)

Sample the joint posterior, $\mathcal{P}(f,\phi,\theta\,|\,d)$.
Keyword arguments:
- `nsamps_per_chain` — The number of samples per chain.
- `nchains = 1` — Number of chains in parallel.
- `nchunk = 1` — Number of steps between parallel chain communication.
- `nsavemaps = 1` — Number of steps in between saving maps into chain.
- `nburnin_always_accept = 0` — Number of steps at the beginning of the chain to always accept HMC steps regardless of integration error.
- `nburnin_fixθ = 0` — Number of steps at the beginning of the chain before starting to sample `θ`.
- `Nϕ = :qe` — Noise to use in the initial approximation to the Hessian. Can give `:qe` to use the quadratic estimate noise.
- `chains = nothing` — `nothing` to start a new chain; the return value from a previous call to `sample_joint` to resume those chains; `:resume` to resume chains from a file given by `filename`.
- `θrange` — Range and density to grid sample parameters as a NamedTuple, e.g. `(Aϕ=range(0.7,1.3,length=20),)`.
- `θstart` — Starting values of parameters as a NamedTuple, e.g. `(Aϕ=1.2,)`, or `nothing` to randomly sample from `θrange`.
- `ϕstart` — Starting `ϕ`, either a `Field` object, `:quasi_sample`, or `:best_fit`.
- `metadata` — Does nothing, but is saved into the chain file.
- `nhmc = 1` — Number of HMC passes per `ϕ` Gibbs step.
- `symp_kwargs = fill((N=25, ϵ=0.01), nhmc)` — An array of NamedTuple kwargs to pass to `symplectic_integrate`. E.g. `[(N=50,ϵ=0.1),(N=25,ϵ=0.01)]` would do 50 large steps then 25 smaller steps per each Gibbs pass. If specified, `nhmc` is ignored.
- `wf_kwargs` — Keyword arguments to pass to `argmaxf_lnP` in the Wiener Filter Gibbs step.
- `MAP_kwargs` — Keyword arguments to pass to `MAP_joint` when computing the starting point.
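A minimal sketch of running a single chain (the chain length and the `Aϕ` grid are illustrative choices, not recommendations):

```julia
using CMBLensing

@unpack ds = load_sim(θpix=2, Nside=128, pol=:P)

# sample f, ϕ, and the lensing amplitude Aϕ jointly
chains = sample_joint(
    ds;
    nsamps_per_chain = 100,
    θrange = (Aϕ = range(0.7, 1.3, length=20),),
)
```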
CMBLensing.argmaxf_lnP — Function

    argmaxf_lnP(ϕ, ds::DataSet; kwargs...)
    argmaxf_lnP(ϕ, θ::NamedTuple, ds::DataSet; kwargs...)
    argmaxf_lnP(Lϕ, ds::DataSet; kwargs...)

Computes either the Wiener filter at fixed $\phi$, or a sample from this slice along the posterior.

Keyword arguments:

- `which` — `:wf`, `:sample`, or `:fluctuation` to compute 1) the Wiener filter, i.e. the best-fit of $\mathcal{P}(f\,|\,\phi,d)$, 2) a sample from $\mathcal{P}(f\,|\,\phi,d)$, or 3) a sample minus the Wiener filter, i.e. the fluctuation on top of the mean.
- `fstart` — Starting guess for `f` for the conjugate gradient solver.
- `conjgrad_kwargs` — Passed to the inner call to `conjugate_gradient`.
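For example, a minimal sketch of computing the Wiener filter at the true simulated `ϕ` (the conjugate gradient tolerance is an illustrative choice):

```julia
using CMBLensing

@unpack ϕ, ds = load_sim(θpix=2, Nside=128, pol=:P)

# best-fit f given ϕ and the data in ds
fwf = argmaxf_lnP(ϕ, ds; which=:wf, conjgrad_kwargs=(tol=1e-2,))
```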
CMBLensing.quadratic_estimate — Function

    quadratic_estimate(ds::DataSet, which; wiener_filtered=true)
    quadratic_estimate((ds₁::DataSet, ds₂::DataSet), which; wiener_filtered=true)

Compute the quadratic estimate of `ϕ` given data.

The `ds` or `(ds₁,ds₂)` tuple contain the `DataSet` object(s) which house the data and covariances used in the estimate. Note that only the Fourier-diagonal approximations for the beam, mask, and noise, i.e. `B̂`, `M̂`, and `Cn̂`, are accounted for. To account for full operators (if they are not actually Fourier-diagonal), you should compute the impact using Monte Carlo.

If a tuple is passed in, the result will come from correlating the data from `ds₁` with that from `ds₂`.

An optional keyword argument `AL` can be passed in case the QE normalization was already computed, in which case it won't be recomputed during the calculation.

Returns a named tuple of `(;ϕqe, AL, Nϕ)`, where `ϕqe` is the (possibly Wiener filtered, depending on the `wiener_filtered` option) quadratic estimate, `AL` is the normalization (which is already applied to `ϕqe`, so it does not need to be applied again), and `Nϕ` is the analytic N⁰ noise bias (`Nϕ == AL` if using unlensed weights; currently only `Nϕ == AL` is ever returned, no matter the weights).
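A minimal sketch, assuming `:EB` is among the supported `which` values (the EB estimator is referenced in the `QE_leg` docs below):

```julia
using CMBLensing

@unpack ds = load_sim(θpix=2, Nside=256, pol=:P)

# Wiener-filtered EB quadratic estimate of ϕ, with its normalization and noise
@unpack ϕqe, AL, Nϕ = quadratic_estimate(ds, :EB, wiener_filtered=true)
```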
Field constructors
CMBLensing.FlatMap — Type

    FlatMap(Ix::AbstractArray; [θpix=1])
    FlatMap(Ix::AbstractArray, proj::FieldMetadata)

Construct a `FlatMap` object from an `AbstractArray`. The array dimensions should be `(Ny,Nx[,Nbatch])`, where `Ny` and `Nx` are the number of pixels in the map in the y/x direction, and `Nbatch` is an optional batch dimension. `θpix` is the angular resolution in arcmin (default=1). A second positional argument `proj` can be used instead of keyword arguments to specify the projection/metadata (currently the only builtin projection is `ProjLambert`, which is the default if keyword arguments are used).
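For example, a minimal construction sketch:

```julia
using CMBLensing

# a 128×128 temperature map with 2 arcmin pixels (default ProjLambert projection)
Ix = randn(Float32, 128, 128)
f = FlatMap(Ix, θpix=2)
```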
CMBLensing.FlatFourier — Type

    FlatFourier(Il::AbstractArray; Ny, [θpix=1])
    FlatFourier(Il::AbstractArray, proj::FieldMetadata)

Construct a `FlatFourier` object from an `AbstractArray`. The array dimensions should be `(Ny÷2+1,Nx[,Nbatch])`, where `Ny` and `Nx` are the number of pixels in the map in the y/x direction, and `Nbatch` is an optional batch dimension. `Ny` must be given as a keyword argument. `θpix` is the angular resolution in arcmin (default=1). A second positional argument `proj` can be used instead of keyword arguments to specify the projection/metadata (currently the only builtin projection is `ProjLambert`, which is the default if keyword arguments are used).
CMBLensing.FlatQUMap — Type

    FlatQUMap(Qx::AbstractArray, Ux::AbstractArray; [θpix=1])
    FlatQUMap(Qx::AbstractArray, Ux::AbstractArray, proj::FieldMetadata)

Construct a `FlatQUMap` object from some `AbstractArray`s. The array dimensions should be `(Ny,Nx[,Nbatch])`, where `Ny` and `Nx` are the number of pixels in the map in the y/x direction, and `Nbatch` is an optional batch dimension. `θpix` is the angular resolution in arcmin (default=1). A second positional argument `proj` can be used instead of keyword arguments to specify the projection/metadata (currently the only builtin projection is `ProjLambert`, which is the default if keyword arguments are used).

    FlatQUMap(Q::FlatMap, U::FlatMap)

Construct a `FlatQUMap` object from other `FlatMap` objects. The projection/metadata and batch size are required to be the same for the arguments.
CMBLensing.FlatQUFourier — Type

    FlatQUFourier(Ql::AbstractArray, Ul::AbstractArray; Ny, [θpix=1])
    FlatQUFourier(Ql::AbstractArray, Ul::AbstractArray, proj::FieldMetadata)

Construct a `FlatQUFourier` object from some `AbstractArray`s. The array dimensions should be `(Ny÷2+1,Nx[,Nbatch])`, where `Ny` and `Nx` are the number of pixels in the map in the y/x direction, and `Nbatch` is an optional batch dimension. `Ny` must be given as a keyword argument. `θpix` is the angular resolution in arcmin (default=1). A second positional argument `proj` can be used instead of keyword arguments to specify the projection/metadata (currently the only builtin projection is `ProjLambert`, which is the default if keyword arguments are used).

    FlatQUFourier(Q::FlatFourier, U::FlatFourier)

Construct a `FlatQUFourier` object from other `FlatFourier` objects. The projection/metadata and batch size are required to be the same for the arguments.
CMBLensing.FlatEBMap — Type

    FlatEBMap(Ex::AbstractArray, Bx::AbstractArray; [θpix=1])
    FlatEBMap(Ex::AbstractArray, Bx::AbstractArray, proj::FieldMetadata)

Construct a `FlatEBMap` object from some `AbstractArray`s. The array dimensions should be `(Ny,Nx[,Nbatch])`, where `Ny` and `Nx` are the number of pixels in the map in the y/x direction, and `Nbatch` is an optional batch dimension. `θpix` is the angular resolution in arcmin (default=1). A second positional argument `proj` can be used instead of keyword arguments to specify the projection/metadata (currently the only builtin projection is `ProjLambert`, which is the default if keyword arguments are used).

    FlatEBMap(E::FlatMap, B::FlatMap)

Construct a `FlatEBMap` object from other `FlatMap` objects. The projection/metadata and batch size are required to be the same for the arguments.
CMBLensing.FlatEBFourier — Type

    FlatEBFourier(El::AbstractArray, Bl::AbstractArray; Ny, [θpix=1])
    FlatEBFourier(El::AbstractArray, Bl::AbstractArray, proj::FieldMetadata)

Construct a `FlatEBFourier` object from some `AbstractArray`s. The array dimensions should be `(Ny÷2+1,Nx[,Nbatch])`, where `Ny` and `Nx` are the number of pixels in the map in the y/x direction, and `Nbatch` is an optional batch dimension. `Ny` must be given as a keyword argument. `θpix` is the angular resolution in arcmin (default=1). A second positional argument `proj` can be used instead of keyword arguments to specify the projection/metadata (currently the only builtin projection is `ProjLambert`, which is the default if keyword arguments are used).

    FlatEBFourier(E::FlatFourier, B::FlatFourier)

Construct a `FlatEBFourier` object from other `FlatFourier` objects. The projection/metadata and batch size are required to be the same for the arguments.
CMBLensing.FlatIQUMap — Type

    FlatIQUMap(Ix::AbstractArray, Qx::AbstractArray, Ux::AbstractArray; [θpix=1])
    FlatIQUMap(Ix::AbstractArray, Qx::AbstractArray, Ux::AbstractArray, proj::FieldMetadata)

Construct a `FlatIQUMap` object from some `AbstractArray`s. The array dimensions should be `(Ny,Nx[,Nbatch])`, where `Ny` and `Nx` are the number of pixels in the map in the y/x direction, and `Nbatch` is an optional batch dimension. `θpix` is the angular resolution in arcmin (default=1). A second positional argument `proj` can be used instead of keyword arguments to specify the projection/metadata (currently the only builtin projection is `ProjLambert`, which is the default if keyword arguments are used).

    FlatIQUMap(I::FlatMap, Q::FlatMap, U::FlatMap)

Construct a `FlatIQUMap` object from other `FlatMap` objects. The projection/metadata and batch size are required to be the same for the arguments.
CMBLensing.FlatIQUFourier — Type

    FlatIQUFourier(Il::AbstractArray, Ql::AbstractArray, Ul::AbstractArray; Ny, [θpix=1])
    FlatIQUFourier(Il::AbstractArray, Ql::AbstractArray, Ul::AbstractArray, proj::FieldMetadata)

Construct a `FlatIQUFourier` object from some `AbstractArray`s. The array dimensions should be `(Ny÷2+1,Nx[,Nbatch])`, where `Ny` and `Nx` are the number of pixels in the map in the y/x direction, and `Nbatch` is an optional batch dimension. `Ny` must be given as a keyword argument. `θpix` is the angular resolution in arcmin (default=1). A second positional argument `proj` can be used instead of keyword arguments to specify the projection/metadata (currently the only builtin projection is `ProjLambert`, which is the default if keyword arguments are used).

    FlatIQUFourier(I::FlatFourier, Q::FlatFourier, U::FlatFourier)

Construct a `FlatIQUFourier` object from other `FlatFourier` objects. The projection/metadata and batch size are required to be the same for the arguments.
CMBLensing.FlatIEBMap — Type

    FlatIEBMap(Ix::AbstractArray, Ex::AbstractArray, Bx::AbstractArray; [θpix=1])
    FlatIEBMap(Ix::AbstractArray, Ex::AbstractArray, Bx::AbstractArray, proj::FieldMetadata)

Construct a `FlatIEBMap` object from some `AbstractArray`s. The array dimensions should be `(Ny,Nx[,Nbatch])`, where `Ny` and `Nx` are the number of pixels in the map in the y/x direction, and `Nbatch` is an optional batch dimension. `θpix` is the angular resolution in arcmin (default=1). A second positional argument `proj` can be used instead of keyword arguments to specify the projection/metadata (currently the only builtin projection is `ProjLambert`, which is the default if keyword arguments are used).

    FlatIEBMap(I::FlatMap, E::FlatMap, B::FlatMap)

Construct a `FlatIEBMap` object from other `FlatMap` objects. The projection/metadata and batch size are required to be the same for the arguments.
CMBLensing.FlatIEBFourier — Type

    FlatIEBFourier(Il::AbstractArray, El::AbstractArray, Bl::AbstractArray; Ny, [θpix=1])
    FlatIEBFourier(Il::AbstractArray, El::AbstractArray, Bl::AbstractArray, proj::FieldMetadata)

Construct a `FlatIEBFourier` object from some `AbstractArray`s. The array dimensions should be `(Ny÷2+1,Nx[,Nbatch])`, where `Ny` and `Nx` are the number of pixels in the map in the y/x direction, and `Nbatch` is an optional batch dimension. `Ny` must be given as a keyword argument. `θpix` is the angular resolution in arcmin (default=1). A second positional argument `proj` can be used instead of keyword arguments to specify the projection/metadata (currently the only builtin projection is `ProjLambert`, which is the default if keyword arguments are used).

    FlatIEBFourier(I::FlatFourier, E::FlatFourier, B::FlatFourier)

Construct a `FlatIEBFourier` object from other `FlatFourier` objects. The projection/metadata and batch size are required to be the same for the arguments.
Lensing operators
CMBLensing.LenseFlow — Type

    LenseFlow(ϕ, [n=7])

`LenseFlow` is the ODE-based lensing algorithm from Millea, Anderes, & Wandelt, 2019. The number of steps in the ODE solver is controlled by `n`. The action of the operator, as well as its adjoint, inverse, inverse-adjoint, and the gradient of any of these w.r.t. `ϕ`, can all be computed. The log-determinant of the operation is zero, independent of `ϕ`, in the limit of sufficiently large `n`.
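A minimal sketch of applying the operator (using `*` and `\` to apply the lensing and its inverse, following the operator conventions used elsewhere in these docs, e.g. in `Σ` below):

```julia
using CMBLensing

@unpack f, ϕ = load_sim(θpix=2, Nside=128, pol=:P)

L  = LenseFlow(ϕ)   # default n=7 ODE steps
f̃  = L * f          # lensed field
f′ = L \ f̃          # inverse lensing, recovers f up to solver accuracy
```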
CMBLensing.BilinearLens — Type

    BilinearLens(ϕ)

`BilinearLens` is a lensing operator that computes lensing with bilinear interpolation. The action of the operator, as well as its adjoint, inverse, inverse-adjoint, and gradient w.r.t. `ϕ`, can all be computed. The log-determinant of the operation is non-zero and can't be computed.

Internally, `BilinearLens` forms a sparse matrix with the interpolation weights, which can be applied (and its adjoint applied) extremely fast, e.g. at least an order of magnitude faster than `LenseFlow`. Inverse and inverse-adjoint lensing are somewhat slower since they require an iterative solve, here performed with the preconditioned generalized minimal residual algorithm.
CMBLensing.Taylens — Type

    Taylens(ϕ, order)

`Taylens` is a lensing operator which lenses a map with a nearest-pixel permute step followed by a power series expansion in the residual displacement, to any order. This is the algorithm from Næss & Louis 2013.
CMBLensing.PowerLens — Type

    PowerLens(ϕ, order)

`PowerLens` is a lensing operator which lenses a map with a power series expansion in $\nabla \phi$ to any order. The action of the operator and its adjoint can be computed.
CMBLensing.antilensing — Function

    antilensing(L::PowerLens)

Create a `PowerLens` operator that lenses by `-ϕ` instead.
Configuration options
CMBLensing.FFTW_NUM_THREADS — Constant

The number of threads used by FFTW for CPU FFTs (default is the environment variable `FFTW_NUM_THREADS`, or if that is not specified, `Sys.CPU_THREADS÷2`). This must be set before creating any `FlatField` objects.
CMBLensing.FFTW_TIMELIMIT — Constant

Time limit for FFT planning on CPU (default: 5 seconds). This must be set before creating any `FlatField` objects.
Other
CMBLensing.batch — Method

    batch(fs::FlatField...)
    batch(fs::Vector{<:FlatField})

Concatenate one or more `FlatField`s along the "batch" dimension (dimension 4 of the underlying array). For the inverse operation, see `unbatch`.
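A minimal sketch:

```julia
using CMBLensing

f1 = FlatMap(randn(Float32, 128, 128))
f2 = FlatMap(randn(Float32, 128, 128))

fb = batch(f1, f2)       # one FlatField with a batch dimension of length 2
f1′, f2′ = unbatch(fb)   # back to individual fields
```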
CMBLensing.beamCℓs — Method

    beamCℓs(;beamFWHM, ℓmax=8000)

Compute the beam power spectrum, often called $W_\ell$. A map should be multiplied by the square root of this.
CMBLensing.cpu — Method

    cpu(x)

Recursively move an object to CPU memory. See also `gpu`.
CMBLensing.gradhess — Method

    gradhess(f)

Compute the gradient $g^i = \nabla^i f$, and the Hessian, $H_j^{\,i} = \nabla_j \nabla^i f$.
CMBLensing.kde — Method

    kde(samples; [boundary=(min,max), normalize="integral" or "max"])

Return a Kernel Density Estimate for a set of samples. The returned object is a function which can be evaluated anywhere to compute the PDF. Based on Python's GetDist, which must be installed.
CMBLensing.lnP — Method

    lnP(t, fₜ, ϕₜ, ds::DataSet)
    lnP(t, fₜ, ϕₜ, θ::NamedTuple, ds::DataSet)

Compute the log posterior probability in the joint parameterization as a function of the field, $f_t$, the lensing potential, $\phi_t$, and possibly some cosmological parameters, $\theta$. The subscript $t$ can refer to either a "time", e.g. passing `t=0` corresponds to the unlensed parametrization and `t=1` to the lensed one, or can be `:mix`, corresponding to the mixed parametrization. In all cases, the arguments `fₜ` and `ϕₜ` should then be $f$ and $\phi$ in that particular parametrization.

If any parameters $\theta$ are provided, we also include the determinant terms for covariances which depend on $\theta$. In the mixed parametrization, we also include any Jacobian determinant terms that depend on $\theta$.

The argument `ds` should be a `DataSet` and stores the masks, data, etc. needed to construct the posterior.
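For example, a minimal sketch evaluating the posterior at the simulation truth in the unlensed (`t=0`) parametrization:

```julia
using CMBLensing

@unpack f, ϕ, ds = load_sim(θpix=2, Nside=128, pol=:P)

# f and ϕ from load_sim are already in the unlensed parametrization
lnP(0, f, ϕ, ds)
```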
CMBLensing.load_camb_Cℓs — Method

    load_camb_Cℓs(;path_prefix, custom_tensor_params=nothing,
                  unlensed_scalar_postfix, unlensed_tensor_postfix,
                  lensed_scalar_postfix, lenspotential_postfix)

Load some Cℓs from CAMB files.

`path_prefix` specifies the prefix for the files, which are then expected to have the normal CAMB postfixes: `scalCls.dat`, `tensCls.dat`, `lensedCls.dat`, `lenspotentialCls.dat`, unless otherwise specified via the other keyword arguments. `custom_tensor_params` can be used to call CAMB directly for the `unlensed_tensors`, rather than reading them from a file (since often this file doesn't get saved). The value should be a Dict/NamedTuple which will be passed to a call to `camb`, e.g. `custom_tensor_params=(r=0,)` for zero tensors.
CMBLensing.load_chains — Method

    load_chains(filename; burnin=0, burnin_chunks=0, thin=1, join=false, unbatch=true)

Load a single chain or multiple parallel chains which were written to a file by `sample_joint`.

Keyword arguments:

- `burnin` — Remove this many samples from the start of each chain, or if negative, keep only this many samples at the end of each chain.
- `burnin_chunks` — Same as `burnin`, but in terms of chain "chunks" stored in the chain file, rather than in terms of samples.
- `thin` — If `thin` is an integer, thin the chain by this factor. If `thin == :hasmaps`, return only samples which have maps saved. If `thin` is a `Function`, filter the chain by this function (e.g. `thin=haskey(:g)` on Julia 1.5+).
- `unbatch` — If true, unbatch the chains if they are batched.
- `join` — If true, concatenate all the chains together.
- `skip_missing_chunks` — Skip missing chunks in the chain instead of terminating the chain there.
The object returned by this function is a `Chain` or `Chains` object, which simply wraps an `Array` of `Dict`s or an `Array` of `Array`s of `Dict`s, respectively (each sample is a `Dict`). The wrapper object has some extra indexing properties for convenience:

- It can be indexed as if it were a single multidimensional object, e.g. `chains[1,:,:accept]` would return the `:accept` key of all samples in the first chain.
- Leading colons can be dropped, i.e. `chains[:,:,:accept]` is the same as `chains[:accept]`.
- If some samples are missing a particular key, `missing` is returned for those samples instead of an error.
- The recursion goes arbitrarily deep into the objects it finds. E.g., since sampled parameters are stored in a `NamedTuple` like `(Aϕ=1.3,)` in the `θ` key of each sample `Dict`, you can do `chain[:θ,:Aϕ]` to get all `Aϕ` samples as a vector.
CMBLensing.mean_std_and_errors — Method

    mean_std_and_errors(samples; N_bootstrap=10000)

Get the mean and standard deviation of a set of correlated `samples` from a chain, where the error on the mean and standard deviation is estimated with bootstrap resampling using the calculated "effective sample size" of the chain.
CMBLensing.mix — Method

    mix(f, ϕ, ds::DataSet)
    mix(f, ϕ, θ::NamedTuple, ds::DataSet)

Compute the mixed `(f°, ϕ°)` from the unlensed field `f` and lensing potential `ϕ`, given the definition of the mixing matrices in `ds` evaluated at parameters `θ` (or at fiducial values if no `θ` provided).
CMBLensing.noiseCℓs — Method

    noiseCℓs(;μKarcminT, beamFWHM=0, ℓmax=8000, ℓknee=100, αknee=3)

Compute the `(:TT,:EE,:BB,:TE)` noise power spectra given white noise + 1/f. Polarization noise is scaled by $\sqrt{2}$ relative to `μKarcminT`. `beamFWHM` is in arcmin.
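For example, a minimal sketch for a 3 μK-arcmin experiment with a 1 arcmin beam:

```julia
using CMBLensing

# noise spectra including the √2-scaled polarization noise
Cℓn = noiseCℓs(μKarcminT=3, beamFWHM=1)
```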
CMBLensing.pixwin — Method

    pixwin(θpix, ℓ)

Returns the pixel window function for square flat-sky pixels of width `θpix` (in arcmin) evaluated at some `ℓ`s. This is the scaling of k-modes; the scaling of the power spectrum will be `pixwin^2`.
CMBLensing.simulate — Method

    simulate(Σ; rng=global_rng_for(Σ), seed=nothing)

Draw a simulation from the covariance matrix `Σ`, i.e. draw a random vector $\xi$ such that the covariance $\langle \xi \xi^\dagger \rangle = \Sigma$.

The random number generator `rng` will be used and advanced in the process, and by default is the appropriate one depending on whether `Σ` is backed by `Array` or `CuArray`.

The `seed` argument can also be used to seed the `rng`.
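A minimal sketch (the assumption that the `DataSet` stores the field covariance in a property named `Cf` follows CMBLensing convention, but is not stated in the docstring above):

```julia
using CMBLensing

@unpack ds = load_sim(θpix=2, Nside=128, pol=:P)

# a Gaussian field realization drawn from the covariance, with a fixed seed
# (ds.Cf assumed to be the signal covariance stored in the DataSet)
f = simulate(ds.Cf, seed=1)
```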
CMBLensing.symplectic_integrate — Method

    symplectic_integrate(x₀, p₀, Λ, U, δUδx, N=50, ϵ=0.1, progress=false)

Do a symplectic integration of the potential energy `U` (with gradient `δUδx`) starting from point `x₀` with momentum `p₀` and mass matrix `Λ`. The number of steps is `N` and the step size is `ϵ`.

Returns `ΔH, xᵢ, pᵢ` corresponding to the change in Hamiltonian, and the final position and momenta. If `history_keys` is specified, a history of the requested variables throughout each step is also returned.
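For example, a minimal sketch on a 1D harmonic oscillator, assuming the integrator accepts plain vectors and a `Diagonal` mass matrix (the docstring above does not pin down the accepted types):

```julia
using CMBLensing, LinearAlgebra

# potential energy of a harmonic oscillator and its gradient
U(x)    = dot(x, x) / 2
δUδx(x) = x

# integrate from x₀=[1.0], p₀=[0.0] with a unit mass matrix and the default N, ϵ
ΔH, xᵢ, pᵢ = symplectic_integrate([1.0], [0.0], Diagonal([1.0]), U, δUδx)

# ΔH near zero indicates an accurate integration
```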
CMBLensing.ud_grade — Method

    ud_grade(f::Field, θnew, mode=:map, deconv_pixwin=true, anti_aliasing=true)

Up- or down-grade the field `f` to a new resolution `θnew` (only in integer steps). Two modes are available, specified by the `mode` argument:

- `:map` — Up/downgrade by replicating/averaging pixels in map-space.
- `:fourier` — Up/downgrade by extending/truncating the Fourier grid.

For `:map` mode, two additional options are possible. If `deconv_pixwin` is true, deconvolves the pixel window function from the downgraded map so the spectrum of the new and old maps are the same. If `anti_aliasing` is true, filters out frequencies above Nyquist prior to down-sampling.
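A minimal sketch:

```julia
using CMBLensing

f  = FlatMap(randn(Float32, 256, 256))   # θpix=1 by default
f2 = ud_grade(f, 2)                      # downgrade to 2 arcmin pixels (default :map mode)
```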
CMBLensing.unbatch — Method

    unbatch(chains::Chains)

Expand each chain in this `Chains` object by unbatching it.
CMBLensing.unbatch — Method

    unbatch(chain::Chain)

Convert a chain of batch-length-`D` fields to `D` chains of unbatched fields.
CMBLensing.unbatch — Method

    unbatch(f::FlatField)

Return an `Array` of `FlatField`s corresponding to each batch index. For the inverse operation, see `batch`.
CMBLensing.unmix — Method

    unmix(f°, ϕ°, ds::DataSet)
    unmix(f°, ϕ°, θ::NamedTuple, ds::DataSet)

Compute the unmixed/unlensed `(f, ϕ)` from the mixed field `f°` and mixed lensing potential `ϕ°`, given the definition of the mixing matrices in `ds` evaluated at parameters `θ` (or at fiducial values if no `θ` provided).
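For example, a minimal round-trip sketch (assuming `mix` and `unmix` return the `(f°, ϕ°)` and `(f, ϕ)` tuples their docstrings describe):

```julia
using CMBLensing

@unpack f, ϕ, ds = load_sim(θpix=2, Nside=128, pol=:P)

f°, ϕ° = mix(f, ϕ, ds)       # to the mixed parametrization
f′, ϕ′ = unmix(f°, ϕ°, ds)   # and back, recovering (f, ϕ)
```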
CMBLensing.ParamDependentOp — Type

    ParamDependentOp(recompute_function::Function)

Creates an operator which depends on some parameters $\theta$ and can be evaluated at various values of these parameters.

`recompute_function` should be a function which accepts keyword arguments for $\theta$ and returns the operator. Each keyword must have a default value; the operator will act as if evaluated at these defaults unless it is explicitly evaluated at other parameters.
Example:

    Cϕ₀ = Diagonal(...)                      # some fixed Diagonal operator
    Cϕ = ParamDependentOp((;Aϕ=1)->Aϕ*Cϕ₀)   # create ParamDependentOp
    Cϕ(Aϕ=1.1) * ϕ   # Cϕ(Aϕ=1.1) is equal to 1.1*Cϕ₀
    Cϕ * ϕ           # Cϕ alone will act like Cϕ(Aϕ=1) because that was the default above
Note: if you are doing parallel work, global variables referred to in the `recompute_function` need to be distributed to all workers. A more robust solution is to avoid globals entirely and instead ensure all variables are "closed" over (and hence will automatically get distributed). This will happen by default if the `ParamDependentOp` is defined inside any function, or can be forced at the global scope by wrapping everything in a `let`-block, e.g.:
    Cϕ = let Cϕ₀=Cϕ₀
        ParamDependentOp((;Aϕ=1)->Aϕ*Cϕ₀)
    end
After executing the code above, `Cϕ` is now ready to be (auto-)shipped to any workers and will work regardless of what global variables are defined on those workers.
CMBLensing.RK4Solver — Method

    RK4Solver(F!::Function, y₀, t₀, t₁, nsteps)

Solve for $y(t_1)$ with 4th order Runge-Kutta assuming $dy/dt = F(t,y)$ and $y(t_0) = y_0$.

Arguments:

- `F!` — a function `F!(v,t,y)` which sets `v = F(t,y)`.
CMBLensing.@ismain — Macro

    @ismain()

Return true if the current file is being run as a script.
CMBLensing.@repeated — Macro

Return a tuple with the expression repeated `n` times.
CMBLensing.@show⌛ — Macro

See `@⌛`.
CMBLensing.@⌛ — Macro

    @⌛ code ...
    @⌛ function_definition() = ....

Label a section of code to be timed. The first form uses the code itself as a label; the second uses the function name, and it is the body of the function which is timed.

To run the timer and print output, returning the result of the calculation, use

    @show⌛ run_code()

Timing uses `TimerOutputs.get_defaulttimer()`.
CMBLensing.GPU_worker_info — Method

    GPU_worker_info()

Returns a string showing info about assigned GPU workers.
CMBLensing.LinearInterpolation — Method

    itp = LinearInterpolation(xdat::AbstractVector, ydat::AbstractVector; extrapolation_bc=NaN)
    itp(x) # interpolate at x

A simple 1D linear interpolation code which is fully Zygote-differentiable in either `xdat`, `ydat`, or the evaluation point `x`.
CMBLensing.QE_leg — Method

    QE_leg(C::Diagonal, inds...)

The quadratic estimate and normalization expressions all consist of terms involving products of two "legs", where each leg looks like:

    C * l[i] * l̂[j] * l̂[k] * ...

where `C` is some field or diagonal covariance, `l[i]` is the Fourier wave-vector in direction `i` (for `i=1:2`), and `l̂[i] = l[i]/‖l‖`. For example, there's a leg in the EB estimator that looks like:

    (CE * (CẼ+Cn) \ d[:E]) * l[i] * l̂[j] * l̂[k]

The function `QE_leg` computes quantities like these, e.g. the above would be given by:

    QE_leg((CE * (CẼ+Cn) \ d[:E]), [i], j, k)

(Note that specifying the Fourier wave-vector `l` rather than the unit vector `l̂` is done by putting that index in brackets.)

Additionally, all of these terms are symmetric in their indices, i.e. in `(i,j,k)` in this case. The `QE_leg` function is smart about this, and is memoized so that each unique set of indices is only computed once. This leads to a pretty drastic speedup for terms with many indices like those that arise in the EE and EB normalizations, and lets us write code which is both clear and fast without having to think too hard about these symmetries.
CMBLensing.assign_GPU_workers — Method

    assign_GPU_workers()

Assign each Julia worker process a unique GPU using `CUDA.device!`. Workers may be distributed across different hosts, and each host can have multiple GPUs.
CMBLensing.conjugate_gradient — Function

    conjugate_gradient(
        M, A, b, x=M\b;
        nsteps       = length(b),
        tol          = sqrt(eps()),
        progress     = false,
        callback     = nothing,
        history_keys = nothing,
        history_mod  = 1
    )

Compute `x=A\b` (where `A` is positive definite) by conjugate gradient. `M` is the preconditioner and should be `M≈A`, and `M\x` should be fast.

The solver will stop either after `nsteps` iterations or when `dot(r,r)<tol` (where `r=A*x-b` is the residual at that step), whichever occurs first.

Info from the iterations of the solver can be returned if `history_keys` is specified. `history_keys` can be one or a tuple of:

- `:i` — current iteration number
- `:x` — current solution
- `:r` — current residual `r=A*x-b`
- `:res` — the norm of `r`
- `:t` — the time elapsed (in seconds) since the start of the algorithm

`history_mod` can be used to include every N-th iteration only in `history_keys`.
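A minimal sketch on a plain matrix system (assuming the solver accepts generic matrices and vectors, as its signature suggests):

```julia
using CMBLensing, LinearAlgebra

A = [4.0 1.0; 1.0 3.0]    # positive definite
b = [1.0, 2.0]
M = Diagonal(diag(A))     # Jacobi preconditioner: M ≈ A and M\x is fast

x = conjugate_gradient(M, A, b; tol=1e-10)
```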
CMBLensing.fftsyms — Method

Arguments `m` and `n` refer to the sizes of an `m`×`n` matrix (call it `A`) that is the output of a real FFT (thus `m=n÷2+1`).

Returns a tuple of `(ireal, iimag, negks)` where:

- `ireal` — `m`×`n` mask corresponding to unique real entries of `A`
- `iimag` — `m`×`n` mask corresponding to unique imaginary entries of `A`
- `negks` — `m`×`n` matrix giving the index into `A` where the negative k-vector is, s.t. `A[i,j] = A[negks[i,j]]'`
CMBLensing.fieldvalues — Method

Return the type's fields as a tuple.
CMBLensing.finite_second_derivative — Method

    finite_second_derivative(x)

Second derivative of a vector `x` via finite differences, including at the end points.
CMBLensing.gmres — Method

    gmres(A, b; maxiter, Pl=I)

Solve `A \ b` with `maxiter` iterations of the generalized minimal residual algorithm. `Pl` is a left-preconditioner which should approximate `inv(A)`.

Note: the implementation is memory-inefficient and uses O(n * maxiter) memory, where `n,n=size(A)` (which may not be a big deal for small `maxiter`), but it is totally generic and works with CPU or GPU and dense or sparse matrices, unlike IterativeSolvers' `gmres`.
CMBLensing.gpu — Function

    gpu(x)

Recursively move an object to GPU memory. Note that, unlike `cu(x)`, this does not change the `eltype` of any underlying arrays. See also `cpu`.
CMBLensing.grid_and_sample — Method

    grid_and_sample(lnP::Function, range::NamedTuple; progress=false, nsamples=1)

Interpolate the log pdf `lnP` with support on `range`, and return the integrated log pdf as well as `nsamples` samples (drawn via inverse transform sampling).

`lnP` should either accept a NamedTuple argument, in which case `range` should be a NamedTuple mapping those same names to range objects specifying where to evaluate `lnP`, e.g.:

    grid_and_sample(nt->-(nt.x^2+nt.y^2)/2, (x=range(-3,3,length=100),y=range(-3,3,length=100)))

or `lnP` should accept a single scalar argument, in which case `range` should be directly the range for this variable:

    grid_and_sample(x->-x^2/2, range(-3,3,length=100))

The return value is `(lnP, samples, Px)`, where `lnP` is an interpolated/smoothed log PDF which can be evaluated anywhere within the original range, `Px` are sampled points of the original PDF, and `samples` is a NamedTuple giving the Monte-Carlo samples of each of the parameters.

(Note: only 1D sampling is currently implemented, but 2D like in the example above is planned.)
CMBLensing.lazy_pyimport — Method

    lazy_pyimport(s)

Like `pyimport(s)`, but doesn't actually load anything (not even PyCall) until a property of the returned module is accessed, allowing this to go in `__init__` and still delay loading PyCall, as well as preventing a Julia module load error if a Python module fails to load.
CMBLensing.longest_run_of_trues — Method

    longest_run_of_trues(x)

The slice corresponding to the longest run of `true`s in the vector `x`.
CMBLensing.paren_errors — Method

    paren_errors(μ, σ; N_in_paren=2)

Get a string representation of `μ ± σ` in "parenthesis" format, e.g. `1.234 ± 0.012` becomes `1.234(12)`.
CMBLensing.rfft2vec — Method

Convert a matrix `A` which is the output of a real FFT to a real vector, keeping only the unique real/imaginary entries of `A`.
CMBLensing.rfft_degeneracy_fac — Method

    rfft_degeneracy_fac(n)

Returns an Array which is 2 if the complex conjugate of the corresponding entry in the half-plane real FFT appears in the full-plane FFT, and is 1 otherwise. `n` is the length of the first dimension of the full-plane FFT. The following identity holds:

    sum(abs2.(fft(x))) == sum(rfft_degeneracy_fac(size(x,1)) .* abs2.(rfft(x)))
CMBLensing.set_distributed_dataset — Method

    set_distributed_dataset(ds)
    get_distributed_dataset()

Sometimes it's more performant to distribute a DataSet object to parallel workers just once, and have them refer to it from the global state, rather than having it get automatically but repeatedly sent as part of closures. This provides that functionality. Use `set_distributed_dataset(ds)` from the master process to set the global DataSet and `get_distributed_dataset()` from any process to retrieve it. Repeated calls will not resend `ds` if it hasn't changed (based on `hash(ds)`) and if no new workers have been added since the last send.
CMBLensing.unfold — Method

Convert an M×N matrix (with M=N÷2+1) which is the output of a real FFT to a full N×N one via symmetries.
CMBLensing.vec2rfft — Method

Convert a vector produced by `rfft2vec` back into a complex matrix.
CMBLensing.Σ — Method

    Σ(ϕ::Field, ds; [conjgrad_kwargs])
    Σ(Lϕ, ds; [conjgrad_kwargs])

An operator for the data covariance, `Cn + M*B*L*Cf*L'*B'*M'`, which can be applied and inverted. `conjgrad_kwargs` are passed to the underlying call to `conjugate_gradient`.
LinearAlgebra.logdet — Method

    logdet(L::FieldOp, θ)

If `L` depends on `θ`, evaluates `logdet(L(θ))` offset by its fiducial value at `L()`. Otherwise, returns 0.
CMBLensing.BatchedReal — Type

    BatchedReal(::Vector{<:Real}) <: Real

Holds a vector of real numbers and broadcasts algebraic operations over them, as well as broadcasting along the batch dimension of `Field`s, but is itself a `Real`.
CMBLensing.@! — Macro

Rewrites `@! x = f(args...)` to `x = f!(x,args...)`.

Special cases for `*` and `\` forward to `mul!` and `ldiv!`, respectively.
CMBLensing.@auto_adjoint — Macro

    @auto_adjoint foo(args...; kwargs...) = body

is equivalent to

    _foo(args...; kwargs...) = body
    foo(args...; kwargs...) = _foo(args...; kwargs...)
    @adjoint foo(args...; kwargs...) = Zygote.pullback(_foo, args...; kwargs...)

That is, it defines the function as well as a Zygote adjoint which takes a gradient explicitly through the body of the function, rather than relying on rules which may be defined for `foo`. Mainly useful in the case that `foo` is a common function with existing rules, but which you do not want to be used.
CMBLensing.@dict — Macro

Pack some variables into a dictionary:

    > x = 3
    > y = 4
    > @dict x y z=>5
    Dict(:x=>3,:y=>4,:z=>5)
CMBLensing.@ondemand — Macro

    @ondemand(Package.function)(args...; kwargs...)
    @ondemand(Package.Submodule.function)(args...; kwargs...)

Just like calling `Package.function` or `Package.Submodule.function`, but `Package` will be loaded on-demand if it is not already loaded. The call is no longer inferrable.
CMBLensing.@subst — Macro

    @subst sum(x*$(y+1) for x=1:2)

becomes

    let tmp=(y+1)
        sum(x*tmp for x=1:2)
    end

to aid in writing clear/succinct code that doesn't recompute things unnecessarily.