# Lensing a flat-sky map

using CMBLensing, PyPlot

First we load a simulated unlensed field, $f$, and lensing potential, $\phi$,

@unpack f,ϕ = load_sim(
θpix  = 2,       # size of the pixels in arcmin
Nside = 256,     # number of pixels per side in the map
T     = Float32, # Float32 or Float64 (former is ~twice as fast)
pol   = :I       # :I for Intensity, :P for polarization, or :IP for both
);
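The `@unpack` macro simply destructures named properties of the object returned by `load_sim`. As a sketch of what it does (assuming only that the return value supports property access, which `@unpack` itself relies on), the equivalent explicit code would be:

```julia
using CMBLensing

# equivalent to `@unpack f,ϕ = load_sim(...)`: grab the simulated
# unlensed field and lensing potential by property access
sim  = load_sim(θpix=2, Nside=256, T=Float32, pol=:I)
f, ϕ = sim.f, sim.ϕ
```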

We can lense the map with LenseFlow,

f̃ = LenseFlow(ϕ) * f;

And flip between lensed and unlensed maps,

animate([f,f̃], fps=1)

The difference between lensed and unlensed,

plot(f-f̃);

CMBLensing flat-sky Field objects like f or ϕ are just thin wrappers around arrays. You can get the underlying data arrays for $I(\mathbf{x})$, $Q(\mathbf{x})$, and $U(\mathbf{x})$ with f[:Ix], f[:Qx], and f[:Ux], respectively, or the Fourier coefficients, $I(\mathbf{l})$, $Q(\mathbf{l})$, and $U(\mathbf{l})$, with f[:Il], f[:Ql], and f[:Ul]:

mapdata = f[:Ix]
256×256 view(::Array{Float32, 3}, :, :, 1) with eltype Float32:
29.1464    4.27098   -14.7498   …   41.8773    39.9558   43.4806
24.8466    2.66682   -22.8516       15.093     17.6299   28.5196
25.6432    0.402061  -35.2564       13.7062    14.3105   25.5715
34.1147    3.86804   -35.6684       30.8706    29.4579   37.277
52.4594   26.4572    -10.7255       50.6472    49.1331   54.0263
72.0695   60.596      33.1468   …   59.2798    58.2317   63.1597
81.4112   84.3512     72.5602       53.4537    53.3593   63.7863
80.713    89.84       90.259        43.7545    44.1253   60.3986
71.3399   79.884      87.5582       38.8894    37.578    53.2899
51.9152   56.6975     70.4102       32.6675    31.5428   42.9394
32.4157   31.3189     50.2084   …   26.4022    24.1204   30.7589
36.0508   32.4062     47.9066       27.4095    25.9515   35.762
65.4038   63.0281     71.0028       37.762     43.786    60.5433
⋮                              ⋱                         ⋮
-51.2904  -75.9633    -98.2333      -37.975    -33.7213  -37.9204
-14.7973  -38.8593    -60.0449   …  -24.2794   -10.0256   -6.31061
15.8787   -4.37674   -24.5258      -11.6389    10.6175   20.9796
33.8044   18.1236      1.84258      -1.25383   24.9355   36.0672
39.0546   31.1096     19.0312        7.16021   30.8318   39.5411
40.8634   43.9268     39.1545       11.441     27.8089   35.5816
45.2212   58.1515     60.4654   …    9.65897   20.3985   31.9267
50.1554   64.803      72.614        12.4024    19.5027   34.3457
52.7228   61.9005     70.2811       37.0724    35.1691   43.8669
52.7075   48.6716     50.7165       73.7681    61.1282   57.8362
49.103    29.2128     18.4669       93.2442    76.7918   65.6536
38.657    11.392      -5.61105  …   79.4093    67.4373   59.0566
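The Fourier-side indexing works the same way. As a small illustrative sketch (using only the simulated intensity field f from above, so just the :Ix and :Il keys apply):

```julia
# pixel-space data (a real array) and Fourier coefficients (a complex
# array over the half-plane of wavevectors) of the same field
pixels  = f[:Ix]
fourier = f[:Il]

# the two representations describe the same field, so converting between
# them is lossless up to floating-point round-off
size(pixels), size(fourier)
```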

If you have your own map data in an array you'd like to load into a CMBLensing Field object, you can construct it as follows:

FlatMap(mapdata, θpix=3)
65536-element 256×256-pixel 3.0′-resolution LambertMap{SubArray{Float32, 2, Array{Float32, 3}, Tuple{Base.Slice{Base.OneTo{Int64}}, Base.Slice{Base.OneTo{Int64}}, Int64}, true}}:
29.146404
24.846622
25.643225
34.114677
52.459366
72.06949
81.41116
80.71296
71.33992
51.915234
32.415657
36.05079
65.40381
⋮
-37.92044
-6.310608
20.97959
36.067207
39.541054
35.5816
31.926695
34.345722
43.866936
57.836185
65.65356
59.056606
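As a quick illustration with hypothetical data (white noise rather than a real map), a hand-built array can be wrapped in a FlatMap and then lensed just like the simulated field, assuming the ϕ from above:

```julia
using CMBLensing

# hypothetical example: wrap your own 256×256 Float32 array (here just
# white noise) into a Field object with 2-arcmin pixels
my_data  = randn(Float32, 256, 256)
my_field = FlatMap(my_data, θpix=2)

# the resulting Field supports the same operations as the simulated f,
# e.g. lensing by the ϕ loaded earlier (same θpix and Nside)
my_lensed = LenseFlow(ϕ) * my_field
```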

For more info on Field objects, see Field Basics.

## Inverse lensing

You can inverse lense a map with the \ operator (which does A \ b ≡ inv(A) * b):

LenseFlow(ϕ) \ f;

Note that this is true inverse lensing, rather than lensing by the negative deflection (which is often called "anti-lensing"). This means that lensing and then inverse lensing a map should give us back the original map. Let's check that this is the case:

Ns = [7 10 20]
plot([f - (LenseFlow(ϕ,N) \ (LenseFlow(ϕ,N) * f)) for N in Ns],
    title=["ODE steps = $N" for N in Ns]);

A cool feature of LenseFlow is that inverse lensing is trivially done by running the LenseFlow ODE in reverse. Note that as we crank up the number of ODE steps above, we recover the original map to higher and higher precision.

## Other lensing algorithms

We can also lense via:

* PowerLens: the standard Taylor series expansion to any order:

$$ f(x+\nabla\phi) \approx f(x) + (\nabla f)(\nabla\phi) + \frac{1}{2}(\nabla\nabla f)(\nabla\phi)^2 + ... $$

* TayLens (Næss & Louis 2013): like PowerLens, but preceded by a nearest-pixel permute step, followed by a Taylor expansion around the now-smaller residual displacement.

plot([(PowerLens(ϕ,2)*f - f̃) (Taylens(ϕ,2)*f - f̃)],
    title=["PowerLens - LenseFlow" "TayLens - LenseFlow"]);

## Benchmarking

LenseFlow is highly optimized code, since it sits in the inner-most loop of our analysis algorithms. To benchmark LenseFlow, note that there is first a precomputation step which caches some data in preparation for applying the operator to a field of a given type. This was done automatically when evaluating LenseFlow(ϕ) * f, but we can benchmark it separately, since in many cases it only needs to be done once for a given $\phi$, e.g. when Wiener filtering at fixed $\phi$,

using BenchmarkTools
@benchmark cache(LenseFlow(ϕ),f)
BenchmarkTools.Trial: 90 samples with 1 evaluation.
Range (min … max):  43.876 ms … 80.871 ms  ┊ GC (min … max):  9.64% … 13.53%
Time  (median):     50.382 ms              ┊ GC (median):     9.11%
Time  (mean ± σ):   56.260 ms ± 11.344 ms  ┊ GC (mean ± σ):  10.92% ±  2.85%

█▁
▆▆██▅▄▁▄▄▄▅▅▅▁▄▁▁▁▁▁▁▁▁▁▁▁▃▁▁▁▁▃▁▁▃▁▃▇▃▄▁▁▃█▅▄▄▃▆▁▃▃▁▁▃▃▁▄▃ ▁
43.9 ms         Histogram: frequency by time        76.4 ms <

Memory estimate: 92.42 MiB, allocs estimate: 1743.

Once cached, it's faster and less memory intensive to repeatedly apply the operator:

@benchmark Lϕ * f setup=(Lϕ=cache(LenseFlow(ϕ),f))
BenchmarkTools.Trial: 56 samples with 1 evaluation.
Range (min … max):  38.335 ms … 50.094 ms  ┊ GC (min … max): 0.00% … 0.00%
Time  (median):     40.403 ms              ┊ GC (median):    0.00%
Time  (mean ± σ):   41.493 ms ±  2.673 ms  ┊ GC (mean ± σ):  3.15% ± 4.71%

█▂ ▂▂▂   ▂                    ▂
▅▅▁███████▅▁█▅█▅█▅▁▁▅▅▁▁▁▁▁▁▁▁▁▅▁██▅▁██▅█▁▅▁▅▁▁▁▁▅▁▅▁▁▁▁▁▁▅ ▁
38.3 ms         Histogram: frequency by time        47.3 ms <

Memory estimate: 16.13 MiB, allocs estimate: 441.
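As a usage sketch (assuming the f and ϕ from above, and assuming the cached operator supports the same `\` syntax for inverse lensing as the uncached one):

```julia
# build the cached operator once for this ϕ and this field type...
Lϕ = cache(LenseFlow(ϕ), f)

# ...then apply it repeatedly (e.g. inside an iterative solver such as a
# Wiener filter) without re-paying the precomputation cost each time
f̃  = Lϕ * f
f′ = Lϕ \ f̃   # inverse lensing with the same cached operator
```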

Note that this documentation is generated on limited-performance cloud servers. Actual benchmarks are likely much faster when run locally or on a cluster, and faster still on a GPU.