# Lensing a flat-sky map

```julia
using CMBLensing, PyPlot
```

First we load a simulated unlensed field, $f$, and lensing potential, $\phi$,

```julia
@unpack f,ϕ = load_sim(
    θpix  = 2,       # size of the pixels in arcmin
    Nside = 256,     # number of pixels per side in the map
    T     = Float32, # Float32 or Float64 (former is ~twice as fast)
    pol   = :I       # :I for Intensity, :P for polarization, or :IP for both
);
```
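Before lensing anything, it can help to eyeball what `load_sim` returned; a quick sketch, assuming (as in the plots below) that `plot` accepts any Field object:

```julia
plot(f);   # the simulated unlensed intensity map
plot(ϕ);   # the lensing potential
```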

We can lense the map with LenseFlow,

```julia
f̃ = LenseFlow(ϕ) * f;
```

And flip between lensed and unlensed maps,

```julia
animate([f,f̃], fps=1)
```

The difference between lensed and unlensed,

```julia
plot(f - f̃);
```

CMBLensing flat-sky Field objects like `f` or `ϕ` are just thin wrappers around arrays. You can get the underlying data arrays for $I(\mathbf{x})$, $Q(\mathbf{x})$, and $U(\mathbf{x})$ with `f[:Ix]`, `f[:Qx]`, and `f[:Ux]`, respectively, or the Fourier coefficients $I(\mathbf{l})$, $Q(\mathbf{l})$, and $U(\mathbf{l})$ with `f[:Il]`, `f[:Ql]`, and `f[:Ul]`:

```julia
mapdata = f[:Ix]
```

```
256×256 Matrix{Float32}:
  16.9223   16.7107     6.76394   …   30.5313     27.1077     20.8228
  20.2004   23.8824    19.2507        25.6273     24.9317     22.0486
  27.3758   27.577     25.0777        20.4196     27.0562     30.6022
  41.6781   32.1097    24.8412        14.4931     31.064      43.4462
  67.2678   52.0998    35.5184        24.4832     47.7734     65.5578
  91.8447   74.6301    54.6156    …   55.4197     77.4977     90.8392
  91.8751   77.1666    65.84          87.8228    102.607     101.58
  82.4671   68.3388    67.8973       105.495     115.112     102.72
  84.6062   70.5613    68.8783       114.516     120.247     106.039
  88.2165   81.1797    76.8356       109.163     110.216      98.2464
  93.1327   96.2596    91.8871    …   88.6865     89.7113     88.2815
 105.105   111.808    106.006         76.6157     80.6468     90.0507
 112.853   117.914    108.188         87.4246     90.5486     98.5647
   ⋮                              ⋱                            ⋮
 -30.8362  -13.8389   -17.8254       -87.1399    -88.2407    -64.3883
 -20.2782    1.37109   -4.691     …  -92.9386    -97.7261    -64.5651
 -18.8288    7.22943    1.90754      -79.1766    -92.7357    -64.3492
 -26.2707   -2.28238   -3.46554      -40.2546    -65.8453    -56.9451
 -26.401   -13.5927   -17.6139        -0.951288  -32.363     -39.8456
 -11.1627   -9.29941  -25.6078        18.9132     -9.90499   -19.0793
  11.4837    8.56296  -16.9111    …   20.6765      0.616802   -0.268501
  28.7133   25.1438     0.519607      14.4127      1.52029    11.4055
  30.8485   29.9892    10.5519        10.528      -4.42103     9.50083
  21.9322   22.1668     8.35608       14.467      -2.81549     4.73002
  17.5845   15.3108     3.43599       23.1523      9.95823    10.3446
  17.9051   13.8167     1.72432   …   30.8429     23.9065     19.3472
```
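The Fourier-side data can be accessed the same way. The precise array shape depends on the package's FFT conventions (real FFTs store only about half of the $\mathbf{l}$ modes explicitly), so treat this as a sketch:

```julia
# complex Fourier coefficients of the intensity map
fourierdata = f[:Il]
size(fourierdata)   # not 256×256 if a real FFT is used under the hood
```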

If you have your own map data in an array you'd like to load into a CMBLensing Field object, you can construct it as follows:

```julia
FlatMap(mapdata, θpix=3)
```

```
65536-element 256×256-pixel 3.0′-resolution LambertMap{Array{Float32, 2}}:
  16.922283
  20.20036
  27.375805
  41.67814
  67.26776
  91.84473
  91.875145
  82.46712
  84.606186
  88.216545
  93.132675
 105.10481
 112.852715
   ⋮
 -64.38826
 -64.56508
 -64.34915
 -56.945107
 -39.845634
 -19.079266
  -0.26850128
  11.405504
   9.500831
   4.7300243
  10.344635
  19.347223
```
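Field objects also support the usual vector-space arithmetic, which the examples above already rely on (e.g. `f - f̃`); a minimal sketch reusing `mapdata`:

```julia
g = FlatMap(mapdata, θpix=3)

# fields add, subtract, and rescale like vectors
h = 2g - g
plot(h - g);   # should be (numerically) zero everywhere
```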

For more info on Field objects, see Field Basics.

## Inverse lensing

You can inverse lense a map with the `\` operator (which computes `A \ b ≡ inv(A) * b`):

```julia
LenseFlow(ϕ) \ f;
```
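This follows the standard Julia convention, where `A \ b` solves the linear system rather than explicitly forming `inv(A)`. A plain-Julia illustration:

```julia
A = [2.0 0.0; 0.0 4.0]
b = [2.0, 8.0]

# A \ b solves A*x == b; equivalent to inv(A) * b but cheaper and more stable
x = A \ b   # [1.0, 2.0]
```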

Note that this is true inverse lensing, rather than lensing by the negative deflection (which is often called "anti-lensing"). This means that lensing and then inverse lensing a map should give us back the original map. Let's check that this is the case:

```julia
Ns = [7 10 20]
plot([f - (LenseFlow(ϕ,N) \ (LenseFlow(ϕ,N) * f)) for N in Ns],
    title=["ODE steps = $N" for N in Ns]);
```

A cool feature of LenseFlow is that inverse lensing is trivially done by running the LenseFlow ODE in reverse. Note that as we crank up the number of ODE steps above, we recover the original map to higher and higher precision.

## Other lensing algorithms

We can also lense via:

* `PowerLens`: the standard Taylor series expansion to any order: $f(x+\nabla\phi) \approx f(x) + (\nabla f)(\nabla \phi) + \frac{1}{2} (\nabla \nabla f) (\nabla \phi)^2 + ...$
* `Taylens` (Næss & Louis 2013): like `PowerLens`, but with first a nearest-pixel permute step, then a Taylor expansion around the now-smaller residual displacement.

```julia
plot([(PowerLens(ϕ,2)*f - f̃) (Taylens(ϕ,2)*f - f̃)],
    title=["PowerLens - LenseFlow" "TayLens - LenseFlow"]);
```

## Benchmarking

LenseFlow is highly optimized code, since it appears in the inner-most loop of our analysis algorithms. To benchmark LenseFlow, note that there is first a precomputation step which caches some data in preparation for applying it to a field of a given type. This was done automatically when evaluating `LenseFlow(ϕ) * f`, but we can benchmark it separately, since in many cases it only needs to be done once for a given $\phi$, e.g. when Wiener filtering at fixed $\phi$:

```julia
using BenchmarkTools
@benchmark cache(LenseFlow(ϕ),f)
```

```
BenchmarkTools.Trial: 89 samples with 1 evaluation.
 Range (min … max):  55.463 ms … 60.109 ms  ┊ GC (min … max): 6.89% … 12.55%
 Time  (median):     56.076 ms              ┊ GC (median):    6.78%
 Time  (mean ± σ):   56.500 ms ±  1.301 ms  ┊ GC (mean ± σ):  7.59% ±  1.98%

 Memory estimate: 92.37 MiB, allocs estimate: 1653.
```

Once cached, it's faster and less memory intensive to repeatedly apply the operator:

```julia
@benchmark Lϕ * f setup=(Lϕ=cache(LenseFlow(ϕ),f))
```

```
BenchmarkTools.Trial: 53 samples with 1 evaluation.
 Range (min … max):  37.149 ms … 42.434 ms  ┊ GC (min … max): 0.00% … 8.82%
 Time  (median):     38.136 ms              ┊ GC (median):    0.00%
 Time  (mean ± σ):   39.086 ms ±  1.713 ms  ┊ GC (mean ± σ):  2.47% ± 3.89%

 Memory estimate: 16.14 MiB, allocs estimate: 441.
```
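The intended pattern, then, is to pay the cache cost once per $\phi$ and reuse the cached operator for every subsequent application; a sketch, where `fs` is a hypothetical collection of fields with the same geometry as `f`:

```julia
Lϕ = cache(LenseFlow(ϕ), f)    # one-time precomputation for this ϕ and field type
f̃s = [Lϕ * fᵢ for fᵢ in fs]    # each application reuses the cached data
```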

Note that this documentation is generated on limited-performance cloud servers. Actual benchmarks are likely much faster when run locally or on a cluster, and faster still (often much faster) on a GPU.