{ "cells": [ { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "# Determination of polarization\n", "\n", "::::{only} html\n", ":::{margin}\n", "This notebook has a `zz.` prefix, because it has to be executed _after_ the polarimeter fields are exported in {doc}`/uncertainties`.\n", ":::\n", "::::\n", "\n", "```{autolink-concat}\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "source_hidden": true }, "mystnb": { "code_prompt_show": "Import Python libraries" }, "tags": [ "hide-cell", "scroll-input" ] }, "outputs": [], "source": [ "from __future__ import annotations\n", "\n", "import itertools\n", "import json\n", "import logging\n", "import os\n", "from functools import lru_cache\n", "from textwrap import dedent, wrap\n", "from warnings import filterwarnings\n", "\n", "import iminuit\n", "import jax.numpy as jnp\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "import plotly.graph_objects as go\n", "import yaml\n", "from IPython.display import Latex, Markdown, display\n", "from numpy import pi as π\n", "from plotly.subplots import make_subplots\n", "from scipy.interpolate import interp2d\n", "from tensorwaves.interface import DataSample\n", "from tqdm.auto import tqdm\n", "\n", "from polarimetry.data import generate_phasespace_sample\n", "from polarimetry.io import import_polarimetry_field, mute_jax_warnings\n", "from polarimetry.lhcb import load_model_builder\n", "from polarimetry.lhcb.particle import load_particles\n", "from polarimetry.plot import use_mpl_latex_fonts\n", "\n", "filterwarnings(\"ignore\")\n", "mute_jax_warnings()\n", "logging.getLogger(\"tensorwaves.data\").setLevel(logging.ERROR)\n", "\n", "NO_TQDM = \"EXECUTE_NB\" in os.environ\n", "if NO_TQDM:\n", " logging.getLogger().setLevel(logging.ERROR)\n", " logging.getLogger(\"polarimetry.io\").setLevel(logging.ERROR)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "Given the aligned polarimeter field $\\vec\\alpha$ and the corresponding intensity distribution $I_0$, the intensity distribution $I$ for a polarized decay can be computed as follows:\n", "\n", "$$\n", "I\\left(\\phi,\\theta,\\chi; \\tau\\right) = I_0(\\tau)\\left(1+\\vec{P} R(\\phi,\\theta,\\chi) \\vec{\\alpha}(\\tau)\\right)\n", "$$ (eq:master.intensity)\n", "\n", "with $R$ the rotation matrix over the decay plane orientation, represented in Euler angles $\\left(\\phi, \\theta, \\chi\\right)$.\n", "\n", "In this section, we show that it's possible to determine the polarization $\\vec{P}$ from a given intensity distribution $I$ of a $\\lambda_c$ decay if we the $\\vec\\alpha$ fields and the corresponding $I_0$ values of that $\\Lambda_c$ decay. We get $\\vec\\alpha$ and $I_0$ by interpolating the grid samples provided from {ref}`uncertainties:Exported distributions` using the method described in {ref}`appendix/serialization:Import and interpolate`. We perform the same procedure with the averaged aligned polarimeter vector from {numref}`uncertainties:Average polarimetry values` in order to quantify the loss in precision when integrating over the Dalitz plane variables $\\tau$.\n", "\n", "## Polarized test distribution\n", "\n", "For this study, a phase space sample is uniformly generated over the Dalitz plane variables $\\tau$. The phase space sample is extended with uniform distributions over the decay plane angles $\\left(\\phi, \\theta, \\chi\\right)$, so that the phase space can be used to generate a hit-and-miss toy sample for a polarized intensity distribution." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "mystnb": { "code_prompt_show": "Generate phase space sample" }, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "DECAY = load_model_builder(\n", " \"../data/model-definitions.yaml\",\n", " load_particles(\"../data/particle-definitions.yaml\"),\n", " model_id=0,\n", ").decay\n", "\n", "N_EVENTS = 100_000\n", "# Dalitz variables\n", "PHSP = generate_phasespace_sample(DECAY, N_EVENTS, seed=0)\n", "# Decay plane variables\n", "RNG = np.random.default_rng(seed=0)\n", "PHSP[\"phi\"] = RNG.uniform(-π, +π, N_EVENTS)\n", "PHSP[\"cos_theta\"] = RNG.uniform(-1, +1, N_EVENTS)\n", "PHSP[\"chi\"] = RNG.uniform(-π, +π, N_EVENTS)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now generate an intensity distribution over the phase space sample given a [certain value](https://arxiv.org/pdf/2208.03262.pdf#page=20) for $\\vec{P}$ {cite}`LHCb-PAPER-2022-002` using Eq. {eq}`eq:master.intensity` and by interpolating the $\\vec\\alpha$ and $I_0$ fields with the grid samples for the nominal model." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "source_hidden": true }, "mystnb": { "code_prompt_show": "Code for interpolating α and I₀" }, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "def interpolate_intensity(phsp: DataSample, model_id: int) -> jnp.ndarray:\n", " x = PHSP[\"sigma1\"]\n", " y = PHSP[\"sigma2\"]\n", " return jnp.array(create_interpolated_function(model_id, \"intensity\")(x, y))\n", "\n", "\n", "def interpolate_polarimetry_field(phsp: DataSample, model_id: int) -> jnp.ndarray:\n", " x = PHSP[\"sigma1\"]\n", " y = PHSP[\"sigma2\"]\n", " return jnp.array(\n", " [create_interpolated_function(model_id, f\"alpha_{i}\")(x, y) for i in \"xyz\"]\n", " )\n", "\n", "\n", "@lru_cache(maxsize=0)\n", "def create_interpolated_function(model_id: int, variable: str):\n", " field_file = f\"_static/export/polarimetry-field-model-{model_id}.json\"\n", " field_data = import_polarimetry_field(field_file)\n", " interpolated_func = interp2d(\n", " x=field_data[\"m^2_Kpi\"],\n", " y=field_data[\"m^2_pK\"],\n", " z=np.nan_to_num(field_data[variable]),\n", " kind=\"linear\",\n", " )\n", " return np.vectorize(interpolated_func)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "source_hidden": true }, "mystnb": { "code_prompt_show": "Code for computing polarized intensity" }, "tags": [ "scroll-input", "hide-input" ] }, "outputs": [], "source": [ "def compute_polarized_intensity(\n", " Px: float,\n", " Py: float,\n", " Pz: float,\n", " I0: jnp.ndarray,\n", " alpha: jnp.ndarray,\n", " phsp: DataSample,\n", ") -> jnp.array:\n", " P = jnp.array([Px, Py, Pz])\n", " R = compute_rotation_matrix(phsp)\n", " return I0 * (1 + jnp.einsum(\"i,ij...,j...->...\", P, R, alpha))\n", "\n", "\n", "def compute_rotation_matrix(phsp: DataSample) -> jnp.ndarray:\n", " ϕ = phsp[\"phi\"]\n", " θ = jnp.arccos(phsp[\"cos_theta\"])\n", " χ = phsp[\"chi\"]\n", " return jnp.einsum(\"ki...,ij...,j...->k...\", Rz(ϕ), Ry(θ), Rz(χ))\n", "\n", "\n", "def Rz(angle: jnp.ndarray) -> jnp.ndarray:\n", " n_events = len(angle)\n", " ones = jnp.ones(n_events)\n", " zeros = jnp.zeros(n_events)\n", " return jnp.array(\n", " [\n", " [+jnp.cos(angle), -jnp.sin(angle), zeros],\n", " [+jnp.sin(angle), +jnp.cos(angle), zeros],\n", " [zeros, zeros, ones],\n", " ]\n", " )\n", "\n", "\n", "def Ry(angle: jnp.ndarray) -> jnp.ndarray:\n", " n_events = len(angle)\n", " ones = jnp.ones(n_events)\n", " zeros = 
jnp.zeros(n_events)\n", " return jnp.array(\n", " [\n", " [+jnp.cos(angle), zeros, +jnp.sin(angle)],\n", " [zeros, ones, zeros],\n", " [-jnp.sin(angle), zeros, +jnp.cos(angle)],\n", " ]\n", " )" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "P = (+0.2165, +0.0108, -0.665)\n", "I = compute_polarized_intensity(\n", " *P,\n", " I0=interpolate_intensity(PHSP, model_id=0),\n", " alpha=interpolate_polarimetry_field(PHSP, model_id=0),\n", " phsp=PHSP,\n", ")\n", "I /= jnp.mean(I) # normalized times N for log likelihood" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "source_hidden": true }, "tags": [ "hide-input", "full-width" ] }, "outputs": [], "source": [ "plt.rc(\"font\", size=18)\n", "use_mpl_latex_fonts()\n", "\n", "fig, axes = plt.subplots(figsize=(15, 4), ncols=3)\n", "fig.tight_layout()\n", "for ax in axes:\n", " ax.set_yticks([])\n", "axes[0].hist(PHSP[\"phi\"], weights=I, bins=80)\n", "axes[1].hist(PHSP[\"cos_theta\"], weights=I, bins=80)\n", "axes[2].hist(PHSP[\"chi\"], weights=I, bins=80)\n", "axes[0].set_xlabel(R\"$\\phi$\")\n", "axes[1].set_xlabel(R\"$\\cos\\theta$\")\n", "axes[2].set_xlabel(R\"$\\chi$\")\n", "plt.show()\n", "\n", "fig, ax = plt.subplots(figsize=(12, 3))\n", "ax.hist2d(PHSP[\"sigma2\"], PHSP[\"sigma1\"], weights=I, bins=100, cmin=1)\n", "ax.set_xlabel(R\"$\\sigma_2$\")\n", "ax.set_ylabel(R\"$\\sigma_1$\")\n", "ax.set_aspect(\"equal\")\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Using the exported polarimeter grid\n", "\n", "The generated distribution is now assumed to be a _measured distribution_ $I$ with unknown polarization $\\vec{P}$. It is shown below that the actual $\\vec{P}$ with which the distribution was generated can be found by performing a fit on Eq. {eq}`eq:master.intensity`. This is done with [`iminuit`](https://iminuit.rtfd.io), starting with a certain 'guessed' value for $\\vec{P}$ as initial parameters.\n", "\n", "To avoid having to generate a hit-and-miss intensity test distribution, the parameters $\\vec{P} = \\left(P_x, P_y, P_z\\right)$ are optimized with regard to a **weighted negative log likelihood estimator**:\n", "\n", "$$\n", "\\mathrm{NLL} = -\\sum_i w_i \\log I_{i,\\vec{P}}\\left(\\phi,\\theta,\\chi;\\tau\\right)\\,.\n", "$$ (eq:weighted-nll)\n", "\n", "with the normalized intensities of the generated distribution taken as weights:\n", "\n", "$$\n", "w_i = n\\,I_i\\,\\big/\\,\\sum_j^n I_j\\,,\n", "$$ (eq:intensity-as-nll-weight)\n", "\n", "such that $\\sum w_i = n$. To propagate uncertainties, a fit is performed using the exported grids of each alternative model." 
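] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that these weights are exactly the intensities computed above: the rescaling `I /= jnp.mean(I)` already ensures that $\\sum_i I_i = n$, so the weighting plays the role of the hit-and-miss sample without discarding any events. A small sketch (not used in the fits below) makes this explicit:\n", "\n", "```python\n", "n = len(I)\n", "weights = n * I / jnp.sum(I)  # Eq. (eq:intensity-as-nll-weight)\n", "assert jnp.allclose(weights, I)  # identical, because mean(I) was rescaled to 1\n", "assert jnp.isclose(jnp.sum(weights), n)\n", "```"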
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "P_GUESS = (+0.3, -0.3, +0.4)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "mystnb": { "code_prompt_show": "Fit polarization with full polarimeter field" }, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "def perform_field_fit(phsp: DataSample, model_id: int) -> iminuit.Minuit:\n", " I0 = interpolate_intensity(phsp, model_id)\n", " alpha = interpolate_polarimetry_field(phsp, model_id)\n", "\n", " def weighted_nll(Px: float, Py: float, Pz: float) -> float:\n", " I_new = compute_polarized_intensity(Px, Py, Pz, I0, alpha, phsp)\n", " I_new /= jnp.sum(I_new)\n", " estimator_value = -jnp.sum(jnp.log(I_new) * I)\n", " return estimator_value\n", "\n", " optimizer = iminuit.Minuit(weighted_nll, *P_GUESS)\n", " optimizer.errordef = optimizer.LIKELIHOOD\n", " return optimizer.migrad()\n", "\n", "\n", "SYST_FIT_RESULTS_FIELD = [\n", " perform_field_fit(PHSP, i)\n", " for i in tqdm(range(18), desc=\"Performing fits\", disable=NO_TQDM)\n", "]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "source_hidden": true }, "mystnb": { "code_prompt_show": "Show Minuit fit result for nominal model" }, "tags": [ "hide-cell" ] }, "outputs": [], "source": [ "SYST_FIT_RESULTS_FIELD[0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "source_hidden": true }, "tags": [ "scroll-input", "hide-input" ] }, "outputs": [], "source": [ "def extract_polarizations(fit_results: list[iminuit.Minuit]) -> np.ndarray:\n", " return np.array([extract_polarization(fit) for fit in fit_results])\n", "\n", "\n", "def extract_polarization(fit_result: iminuit.Minuit) -> tuple[float, float, float]:\n", " return tuple(p.value for p in fit_result.params)\n", "\n", "\n", "def render_fit_results(\n", " fit_results: list[iminuit.Minuit],\n", " compare: bool = False,\n", ") -> str:\n", " P_syst = 100 * extract_polarizations(fit_results)\n", " P_nominal = P_syst[0]\n", " P_max = (P_syst[1:] - P_nominal).max(axis=0)\n", " P_min = (P_syst[1:] - P_nominal).min(axis=0)\n", " if compare:\n", " np.testing.assert_array_almost_equal(P_nominal, 100 * np.array(P), decimal=2)\n", "\n", " def render_p(i: int) -> str:\n", " return f\"{P_nominal[i]:+.2f}_{{{P_min[i]:+.2f}}}^{{{P_max[i]:+.2f}}}\"\n", "\n", " src = Rf\"\"\"\n", " \\begin{{array}}{{ccc}}\n", " P_x &=& {render_p(0)} \\\\\n", " P_y &=& {render_p(1)} \\\\\n", " P_z &=& {render_p(2)} \\\\\n", " \\end{{array}}\n", " \"\"\"\n", " return dedent(src).strip()\n", "\n", "\n", "src = Rf\"\"\"\n", "The polarization $\\vec{{P}}$ is determined to be (in %):\n", "\n", "$$\n", "{render_fit_results(SYST_FIT_RESULTS_FIELD, compare=True)}\n", "$$\n", "\n", "with the upper and lower sign being the systematic extrema uncertainties as determined by\n", "the alternative models.\n", "\"\"\"\n", "Markdown(src)" ] }, { "cell_type": "markdown", "metadata": { "jupyter": { "source_hidden": true }, "tags": [] }, "source": [ "This is to be compared with the model uncertainties reported by {cite}`LHCb-PAPER-2022-002`:\n", "\n", "$$\n", "\\begin{array}{ccc}\n", "P_x &=& +21.65 \\pm 0.36 \\\\\n", "P_y &=& +1.08 \\pm 0.09 \\\\\n", "P_z &=& -66.5 \\pm 1.1. 
\\\\\n", "\\end{array}\n", "$$\n", "\n", "The polarimeter values for each model are (in %):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "source_hidden": true }, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "def render_all_polarizations(fit_results: list[iminuit.Minuit]) -> Latex:\n", " src = R\"\"\"\n", " \\begin{array}{r|ccc|ccc}\n", " \\textbf{Model} & \\mathbf{P_x} & \\mathbf{P_y} & \\mathbf{P_z}\n", " & \\mathbf{\\Delta P_x} & \\mathbf{\\Delta P_y} & \\mathbf{\\Delta P_z} \\\\\n", " \\hline\n", " \"\"\"\n", " P_fit_values = np.array([extract_polarization(fit) for fit in fit_results])\n", " P_fit_values *= 100\n", " Px_nom, Py_nom, Pz_nom = P_fit_values[0]\n", " for i, (Px, Py, Pz) in enumerate(P_fit_values):\n", " src += Rf\" \\textbf{{{i}}} & {Px:+.2f} & {Py:+.2f} & {Pz:+.1f} & \"\n", " if i != 0:\n", " src += Rf\"{Px-Px_nom:+.2f} & {Py-Py_nom:+.2f} & {Pz-Pz_nom:+.2f}\"\n", " src += R\" \\\\\" \"\\n\"\n", " src += R\"\\end{array}\"\n", " src = dedent(src).strip()\n", " return Latex(src)\n", "\n", "\n", "render_all_polarizations(SYST_FIT_RESULTS_FIELD)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Using the averaged polarimeter vector\n", "\n", "Equation {eq}`eq:master.intensity` requires knowledge about the aligned polarimeter field $\\vec\\alpha(\\tau)$ and intensity distribution $I_0(\\tau)$ over all kinematic variables $\\tau$. It is, however, also possible to compute the differential decay rate from the averaged polarimeter vector $\\vec{\\overline{\\alpha}}$ (see {ref}`uncertainties:Average polarimetry values`). The equivalent formula to Eq. {eq}`eq:master.intensity` is:\n", "\n", "$$\n", "\\frac{8\\pi^2}{\\Gamma}\\frac{\\mathrm{d}^3 \\Gamma}{\\mathrm{d}\\phi\\,\\mathrm{d}\\cos\\theta\\,\\mathrm{d}\\chi} = 1+\\sum_{i,j}P_i R_{ij}(\\phi, \\theta, \\chi) \\overline{\\alpha}_j\\,,\n", "$$ (eq:I.alpha.averaged)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "source_hidden": true }, "mystnb": { "code_prompt_show": "Code for computing differential decay rate" }, "tags": [ "hide-input", "scroll-input" ] }, "outputs": [], "source": [ "def get_averaged_polarimeters(polar: bool = False) -> jnp.ndarray:\n", " with open(\"_static/export/averaged-polarimeter-vectors.json\") as f:\n", " json_data = json.load(f)\n", " data = json_data[\"systematics\"]\n", " typ = \"polar\" if polar else \"cartesian\"\n", " items = list(\"xyz\")\n", " if polar:\n", " items = (\"norm\", \"theta\", \"phi\")\n", " return jnp.array([data[typ][i] for i in items]).T\n", "\n", "\n", "def compute_differential_decay_rate(\n", " Px: float,\n", " Py: float,\n", " Pz: float,\n", " averaged_alpha: jnp.array,\n", " phsp: DataSample,\n", ") -> jnp.array:\n", " P = jnp.array([Px, Py, Pz])\n", " R = compute_rotation_matrix(phsp)\n", " return 1 + jnp.einsum(\"i,ij...,j...->...\", P, R, averaged_alpha)\n", "\n", "\n", "SYST_AVERAGED_POLARIMETERS = get_averaged_polarimeters()\n", "SYST_POLAR_POLARIMETERS = get_averaged_polarimeters(polar=True)\n", "assert SYST_AVERAGED_POLARIMETERS.shape == (18, 3)\n", "assert SYST_POLAR_POLARIMETERS.shape == (18, 3)\n", "\n", "diff_rate = compute_differential_decay_rate(*P, SYST_AVERAGED_POLARIMETERS[0], PHSP)\n", "assert diff_rate.shape == (N_EVENTS,)\n", "del diff_rate" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We use this equation along with Eq. {eq}`eq:weighted-nll` to determine $\\vec{P}$ with {class}`~iminuit.Minuit`." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "mystnb": { "code_prompt_show": "Fit polarization with averaged polarimeter" }, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "def perform_averaged_fit(\n", " phsp: DataSample, averaged_alpha: tuple[float, float, float]\n", ") -> iminuit.Minuit:\n", " averaged_alpha = jnp.array(averaged_alpha)\n", "\n", " def weighted_nll(Px: float, Py: float, Pz: float) -> float:\n", " I_new = compute_differential_decay_rate(Px, Py, Pz, averaged_alpha, phsp)\n", " I_new /= jnp.sum(I_new)\n", " estimator_value = -jnp.sum(jnp.log(I_new) * I)\n", " return estimator_value\n", "\n", " optimizer = iminuit.Minuit(weighted_nll, *P_GUESS)\n", " optimizer.errordef = optimizer.LIKELIHOOD\n", " return optimizer.migrad()\n", "\n", "\n", "SYST_FIT_RESULTS_AVERAGED = [\n", " perform_averaged_fit(PHSP, averaged_alpha)\n", " for averaged_alpha in tqdm(\n", " SYST_AVERAGED_POLARIMETERS, desc=\"Performing fits\", disable=NO_TQDM\n", " )\n", "]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "source_hidden": true }, "mystnb": { "code_prompt_show": "Show Minuit fit result for nominal model" }, "tags": [ "hide-cell" ] }, "outputs": [], "source": [ "SYST_FIT_RESULTS_AVERAGED[0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "source_hidden": true }, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "src = Rf\"\"\"\n", "Using the averaged polarimeter vector $\\vec{{\\overline{{\\alpha}}}}$, the\n", "polarization $\\vec{{P}}$ is determined to be (in %):\n", "\n", "$$\n", "{render_fit_results(SYST_FIT_RESULTS_AVERAGED)}\\,.\n", "$$\n", "\n", "\n", "The polarimeter values for each model are (in %):\n", "\"\"\"\n", "Markdown(src)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "source_hidden": true }, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "render_all_polarizations(SYST_FIT_RESULTS_AVERAGED)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Propagating extrema uncertainties\n", "\n", "In {numref}`uncertainties:Average polarimetry values`, the averaged aligned polarimeter vectors with systematic model uncertainties were found to be:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "source_hidden": true }, "tags": [ "hide-input", "scroll-input" ] }, "outputs": [], "source": [ "def get_alpha_systematics(\n", " all_values: jnp.ndarray,\n", ") -> tuple[tuple[float, float], tuple[float, float], tuple[float, float]]:\n", " central = all_values[0]\n", " syst = np.abs(all_values - central).max(axis=0)\n", " return tuple((μ, σ) for μ, σ in zip(central.tolist(), syst.tolist()))\n", "\n", "\n", "def render_min_max_averaged_polarimeter() -> Latex:\n", " cartesian = get_alpha_systematics(SYST_AVERAGED_POLARIMETERS)\n", " polar = get_alpha_systematics(SYST_POLAR_POLARIMETERS)\n", " src = R\"\"\"\n", " \\begin{array}{c|r|c}\n", " \\textbf{observable} & \\textbf{central} & \\textbf{stat} + \\textbf{syst} \\\\\n", " \\hline\n", " \"\"\"\n", " src = dedent(src)\n", " for xyz, (central, systematic) in zip(\"xyz\", cartesian):\n", " src += Rf\" \\overline{{\\alpha}}_{xyz} \\; \\left[10^{{-3}}\\right]\"\n", " src += Rf\" & {1e3*central:+6.1f} & {1e3*systematic:4.1f}\"\n", " src += R\" \\\\\" \"\\n\"\n", " src += R\" \\hline\" \"\\n\"\n", " polar_labels = [\n", " R\"\\left|\\overline{\\alpha}\\right|\",\n", " R\"\\theta(\\overline{\\alpha})\",\n", " R\"\\phi(\\overline{\\alpha})\",\n", " ]\n", " for label, (central, 
systematic) in zip(polar_labels, polar):\n", "        factor = \"10^{-3}\" if \"left\" in label else R\"\\pi\"\n", "        src += Rf\" {label:30s} \\; \\left[{factor:7s}\\right]\"\n", "        if \"left\" in label:\n", "            src += Rf\" & {1e3*central:6.1f} & {1e3*systematic:5.1f}\"\n", "        else:\n", "            src += Rf\" & {central/π:+6.3f} & {systematic/π:5.3f}\"\n", "        src += R\" \\\\\" \"\\n\"\n", "    src += R\"\\end{array}\"\n", "    return Latex(src.strip())\n", "\n", "\n", "render_min_max_averaged_polarimeter()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This list of uncertainties is determined by the _extreme deviations_ of the alternative models, whereas the uncertainties on the polarizations determined in {numref}`zz.polarization-fit:Using the averaged polarimeter vector` are determined by the averaged polarimeters of _all_ alternative models. The tables below show that there is a loss in systematic uncertainty when we propagate uncertainties by computing $\\vec{P}$ _only_ with combinations of $\\alpha_i - \\sigma_i, \\alpha_i + \\sigma_i$ for each $i \\in x, y, z$." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "source_hidden": true }, "mystnb": { "code_prompt_show": "Perform fit with propagated α values" }, "tags": [ "scroll-input", "hide-input" ] }, "outputs": [], "source": [ "def polar_to_cartesian(\n", "    r: float, theta: float, phi: float\n", ") -> tuple[float, float, float]:\n", "    return (\n", "        r * np.sin(theta) * np.cos(phi),\n", "        r * np.sin(theta) * np.sin(phi),\n", "        r * np.cos(theta),\n", "    )\n", "\n", "\n", "def perform_combinatorics_fit(\n", "    alpha_array: jnp.ndarray, polar: bool\n", ") -> tuple[list[tuple[float, float, float]], list[tuple[float, float, float]]]:\n", "    alpha_with_syst = get_alpha_systematics(alpha_array)\n", "    alpha_combinations = tuple((μ - σ, μ + σ) for μ, σ in alpha_with_syst)\n", "    alphas = []\n", "    polarizations = []\n", "    items = list(itertools.product(*alpha_combinations))\n", "    for averaged_alpha in tqdm(items):\n", "        alphas.append(averaged_alpha)\n", "        if polar:\n", "            averaged_alpha = polar_to_cartesian(*averaged_alpha)\n", "        fit_result = perform_averaged_fit(PHSP, averaged_alpha)\n", "        polarizations.append(extract_polarization(fit_result))\n", "    return alphas, polarizations\n", "\n", "\n", "(\n", "    PROPAGATED_POLARIMETERS_CARTESIAN,\n", "    PROPAGATED_POLARIZATIONS_CARTESIAN,\n", ") = perform_combinatorics_fit(SYST_AVERAGED_POLARIMETERS, polar=False)\n", "(\n", "    PROPAGATED_POLARIMETERS_POLAR,\n", "    PROPAGATED_POLARIZATIONS_POLAR,\n", ") = perform_combinatorics_fit(SYST_POLAR_POLARIMETERS, polar=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "source_hidden": true }, "tags": [ "hide-input", "scroll-input" ] }, "outputs": [], "source": [ "def render_propagated_polarization(\n", "    polarizations: list[tuple[float, float, float]], polar: bool\n", ") -> str:\n", "    nominal_p = extract_polarization(SYST_FIT_RESULTS_AVERAGED[0])\n", "    diff_combinatorics = np.abs(np.array(polarizations) - np.array(nominal_p))\n", "    px, py, pz = 100 * np.array(nominal_p)\n", "    σx, σy, σz = 100 * diff_combinatorics.max(axis=0)\n", "    src = Rf\"\"\"\n", "    \\begin{{array}}{{ccrcr}}\n", "    P_x &=& {px:+6.2f} &\\pm& {σx:5.2f} \\\\\n", "    P_y &=& {py:+6.2f} &\\pm& {σy:5.2f} \\\\\n", "    P_z &=& {pz:+6.2f} &\\pm& {σz:5.2f} \\\\\n", "    \\end{{array}}\n", "    \"\"\"\n", "    return dedent(src).strip()\n", "\n", "\n", "src = Rf\"\"\"\n", "Polarizations from $\\overline{{\\alpha}}$ in cartesian coordinates:\n", "\n", "$$\n",
"{render_propagated_polarization(PROPAGATED_POLARIZATIONS_CARTESIAN, polar=False)}\n", "$$\n", "\n", "Polarizations from $\\overline{{\\alpha}}$ in polar coordinates:\n", "\n", "$$\n", "{render_propagated_polarization(PROPAGATED_POLARIZATIONS_POLAR, polar=True)}\n", "$$\n", "\"\"\"\n", "Markdown(src)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "source_hidden": true }, "tags": [ "hide-input", "scroll-input" ] }, "outputs": [], "source": [ "def render_combinatorics_fit(\n", " alphas: list[tuple[float, float, float]],\n", " polarizations: list[tuple[float, float, float]],\n", " polar: bool = False,\n", ") -> None:\n", " src = R\"\\begin{array}{rrr|rrr|rrr}\" \"\\n \"\n", " if polar:\n", " src += R\"|\\alpha| & \\theta\\;[\\pi] & \\phi\\;[\\pi]\"\n", " else:\n", " src += R\"\\alpha_x & \\alpha_y & \\alpha_z\"\n", " src += R\" & P_x & P_y & P_z & \\Delta P_x & \\Delta P_y & \\Delta P_z \\\\ \" \"\\n\"\n", " src += R\" \\hline\" \"\\n \"\n", " if polar:\n", " r, θ, φ = SYST_POLAR_POLARIMETERS[0]\n", " nominal_values = (f\"{1e3*r:.1f}\", f\"{θ/π:.3f}\", f\"{φ/π:.3f}\")\n", " else:\n", " αx, αy, αz = 1e3 * SYST_AVERAGED_POLARIMETERS[0]\n", " nominal_values = (f\"{αx:.1f}\", f\"{αy:.1f}\", f\"{αz:.1f}\")\n", " src += \" & \".join(Rf\"\\color{{gray}}{{{v}}}\" for v in nominal_values) + \" & \"\n", " nominal_α = 1e3 * SYST_AVERAGED_POLARIMETERS[0]\n", " if polar:\n", " nominal_α = (nominal_α[0], 1e-3 * nominal_α[1] / π)\n", " nominal_p = extract_polarization(SYST_FIT_RESULTS_AVERAGED[0])\n", " nominal_p = 100 * np.array(nominal_p)\n", " src += \" & \".join(Rf\"\\color{{gray}}{{{v:+.2f}}}\" for v in nominal_p)\n", " src += R\" \\\\\" \"\\n\"\n", " for alpha, polarization in zip(alphas, polarizations):\n", " polarization = 100 * np.array(polarization)\n", " px, py, pz = polarization\n", " dx, dy, dz = polarization - nominal_p\n", " if polar:\n", " r, θ, φ = np.array(alpha)\n", " src += Rf\" {1e3*r:4.1f} & {θ/π:+5.2f} & {φ/π:+6.2f} \"\n", " else:\n", " αx, αy, αz = 1e3 * np.array(alpha)\n", " src += Rf\" {αx:+5.1f} & {αy:+5.1f} & {αz:+6.1f} \"\n", " src += Rf\"& {px:+5.1f} & {py:+5.2f} & {pz:+5.1f} \"\n", " src += Rf\"& {dx:+5.2f} & {dy:+5.2f} & {dz:+5.1f} \\\\\" \"\\n\"\n", " src += R\"\\end{array}\"\n", " display(Latex(src))\n", "\n", "\n", "render_combinatorics_fit(\n", " PROPAGATED_POLARIMETERS_CARTESIAN,\n", " PROPAGATED_POLARIZATIONS_CARTESIAN,\n", ")\n", "render_combinatorics_fit(\n", " PROPAGATED_POLARIMETERS_POLAR,\n", " PROPAGATED_POLARIZATIONS_POLAR,\n", " polar=True,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Increase in uncertainties\n", "\n", "When the polarization is determined with the averaged aligned polarimeter vector $\\vec{\\overline{\\alpha}}$ instead of the aligned polarimeter vector field $\\vec\\alpha(\\tau)$ over all Dalitz variables $\\tau$, the uncertainty is expected to increase by a factor $S_0 / \\overline{S}_0 \\approx 3$, with:\n", "\n", "$$\n", "S_0^2 = 3 \\int I_0 \\left|\\vec{\\alpha}\\right|^2 \\mathrm{d}^n \\tau \\,\\big /\\, \\int I_0\\,\\mathrm{d}^n \\tau \\\\\n", "\\overline{S}_0^2 = 3(\\overline{\\alpha}_x^2+\\overline{\\alpha}_y^2+\\overline{\\alpha}_z^2)\\,.\n", "$$ (eq:s0.integrals)\n", "\n", "The following table shows the maximal deviation (systematic uncertainty) of the determined polarization $\\vec{P}$ for each alternative model (determined with the $\\overline{\\alpha}$-values in cartesian coordinates). 
The second and third column indicate the systematic uncertainty (in %) as determined with the full vector field and with the averaged vector, respectively." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "source_hidden": true }, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "def render_uncertainty_increase() -> Latex:\n", " src = R\"\"\"\n", " \\begin{array}{c|ccc}\n", " \\sigma_\\mathrm{{model}}\n", " & \\vec\\alpha(\\tau) & \\vec{\\overline{\\alpha}} & \\color{gray}{\\text{factor}} \\\\\n", " \\hline\n", " \"\"\"\n", " src = dedent(src)\n", " syst_P_field = 100 * extract_polarizations(SYST_FIT_RESULTS_FIELD)\n", " syst_P_avrgd = 100 * extract_polarizations(SYST_FIT_RESULTS_AVERAGED)\n", " for i, xyz in enumerate(\"xyz\"):\n", " src += f\" P_{xyz}\"\n", " syst_sigma_field = np.abs(syst_P_field[:, i] - syst_P_field[0, i]).max()\n", " syst_sigma_avrgd = np.abs(syst_P_avrgd[:, i] - syst_P_avrgd[0, i]).max()\n", " src += Rf\" & {syst_sigma_field:.2f} & {syst_sigma_avrgd:.2f}\"\n", " src += (\n", " Rf\" & \\color{{gray}}{{{syst_sigma_avrgd/syst_sigma_field:.1f}}} \\\\\" \"\\n\"\n", " )\n", " src += R\"\\end{array}\"\n", " return Latex(src)\n", "\n", "\n", "render_uncertainty_increase()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "source_hidden": true }, "tags": [ "scroll-input", "hide-input", "full-width" ] }, "outputs": [], "source": [ "def plot_polarization_distribution():\n", " with open(\"../data/model-definitions.yaml\") as f:\n", " yaml_data = yaml.safe_load(f)\n", " model_titles = [\"
\".join(wrap(t, width=60)) for t in yaml_data]\n", " P_field = 100 * extract_polarizations(SYST_FIT_RESULTS_FIELD).T\n", " P_avrgd = 100 * extract_polarizations(SYST_FIT_RESULTS_AVERAGED).T\n", "\n", " template_left = ( # hide trace box\n", " \"%{text}
\"\n", " \"Px = %{x:.2f}, \"\n", " \"Py = %{y:.2f}\"\n", " \"\"\n", " )\n", " template_right = ( # hide trace box\n", " \"%{text}
\"\n", " \"Pz = %{x:.2f}, \"\n", " \"Py = %{y:.2f}\"\n", " \"\"\n", " )\n", " field_group = dict(\n", " legendgroup=\"field\",\n", " legendgrouptitle_text=\"Determined from α(τ) field\",\n", " )\n", " averaged_group = dict(\n", " legendgroup=\"averaged\",\n", " legendgrouptitle_text=\"Determined from ɑ̅ vector\",\n", " )\n", "\n", " fig = make_subplots(cols=2, horizontal_spacing=0.02, shared_yaxes=True)\n", "\n", " def plot_alternative_values(col: int, field: bool, show: bool = True) -> None:\n", " is_left = col == 1\n", " legend_group = field_group if field else averaged_group\n", " p = P_field[:, 1:] if field else P_avrgd[:, 1:]\n", " fig.add_trace(\n", " go.Scatter(\n", " **legend_group,\n", " hovertemplate=template_left,\n", " mode=\"markers\",\n", " marker_color=\"blue\" if field else \"green\",\n", " marker_opacity=0.6,\n", " marker_size=6,\n", " name=\"Alternative models\",\n", " showlegend=show,\n", " text=model_titles[1:],\n", " x=p[0] if is_left else p[2],\n", " y=p[1],\n", " ),\n", " col=col,\n", " row=1,\n", " )\n", "\n", " def plot_nominal_value(col: int, field: bool, show: bool = True) -> None:\n", " is_left = col == 1\n", " legend_group = field_group if field else averaged_group\n", " p = P_field[:, 0] if field else P_avrgd[:, 0]\n", " fig.add_trace(\n", " go.Scatter(\n", " **legend_group,\n", " hovertemplate=template_left if is_left else template_right,\n", " mode=\"markers\",\n", " marker_line_color=\"black\",\n", " marker_line_width=2,\n", " marker_color=\"blue\" if field else \"green\",\n", " marker_size=8,\n", " name=\"Nominal model\",\n", " showlegend=show,\n", " text=model_titles,\n", " x=[p[0] if is_left else p[2]],\n", " y=[p[1]],\n", " ),\n", " col=col,\n", " row=1,\n", " )\n", "\n", " plot_alternative_values(col=1, field=True, show=False)\n", " plot_alternative_values(col=1, field=False, show=False)\n", " plot_alternative_values(col=2, field=True)\n", " plot_alternative_values(col=2, field=False)\n", " plot_nominal_value(col=1, field=True, show=False)\n", " plot_nominal_value(col=1, field=False, show=False)\n", " plot_nominal_value(col=2, field=True)\n", " plot_nominal_value(col=2, field=False)\n", "\n", " fig.update_layout(\n", " height=500,\n", " title_text=\"Distribution of polarization values (systematics)\",\n", " xaxis=dict(title=\"Px [%]\"),\n", " yaxis=dict(title=\"Py [%]\"),\n", " xaxis2=dict(title=\"Pz [%]\"),\n", " )\n", " fig.show()\n", " fig.update_layout(width=1000)\n", " fig.write_image(\"_static/images/polarization-distribution-systematics.svg\")\n", "\n", "\n", "plot_polarization_distribution()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ":::{only} latex\n", "![](_static/images/polarization-distribution-systematics.svg)\n", ":::" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.15" }, "myst": { "all_links_external": true } }, "nbformat": 4, "nbformat_minor": 4 }