Browsing by Subject "Physics"
Item Open Access (0,2) hybrid models(Journal of High Energy Physics, 2018-09-01) Bertolini, M; Plesser, MR© 2018, The Author(s). We introduce a class of (0,2) superconformal field theories based on hybrid geometries, generalizing various known constructions. We develop techniques for the computation of the complete massless spectrum when the theory can be interpreted as determining a perturbative heterotic string compactification. We provide evidence for surprising properties regarding RG flows and IR accidental symmetries in (0,2) hybrid CFTs. We also study the conditions for embedding a hybrid theory in a particular class of gauged linear sigma models. This perspective suggests that our construction generates models which cannot be realized or analyzed by previously known methods.Item Open Access A CG-FFT Based Fast Full Wave Imaging Method and its Potential Industrial Applications(2015) Yu, ZhiruThis dissertation focuses on an FFT-based forward EM solver and its application in inverse problems. The main contributions of this work are twofold. On the one hand, it presents the first scaled lab experiment system in the oil and gas industry for through-casing hydraulic fracture evaluation. This system was established to validate the feasibility of contrast-enhanced fracture evaluation. On the other hand, this work proposes an FFT-based VIE solver for hydraulic fracture evaluation; such an efficient solver is needed for the numerical analysis of this problem. The solver is then generalized to accommodate scattering simulations for anisotropic inhomogeneous magnetodielectric objects. The inverse problem on anisotropic objects is also studied.
Before going into the details of specific applications, some background knowledge is presented. This dissertation starts with an introduction to inverse problems. Then algorithms for forward and inverse problems are discussed. The discussion of the forward problem focuses on the VIE formulation and a frequency domain solver. The discussion of inverse problems focuses on iterative methods.
The rest of the dissertation is organized by the two categories of inverse problems, namely the inverse source problem and the inverse scattering problem.
The inverse source problem is studied via an application in microelectronics. In this application, an FFT-based inverse source solver is applied to process near-field data obtained by near-field scanners. Examples show that, with the help of this inverse source solver, the resolution of unknown current source images on a device under test is greatly improved. Due to the improvement in resolution, more flexibility is given to the near-field scan system.
Both the forward and inverse solvers for inverse scattering problems are studied in detail. As a forward solver for inverse scattering problems, a fast FFT-based method for solving the VIE of magnetodielectric objects with large electromagnetic contrasts is presented, motivated by the increasing interest in contrast-enhanced full wave EM imaging. This newly developed VIE solver assigns basis functions of different orders to expand flux densities and vector potentials; it is therefore called the mixed-order BCGS-FFT method. The mixed-order BCGS-FFT method maintains the benefits of high order basis functions for the VIE while keeping the correct boundary conditions for flux densities and vector potentials. Examples show that this method performs excellently on both isotropic and anisotropic objects with high contrasts. Examples also verify that this method is valid at both high and low frequencies. Based on the mixed-order BCGS-FFT method, an inverse scattering solver for anisotropic objects is studied. The inverse solver is formulated and solved by the variational Born iterative method. An example given in this section shows a successful inversion on an anisotropic magnetodielectric object.
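As an illustration of the computational core shared by CG-FFT and BCGS-FFT solvers (a generic sketch, not the author's code): because the background Green's function is translationally invariant, the discretized integral operator acts as a 3D convolution, so each Krylov iteration can apply it with zero-padded FFTs in O(N log N) instead of O(N^2). A minimal numpy version, with the kernel layout assumed:

import numpy as np

def greens_matvec(kernel, x):
    """Apply the discretized Green's operator to a field x on an n1 x n2 x n3 grid.

    kernel : Green's function sampled on the (2n-1)-point displacement grid
             (displacement d stored at index d + n - 1); an assumed layout.
    x      : contrast-weighted field on the n-point grid.
    """
    n1, n2, n3 = x.shape
    # Zero-pad to the linear-convolution size to avoid circular wrap-around.
    shape = [2 * n - 1 for n in (n1, n2, n3)]
    X = np.fft.rfftn(x, shape)
    K = np.fft.rfftn(kernel, shape)
    y = np.fft.irfftn(K * X, shape)
    # Crop back to the original grid (displacement origin at kernel center).
    return y[n1 - 1:2 * n1 - 1, n2 - 1:2 * n2 - 1, n3 - 1:2 * n3 - 1]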
Finally, a lab-scale hydraulic fracture evaluation system for oil/gas reservoirs, based on the previously discussed inverse solver, is presented. This system was set up to verify the numerical results obtained from the previously described inverse solvers. These scaled experiments verify the accuracy of the forward solver as well as the performance of the inverse solver. Examples show that the inverse scattering model is able to evaluate contrast-enhanced hydraulic fractures in a shale formation. Furthermore, this system, for the first time in the oil and gas industry, verifies that hydraulic fractures can be imaged through a metallic casing.
Item Open Access A Convolutional Neural Network for SPECT Image Reconstruction(2022) Guan, ZixuPurpose: Single photon emission computed tomography (SPECT) is a functional nuclear medicine imaging technique commonly used in the clinic. However, it suffers from low resolution and high noise because of its physical structure and photon scatter and attenuation. This research aims to develop a compact neural network that reconstructs SPECT images from projection data with better resolution and lower noise. Methods and Materials: We developed a MATLAB program to generate 2-D brain phantoms, producing a total of 20,000 phantoms and their corresponding projection data. The projection data were processed with a Gaussian filter and Poisson noise to simulate the real clinical situation. Of these, 16,000 were used to train the neural network, 2,000 for validation, and the final 2,000 for testing. To simulate the real clinical situation, five groups of projection data with decreasing acquisition views were used to train the network. Inspired by SPECTnet, we used a two-step training strategy for the network design. The full-size phantom images (128×128 pixels) were first compressed into a vector (256×1) and then decompressed to full-size images again. This was achieved by an autoencoder (AE) consisting of an encoder and a decoder. The compressed vectors generated by the encoder serve as targets for the second network, which maps projections to compressed images; those compressed vectors corresponding to the projections are then reconstructed to full-size images by the decoder. Results: A testing dataset of 10,000 cases, divided into 5 groups with 360-degree, 180-degree, 150-degree, 120-degree, and 90-degree acquisitions, respectively, was reconstructed by the developed neural network, and the results were compared with those generated by the conventional FBP method. Compared with the FBP algorithm, the neural network provides reconstructed images with higher resolution and lower noise, even under limited-angle acquisitions. In addition, the new neural network performed better than SPECTnet. Conclusions: The network successfully reconstructs projection data into activity images. Especially for the groups whose view angles are less than 180 degrees, the images reconstructed by the neural network have the same excellent quality as images reconstructed from projection data over 360 degrees, with even higher efficiency than SPECTnet. Keywords: SPECT; SPECT image reconstruction; deep learning; convolutional neural network.
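A minimal PyTorch sketch of the two-step design described above; the 128×128 image size and 256-element code follow the abstract, while the layer widths and the projection-input size are illustrative assumptions, not the thesis architecture:

import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(        # 128x128 image -> 256-d code
            nn.Flatten(), nn.Linear(128 * 128, 1024), nn.ReLU(),
            nn.Linear(1024, 256))
        self.decoder = nn.Sequential(        # 256-d code -> 128x128 image
            nn.Linear(256, 1024), nn.ReLU(),
            nn.Linear(1024, 128 * 128), nn.Unflatten(1, (128, 128)))

    def forward(self, img):
        return self.decoder(self.encoder(img))

class Proj2Code(nn.Module):
    """Step 2: map projection data to the frozen encoder's 256-d code."""
    def __init__(self, n_proj_bins):         # n_proj_bins is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(n_proj_bins, 1024), nn.ReLU(),
            nn.Linear(1024, 256))

    def forward(self, proj):
        return self.net(proj)

# Step 1: train the autoencoder on phantom images (MSE against the image itself).
# Step 2: train Proj2Code with the encoder's codes as regression targets, then
# reconstruct: image = ae.decoder(proj2code(projection)).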
Item Open Access A Deep Learning Model for V50%, V60%, and V66.7% Prediction in LINAC-based Treatment Planning of Single-Iso-Multiple-Targets (SIMT) Stereotactic Radiosurgery (SRS)(2023) Khazaieli, MercedehBrain metastases are a common complication of many types of cancer, including lung, breast, and melanoma. Approximately 30-40% of patients develop brain metastases that originate from primary systemic tumors during the course of cancer treatment. One treatment method is LINAC-based single-isocenter multiple-target (SIMT) stereotactic radiosurgery (SRS). High plan quality has been one of the important goals in radiotherapy treatment planning. Generation of a high quality SRS treatment plan, particularly a SIMT plan, usually requires planners’ extensive planning experience, multiple runs of planning and trial-and-error, and frequent communication among planners, physicians, and other radiation oncology team members. In clinical practice with potentially limited resources, SIMT SRS planning can be time-consuming and may show large variations in plan dosimetric quality. Therefore, an estimation of the achievable dosimetric outcome can help reduce plan quality variation and improve planning efficiency. Assuming 20 Gy in a single fraction of treatment, the volumes of normal brain tissue receiving 10 Gy (V50%), 12 Gy (V60%), and 13 Gy (V66.7%) are known predictors of brain tissue toxicity, or radionecrosis. We developed deep learning networks for the prediction of V50%, V60%, and V66.7% based on each patient’s target delineation. A prediction of achievable V10Gy, V12Gy, and V13Gy (assuming 20 Gy x 1 fx) can assist physicians in the determination of fractionation schemes (i.e., single fx vs. multiple fx). Such predictions can be used as guidelines for planners to generate a SIMT plan more rapidly with reduced dosimetric variability. A key technical innovation of this work is the spherical projection design: by projecting the target distribution onto a spherical surface, the 3D target distribution is collapsed to a polar-azimuthal angular distribution map. This transformation enables a dimensional reduction of the deep learning input without losing volumetric information. Our results indicate promising potential, but further work is needed to improve the accuracy of our predictions.
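The spherical projection step lends itself to a compact illustration (a hedged numpy sketch; the bin counts and the isocenter-centered convention are assumptions, not the thesis's exact implementation):

import numpy as np

def spherical_projection(voxels_xyz, isocenter, n_theta=64, n_phi=128):
    """voxels_xyz: (N, 3) coordinates of target voxels; returns a 2D angular map."""
    r = voxels_xyz - isocenter                     # vectors from the isocenter
    rho = np.linalg.norm(r, axis=1)
    theta = np.arccos(np.clip(r[:, 2] / np.maximum(rho, 1e-9), -1, 1))  # polar
    phi = np.arctan2(r[:, 1], r[:, 0])                                  # azimuthal
    hist, _, _ = np.histogram2d(
        theta, phi, bins=[n_theta, n_phi],
        range=[[0, np.pi], [-np.pi, np.pi]])
    return hist  # (n_theta, n_phi) polar-azimuthal target-distribution map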
Item Open Access A Deep-Learning-based Multi-segment VMAT Plan Generation Algorithm from Patient Anatomy for Prostate Simultaneous Integrated Boost (SIB) Cases(2021) Zhu, QingyuanIntroduction: Several studies have realized fluence-map-prediction-based DL IMRT planning algorithms. However, DL-based VMAT planning remains unsolved. A main difficulty in DL-based VMAT planning is how to generate leaf sequences from the predicted radiation intensity maps: leaf sequences must be generated for a large number of control points and must meet the physical restrictions of the MLC. A previous study1 reported a DL algorithm that generates 64-beam IMRT plans to approximate VMAT plans with certain dose distributions as input. As a step forward, another study2 reported a DL algorithm that generates one-arc VMAT plans from patient anatomy, producing the MLC leaf sequence from thresholded predicted intensity maps. Building on that study, we developed an algorithm that converts DL-predicted intensity maps to multi-segment VMAT plans to improve on the performance of one-arc plans.
Methods: Our deep learning model utilizes a series of 2D projections of a patient’s dose prediction and contour structures to generate a multi-arc 360º dynamic MLC sequence in a VMAT plan. The backbone of this model is a novel U-net implementation which has a 4-resolution-step analysis path and a 4-resolution-step synthesis path. In the pretrained DL model, a total of 130 patients were involved, with 120 patients in the training group and 11 patients in the testing group. These patients were prescribed 70Gy/58.8Gy to the primary/boost PTVs in 28 fractions in a simultaneous integrated boost (SIB) regime. In this study, 7-8 arcs with the same collimator angle are used to simulate the predicted intensity maps. The predicted intensity maps are separated into 7-8 segments along the collimator angle, so the arcs can separately simulate the predicted intensity maps with independent weight factors. This separation also potentially allows the MLC leaves to reproduce steeper dose gradients in the predicted intensity maps. Results: After dose normalization (PTV70 V70Gy=95%), all 11 multi-segment test plans met institutional clinical guidelines for dose distribution outside the PTV. Bladder (V70Gy=5.3±3.3cc, V40Gy=16.1±8.6%) and rectum (V70Gy=4.5±2.3cc, V40Gy=33.4±8.1%) results in multi-segment plans were comparable with the commercial TPS plan results. 3D max dose results in the AVP-DSP plans (D1cc=112.6±1.9%) were higher than the commercial TPS plan results (D1cc=106.7±0.8%). On average, AVP-DSP used 600 seconds for plan generation, in contrast to current clinical practice (>20 minutes).
Conclusion: Results suggest that the multi-segment algorithm can generate a prostate VMAT plan with clinically acceptable dosimetric quality. The proposed multi-segment plan generation algorithm can achieve higher modulation and a lower maximum dose. With its high efficiency, the multi-segment approach may hold great potential for real-time planning applications after further validation.
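For illustration, the segment-splitting step described in the Methods can be sketched as follows (a hedged numpy toy, with the leaf-travel direction taken along the map columns as an assumption):

import numpy as np

def split_intensity_map(intensity, n_seg=7, weights=None):
    """Return per-segment maps whose weighted sum reproduces the input map."""
    weights = np.ones(n_seg) if weights is None else np.asarray(weights, float)
    segments = []
    for k, cols in enumerate(np.array_split(np.arange(intensity.shape[1]), n_seg)):
        seg = np.zeros_like(intensity)
        seg[:, cols] = intensity[:, cols] / weights[k]  # scaled by its arc weight
        segments.append(seg)
    return segments  # sum(weights[k] * segments[k]) reproduces the input map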
Item Open Access A Dosimetric Characterization of Novel Formulations of Presage 3D Dosimeters(2014) Jackson, JacobPurpose: The purpose of this work is to characterize three novel formulations of the radiochromic material Presage and identify optimal imaging procedures for accurate 3D dosimetry. The dosimetric qualities of interest were studied for each Presage formulation in the context of accurate 3D dosimetry. The formulation showing the most promise is compared against a clinical 3D quality assurance device to investigate the accuracy of a complex state-of-the-art brain IMRT treatment.
Methods and Materials: Three novel formulations of Presage were studied for their temporal stability, sensitivity, linearity of dose response, and feasibility of absolute dose calibration in large volume dosimeters (1 kg) with small volume cuvettes (4 g). Large cylindrical dosimeters with 11 cm diameter and 10 cm height were irradiated with five 2×2 cm fields on the upper flat surface at 3 distinct dose levels (3, 6, and 9.5 Gy, representing low, medium, and high). This irradiation pattern was used to determine the dosimetric characteristics mentioned above and was chosen for its repeatability and because it lends itself to simple measurements of linearity and sensitivity. Measurements were taken at various time points from 0 hours to 24 hours post-irradiation using the high resolution (6.45 µm pixels) Duke Medium-Sized Optical-CT Scanner (DMOS) and reconstructed with a Matlab-based reconstruction GUI created in-house. Analysis of the pertinent dosimetric characteristics was performed in the GUI. A comprehensive end-to-end QA test was performed on the optimal formulation using the optimal scan timing determined from the formulation studies described above. A 5-field IMRT plan was created for a head treatment. The plan was delivered both to a head phantom containing a Presage insert and to the Delta4 QA device. Comparison of both delivered distributions together with the Eclipse-predicted dose distribution enabled investigation of the accuracy of the delivery and the consistency of independent measurement devices.
Results: The DEA-1 formulation showed up to 10% variation from 0-2 hours post-irradiation, but showed excellent temporal stability (<2% variation) between 3-7 hours post-irradiation, and maintained good stability until 24 hours post-irradiation (up to 3% variation). The DEA-2 formulation also showed up to 10% variation from 0-2 hours post-irradiation; it then showed good stability (up to 2.1% variation) from 3-7 hours, but optical density values dropped by up to 11% after 24 hours. The DX formulation did not maintain stability of optical density for any significant time, with values decreasing by ~20% by the 24-hour time point and optical density decreasing at different rates for different dose levels. Linearity of dose response was good for all formulations, with an R2 value > 0.99. Gamma analysis with criteria of 3%/2mm was performed on two irradiations of the 5-field pattern on the DEA-1 formulation; voxel passing rates were 96.68% and 97.96%. The large DEA-1 dosimeter was compared with small-volume cuvettes of the same formulation and batch: the sensitivity of the large dosimeter was less than half that of the cuvettes. For the clinical 3D QA comparison, the DEA-1 formulation was used because it showed the most promise for accurate 3D dosimetry. Line dose profiles showed that Presage compared very well with the Eclipse calculation and had a much better 3D gamma passing rate for 3%/3mm criteria than the Delta4 (>99% vs 75%).
Conclusions: The DEA-1 formulation shows the most promise because of its temporal stability and linearity of dose response. The optimal imaging window for this formulation was determined to be 3-24 hours post-irradiation. The DEA-2 and DX formulations also showed potential for accurate dosimetry. The optimal imaging window for the DEA-2 formulation was determined to be 2-6 hours post-irradiation, and the optimal scan time for the DX formulation was determined to be immediately post-irradiation. The accuracy loss depends on the formulation and on when the dosimeter is scanned. Line dose profiles and gamma analysis results from the comparison of Presage and the Eclipse calculation provide strong validation of the accuracy of the IMRT treatment delivery. Comparison of Presage to the Delta4 shows the Delta4 to be somewhat lacking in its ability to calculate 3D dose in the phantom/Presage geometry.
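For readers unfamiliar with the gamma analysis used above, a brute-force numpy sketch of a 2D global-gamma passing rate (e.g. 3%/2mm; no interpolation, illustrative only, not the GUI's implementation) is:

import numpy as np

def gamma_pass_rate(ref, evl, pixel_mm, dose_pct=3.0, dist_mm=2.0):
    """ref, evl: 2D dose arrays on the same grid; returns percent of voxels with gamma <= 1."""
    dose_tol = dose_pct / 100.0 * ref.max()       # global dose criterion
    yy, xx = np.meshgrid(*[np.arange(n) * pixel_mm for n in ref.shape],
                         indexing="ij")
    gamma = np.full(ref.shape, np.inf)
    for i in range(ref.shape[0]):
        for j in range(ref.shape[1]):
            dist2 = (yy - yy[i, j]) ** 2 + (xx - xx[i, j]) ** 2
            g2 = (evl - ref[i, j]) ** 2 / dose_tol ** 2 + dist2 / dist_mm ** 2
            gamma[i, j] = np.sqrt(g2.min())       # min over evaluated points
    return 100.0 * np.mean(gamma <= 1.0)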
Item Open Access A High Precision Measurement of the Proton Charge Radius at JLab(2020) Xiong, WeizhiThe elastic electron-proton ($e-p$) scattering and the spectroscopy of hydrogen atoms are the two traditional methods to determine the proton charge radius ($r_{p}$). In 2010, a new method using the muonic hydrogen ($\mu$H)\footnote{A muonic hydrogen has its orbiting electron replaced by a muon.} spectroscopy reported a $r_{p}$ result that was nearly ten times more precise but significantly smaller than the values from the compilation of all previous $r_{p}$ measurements, creating the ``proton charge radius puzzle".
In order to investigate the puzzle, the PRad experiment (E12-11-106\footnote{Spokespersons: A. Gasparian (contact), H. Gao, M. Khandaker, D. Dutta}) was first proposed in 2011 and performed in 2016 in Hall B at the Thomas Jefferson National Accelerator Facility, with both 1.1 and 2.2 GeV electron beams. The experiment measured the $e-p$ elastic scattering cross sections in an unprecedentedly low region of squared momentum transfer ($Q^2 = 2.1\times10^{-4} - 0.06~\rm{(GeV/c)}^2$), with sub-percent precision.
The PRad experiment utilized a calorimetric, magnetic-spectrometer-free method. Its detector setup included a large-acceptance, high-resolution calorimeter (HyCal) and two large-area, high-spatial-resolution Gas Electron Multiplier (GEM) detectors. To have better control over the systematic uncertainties, the absolute $e-p$ elastic scattering cross section was normalized to that of the well-known M$\o$ller scattering process, which was measured simultaneously during the experiment. For each beam energy, all data at different $Q^{2}$ were collected simultaneously with the same detector setup, therefore sharing a common normalization parameter. The windowless H$_2$ gas-flow target utilized in the experiment largely removed a typical background source, the target cell windows. The proton charge radius was determined to be $r_{p} = 0.831 \pm 0.007_{\rm{stat.}} \pm 0.012_{\rm{syst.}}$~fm, which is smaller than the average $r_{p}$ from previous $e-p$ elastic scattering experiments, but in agreement with the $\mu$H spectroscopic results within the experimental uncertainties.
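For reference (this definition is standard in $e-p$ scattering and not specific to PRad), the radius quoted here is defined through the slope of the proton electric form factor at vanishing momentum transfer, $\langle r_{p}^{2}\rangle = -6\,\mathrm{d}G_{E}^{p}(Q^{2})/\mathrm{d}Q^{2}\big|_{Q^{2}=0}$ (natural units), which is why extending the cross section measurement to very low $Q^2$ with sub-percent precision is the crux of the experiment.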
Item Open Access A hybrid ion-atom trap with integrated high resolution mass spectrometer(Review of Scientific Instruments, 2019-10-01) Jyothi, S; Egodapitiya, KN; Bondurant, B; Jia, Z; Pretzsch, E; Chiappina, P; Shu, G; Brown, KR© 2019 Author(s). In this article, we describe the design, construction, and implementation of our ion-atom hybrid system incorporating a high resolution time of flight mass spectrometer (TOFMS). Potassium atoms (39K) in a magneto optical trap and laser cooled calcium ions (40Ca+) in a linear Paul trap are spatially overlapped, and the combined trap is integrated with a TOFMS for radial extraction and detection of reaction products. We also present some experimental results showing interactions between 39K+ and 39K, 40Ca+ and 39K+, as well as 40Ca+ and 39K pairs. Finally, we discuss prospects for cooling CaH+ molecular ions in the hybrid ion-atom system.Item Open Access A Measurement of the Eta Meson Radiative Decay Width via the Primakoff Effect(2024) Smith, DrewThe $\eta$ meson is an interesting tool to study fundamental symmetries in Quantum Chromodynamics (QCD). In particular, its radiative decay width, $\Gamma\left(\eta\rightarrow\gamma\gamma\right)$, is an important quantity that can be predicted in the framework of Chiral Perturbation Theory. A precision measurement of this quantity would provide critical inputs to understanding the mixing of the $\eta$ and $\eta'$ mesons and extracting constants with wide-ranging applications in low-energy QCD. This decay width has been measured in the past using two different experimental techniques. The more popular technique utilized $e^{+}e^{-}$ collisions to produce $\eta$ mesons through electromagnetic interactions. Today, the Particle Data Group (PDG) averages the results of five such experiments to obtain their currently accepted value of the decay width: 0.515$\pm$0.018~keV. However, the first measurement of this quantity was obtained from a fixed-target experiment that measured the cross section for photoproduction of $\eta$ mesons on a nuclear target via the Primakoff effect. Their result of 0.324$\pm$0.046~keV shows strong tension with the average of the collider measurements, motivating a new, high precision measurement using the Primakoff method.
For this purpose, the PrimEx-\textit{eta} experiment was conducted in Hall D of the Thomas Jefferson National Accelerator Facility (Jefferson Lab or JLab). The data is currently being analyzed to measure the differential cross section for the photoproduction of $\eta$ mesons on a liquid $^{4}$He target. Preliminary results obtained from the analysis of the first phase of the PrimEx-\textit{eta} experiment show reasonable agreement with the currently accepted PDG value of the radiative decay width. However, as will be discussed, there are many challenges to this precision measurement which must be studied before any results can be finalized and compared with previous measurements.
In parallel to the $\eta$ decay width measurement, the PrimEx-\textit{eta} experiment measured the total cross section for the fundamental Quantum Electrodynamics (QED) process of Compton scattering from the atomic electrons inside the target. The results obtained from this measurement are in strong agreement with the next-to-leading order QED calculations, and the total combined uncertainties are below 3\% for incident photon energies of 7-10~GeV. In addition to providing the first precision measurement of the total Compton scattering cross section within this beam energy range, this measurement verifies the capability of the PrimEx-\textit{eta} experimental setup to perform absolute cross section measurements at forward angles, and serves as a reference process for the calibration of systematic uncertainties.
Item Open Access A Measurement of the Proton Structure Function g2p at Low Q2(2016) Huang, MinExperiments at Jefferson Lab have been conducted to extract the nucleon spin-dependent structure functions over a wide kinematic range. Higher moments of these quantities provide tests of QCD sum rules and predictions of chiral perturbation theory ($\chi$PT). While precise measurements of $g_{1}^n$, $g_{2}^n$, and $g_1^p$ have been extensively performed, data on $g_2^p$ remain scarce. Discrepancies were found between existing data related to $g_2$ and theoretical predictions. Results on the proton at large $Q^2$ show a significant deviation from the Burkhardt-Cottingham sum rule, while results for the neutron generally follow this sum rule. The next-to-leading order $\chi$PT calculations exhibit a discrepancy with data on the longitudinal-transverse polarizability $\delta_{LT}^n$. Further measurements of the proton spin structure function $g_2^p$ are desired to understand these discrepancies.
Experiment E08-027 (g2p) was conducted at Jefferson Lab in experimental Hall A in 2012. Inclusive measurements were performed with a polarized electron beam and a polarized ammonia target to obtain the proton spin-dependent structure function $g_2^p$ in the low $Q^2$ region (0.02$<$Q$^2$$<$0.2 GeV$^2$) for the first time. The results can be used to test the Burkhardt-Cottingham sum rule, and also allow us to extract the longitudinal-transverse spin polarizability of the proton, which will provide a benchmark test of $\chi$PT calculations. This thesis presents and discusses the very preliminary results of the transverse asymmetry and the spin-dependent structure functions $g_1^p$ and $g_2^p$ from the data analysis of the g2p experiment.
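For reference, the Burkhardt-Cottingham sum rule tested here is the statement that the $g_2$ structure function integrates to zero at fixed $Q^2$: $\int_{0}^{1} g_{2}(x, Q^{2})\,\mathrm{d}x = 0$.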
Item Open Access A Measurement of the Radiation Environment Around Prompt J/ψ Events at ATLAS(2017) Bjergaard, DavidThe J/ψ particle has been the source of much research since its discovery in 1974. It provides an important probe of quantum chromodynamics and has led to many important insights into the interactions of quarks and gluons in bound states. The rate of J/ψ production was found to be much higher than expected at hadron collider experiments, and non-relativistic quantum chromodynamics was developed in order to address this; the theory predicts a strong spin-alignment not observed in data. All previous measurements of J/ψ production have overlooked the hadronic environment in which the J/ψ is produced. This work is the first exploration of the radiation surrounding J/ψ events measured at ATLAS at √s=8 TeV. It is the first measurement of the separation between the J/ψ and a matched jet, and the second measurement of the momentum fraction shared between the jet and the J/ψ. These variables probe the radiation environment around the J/ψ and provide new ways to understand quarkonia production.
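For reference, the two observables are the standard angular separation and jet momentum fraction, $\Delta R(J/\psi,\mathrm{jet}) = \sqrt{(\Delta y)^{2} + (\Delta\phi)^{2}}$ and $z = p_{T}(J/\psi)/p_{T}(\mathrm{jet})$; the exact convention (e.g., rapidity vs. pseudorapidity) follows the ATLAS analysis.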
Item Open Access A Measurement of The Response of A High Purity Germanium Detector to Low-Energy Nuclear Recoils(2022) Li, LongThe Standard Model process of Coherent Elastic Neutrino-Nucleus Scattering (CEvNS), first predicted by Freedman in 1974, has recently been observed by the COHERENT collaboration on CsI and liquid argon targets. This observation enables a new kind of compact neutrino detector and unlocks new channels to test the Standard Model. A semiconductor germanium detector, a technology that has been developed by many dark matter direct detection experiments for its excellent energy resolution and low energy thresholds, will also be deployed at ORNL to detect CEvNS as part of the next phase of the COHERENT experiment. One of the challenges is to understand the signature of neutrino-induced low-energy nuclear recoils in germanium. A measurement was carried out at the Triangle Universities Nuclear Laboratory (TUNL) to characterize its response to low-energy nuclear recoils. A quenching factor of 14-20% for nuclear recoil energies between 0.8-4.9 keV in Ge was established. A long-predicted smearing effect due to quenching was observed for the first time and estimated to be 0.024 at ~2 keVnr. Finally, the impact of this effect and the quenching factor on the expected CEvNS spectrum of the future Ge deployment is presented.
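For reference, the quenching factor quoted here is the standard ratio of the measured ionization (electron-equivalent) energy of a nuclear recoil to its true recoil energy, $QF(E_{nr}) = E_{ee}/E_{nr}$; a 14-20% quenching factor thus maps the 0.8-4.9 keVnr recoils studied here to roughly 0.1-1.0 keVee signals.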
Item Embargo A Meta-Physics of Sexual Difference: The Quantum Gravity Matrix and Embryogenesis of Our Universe(2021) Murtagh, Mitchell DamianThis dissertation makes a case that sexual difference, to date, has been a deeply misconceptualized philosophical concept. Too often reduced to only one expression of itself—the difference between the sexes—critiques of sexual difference as essentialist, heterosexist, transphobic, and race-blind are based on this limited definition of it as an identity category. Its scope, however, expands far beyond its anthropomorphic or human-centric expression and, I argue, it is only by opening up the concept as an ontology that we can begin to conceive new, nuanced, philosophically-grounded ways out of sexist, racist, transphobic, capitalistic and colonialist metaphysics whose roots run so deep that their foundational frameworks are often left unchallenged. This requires stretching sexual difference from an epistemological project that centers “the knower,” often “Woman,” to an ontological framework that constitutes the condition of possibility for epistemology itself. In other words, sexual difference is not reducible to the sex of the knowing subject but founds the logic that there are always at least two ways of knowing, thinking, and being that are irreducible to, or non-collapsible into each other. An ontology of sexual difference requires moving beyond the concept’s historical basis in feminist critiques of psychoanalysis, and even beyond feminist theory itself, where—in its current form—it remains trapped in a tired and boring binary debate between social constructivists and new materialists.
A Meta-Physics of Sexual Difference aims for a way out of this dualism within feminist theory by proposing sexual difference as the organizing, incorporeal principle of reality itself. Open-ended throughout—neo-finalist rather than teleological—this takes sexual difference further than it has ever been taken before—beyond its role as the engine of evolution proliferating life, even beyond inciting the emergence of life itself from non-living matter. Sexual difference, if it is to be a truly revolutionary metaphysics or first philosophy, must begin from the very beginning, with the origins of space-time. For this reason, this project engages deeply and seriously with contemporary physics, and in the spirit of Irigaray, has both critical and creative components.
The first half critiques contemporary Western physics for its unconscious but undergirding phallocentrism—an unacknowledged commitment to a logic of replicating self-sameness, containment, and unification. This is most palpable in the practically unanimous desire to unify all the “self-contained” structures of physical reality—from the smallest subatomic particles to the large-scale cosmological universe itself—into a totalizing “theory of everything.” Doing this, however, would require solving for “quantum gravity,” the biggest challenge the field faces today. It implies overcoming the logical contradiction at the heart of physics—the incompatibility between two theories of nature—general relativity, which governs large and very massive structures, and quantum mechanics, which governs small and light structures. Our best current theory for gravity—Einstein’s general relativity—refers to the curvature of space-time on which quantum fields emerge, but it cannot be, and has never been, quantized itself. Ever-elusive and enigmatic, quantum gravity is a feminine symptom that seems to situate itself at the boundaries between the physical and the meta-physical, i.e., what is before the Big Bang, above the speed of light, below the Planck scale, and inside black holes. Posed at these thresholds, we may begin to think of quantum gravity as the interval itself.
It is precisely here, in the second half of the dissertation, that sexual difference stages its constructive intervention. As a logic of co-constitutive “twoness,” it emphasizes the relation from which two things emerge rather than trying to enclose two things into one container. Applying this to the “incompatibility” between general relativity and quantum mechanics, I propose embryogenesis, a philosophical concept borrowed from Raymond Ruyer, as a new “model” for physical reality that emerges only by beginning from this different logic or meta-physics for physics: sexual difference rather than phallocentrism. As the condition of possibility for physics, meta-physics itself is the maternal-feminine par excellence, opening physics and feminist theory to an ontological alliance via sexual difference. “Embryogenesis” could be conceived of as an alternative framework to the “theory of everything” for physicists to take up in the future, which may even change the way the problem of quantum gravity is conceptualized. In embryogenesis, quantum reality is not stuffed inside our gravitational universe as it is framed by the epistemological Copenhagen formulation that centers the observer. Inversely, this proposal relies on the only ontological interpretation of quantum mechanics that exists—Hugh Everett’s Many Worlds. Many-worlds theory makes the case that fundamental quantum reality is a Hilbert space in which our universe is represented by a quantum mechanical wave-function that decoheres—splits or branches or sexuates—each time the self-entanglement of the system as a whole evolves. Hilbert space is therefore the “quantum womb” within which our embryonic universe makes itself by evolving and expanding the local geometry of space-time. Quantum gravity, in this context, may be the interval between realms that nourishes this process of embryogenesis, perpetually self-differentiating the realms from each other, but also supplying their mutual growth and development, by crossing the threshold from the non-local, virtual, “in-formational,” or trans-spatial maternal matrix into our gravitational universe and converting itself into the mysterious “dark energy” that supplies the ongoing growth and development of its structuration.
Item Open Access A microscopic model of the Stokes-Einstein relation in arbitrary dimension.(The Journal of chemical physics, 2018-06) Charbonneau, Benoit; Charbonneau, Patrick; Szamel, GrzegorzThe Stokes-Einstein relation (SER) is one of the most robust and widely employed results from the theory of liquids. Yet sizable deviations can be observed for self-solvation, which cannot be explained by the standard hydrodynamic derivation. Here, we revisit the work of Masters and Madden [J. Chem. Phys. 74, 2450-2459 (1981)], who first solved a statistical mechanics model of the SER using the projection operator formalism. By generalizing their analysis to all spatial dimensions and to partially structured solvents, we identify a potential microscopic origin of some of these deviations. We also reproduce the SER-like result from the exact dynamics of infinite-dimensional fluids.Item Open Access A New Method to Investigate RECA Therapeutic Effect(2020) Liu, XiangyuIntroduction: RECA (Radiotherapy Enhanced with Cherenkov photo-Activation) is a novel treatment that induces a synergistic therapeutic effect by combining conventional radiation therapy with phototherapy using the anti-cancer and potentially immunogenic drug psoralen. This work presents a novel method to investigate the therapeutic effect of RECA using rat brain slices and an agarose-based tissue-equivalent material. Methods: 4T1 mCherry Firefly Luciferase mouse breast cancer cells were placed on the brain slices after exposure to psoralen solution. Fluorescence images of the brain slices were taken every day after irradiation, and an independent luciferase image was taken after the fifth fluorescence imaging session. Different image processing and analysis methods were used to identify the cells. Results: The four analysis methods give different results for the fluorescence and luminescence signals. The overall trend of the fluorescence signal is to rise over the days, reaching its lowest point at 48 hours after irradiation. The control group (no radiation and no Cherenkov light) has the lowest signal compared with the other groups. The signal of brain slices with 4T1 cells exposed to psoralen solution is lower than that of brain slices without psoralen exposure. Conclusion: This work shows that rat brain slices can be used to simulate the in vivo environment in exploring the therapeutic effect of RECA. Future work should focus on improving the image analysis methods to better distinguish cells from noise.
Item Open Access A novel mono-energy proton arc therapy with patient specific range shifter for fast treatment delivery(2024) Zhou, YuyinIntroduction: This study evaluates a new proton therapy filter designed to eliminate the need for energy adjustments. Utilizing the machine's maximum energy, the filter ensures sufficient tumor coverage through the Bragg peak, potentially improving treatment efficiency by shortening delivery time. Methods: Implemented on the matRad platform, each plan utilized a single arc composed of 72 beams spaced 5 degrees apart. Open-access datasets, including TG-119 C-shape, a prostate case, and a liver case, were employed. The prescribed doses for these cases were 50Gy in 25 fractions, 68Gy in 34 fractions, and 45Gy in 25 fractions, respectively. Simplifying from multiple energy layers to a single energy layer for each beam can reduce treatment delivery time, and maintaining spot coverage with a single energy layer for each beam is a critical optimization aspect. The spot coverage, P(i,j), is maximized in an optimization called mono-energy optimization. However, considering spot coverage alone is insufficient; the energy level must also be considered, since higher energy levels permit a thinner range shifter, which reduces the scatter and attenuation caused by range shifters. The new optimization process, called higher mono-energy optimization, gives priority to deeper layers and larger spot sizes, using a function that normalizes the input energy and combines it with alpha and beta coefficients to jointly optimize the energy function E(i,j) and spot coverage P(i,j). The optimal energy layers were selected, and the initial beam energy was set at 236MeV. All beamlets were adjusted to their specific energy levels with a custom-designed PMMA filter based on stopping power, facilitating a smooth transition to the desired energy levels. The effectiveness of this approach was evaluated by comparing dose metrics with those from the Intensity Modulated Proton Therapy (IMPT) method using two or three beams. Results: PTV coverages were relatively close between the IMPT and range filter plans. Organs at risk (OAR) experienced a dose increase due to enhanced scattering. Simulated treatment delivery times for the three tested range filter plans demonstrated the method's efficiency: 360s for the prostate, 340s for the liver, and 390s for TG-119. Conclusions: Mono-energy proton arc therapy with range filters is a feasible approach for expediting treatment delivery without compromising the quality of the treatment plan.
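A hedged sketch of the "higher mono-energy optimization" scoring described above (alpha, beta, the array shapes, and the linear combination are assumptions; the abstract does not give the exact functional form):

import numpy as np

def select_energy_layer(E, P, alpha=0.5, beta=0.5):
    """E, P: (n_beams, n_layers) arrays of layer energy and spot coverage.

    Returns, per beam, the index of the layer maximizing a combined score
    favoring deeper (higher-energy) layers and larger spot coverage."""
    E_norm = (E - E.min()) / max(E.max() - E.min(), 1e-12)  # normalize energies
    score = alpha * E_norm + beta * P
    return np.argmax(score, axis=1)  # one energy layer per beam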
Item Open Access A novel technique to irradiate surgical scars using dynamic electron arc radiotherapy(2017) Addido, JohannesPurpose: The use of conformal electron beam therapy techniques to treat superficial tumors on uneven surfaces has often led to undesired outcomes such as non-uniform dose inside the target and a wide penumbra at the target boundary. The dynamic electron arc radiotherapy (DEAR) technique has been demonstrated to improve dose distribution and minimize penumbra. The aim of this study is to investigate the feasibility and accuracy of the DEAR technique in irradiating surgical scars.
Method: 3D scar coordinates, a series of connected points along a line, were extracted from CT images. A treatment plan was designed to irradiate the scar with a uniform dose. An algorithm was developed to produce a DEAR plan consisting of control points (CP) corresponding to various positions along the machine's mechanical axes as a function of MU. Varian's Spreadsheet-based Automatic Generator (SAGE) software was used to verify and simulate the treatment and also to generate the plan in XML format. The XML file was loaded onto a TrueBeam Linac in research mode for delivery. The technique was demonstrated on i) a straight line scar on the surface of a solid water phantom and ii) a curved scar on the surface of a cylindrical phantom. The beam energy was 6 MeV, with a 6x6 cm2 applicator fitted with a 3x3 cm2 cutout. Dose at the surface and dmax were measured with Gafchromic film. Dose profiles calculated from the Eclipse eMC and Virtual Linac Monte Carlo tools were compared to the film dose measurements.
Results: The dose profile analysis shows that the TrueBeam Linac can deliver the designed plans for both straight line and arc scars to a high degree of accuracy. The root mean square error (RMSE) in gantry angle is approximately 0.0035 for the line scar and 0.0349 for the arc scar; because the gantry angle is static in the straight line delivery, it shows a higher degree of agreement than the arc delivery. RMSE values for the straight line scar show an overall higher degree of agreement than for the arc scar because the arc scar delivery involves more mechanical-axis motion.
Conclusion: The DEAR technique can be used to treat various line targets (scars), i.e., straight or curved lines, to a high degree of accuracy. This treatment modality can help reinvigorate electron therapy and make it a clinically viable treatment option.
Item Open Access A Precision Measurement of Neutral Pion Lifetime(2018) Zhang, YangThe neutral pion decays via the chiral anomaly, and this process historically led to the discovery of the chiral anomaly. The $\pi^0$ decay amplitude is among the most precise predictions of quantum chromodynamics (QCD) at low energy. However, the current experimental results are not commensurate with theoretical predictions. The Particle Data Group (PDG) average of the experimental results is $7.74\pm0.46$ eV, which is consistent with the chiral anomaly prediction (leading order). Recent theoretical calculations (NLO and NNLO) show an increase of about 4.5\% over the LO prediction with 1\% precision. As a result, a precise measurement of the neutral pion decay amplitude is one of the most stringent tests of low energy QCD. The PrimEx-II experiment measured the neutral pion decay amplitude via the Primakoff effect using two targets, silicon and $^{12}$C. The $\pi^0\rightarrow\gamma\gamma$ decay amplitude was extracted by fitting the measured cross sections using recently updated theoretical models for the process. The resulting value is $7.82 \pm 0.05(stat) \pm 0.10(syst)$ eV. With a total uncertainty of 1.8\%, this result is the most precise experimental determination and is consistent with current theoretical predictions.
Item Open Access A Radiomics Machine Learning Model for Post-Radiotherapy Overall Survival Prediction of Non-Small Cell Lung Cancer (NSCLC)(2023) Zhang, RihuiPurpose: To predict the post-radiotherapy overall survival group of NSCLC patients based on clinical information and radiomics analysis of simulation CT. Materials/Methods: A total of 258 non-adenocarcinoma patients who received radical radiotherapy or chemo-radiation were studied: 45/50/163 patients were identified as short (0-6 mos)/mid (6-12 mos)/long (12+ mos) survival groups, respectively. For each patient, we first extracted 76 radiomics features within the gross tumor volume (GTV) identified in the simulation CT; these features were combined with patient clinical information (age, overall stage, and GTV volume) as a patient-specific feature vector, which was utilized by a 2-step machine learning model for survival group prediction. This model first identifies patients with a long survival prediction via a supervised binary classifier; for the remaining patients, a 2nd classifier further generates a short/mid survival prediction. Two machine learning classifiers, the explainable boosting machine (EBM) and the balanced random forest (BRF), were investigated as a comparison study. During model training, all patients were divided into training/test sets by an 8:2 ratio, and 100-fold random sampling was applied to the training set with a 7:1 train/validation ratio. Model performances were evaluated by sensitivity, accuracy, and ROC results. Results: The model with EBM demonstrated an overall ROC AUC of 0.58±0.04 with limited sensitivities in the short (0.02±0.04) and mid (0.11±0.08) group predictions due to the imbalanced data sample distribution. In contrast, the model with BRF improved short/mid group sensitivities to 0.32±0.11/0.29±0.16, respectively, but the improvement in ROC AUC (0.60±0.04) was limited. Nevertheless, both the EBM (0.46±0.04) and BRF (0.57±0.04) approaches achieved limited overall accuracy; a noticeable overlap was found between their top-10 ranked feature lists. Conclusion: The proposed two-step machine learning model with the BRF classifier performs better than the one with the EBM classifier in post-radiotherapy survival group prediction of NSCLC. Future work, preferably with the joint use of deep learning, is needed to further improve the prediction results.
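A minimal sketch of the 2-step model described above, using imbalanced-learn's BalancedRandomForestClassifier for both steps (the EBM variant, feature extraction, and the 100-fold resampling are omitted; the label encoding 0=short, 1=mid, 2=long is an assumption of this sketch):

import numpy as np
from imblearn.ensemble import BalancedRandomForestClassifier

def fit_two_step(X, y):
    step1 = BalancedRandomForestClassifier(random_state=0)
    step1.fit(X, (y == 2).astype(int))           # step 1: long vs. not-long
    mask = y != 2
    step2 = BalancedRandomForestClassifier(random_state=0)
    step2.fit(X[mask], y[mask])                  # step 2: short vs. mid
    return step1, step2

def predict_two_step(step1, step2, X):
    pred = np.full(len(X), 2)                    # default: long survival
    not_long = step1.predict(X) == 0
    if not_long.any():
        pred[not_long] = step2.predict(X[not_long])
    return pred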
Item Open Access A Radiomics-Incorporated Deep Ensemble Learning Model for Multi-Parametric MRI-based Glioma Segmentation(2023) Yang, ChenPurpose: To develop a deep ensemble learning model with a radiomics spatial encoding execution for improved glioma segmentation accuracy using multi-parametric MRI (mp-MRI). Materials/Methods: This radiomics-incorporated deep ensemble learning model was developed using 369 glioma patients with a 4-modality mp-MRI protocol: T1, contrast-enhanced T1 (T1-Ce), T2, and FLAIR. In each modality volume, a 3D sliding kernel was implemented across the brain to capture image heterogeneity: fifty-six radiomic features were extracted within the kernel, resulting in a 4th-order tensor. Each radiomic feature can then be encoded as a 3D image volume, namely a radiomic feature map (RFM). For each patient, all RFMs extracted from all 4 modalities were processed by Principal Component Analysis (PCA) for dimension reduction, and the first 4 principal components (PCs) were selected. Next, four deep neural networks following the U-net architecture were trained for segmenting a region of interest (ROI): each network utilizes the mp-MRI and 1 of the 4 PCs as a 5-channel input for 2D execution. Last, the 4 softmax probability results given by the U-net ensemble were superimposed and binarized by Otsu's method as the segmentation result. Three deep ensemble models were trained to segment the enhancing tumor (ET), tumor core (TC), and whole tumor (WT), respectively. Segmentation results given by the proposed ensemble were compared to the mp-MRI-only U-net results. Results: All 3 radiomics-incorporated deep learning ensemble models were successfully implemented: compared to the mp-MRI-only U-net results, the dice coefficients of ET (0.777→0.817), TC (0.742→0.757), and WT (0.823→0.854) demonstrated improvements. Accuracy, sensitivity, and specificity results demonstrated the same patterns. Conclusion: The adopted radiomics spatial encoding execution enriches the image heterogeneity information, leading to the successful demonstration of the proposed neural network ensemble design, which offers a new tool for mp-MRI-based medical image segmentation.
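The ensemble-fusion step described above (superimpose the four softmax maps, then binarize with Otsu's method) can be sketched in a few lines of numpy/scikit-image; treating each U-net's output as a foreground-probability volume is an assumption of this sketch:

import numpy as np
from skimage.filters import threshold_otsu

def fuse_ensemble(prob_maps):
    """prob_maps: list of 4 foreground-probability volumes from the U-nets."""
    fused = np.sum(prob_maps, axis=0)        # superimpose softmax outputs
    return fused > threshold_otsu(fused)     # Otsu binarization -> ROI mask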