Browsing by Subject "Computational physics"
Item Open Access Accurate and Efficient Methods for the Scattering Simulation of Dielectric Objects in a Layered Medium (2019) Huang, Weifeng. Electromagnetic scattering in a layered medium (LM) is important for many engineering applications, including hydrocarbon exploration. Various computational methods for tackling well logging simulations are summarized. Given their advantages and limitations, main attention is devoted to the surface integral equation (SIE) and its hybridization with the finite element method (FEM).
The thin dielectric sheet (TDS) based SIE, i.e., TDS-SIE, is introduced to the simulation of fractures. Its accuracy and efficiency are extensively demonstrated by simulating both conductive and resistive fractures. Fractures of various apertures, conductivities, dipping angles, and extents are also simulated and analyzed. With the aid of layered medium Green's functions (LMGFs), TDS-SIE is extended into the LM, which results in the solver entitled LM-TDS-SIE.
In order to consider the borehole effect, the well-known loop and tree basis functions are utilized to overcome low-frequency breakdown of the Poggio, Miller, Chang, Harrington, Wu, and Tsai (PMCHWT) formulation. This leads to the loop-tree (LT) enhanced PMCHWT, which can be hybridized with TDS-SIE to simulate borehole and fracture together. The resultant solver referred to as LT-TDS is further extended into the LM, which leads to the solver entitled LM-LT-TDS.
For inhomogeneous or complex structures, SIE is not suitable for their scattering simulations. It becomes advantageous to hybridize FEM with SIE in the framework of domain decomposition method (DDM), which allows independent treatment of each subdomain and nonconformal meshes between them. This hybridization can be substantially enhanced by the adoption of LMGFs and loop-tree bases, leading to the solver entitled LM-LT-DDM. In comparison with LM-LT-TDS, this solver is more powerful and able to handle more general low-frequency scattering problems in layered media.
Item Open Access Adaptive Discontinuous Galerkin Methods Applied to Multiscale & Multiphysics Problems towards Large-scale Modeling & Joint Imaging (2019) Zhan, Qiwei. Advanced numerical algorithms should offer scalability on increasingly powerful supercomputer architectures, adaptivity for intricately multi-scale engineering problems, efficiency for extremely large-scale wave simulations, and stability across dynamically coupled multi-phase interfaces.
In this study, I will present a multi-scale & multi-physics 3D wave propagation simulator to tackle these grand scientific challenges. This simulator is based on a unified high-order discontinuous Galerkin (DG) method, with adaptive nonconformal meshes, for efficient wave propagation modeling. This algorithm is compatible with a diverse portfolio of real-world geophysical/biomedical applications, ranging from longstanding difficult problems, such as arbitrary anisotropic elastic/electromagnetic materials, viscoelastic materials, poroelastic materials, piezoelectric materials, and fluid-solid coupling, to recent challenging topics such as fracture-wave interactions.
Meanwhile, I will also present some important theoretical improvements. In particular, I will show innovative Riemann solvers, inspired by physical insight and cast in a unified mathematical framework, which are key to guaranteeing the stability and accuracy of the DG and domain decomposition methods.
Item Open Access ADAPTIVE LOCAL REDUCED BASIS METHOD FOR RISK-AVERSE PDE CONSTRAINED OPTIMIZATION AND INVERSE PROBLEMS (2018) Zou, Zilong. Many physical systems are modeled using partial differential equations (PDEs) with uncertain or random inputs. For such systems, naively propagating a fixed number of samples of the input probability law (or an approximation thereof) through the PDE is often inadequate to accurately quantify the risk associated with critical system responses. In addition, to manage the risk associated with system response and devise risk-averse controls for such PDEs, one must obtain the numerical solution of a risk-averse PDE-constrained optimization problem, which requires substantial computational efforts resulting from the discretization of the underlying PDE in both the physical and stochastic dimensions.
The Bayesian inverse problem, in which unknown system parameters must be inferred from noisy data of the system response, is another important class of problems that suffers from excessive computational cost due to the discretization of the underlying PDE. To accurately characterize the inverse solution and quantify its uncertainty, tremendous computational efforts are typically required to sample from the posterior distribution of the system parameters given the data. Surrogate approximation of the PDE model is an important technique to expedite the inference process and tractably solve such problems.
In this thesis, we develop a goal-oriented, adaptive sampling and local reduced basis approximation for PDEs with random inputs. The method, which we denote by local RB, determines a set of samples and an associated (implicit) Voronoi partition of the parameter domain on which we build local reduced basis approximations of the PDE solution. The local basis in a Voronoi cell is composed of the solutions at a fixed number of closest samples as well as the gradient information in that cell. Thanks to the local nature of the method, the computational cost of the approximation does not increase as more samples are included in the local RB model. We select the local RB samples in an adaptive and greedy manner using an a posteriori error indicator based on the residual of the approximation.
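As a rough illustration of the nearest-sample selection underlying such a local approximation, a minimal sketch (with hypothetical names, and omitting the gradient enrichment and a posteriori error indicator described above) could look like this:

```python
import numpy as np

def local_rb_basis(theta, samples, snapshots, k=5):
    """Select the k parameter samples closest to theta (an implicit Voronoi-type
    neighborhood) and orthonormalize the corresponding PDE snapshots to form
    a local reduced basis."""
    # samples: (n, d) parameter samples; snapshots: (n, m) PDE solution vectors
    dists = np.linalg.norm(samples - theta, axis=1)
    nearest = np.argsort(dists)[:k]
    basis, _ = np.linalg.qr(snapshots[nearest].T)  # (m, k) orthonormal local basis
    return basis, nearest
```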
Additionally, we modify our adaptive sampling process using an error indicator that is specifically targeted at the approximation of coherent risk measures evaluated at quantities of interest depending on PDE solutions. This allows us to tailor our method to efficiently quantify the risk associated with the system responses. We then combine our local RB method with an inexact trust region method to efficiently solve risk-averse optimization problems with PDE constraints. We propose a numerical framework for systematically constructing surrogate models for the trust-region subproblem and the objective function using local RB approximations.
Finally, we extend our local RB method to efficiently approximate the Gibbs posterior distribution for inverse problems under uncertainty. The local RB method is employed to construct a cheap surrogate model for the loss function in the Gibbs posterior formula. To improve the accuracy of the surrogate approximation, we adopt a Sequential Monte Carlo framework to guide the progressive and adaptive construction of the local RB surrogate. The resulting method provides subjective and efficient inference of unknown system parameters under general distribution and noise assumptions.
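For reference, a Gibbs posterior of the kind referred to above takes the generic form (with $\ell$ a loss function, $\beta$ a scaling parameter, and $\pi_0$ the prior; the notation here is illustrative rather than the thesis's own):
$$\pi_\beta(\theta \mid \text{data}) \;\propto\; \exp\!\big\{-\beta\,\ell(\theta;\text{data})\big\}\,\pi_0(\theta),$$
and it is the loss $\ell$ that the local RB surrogate replaces with a cheap approximation.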
We provide theoretical error bounds for our proposed local RB method and its extensions, and numerically demonstrate the performance of our methods through various examples.
Item Open Access An Investigation into the Multiscale Nature of Turbulence and its Effect on Particle Transport (2022) Tom, Josin. We study the effect of the multiscale properties of turbulence on particle transport, specifically looking at the physical mechanisms by which different turbulent flow scales impact the settling speeds of particles in turbulent flows. The average settling speed of small heavy particles in turbulent flows is important for many environmental problems such as water droplets in clouds and atmospheric aerosols. The traditional explanation for enhanced particle settling speeds in turbulence for a one-way coupled (1WC) system is the preferential sweeping mechanism proposed by Maxey (1987, J. Fluid Mech.), which depends on the preferential sampling of the fluid velocity gradient field by the inertial particles. However, Maxey's analysis does not shed light on the role of different turbulent flow scales in the enhanced settling, partly because the theoretical analysis was restricted to particles with weak inertia.
In the first part of the work, we develop a new theoretical result, valid for particles of arbitrary inertia, that reveals the multiscale nature of the preferential sweeping mechanism. In particular, the analysis shows how the range of scales at which the preferential sweeping mechanism operates depends on particle inertia. This analysis is complemented by results from Direct Numerical Simulations (DNS) where we examine the role of different flow scales on the particle settling speeds by coarse-graining (filtering) the underlying flow. The results explain the dependence of the particle settling speeds on Reynolds number and show how the saturation of this dependence at sufficiently large Reynolds number depends upon particle inertia. We also explore how particles preferentially sample the fluid velocity gradients at various scales and show that while rapidly settling particles do not preferentially sample the fluid velocity gradients, they do preferentially sample the fluid velocity gradients coarse-grained at scales outside of the dissipation range.
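To illustrate the coarse-graining (filtering) referred to above, a sharp spectral low-pass filter applied to one velocity component on a periodic cubic grid might be sketched as follows (a simplified stand-in, not the thesis's actual filtering procedure):

```python
import numpy as np

def coarse_grain(u, k_cut):
    """Sharp spectral low-pass filter: zero all Fourier modes of the periodic
    field u (shape (n, n, n)) with wavenumber magnitude above k_cut."""
    n = u.shape[0]
    k = np.fft.fftfreq(n) * n                     # integer wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    mask = np.sqrt(kx**2 + ky**2 + kz**2) <= k_cut
    return np.real(np.fft.ifftn(np.fft.fftn(u) * mask))
```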
Inspired by our finding that the effectiveness of the preferential sweeping mechanism depends on how particles interact with the strain and vorticity fields at different scales, we next shed light on the multiscale dynamics of turbulence by exploring the properties of the turbulent velocity gradients at different scales. We do this by analyzing the evolution equations for the filtered velocity gradient tensor (FVGT) in the strain-rate eigenframe. However, the pressure Hessian and viscous stress are unclosed in this frame of reference, requiring in-depth modelling. Using data from DNS of the forced Navier-Stokes equation, we consider the relative importance of local and non-local terms in the FVGT eigenframe equations across the scales using statistical analysis. We show that the anisotropic pressure Hessian (which is one of the unclosed terms) exhibits highly non-linear behavior at low values of normalized local gradients, with important modeling implications. We derive a generalization of the classical Lumley triangle that allows us to show that the pressure Hessian has a preference for two-component axisymmetric configurations at small scales, with a transition to a more isotropic state at larger scales. We also show that the current models fail to capture a number of subtle features observed in our results and provide useful guidelines for improving Lagrangian models of the FVGT.
In the final part of the work, we look at how two-way coupling (2WC) modifies the multiscale preferential sweeping mechanism. We comment on the applicability of the theoretical analysis developed in the first part of the work for 2WC flows. Monchaux & Dejoan (2017, Phys. Rev. Fluids) showed using DNS that while for low particle loading the effect of 2WC on the global flow statistics is weak, 2WC enables the particles to drag the fluid in their vicinity down with them, significantly enhancing their settling, and they argued that two-way coupling suppresses the preferential sweeping mechanism. We explore this further by considering the impact of 2WC on the contribution made by eddies of different sizes to the particle settling. In agreement with Monchaux & Dejoan, we show that even for low loading, 2WC strongly enhances particle settling. However, contrary to their study, we show that preferential sweeping remains important in 2WC flows. In particular, for both 1WC and 2WC flows, the settling enhancement due to turbulence is dominated by contributions from particles in straining regions of the flow, but in the 2WC case the particles also drag the fluid down with them, leading to an enhancement of their settling compared to the 1WC case. Overall, the novel results presented here not only augment the current understanding of the different physical mechanisms producing enhanced settling speeds from a fundamental physics perspective, but can also be used to improve predictive capabilities in large-scale atmospheric modeling.
Item Open Access Bayesian Parameter Estimation for Relativistic Heavy-ion Collisions (2018) Bernhard, Jonah. I develop and apply a Bayesian method for quantitatively estimating properties of the quark-gluon plasma (QGP), an extremely hot and dense state of fluid-like matter created in relativistic heavy-ion collisions.
The QGP cannot be directly observed---it is extraordinarily tiny and ephemeral, about 10^(-14) meters in size and living 10^(-23) seconds before freezing into discrete particles---but it can be indirectly characterized by matching the output of a computational collision model to experimental observations.
The model, which takes the QGP properties of interest as input parameters, is calibrated to fit the experimental data, thereby extracting a posterior probability distribution for the parameters.
In this dissertation, I construct a specific computational model of heavy-ion collisions and formulate the Bayesian parameter estimation method, which is based on general statistical techniques.
I then apply these tools to estimate fundamental QGP properties, including its key transport coefficients and characteristics of the initial state of heavy-ion collisions.
Perhaps most notably, I report the most precise estimate to date of the temperature-dependent specific shear viscosity eta/s, the measurement of which is a primary goal of heavy-ion physics.
The estimated minimum value is eta/s = 0.085 (+0.026/-0.025) (posterior median and 90% uncertainty), remarkably close to the conjectured lower bound of 1/(4 pi) ≈ 0.08.
The analysis also shows that eta/s likely increases slowly as a function of temperature.
Other estimated quantities include the temperature-dependent bulk viscosity zeta/s, the scaling of initial state entropy deposition, and the duration of the pre-equilibrium stage that precedes QGP formation.
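For context, the conjectured lower bound mentioned above is the Kovtun-Son-Starinets (KSS) bound,
$$\frac{\eta}{s} \;\ge\; \frac{\hbar}{4\pi k_B} \;\approx\; 0.08 \quad (\hbar = k_B = 1),$$
so the estimated minimum of eta/s quoted above sits just above this value.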
Item Open Access Evaluating Chemoradiation Resistance using 18F-FDG PET/CT in Murine Head and Neck Squamous Cell Carcinoma (2024) Heirman, Casey Claire.
Purpose: There is an urgent need for enhanced prognostic tools and insights into the biology and biomarkers of chemoradiation resistance. The purpose of this research is to identify prognostic radiomic features on 18F-FDG micro-PET/CT images for the response to chemoradiation in mouse models of head and neck squamous cell carcinoma (HNSCC).
Methods: Two orthotopic murine human papillomavirus (HPV)-negative (MOC1, MOC2) models and one HPV-positive HNSCC model (MLM1) were utilized. Oral cavity tumors were induced by injecting HNSCC cells into the buccal mucosa of C57BL/6J mice. Bidirectional caliper tumor measurements were conducted thrice weekly, with chemoradiation initiated once tumors exceeded 50 mm^3: cisplatin (5 mg/kg, intraperitoneal) and image-guided radiation therapy (8 Gy) on days 0 and 7. On day 14, 18F-FDG PET/CT imaging was performed. Mice were euthanized when they reached the humane endpoint (tumor >12 mm, any dimension). Tumors were segmented on PET/CT, and volume, SUVmean, and SUVmax were extracted. Liver regions of interest were segmented for normalization of tumor SUVmax to liver SUVmean. Treatment response was evaluated using tumor size on day 10 relative to day 0. Tumor growth and survival were compared across models (Kruskal-Wallis with Tukey's post hoc test) and based on image feature parameters (Mann-Whitney U test). The associations between survival, SUVmax, and tumor volumes were analyzed by Cox proportional hazards model and Kaplan-Meier curves with log-rank test.
Results: 121 mice underwent treatment and imaging. Univariate analysis showed that median tumor volume and SUVmax were significantly associated with survival and treatment response (p<0.05). The Cox model indicated a significant difference in survival probability based on risk score values derived from the model's coefficients estimating the relative risk of time-to-event (p<0.0001).
Conclusion: These results suggest that image features on 18F-FDG PET/CT can provide prognostic insight into treatment response and survival in preclinical HNSCC models, providing a platform for further studies to improve understanding of the biological underpinnings of radiomic expression associated with chemoradiation resistance.
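A minimal sketch of the survival modeling described in this abstract, using the lifelines package and hypothetical file and column names (an illustration, not the study's actual analysis code):

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

# One row per mouse: survival time, event flag, and extracted image features.
df = pd.read_csv("mouse_radiomics.csv")  # hypothetical file and column names

cph = CoxPHFitter()
cph.fit(df[["days", "event", "tumor_volume", "suv_max"]],
        duration_col="days", event_col="event")
df["risk_score"] = cph.predict_partial_hazard(df)  # relative risk from fitted coefficients

km = KaplanMeierFitter()
high_risk = df["risk_score"] > df["risk_score"].median()
km.fit(df.loc[high_risk, "days"], df.loc[high_risk, "event"], label="high risk")
```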
Item Open Access Experimentally informed bottom-up model of yeast polarity suggests how single cells respond to chemical gradients (2021) Ghose, Debraj. How do single cells—like neutrophils, amoebae, neurons, yeast, etc.—grow or move in a directed fashion in response to spatial chemical gradients? To address this question, we used the mating response in the budding yeast, Saccharomyces cerevisiae, as a biological model. To mate, pairs of yeast cells orient their cell fronts toward each other and fuse. Each cell relies on a pheromone gradient established by its partner to orient correctly. The ability of cells to resolve gradients is striking, because each cell is only ~5 μm wide and is thought to be operating in complex and noisy environments. Interestingly, mating pairs of cells often start out not facing each other. When this happens, the front of each cell—defined by a patch of cortical polarity proteins—undergoes a series of erratic and random movements along the cell cortex until it ‘finds’ the mating partner’s patch. We sought to understand how polarity patches in misaligned cells find each other. To this end, we first characterized patch movement in cells by the distribution of their step-lengths and turning angles and analyzed a bottom-up model of the polarity patch’s dynamics. The final version of our model combines 11 reaction-diffusion equations representing polarity protein dynamics with a stochastic module representing vesicle trafficking on a plane with periodic boundary conditions. We found that the model could not quantitatively reproduce step-length and turning angle distributions, which suggested that some mechanisms driving patch movement may not be present in the model. Incorporating biologically inspired features into the model—such as focused vesicle delivery, sudden fluctuations in vesicle delivery rates, and the presence of polarity inhibitors on vesicles—allowed us to quantitatively match the in vivo polarity patch’s behavior. We then introduced a pathway, which connects pheromone sensing to polarity, to see how the model behaved when exposed to pheromone gradients. Concurrently, we analyzed the behavior of fluorescently labeled polarity patches in mating pairs of cells. We discovered that the ~1 μm wide patch could (remarkably) sense and bias its movement up pheromone gradients, a result corroborated by our model. Further analysis of the model revealed that while the polarity patch tends to bias the location of a cluster of pheromone-sensing receptors, the receptors can transform an external pheromone distribution into a peaked non-linear “polarity-activation” profile that “pulls” the patch. Stochastic perturbations cause the patch to “ping-pong” around the activation profile. In a gradient of pheromone, this ping-ponging becomes biased, leading to net patch movement up the gradient. We speculate that such a mechanism could be used by single cells with mobile fronts to track chemical gradients.
Item Open Access Exploring a Patient-Specific Respiratory Motion Prediction Model for Tumor Localization in Abdominal and Lung SBRT Patients (2024) Garcia Alcoser, Michael Enrique. Managing respiratory motion in radiotherapy for abdominal and lung stereotactic body radiation therapy (SBRT) patients is crucial to achieving dose conformity and sparing healthy tissue. While previous studies have utilized principal component analysis (PCA) combined with deep learning to localize lung tumors in real time, these models lack testing or validation with patient data from later treatment fractions. This study aims to achieve highly accurate 3D tumor localization for abdominal and lung cancer patients using a PCA-based convolutional neural network (CNN) motion model. Another goal is to enhance this prediction model by addressing interfractional motion in lung patients through deep learning and multiple treatment-acquired CT datasets.
The patient's 4D computed tomography (4DCT) image was registered, and the resulting deformation vector fields (DVFs) were approximated using PCA. PCA coefficients, linked to diaphragm displacement, controlled breath variability in synthetic CT images. Digitally reconstructed radiographs (DRRs) from the synthetic CTs served as network input. The networks were evaluated using a CT image designated for abdominal and lung cancer patient testing. A DRR of the testing CT was input into the model, and the predicted CT image was validated against the test CT to locate the tumor.
This study validated a PCA-based CNN motion model for abdominal and lung cancer patients. Intrafractional motion modeling accurately predicted abdominal tumors with a maximum error of 1.41 mm, whereas lung tumor localization resulted in a maximum error of 2.83 mm. Interfractional motion significantly influenced the accuracy of lung tumor prediction.
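The PCA step described above can be illustrated with a simplified sketch: flattened deformation vector fields are decomposed into principal components whose coefficients are perturbed to synthesize new respiratory states (file names and sizes are illustrative, not from the study):

```python
import numpy as np
from sklearn.decomposition import PCA

# dvfs: (n_phases, n_voxels * 3) flattened deformation vector fields from 4DCT registration
dvfs = np.load("dvfs.npy")            # hypothetical file
pca = PCA(n_components=3)
coeffs = pca.fit_transform(dvfs)      # PCA coefficients per respiratory phase

# Synthesize a new deformation by scaling the leading coefficient
# (e.g. to mimic a larger diaphragm displacement).
new_coeffs = coeffs[0].copy()
new_coeffs[0] *= 1.2
dvf_synthetic = pca.inverse_transform(new_coeffs[None, :])
```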
Item Open Access Fast Large-Scale Electromagnetic Simulation of Doubly Periodic Structures in Layered Media (2020) Mao, Yiqian. This work focuses on the electromagnetic simulation of doubly periodic structures embedded in layered media, which are commonly found in extreme ultraviolet (EUV) lithography, metasurfaces, and frequency selective surfaces. Such problems can be solved by rigorous numerical methods like the finite-difference time-domain (FDTD) method and the finite element method (FEM). However, FDTD and FEM are general-purpose methods that fall far short of the best achievable efficiency for this class of problems. To exploit the structure of the problem and enable large-scale solutions at low complexity, two approaches are proposed.
The first approach, the Calderón preconditioned spectral-element spectral-integral (CP-SESI) method, is an improvement over the existing finite-element boundary-integral method. By introducing the Calderón preconditioner, domain decomposition, and the fast Fourier transform, the time and memory complexities of CP-SESI are reduced to $O(N^{1.30})$ and $O(N^{1.07})$, respectively.
The second approach, based on a modified U-Net, introduces two stages of problem solving: a training stage and an inference stage. In the training stage, accurate data generated by the rigorous CP-SESI solver are fed to the U-Net. In the inference stage, the U-Net can be applied to solve unseen problems in real time. In particular, an EUV problem with a mask size of 4 um by 4 um can be solved on a personal desktop within 5 min on CPU or 30 s on GPU.
In addition, two types of equivalent boundary conditions that replace thin structures are developed and incorporated into the CP-SESI framework. The first, a surface current boundary condition, is more accurate for resistive materials. The second, an impedance transition boundary condition, is more accurate for conductive materials. The accuracy of the two boundary conditions is compared.
Item Open Access Fermion Bag Approach for Hamiltonian Lattice Field Theories (2018) Huffman, Emilie. Understanding the critical behavior near quantum critical points for strongly correlated quantum many-body systems remains intractable for the vast majority of scenarios. Challenges involve determining if a quantum phase transition is first- or second-order, and finding the critical exponents for second-order phase transitions. Learning about where second-order phase transitions occur and determining their critical exponents is particularly interesting, because each new second-order phase transition defines a new quantum field theory.
Quantum Monte Carlo (QMC) methods are one class of techniques that, when applicable, offer reliable ways to extract the nonperturbative physics near strongly coupled quantum critical points. However, there are two formidable bottlenecks to the applicability of QMC: (1) the sign problem and (2) algorithmic update inefficiencies. In this thesis, I overcome both these difficulties for a class of problems by extending the fermion bag approach recently developed by Shailesh Chandrasekharan to the Hamiltonian formalism and by demonstrating progress using the example of a specific quantum system known as the $t$-$V$ model, which exhibits a transition from a semimetal to an insulator phase for a single flavor of four-component Dirac fermions.
I adapt the fermion bag approach, which was originally developed in the context of Lagrangian lattice field theories, to be applicable within the Hamiltonian formalism, and demonstrate its success in two ways: first, through solutions to new sign problems, and second, through the development of new efficient QMC algorithms. In addressing the first point, I present a solution to the sign problem for the $t$-$V$ model. While the $t$-$V$ model is the simplest Gross-Neveu model of the chiral Ising universality class, the specter of the sign problem previously prevented its simulation with QMC for 30 years, and my solution initiated the first QMC studies for this model. The solution is then extended to many other Hamiltonian models within a class that involves fermions interacting with quantum spins. Some of these models contain an interesting quantum phase transition between a massless/semimetal phase to a massive/insulator phase in the so called Gross-Neveu universality class. Thus, the new solutions to the sign problem allow for the use of the QMC method to study these universality classes.
The second point is addressed through the construction of a Hamiltonian fermion bag algorithm. The algorithm is then used to compute the critical exponents for the second-order phase transition in the $t$-$V$ model. By pushing the calculations to significantly larger lattice sizes than in previous computations ($64^2$ sites versus $24^2$ sites), I am able to compute the critical exponents more reliably than in earlier work. I show that the inclusion of these larger lattices causes a significant shift in the values of the critical exponents that was not evident for the smaller lattices. This shift puts the critical exponent values in closer agreement with continuum $4-\epsilon$ expansion calculations. The largest lattice sizes of $64^2$ at a comparably low temperature are reachable due to efficiency gains from this Hamiltonian fermion bag algorithm. The two independent critical exponents I find, which completely characterize the phase transition, are $\eta=.51(3)$ and $\nu=.89(1)$, compared to previous work that had lower values for these exponents. The finite-size scaling fit is excellent, with $\chi^2/\mathrm{DOF}=.90$, showing strong evidence for a second-order phase transition, and hence a non-perturbative QFT can be defined at the critical point.
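For context, fits of this kind typically rest on a finite-size-scaling ansatz of the generic form
$$O(L, t) \;=\; L^{-\beta/\nu}\, f\!\left(t\, L^{1/\nu}\right), \qquad t = (V - V_c)/V_c,$$
where $L$ is the linear lattice size, $t$ the reduced coupling, and $O$ an order-parameter-like observable; this illustrative form is generic and not necessarily the exact observable fitted in the thesis.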
Item Open Access modeling and application of nonlinear metasurface (2018) Liu, Xiaojun. A patterned metasurface can strongly scatter incident light, functioning as an extremely low-profile lens, filter, reflector, or other optical device. Nonlinear metasurfaces, which combine the properties of natural nonlinear media with novel features such as negative refractive index and magneto-electric coupling, provide nonlinear responses not available in nature. Compared to conventional optical components that often extend many wavelengths in size, nonlinear metasurfaces are flexible and extremely compact.
Characterization of a nonlinear metasurface is challenging, not only due to its inherent anisotropy, but also because of the rich wave interactions available. This thesis presents an overview of the author's work on the modeling and application of nonlinear metasurfaces. Analytical methods for characterizing nonlinear metasurfaces, namely the transfer matrix method and the surface homogenization method, are presented. A generalized transfer matrix formalism for four-wave mixing is derived and applied to analyze a nonlinear interface, a nonlinear film, and a metallo-dielectric stack. Various channels of plasmonic and Fabry-Perot enhancement are investigated. A homogenized description of nonlinear metasurfaces is then presented. The homogenization procedure is based on the nonlinear generalized sheet transition conditions (GSTCs), in which an optically thin nonlinear metasurface is modeled as a layer of dipoles radiating at the fundamental and nonlinear frequencies. By inverting the nonlinear GSTCs, a retrieval procedure is developed to extract the nonlinear parameters of the metasurface. As an example, we investigate a nonlinear metasurface that exhibits nonlinear magnetoelectric coupling in the near-infrared regime. The method is expected to apply to any patterned metasurface whose thickness is much smaller than the wavelengths of operation, with inclusions of arbitrary geometry and material composition, across the electromagnetic spectrum.
The second part presents applications of nonlinear metasurfaces. First, we show that third-harmonic generation (THG) can be drastically enhanced by nonlinear metasurfaces based on film-coupled nanostripes. The large THG enhancement is demonstrated both experimentally and theoretically. With numerical simulations, we present methods to clarify the origin of the THG from the metasurface. Second, enhanced two-photon photochromism is investigated by integrating spiropyrans with film-coupled nanocubes. This metasurface platform couples almost 100% of the incident energy at resonance and induces isomerization of spiropyrans to merocyanines. Due to the large Purcell enhancement introduced by the film-coupled nanocubes, fluorescence lifetime measurements on the merocyanine form reveal large enhancements of the spontaneous emission rate, as well as high quantum efficiency. We show that this metasurface platform is capable of storing information and supports reading and writing with ultra-low power, offering new possibilities in optical data storage.
Item Open Access MULTI STAGE HEAVY QUARK TRANSPORT IN ULTRA-RELATIVISTIC HEAVY-ION COLLISIONS (2022) Fan, Wenkai. The quark-gluon plasma (QGP) is one of the most interesting forms of matter, providing insight into quantum chromodynamics (QCD) and the early universe. It is believed that the heavy-ion collision experiments at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) have created the QGP medium by colliding two heavy nuclei at nearly the speed of light. Since the collision happens extremely fast, we cannot observe the QGP directly. Instead, we look at the hundreds or even thousands of final hadrons coming out of the collision. In particular, jet and heavy flavor observables are excellent probes of the transport properties of such a medium. On the theoretical side, computational models are essential to make the connections between the final observables and the plasma. Previous studies have employed a comprehensive multistage modeling approach for both the probes and the medium. In this dissertation, heavy quarks are investigated as probes of the QGP. First, the framework that describes the evolution of both soft and hard particles during the collision is discussed, which includes the initial condition, hydrodynamical expansion, parton transport, hadronization, and hadronic rescattering. It has recently been organized into the Jet Energy-loss Tomography with a Statistically and Computationally Advanced Program Envelope (JETSCAPE) framework, which allows heavy-ion collisions to be studied in a more systematic manner. To study the energy loss of hard partons inside the QGP medium, the linear Boltzmann transport model (LBT) and the MATTER formalism are combined, achieving a simultaneous description of charged hadron, D meson, and inclusive jet observables. To further extract the transport coefficients, a Bayesian analysis is conducted that constrains the parameters of the transport models.
Item Open Access Multiscale forward and inverse problems with the DGFD method and the deep learning method (2020) Zhang, Runren. A fast electromagnetic (EM) forward solver has been developed for subsurface detection, with applications including producing synthetic logging data and guiding large-scale field tests and inversion. A deep learning based full wave inversion method has also been developed to reconstruct the underground anomaly.
Since the oil and gas industry places very high demands on forward modeling speed during inversion, the inversion model is usually simplified to a 1D or 2D problem by assuming the geometry of the object is invariant in one or two directions. Full 3D inversion remains an active research topic, requiring both a fast 3D forward solver and an efficient inversion method. The bottleneck for the forward solver is how to solve the large-scale linear system efficiently; the bottleneck for the inversion is how to efficiently find the global minimum among many local minima.
For the forward part, the domain decomposition method (DDM) inspired discontinuous Galerkin frequency domain (DGFD) method has been extended to model vertical open-borehole resistivity measurements with structured gradient meshes; in addition, the DGFD method has been extended to model logging-while-drilling (LWD) resistivity measurements in high-angle and horizontal (HA/HZ) wells and curved layers with a flipped total field/scattered field (TF/SF) mixed solver. An approximate casing model has also been proposed to accelerate large-scale curved casing modeling with borehole-to-surface measurements.
For the inversion part, a convolutional neural network based inversion has been developed to reconstruct the lateral extent and direction of a hydraulic fracture from scattered electromagnetic field data under borehole-to-surface measurements; further, deep transfer learning is applied in the same scenario to improve the performance of the inversion. Additionally, a fully connected neural network has been developed for the Devine field data and successfully reconstructs the shape of the hydraulic fracture, in good agreement with conventional inversion.
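As a rough illustration of the kind of fully connected inversion network described above (a hypothetical, heavily simplified PyTorch sketch; the actual architecture, inputs, and training pipeline are those developed in the thesis):

```python
import torch
import torch.nn as nn

class FractureInversionNet(nn.Module):
    """Map sampled scattered-field measurements to fracture parameters
    (e.g. lateral extent and direction); layer sizes are illustrative."""
    def __init__(self, n_measurements=256, n_params=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_measurements, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_params),
        )

    def forward(self, x):
        return self.net(x)

model = FractureInversionNet()
prediction = model(torch.randn(8, 256))  # batch of 8 synthetic measurement vectors
```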
Item Open Access Novel Identification of Radiomic Biomarkers with Langevin Annealing (2018) Lafata, Kyle. As modern diagnostic imaging systems become increasingly more quantitative, new techniques and scientific disciplines are emerging as powerful avenues to personalized medicine. Leading this paradigm shift is the field of radiomics, which attempts to identify computational biomarkers hidden within high-throughput imaging data. Radiomic biomarkers may be able to non-invasively detect the underlying phenotype of an image, leading to new insights and innovation. Such insights may include correlations between radiomic features and pathological information, treatment response, functional characteristics, etc. Searching for meaningful structure within these quantitative datasets is therefore fundamental to contemporary imaging science.
However, imaging data is being created at an alarming rate, and the ability to understand hyper-dimensional relationships between radiomic features is often non-trivial. This is a major challenge for radiomic applications to clinical medicine. There is an urgent need to investigate novel technologies to manage this challenge, so that radiomics can be effectively and efficiently used to solve complex clinical problems.
Major contributions of this dissertation research include: (1) the development of a novel data clustering algorithm called Langevin annealing; (2) the development of a translational research environment to use this clustering algorithm for oncological imaging characterization; and (3) applications of the developed technique for clinical diagnosis and treatment evaluation.
Cluster analysis – i.e., the grouping of similar data objects together based on their intrinsic properties – is a common approach to understanding otherwise non-trivial data. Although data clustering is a hallmark of many fields, it is generally an ill-defined practice. Notable limitations and challenges include: (1) defining the appropriate number of clusters, (2) poor optimization near local minima, and (3) black-box approaches that often make interpretation difficult.
To overcome some of these challenges, data clustering may benefit from physics intuition. Langevin annealing models radiomics data as a dynamical system in equilibrium with a heat bath. The method is briefly summarized as follows. (1) A radial basis function is used to construct a density distribution from the radiomics data. (2) A potential is then constructed such that this distribution is the ground-state solution to the time-independent Schrödinger equation. (3) Using this potential, Langevin dynamics are formulated at sub-critical temperature to avoid ergodicity, and the radiomic feature vectors are propagated as the system evolves.
The time dynamics of individual radiomic feature vectors lead to different metastable states, which are interpreted as clusters. Clustering is achieved when subsets of the data aggregate near minima of the potential. While the radiomic feature vectors are pushed towards potential minima by the potential gradient, Brownian motion allows them to effectively tunnel through local potential barriers and escape saddle points into functional locations of the potential surface otherwise forbidden. Nearly degenerate local minima can merge, allowing hyper-dimensional radiomics data to be explored at high resolution, while still maintaining a reasonably narrow impulse response.
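A minimal sketch of the overdamped Langevin update at the heart of this scheme (with a toy potential standing in for the Schrödinger-derived potential described above; parameters are illustrative):

```python
import numpy as np

def langevin_step(x, grad_V, dt=1e-3, T=0.1, rng=np.random.default_rng(0)):
    """One overdamped Langevin update: drift down the potential gradient plus
    Brownian noise that lets points tunnel past local barriers."""
    return x - grad_V(x) * dt + np.sqrt(2.0 * T * dt) * rng.standard_normal(x.shape)

# Toy example: points drift toward the minima of V(x) = (x^2 - 1)^2 and
# accumulate there, which is interpreted as clustering.
grad_V = lambda x: 4.0 * x * (x**2 - 1.0)
points = np.random.uniform(-2.0, 2.0, size=(100, 1))
for _ in range(5000):
    points = langevin_step(points, grad_V)
```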
Since radiomics is still a rather immature field, there is currently a lack of commercially available software. Therefore, a radiomic feature extraction platform was developed to facilitate this dissertation research. The extraction code – which is the primary focus of Chapter 2 – is the means of converting unstructured data (i.e., images) into structured data (i.e., features). It therefore serves as a translational research environment that provides the necessary input to subsequent radiomic analyses.
Imaging features derived from dynamic environments – such as the lungs – are highly susceptible to variability and motion artifacts. Before implementing major analyses and new techniques, Chapter 3 investigates the spatial-temporal variability of radiomic features. This problem is approached based on computational experiments using both (a) a simulated dynamic digital phantom and (b) real patient CT data. Key findings demonstrate that radiomic features are sensitive to spatial-temporal changes, which may influence the quality of feature analyses. In general, radiomic feature-sensitivity is shown to be broad and inherently feature-specific.
The theory and development of Langevin annealing is covered in Chapter 4, where a complete theory and mathematical derivation is formulated. Several illustrating examples and computational simulations are used to demonstrate the clustering technique. Chapter 5 provides a comprehensive validation of Langevin annealing using a common benchmark dataset. Accurate ergodic sampling is achieved, clustering performance is evaluated, hyper-parameters are characterized, and the approach is compared to several well-known clustering algorithms.
While this dissertation has broader application to many aspects of medical imaging, a majority of the analysis is conducted on patients with non-small cell lung cancer (NSCLC). In particular, two classes of CT-based radiomic biomarkers are considered throughout this work: (1) radiomic biomarkers derived from lungs, and (2) radiomic biomarkers derived from tumors. These radiomic biomarkers are characterized in Chapter 6 and Chapter 7, where the emphasis of the dissertation shifts from highly theoretical work to a more application-driven focus.
Radiomic lung biomarkers are investigated in Chapter 6, where several associations are identified linking the quantitative imaging data to pulmonary function. In general, patients with larger lungs of homogeneous, low-attenuating pulmonary tissue are shown to have worse pulmonary function. Radiomic tumor biomarkers are investigated in Chapter 7, where several associations are identified linking the quantitative imaging data to treatment response. In general, relatively dense tumors with a homogeneous coarse texture are shown to be linked with higher rates of local cancer recurrence following stereotactic body radiation therapy.
Item Open Access On Improving the Predictable Accuracy of Reduced-order Models for Fluid Flows (2020) Lee, Michael William. The proper orthogonal decomposition (POD) is a classic method to construct empirical, linear modal bases which are optimal in a mean L2 sense. A subset of these modes can form the basis of a dynamical reduced-order model (ROM) of a physical system, including nonlinear, chaotic systems like fluid flows. While these POD-based ROMs can accurately simulate complex fluid dynamics, a priori model accuracy and stability estimates are unreliable. The work presented in this dissertation focuses on improving the predictability and accuracy of POD-based fluid ROMs. This is accomplished by ensuring several kinematically significant flow characteristics -- both at large scales and small -- are satisfied within the truncated bases. Several new methods of constructing and employing modal bases within this context are developed and tested. Reduced-order models of periodic flows are shown to be predictably accurate with high confidence; the predictable accuracy of quasi-periodic and chaotic fluid flow ROMs is increased significantly relative to existing approaches.
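For reference, POD modes of the kind discussed above are commonly computed from a snapshot matrix via the singular value decomposition; a minimal sketch follows (the dissertation's constructions add further kinematic constraints on top of this):

```python
import numpy as np

def pod_basis(snapshots, r):
    """snapshots: (n_dof, n_snapshots) matrix of flow-field samples.
    Returns the r leading POD modes and the relative energy of every mode."""
    fluct = snapshots - snapshots.mean(axis=1, keepdims=True)   # remove temporal mean
    U, s, _ = np.linalg.svd(fluct, full_matrices=False)
    return U[:, :r], s**2 / np.sum(s**2)
```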
Item Open Access Recent Advances on the Design, Analysis and Decision-making with Expensive Virtual Experiments (2024) Ji, Yi. With breakthroughs in virtual experimentation, computer simulation has been replacing physical experiments that are prohibitively expensive or infeasible to perform on a large scale. However, as the system becomes more complex and realistic, such simulations can be extremely time-consuming, and simulating the entire parameter space becomes impractical. One solution is computer emulation, which builds a predictive model based on a handful of simulation data. The Gaussian process is a popular emulator used for this purpose in many physics and engineering applications. In particular, for complicated scientific phenomena like the Quark-Gluon Plasma, employing a multi-fidelity emulator to pool information from multi-fidelity simulation data may enhance predictive performance while simultaneously reducing simulation costs. In this dissertation, we explore two novel approaches to multi-fidelity Gaussian process modeling. The first is the Graphical Multi-fidelity Gaussian Process (GMGP) model, which embeds scientific dependencies among multi-fidelity data in a directed acyclic graph (DAG). The second is the Conglomerate Multi-fidelity Gaussian Process (CONFIG) model, applicable to scenarios where the accuracy of a simulator is controlled by multiple continuous fidelity parameters.
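For context, a classical two-fidelity building block in this area is the autoregressive structure of Kennedy and O'Hagan,
$$f_{\mathrm{high}}(x) \;=\; \rho\, f_{\mathrm{low}}(x) + \delta(x),$$
where $f_{\mathrm{low}}$ and the discrepancy $\delta$ are independent Gaussian processes and $\rho$ is a scale parameter; models such as those above extend this idea to richer dependency structures, e.g. the DAG and continuous fidelity parameters just described. (This formula is illustrative background rather than the thesis's own formulation.)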
Software engineering is another domain that relies heavily on virtual experimentation. To ensure the robustness of a new software application, it must go through extensive testing and validation before production. Such testing is typically carried out through virtual experimentation and can require substantial computing resources, particularly as system complexity grows. Fault localization is a key step in software testing, as it pinpoints root causes of failures based on executed test case outcomes. However, existing fault localization techniques are mostly deterministic and provide limited insight into assessing the probabilistic risk of failure-inducing combinations. To address this limitation, we present a novel Bayesian Fault Localization (BayesFLo) framework for software testing, yielding a principled and probabilistic ranking of suspicious inputs for identifying the root causes of software failures.
Item Open Access Stochastic Modeling of Parametric and Model-Form Uncertainties in Computational Mechanics: Applications to Multiscale and Multimodel Predictive Frameworks (2023) Zhang, Hao. Uncertainty quantification (UQ) plays a critical role in computational science and engineering. The representation of uncertainties stands at the core of UQ frameworks and encompasses the modeling of parametric uncertainties --- which are uncertainties affecting parameters in a well-known model --- and model-form uncertainties --- which are uncertainties defined at the operator level. Past contributions in the field have primarily focused on parametric uncertainties in idealized environments involving simple state spaces and index sets. On the other hand, the consideration of model-form uncertainties (beyond model error correction) is still in its infancy. In this context, this dissertation aims to develop stochastic modeling approaches to represent these two forms of uncertainties in multiscale and multimodel settings.
The case of spatially-varying geometrical perturbations on nonregular index sets is first addressed. We propose an information-theoretic formulation where a push-forward map is used to induce bounded variations and the latent Gaussian random field is implicitly defined through a stochastic partial differential equation on the manifold defining the surface of interest. Applications to a gyroid structure and patient-specific brain interfaces are presented. We then address operator learning in a multiscale setting where we propose a data-free training method, applied to Fourier neural operators. We investigate the homogenization of random media defined at microscopic and mesoscopic scales. Next, we develop a Riemannian probabilistic framework to capture operator-form uncertainties in the multimodel setting (i.e., when a family of model candidates is available). The proposed methodology combines a proper-orthogonal-decomposition reduced-order model with Riemannian operators ensuring admissibility in the almost sure sense. The framework exhibits several key advantages, including the ability to generate a model ensemble within the convex hull defined by model proposals and to constrain the mean in the Fréchet sense, as well as ease of implementation. The method is deployed to investigate model-form uncertainties in various molecular dynamics simulations on graphene sheets. We finally propose an extension of this framework to systems described by coupled partial differential equations, with emphasis on the phase-field approach to brittle fracture.
Item Open Access The Shifted Interface/Boundary Method for Embedded Domain Computations (2021) Li, Kangan. Numerical computations involving complex geometries have a wide variety of applications in both science and engineering, including the simulation of fractures, melting and solidification, multiphase flows, biofilm growth, etc. Classical finite element methods rely on computational grids that are adapted (fitted) to the geometry, but this approach creates fundamental computational challenges, especially when considering evolving interfaces/boundaries. Embedded methods facilitate the treatment of complex geometries by avoiding fitted grids in favor of immersing the geometry on pre-existing grids.
The first part of this thesis work introduces a new embedded finite element interface method, the shifted interface method (SIM), to simulate partial differential equations over domains with internal interfaces. Our approach belongs to the family of surrogate/approximate interface methods and relies on the idea of shifting the location and value of interface jump conditions by way of Taylor expansions. This choice has the goal of preserving optimal convergence rates while avoiding small cut cells and the related problematic issues typical of traditional embedded methods. In this part, SIM is applied to internal interface computations in the context of the mixed Poisson problem, also known as the Darcy flow problem, and is extended to linear isotropic elasticity interface problems. An extensive set of numerical tests is provided to demonstrate the accuracy, robustness, and efficiency of the proposed approach.
In the second part of the thesis, we propose a new framework for linear fracture mechanics, based on the idea of an approximate fracture geometry representation combined with approximate interface conditions. The approach evolves from SIM, and introduces the concept of an approximate fracture surface composed of the full edges/faces of an underlying grid that are geometrically close to the true fracture geometry. Similar to SIM, the interface conditions on the surrogate fracture are modified with Taylor expansions to achieve a prescribed level of accuracy. This shifted fracture method (SFM) does not require cut cell computations or complex data structures, since the behavior of the true fracture is mimicked with standard integrals on the approximate fracture. Furthermore, the energetics of the true fracture are represented within the prescribed level of accuracy and independently of the grid topology. We present a general computational framework and then apply it in the specific context of cohesive zone models, with an extensive set of numerical experiments in two and three dimensions.
In the third and final part, we develop a shifted boundary method (SBM), originating from Main and Scovazzi (2018), for the thermoelasticity equations. SBM shifts the location and value of both Dirichlet and Neumann boundary conditions to surrogate boundaries with Taylor expansions. In this way, an optimal convergence rate can be preserved for both temperature and displacement. An extensive set of numerical examples in two and three dimensions is presented to demonstrate the performance of SBM.
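Schematically, in the shifted boundary approach a Dirichlet condition on the true boundary is enforced on the surrogate boundary through a truncated Taylor expansion (following Main and Scovazzi, 2018; notation here is generic):
$$u(\tilde{x}) + \nabla u(\tilde{x}) \cdot \boldsymbol{d}(\tilde{x}) \;=\; u_D\big(\tilde{x} + \boldsymbol{d}(\tilde{x})\big), \qquad \tilde{x} \in \tilde{\Gamma},$$
where $\boldsymbol{d}(\tilde{x})$ is the distance vector mapping a surrogate-boundary point to its image on the true boundary, and higher-order terms are truncated at the desired level of accuracy.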
Item Open Access Towards Accurate and Robust Modeling of Fluid-Driven Fracture (2023) Costa, Andre. This work advances a phase-field method for fluid-driven fractures and proposes a robust and efficient discretization framework. It begins by addressing a modeling challenge related to the application of pressure loads on diffuse crack surfaces. Along the way, a new J-integral for pressurized fractures in a regularized context is developed.
Then, the focus turns to a hybrid method to model fluid-driven fracture propagation. A so-called multi-resolution method is presented that combines enrichment schemes with the phase-field method to address the complex fluid-fracture interaction that occurs during hydraulic fracturing. On one hand, the phase-field method alleviates some of the difficulties associated with the geometric evolution of the fracture, which are usually the limiting aspect of purely enrichment-based schemes. On the other hand, the discrete representation allows for a better treatment of the fluid loads and crack apertures, which are the main challenges associated with phase-field approaches.
The multi-resolution method is first presented in a simplified scheme to treat two-dimensional problems. Various benchmark problems are used to verify the framework against well-known analytical solutions. The method is then extended to three dimensions. A robust algorithm to handle planar cracks in 3D is developed and its extension to non-planar cases is discussed. Finally, opportunities for improvements and extensions are discussed, paving the road for future work in this area.
Item Open Access Tunable Electronic Excitations in Hybrid Organic-Inorganic Materials: Ground-State and Many-Body Perturbation Approaches (2019) LIU, CHI. Three-dimensional (3D) hybrid organic-inorganic perovskites (HOIPs) have been investigated intensively for application in photovoltaics in the last decade due to their extraordinary properties, including ease of fabrication, suitable band gap, large absorption, high charge carrier mobility, etc. However, the structure and properties of their two-dimensional (2D) counterparts, especially those with complex organic components, are not understood as deeply as those of the 3D HOIPs. Due to the easing of spatial constraints on the organic cations, 2D HOIPs potentially have more structural flexibility and thus higher tunability of their electronic properties compared to the 3D HOIPs. Motivated by a desire to demonstrate such flexibility and tunability, a series of 2D HOIPs with oligothiophene derivatives as the organic cations and lead halides as the inorganic components is investigated in the first part of this work. Initial computational models with variable organic and inorganic components are constructed from the experimental structure of 5,5'''-bis(aminoethyl)-2,2':5',2'':5'',2'''-quaterthiophene lead bromide (AE4T\ch{PbBr4}). \textit{Ab initio} first-principles calculations are performed for these materials employing density functional theory with corrections for van der Waals interactions and spin-orbit coupling. The set of 2D HOIPs investigated is found to be understandable within a quantum-well-like model with distinctive localization and nature of the electron and hole carriers. The band alignment type between the inorganic and organic components can be varied by rational variation of either component. With the computational protocol shown to work for the above series of oligothiophene-based lead halides, a more extensive family of oligothiophene-based 2D HOIPs is then investigated to demonstrate their structural and electronic tunability. For AE2T\ch{PbI4}, the disorder of the organic cations is investigated systematically in synergy between theoretical techniques and experimental reference data provided by a collaborating group. A staggered arrangement of AE2T cations is revealed to be the most stable packing pattern with the correct band alignment type, in agreement with experimental results from optical spectroscopy. Another representative class of 2D HOIPs based on oligoacene derivatives is investigated to show structural and electronic tunability similar to that of their oligothiophene-based counterparts. In the final part of the thesis, an all-electron implementation of the Bethe-Salpeter equation (BSE) approach based on the $GW$ approximation is developed using numeric atom-centered orbital basis sets, as a first step toward a formal many-body theory treatment of neutral excitations that goes beyond the independent-particle picture of density functional theory. Benchmarks of this implementation are performed for the low-lying excitation energies of a popular molecular benchmark set (the ``Thiel's'' set) using results obtained with the Gaussian-orbital-based MolGW code as reference values. The agreement between the BSE results computed by these two codes when using the same $GW$ quasiparticle energies validates our implementation.
The impact of different underlying technical approximations to the $GW$ method is evaluated for the so-called ``two-pole'' and ``Pad\'{e}'' approximate evaluation techniques for the $GW$ self-energy and the resulting quasiparticle energies. To reduce the computational cost in both time and memory, the convergence of the BSE results with respect to basis sets and unoccupied states is examined. An augmented numeric atom-centered orbital basis set is proposed to obtain numerically converged results.