Browsing by Subject "Calibration"
Item Open Access 16-Channel biphasic current-mode programmable charge balanced neural stimulation. (Biomedical Engineering Online, 2017-08) Li, Xiaoran; Zhong, Shunan; Morizio, James

Background:
Neural stimulation is an important method used to activate or inhibit action potentials of the neuronal anatomical targets found in the brain, central nerve and peripheral nerve. The neural stimulator system produces biphasic pulses that deliver balanced charge into tissue from single or multichannel electrodes. The timing and amplitude of these biphasic pulses are precisely controlled by the neural stimulator software or embedded algorithms. Amplitude mismatch between the anodic and cathodic currents of the biphasic pulse will cause permanent damage to the neural tissue. The main goal of our circuit and layout design is to implement a 16-channel biphasic current-mode programmable neural stimulator with calibration to minimize the current mismatch caused by inherent complementary metal oxide semiconductor (CMOS) manufacturing process variation.

Methods:
This paper presents a 16-channel constant-current-mode neural stimulator chip. Each channel consists of a 7-bit controllable current DAC used as both the sink and source current driver. To reduce the LSB quantization error and the current mismatch, an automatic calibration circuit and flow diagram are presented. The stimulator chip has two modes of operation: stimulation mode and calibration mode. The chip also includes a digital interface used to control the stimulator parameters and the calibration levels specific to each individual channel.

Results:
This stimulator application-specific integrated circuit (ASIC) was designed and fabricated in a 0.18 μm high-voltage CMOS technology that allows for a ±20 V power supply. The full-scale stimulation current was designed to be 1 mA per channel. The output current was shown to be constant throughout the timing cycles over a wide range of electrode load impedances. The calibration circuit was also designed to reduce the effect of CMOS process variation in the P-channel metal oxide semiconductor (PMOS) and N-channel metal oxide semiconductor (NMOS) devices, resulting in charge delivery with less than 0.13% error.

Conclusions:
A 16-channel integrated biphasic neural stimulator chip with calibration is presented. The stimulator circuit design was simulated and the chip layout was completed. The layout was verified with design rule check (DRC) and layout-versus-schematic (LVS) checks using computer-aided design (CAD) software. The test results presented show constant-current stimulation with charge balance error within 0.13% of the least significant bit (LSB). This LSB error was consistent across a variety of stimulation patterns and electrode load impedances.

Item Open Access 3D refraction correction and extraction of clinical parameters from spectral domain optical coherence tomography of the cornea. (Opt Express, 2010-04-26) Zhao, Mingtao; Kuo, Anthony N; Izatt, Joseph A
Capable of three-dimensional imaging of the cornea with micrometer-scale resolution, spectral domain optical coherence tomography (SDOCT) offers potential advantages over Placido ring and Scheimpflug photography based systems for accurate extraction of quantitative keratometric parameters. In this work, an SDOCT scanning protocol and motion correction algorithm were implemented to minimize the effects of patient motion during data acquisition. Procedures are described for correcting image data artifacts that result from 3D refraction of SDOCT light in the cornea and from non-idealities of the scanning system geometry, performed as a prerequisite for accurate parameter extraction. Zernike polynomial 3D reconstruction and a recursive half searching algorithm (RHSA) were implemented to extract clinical keratometric parameters including anterior and posterior radii of curvature, central corneal optical power, central corneal thickness, and thickness maps of the cornea. Accuracy and repeatability of the extracted parameters obtained using a commercial 859 nm SDOCT retinal imaging system with a corneal adapter were assessed using a rigid gas permeable (RGP) contact lens as a phantom target.
Extraction of these parameters was performed in vivo in 3 patients and compared to commercial Placido topography and Scheimpflug photography systems. The repeatability of SDOCT central corneal power measured in vivo was 0.18 diopters, and the difference observed between systems averaged 0.1 diopters between SDOCT and Scheimpflug photography, and 0.6 diopters between SDOCT and Placido topography.

Item Open Access A unifying framework for interpreting and predicting mutualistic systems. (Nature Communications, 2019-01) Wu, Feilun; Lopatkin, Allison J; Needs, Daniel A; Lee, Charlotte T; Mukherjee, Sayan; You, Lingchong
Coarse-grained rules are widely used in chemistry, physics and engineering. In biology, however, such rules are less common and under-appreciated. This gap can be attributed to the difficulty of establishing general rules that encompass the immense diversity and complexity of biological systems. Furthermore, even when a rule is established, it is often challenging to map it to mechanistic details and to quantify those details. Here we report a framework that addresses these challenges for mutualistic systems. We first deduce a general rule that predicts the various outcomes of mutualistic systems, including coexistence and productivity. We further develop a standardized machine-learning-based calibration procedure that applies the rule without the need to fully elucidate or characterize a system's mechanistic underpinnings. Our approach consistently provides explanatory and predictive power for various simulated and experimental mutualistic systems. Our strategy can pave the way for establishing and implementing other simple rules for biological systems.

Item Open Access Ca2+ channel nanodomains boost local Ca2+ amplitude. (Proc Natl Acad Sci U S A, 2013-09-24) Tadross, Michael R; Tsien, Richard W; Yue, David T
Local Ca(2+) signals through voltage-gated Ca(2+) channels (CaVs) drive synaptic transmission, neural plasticity, and cardiac contraction.
Despite the importance of these events, the fundamental relationship between flux through a single CaV channel and the Ca(2+) concentration within nanometers of its pore has resisted empirical determination, owing to limitations in the spatial resolution and specificity of fluorescence-based Ca(2+) measurements. Here, we exploited Ca(2+)-dependent inactivation of CaV channels as a nanometer-range Ca(2+) indicator specific to active channels. We observed an unexpected and dramatic boost in nanodomain Ca(2+) amplitude, ten-fold higher than predicted on theoretical grounds. Our results uncover a striking feature of CaV nanodomains as diffusion-restricted environments that amplify small Ca(2+) fluxes into enormous local Ca(2+) concentrations. This Ca(2+) tuning by the physical composition of the nanodomain may represent an energy-efficient means of local amplification that maximizes information signaling capacity while minimizing global Ca(2+) load.

Item Open Access Calibrating single-ended fiber-optic Raman spectra distributed temperature sensing data. (Sensors (Basel), 2011) Hausner, Mark B; Suárez, Francisco; Glander, Kenneth E; van de Giesen, Nick; Selker, John S; Tyler, Scott W
Hydrologic research is a very demanding application of fiber-optic distributed temperature sensing (DTS) in terms of precision, accuracy and calibration. The physics behind the most frequently used DTS instruments is considered as it applies to four calibration methods for single-ended DTS installations. The new methods presented are more accurate than the instrument-calibrated data, achieving accuracies on the order of tenths of a degree root mean square error (RMSE) and mean bias. Effects of localized non-uniformities that violate the assumptions of single-ended calibration data are explored and quantified.
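As context for the calibration methods summarized above, the standard single-ended model relates temperature to the Stokes/anti-Stokes log-ratio as T(z) = γ / (ln(Ps/Pas) + C − Δα·z). The hedged Python sketch below (our variable names and synthetic numbers, not the paper's data or its specific four methods) fits the three constants from reference sections by least squares:

```python
import numpy as np

# Standard single-ended DTS calibration model (variable names are ours):
#   T(z) = gamma / (ln(Ps/Pas) + C - dalpha * z)
# Rearranged, ln(Ps/Pas) = gamma/T - C + dalpha*z is linear in the three
# unknowns (gamma, C, dalpha) given reference sections of known temperature.

def calibrate_dts(z_m, temp_K, log_ratio):
    """Least-squares fit of (gamma, C, dalpha) from reference-section data."""
    A = np.column_stack([1.0 / temp_K, -np.ones_like(z_m), z_m])
    gamma, C, dalpha = np.linalg.lstsq(A, log_ratio, rcond=None)[0]
    return gamma, C, dalpha

def dts_temperature(z_m, log_ratio, gamma, C, dalpha):
    """Convert Stokes/anti-Stokes log-ratios along the fiber to temperature."""
    return gamma / (log_ratio + C - dalpha * z_m)

# Synthetic check: two reference baths (cold near, warm far), noise-free data.
gamma_true, C_true, dalpha_true = 482.0, 1.60, 2.0e-4
z_ref = np.array([10.0, 20.0, 500.0, 510.0])        # m
T_ref = np.array([278.15, 278.15, 318.15, 318.15])  # K
R_ref = gamma_true / T_ref - C_true + dalpha_true * z_ref
gamma, C, dalpha = calibrate_dts(z_ref, T_ref, R_ref)
T_est = dts_temperature(z_ref, R_ref, gamma, C, dalpha)
```

With noise-free synthetic data the three constants are recovered exactly; with real data, the reference baths constrain the same least-squares fit.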
Experimental design considerations, such as the selection of integration times or the length of the reference sections, are discussed, and the impacts of these considerations on calibrated temperatures are explored in two case studies.

Item Open Access Computed tomography dose index and dose length product for cone-beam CT: Monte Carlo simulations. (Journal of Applied Clinical Medical Physics, 2011-01-19) Kim, Sangroh; Song, Haijun; Samei, Ehsan; Yin, Fang-Fang; Yoshizumi, Terry T
Dosimetry in kilovoltage cone beam computed tomography (CBCT) is a challenge due to the limitations of physical measurements. To address this, we used a Monte Carlo (MC) method to estimate the CT dose index (CTDI) and the dose length product (DLP) for a commercial CBCT system. As Dixon and Boone showed that the CTDI concept is applicable to both CBCT and conventional CT, we evaluated the weighted CT dose index (CTDI(w)) and DLP for a commercial CBCT system. Two extended CT phantoms were created in our BEAMnrc/EGSnrc MC system. Before the simulations, the beam collimation of a Varian On-Board Imager (OBI) system was measured with radiochromic films (model: XR-QA). The MC model of the OBI X-ray tube, validated in a previous study, was used to acquire the phase space files of the full-fan and half-fan cone beams. The DOSXYZnrc user code then simulated a total of 20 CBCT scans for nominal beam widths from 1 cm to 10 cm. After the simulations, CBCT dose profiles at the center and peripheral locations were extracted and integrated (dose profile integral, DPI) to calculate the CTDI for each beam width. The weighted cone-beam CTDI (CTDI(w,l)) was calculated from the DPI values, and the mean CTDI(w,l) and DLP were derived. We also evaluated the differences in CTDI(w) values between the MC simulations and point dose measurements using standard CT phantoms. It was found that CTDI(w,600) was 8.74 ± 0.01 cGy for the head scan and CTDI(w,900) was 4.26 ± 0.01 cGy for the body scan.
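The CTDI and DLP arithmetic underlying results like these is standard (CTDI is the dose profile integral divided by the nominal beam width; CTDIw weights center and periphery one third to two thirds; DLP is CTDIw times scan length). A small illustrative sketch with made-up rectangular profiles, not the study's Monte Carlo data:

```python
import numpy as np

# Illustrative CTDI/DLP arithmetic with made-up rectangular dose profiles.

def ctdi_from_profile(z_cm, dose_cGy, beam_width_cm):
    """CTDI = (dose profile integral along z) / (nominal beam width)."""
    dz = z_cm[1] - z_cm[0]
    return float(np.sum(dose_cGy) * dz) / beam_width_cm

def ctdi_w(ctdi_center, ctdi_periphery):
    """Weighted CTDI: one third center plus two thirds periphery."""
    return ctdi_center / 3.0 + 2.0 * ctdi_periphery / 3.0

def dlp(ctdi_w_value, scan_length_cm):
    """Dose-length product: weighted CTDI times scan length."""
    return ctdi_w_value * scan_length_cm

# 10 cm wide rectangular profiles: 6 cGy at the center hole, 9 cGy peripheral.
z = np.linspace(-15.0, 15.0, 3001)        # cm, 0.01 cm steps
in_beam = np.abs(z) <= 5.001              # small margin for float edge points
center = np.where(in_beam, 6.0, 0.0)
periph = np.where(in_beam, 9.0, 0.0)
cw = ctdi_w(ctdi_from_profile(z, center, 10.0),
            ctdi_from_profile(z, periph, 10.0))
total_dlp = dlp(cw, 10.0)                 # cGy*cm
```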
The DLP was found to be proportional to the beam collimation. We also found that point dose measurements with standard CT phantoms can estimate the CTDI within a 3% difference compared to the fully integrated CTDI from the MC method. This study showed the usability of the CTDI as a dose index and the DLP as a total dose descriptor in CBCT scans.

Item Open Access Development and Implementation of Bayesian Computer Model Emulators (2011) Lopes, Danilo Lourenco
Our interest is the risk assessment of rare natural hazards, such as
large volcanic pyroclastic flows. Since catastrophic consequences of
volcanic flows are rare events, our analysis benefits from the use of
a computer model to provide information about these events under
natural conditions that may not have been observed in reality.
A common problem in the analysis of computer experiments, however, is the high computational cost associated with each simulation of a complex physical process. We tackle this problem by using a statistical approximation (an emulator) to predict the output of the computer model at untried input values. The Gaussian process response surface is a technique commonly used in these applications because it is fast and easy to use in the analysis.
We explore several aspects of the implementation of Gaussian process emulators in a Bayesian context. First, we propose an improvement to the implementation of the plug-in approach to Gaussian processes. We then evaluate the performance of a spatial model for large data sets in the context of computer experiments.
Computer model data can also be combined with field observations in order to calibrate the emulator and obtain statistical approximations to the computer model that are closer to reality. We present an application where we learn the joint distribution of inputs from field data and then incorporate this auxiliary information into the emulator through a calibration process.
One of the outputs of our computer model is a surface of maximum volcanic flow height over some geographical area. We show how the topography of the volcano area plays an important role in determining the shape of this surface, and we propose methods
to incorporate geophysical information in the multivariate analysis of computer model output.
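A minimal Gaussian-process emulator of the kind described in this abstract can be sketched as follows. This is a bare-bones illustration with fixed, hand-picked hyperparameters, whereas the thesis addresses their estimation (e.g. the plug-in approach):

```python
import numpy as np

# Minimal GP emulator sketch: squared-exponential kernel, fixed hyperparameters.

def sq_exp_kernel(x1, x2, length=1.0, variance=1.0):
    """Squared-exponential covariance between two 1-D input sets."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior_mean(x_train, y_train, x_new, length=1.0, variance=1.0,
                      nugget=1e-8):
    """Posterior (predictive) mean of a zero-mean GP at new inputs."""
    K = sq_exp_kernel(x_train, x_train, length, variance)
    K += nugget * np.eye(len(x_train))            # jitter for stability
    k_star = sq_exp_kernel(x_new, x_train, length, variance)
    return k_star @ np.linalg.solve(K, y_train)

# Emulate an "expensive" simulator f(x) = sin(x) from 8 design runs:
x_design = np.linspace(0.0, np.pi, 8)
y_design = np.sin(x_design)
x_test = np.array([0.5, 1.5, 2.5])
y_emul = gp_posterior_mean(x_design, y_design, x_test, length=0.7)
```

Once the Cholesky or solve step is done, predictions at new inputs are nearly free, which is the point of emulation.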
Item Open Access Exploiting Optical Contrasts for Cervical Precancer Diagnosis via Diffuse Reflectance Spectroscopy (2010) Chang, Vivide Tuan Chyan
Among women worldwide, cervical cancer is the third most common cancer, with an incidence rate of 15.3 per 100,000 and a mortality rate of 7.8 per 100,000 women. This is largely attributed to the lack of infrastructure and resources in developing countries to support the organized screening and diagnostic programs available to women in developed nations. Hence, there is a critical global need for a screening and diagnostic paradigm that is effective in low-resource settings. Various strategies are described to design an optical spectroscopic sensor capable of collecting reliable diffuse reflectance data to extract quantitative optical contrasts for cervical cancer screening and diagnosis.
A scalable Monte Carlo based optical toolbox can be used to extract absorption and scattering contrasts from diffuse reflectance acquired in the cervix in vivo. [Total Hb] was shown to increase significantly in high-grade cervical intraepithelial neoplasia (CIN 2+), clinically the most important tissue grade to identify, compared to normal and low-grade intraepithelial neoplasia (CIN 1). Scattering was not significantly decreased in CIN 2+ versus normal and CIN 1, but was significantly decreased in CIN relative to normal cervical tissues.
Immunohistochemistry via anti-CD34, which stains the endothelial cells that line blood vessels, was used to validate the observed absorption contrast. The concomitant increase in microvessel density and [total Hb] suggests that both are reactive to angiogenic forces from up-regulated expression of VEGF in CIN 2+. Masson's trichrome stain was used to assess collagen density changes associated with dysplastic transformation of the cervix, hypothesized as the dominant source of decreased scattering observed. Due to mismatch in optical and histological sampling, as well as the small sample size, collagen density and scattering did not change in a similar fashion with tissue grade. Dysplasia may also induce changes in cross-linking of collagen without altering the amount of collagen present. Further work would be required to elucidate the exact sources of scattering contrast observed.
Common confounding variables that limit the accuracy and clinical acceptability of optical spectroscopic systems are calibration requirements and variable probe-tissue contact pressures. Our results suggest that using a real-time self-calibration channel, as opposed to conventional post-experiment diffuse reflectance standard calibration measurements, significantly improved data integrity for the extraction of scattering contrast. Extracted [total Hb] and scattering were also significantly associated with applied contact probe pressure in colposcopically normal sites. Hence, future contact probe spectroscopy or imaging systems should incorporate a self-calibration channel and ensure spectral acquisition at a consistent contact pressure to collect reliable data with enhanced absorption and scattering contrasts.
Another method to enhance optical contrast is to selectively interrogate different depths in the dysplastic cervix. For instance, scattering has been shown to increase in the epithelium (from an increase in the nuclear-to-cytoplasmic ratio) while decreasing in the stroma (from re-organization of the extra-cellular matrix and changes in collagen fiber cross-links). A fiber-optic probe with 45° illumination and collection fibers separated by 330 μm was designed and constructed to selectively interrogate the cervical epithelium. Mean extraction errors from liquid phantoms with optical properties mimicking the cervical epithelium were 11.3% for μa and 12.7% for μs'. Diffuse reflectance spectra from 9 sites in four loop electrosurgical excision procedure (LEEP) patients were analyzed. Preliminary data demonstrate the utility of the oblique fiber geometry in extracting scattering contrast in the cervical epithelium. Further work is needed to study the systematic error in optical property extraction and to incorporate simultaneous extraction of epithelial and stromal contrasts using both flat and oblique illumination and collection fibers.
Various strategies, namely self-calibration, consistent contact pressure, and the incorporation of depth-selective sensing, have been proposed to improve the data integrity of an optical spectroscopic system for maximal contrast. In addition to addressing field operation requirements (such as power and operator training requirement), these improvements should enable the collection of reliable spectral data to aid in the adoption of optical smart sensors in the screening and diagnosis of cervical precancer, especially in a global health setting.
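As a hedged illustration of the self-calibration idea above (not the authors' implementation), dividing each tissue spectrum by a concurrently recorded calibration-channel spectrum cancels source-intensity drift between measurements:

```python
import numpy as np

# Self-calibration sketch: tissue counts = reflectance * source spectrum; a
# concurrent calibration channel records the source spectrum, so their ratio
# removes source drift. All values below are invented for illustration.

def self_calibrate(tissue_counts, calib_counts):
    """Normalize a tissue spectrum by the concurrent calibration spectrum."""
    return tissue_counts / calib_counts

true_reflectance = np.array([0.2, 0.4, 0.5, 0.3])
source = np.array([1.0e5, 1.1e5, 1.2e5, 1.0e5])   # counts per wavelength bin
drift = 0.85                                      # lamp dims 15% between visits
tissue_drifted = true_reflectance * source * drift
calib_drifted = source * drift
reflectance = self_calibrate(tissue_drifted, calib_drifted)
```

Because the drift multiplies both channels identically, it divides out exactly, which a post-experiment calibration measurement taken at a different time cannot guarantee.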
Item Open Access Imaging system QA of a medical accelerator, Novalis Tx, for IGRT per TG 142: our 1 year experience. (Journal of Applied Clinical Medical Physics, 2012-07-05) Chang, Z; Bowsher, J; Cai, J; Yoo, S; Wang, Z; Adamson, J; Ren, L; Yin, FF
The American Association of Physicists in Medicine (AAPM) task group (TG) 142 has recently published a report to update the recommendations of the AAPM TG 40 report and add new recommendations concerning medical accelerators in the era of image-guided radiation therapy (IGRT). The recommendations of AAPM TG 142 on IGRT are timely. At our institute, we established a comprehensive imaging QA program for a medical accelerator based on AAPM TG 142 and implemented it successfully. In this paper, we share our one-year experience and performance evaluation of an OBI-capable linear accelerator, Novalis Tx, per TG 142 guidelines.

Item Open Access On Uncertainty Quantification for Systems of Computer Models (2017) Kyzyurova, Ksenia
Scientific inquiry about natural phenomena and processes is increasingly relying on the use of computer models as simulators of such processes. The challenge of using computer models for scientific investigation is that they are expensive in terms of computational cost and resources. However, the core methodology of fast statistical emulation (approximation) of a computer model overcomes this computational problem.
Complex phenomena and processes are often described not by a single computer model, but by a system of computer models or simulators. Direct emulation of a system of simulators may be infeasible for computational and logistical reasons.
This thesis proposes a statistical framework for fast emulation of systems of computer models and demonstrates its potential for inferential and predictive scientific goals.
The first chapter of the thesis introduces the Gaussian stochastic process (GaSP) emulator of a single simulator and summarizes ideas and findings in the rest of the thesis. The second chapter investigates the possibility of using independent GaSP emulators of computer models for fast construction of emulators of systems of computer models. The resulting approximation to a system of computer models is called the linked emulator. The third chapter discusses the irrelevance of attempting to model multivariate output of a computer model for the purpose of emulating that model. The linear model of coregionalization (LMC) is used to demonstrate this irrelevance, both from a theoretical perspective and through simulation studies. The fourth chapter introduces a framework for calibration of a system of computer models, using its linked emulator. The linked emulator allows for development of independent emulators of submodels on their own separately constructed design spaces, thus leading to effective dimension reduction in the explored parameter space. The fifth chapter addresses the use of some non-Gaussian emulators, in particular censored and truncated GaSP emulators. The censored emulator is constructed to appropriately account for zero-inflated output of a computer model, arising when there are large regions of the input space for which the computer model output is zero. The truncated GaSP accommodates computer model output that is constrained to appear in a certain region. The linked emulator for systems of computer models whose individual subemulators are either censored or truncated is also presented. The last chapter concludes with an exposition of further research directions based on the ideas explored in the thesis.
The methodology developed in this thesis is illustrated by an application to quantification of the hazard from pyroclastic flows from the Soufrière Hills Volcano on the island of Montserrat; a case study on prediction of volcanic ash transport and dispersal from the Eyjafjallajökull volcano, Iceland, on April 14-16, 2010; and calibration of a vapour-liquid equilibrium model, a submodel of the Aspen Plus© chemical process software for design and deployment of amine-based CO2 capture systems.
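The linking idea can be sketched under strong simplifications: given a predictive mean and standard deviation from the first emulator, propagate samples through the second emulator. The stand-in "emulators" below are illustrative closed-form functions, not fitted GaSPs:

```python
import numpy as np

# Monte Carlo "linking" of two emulators in series: sample from the first
# emulator's predictive distribution and push the samples through the second.

rng = np.random.default_rng(0)

def stage1_predict(x):
    """Predictive mean and sd of the first-stage emulator (stand-in)."""
    return np.sin(x), 0.05

def stage2_mean(y):
    """Predictive mean of the second-stage emulator (stand-in)."""
    return y ** 2

def linked_prediction(x, n_samples=20000):
    """Mean and sd of the composed system at input x, by Monte Carlo."""
    mu, sd = stage1_predict(x)
    y_samples = rng.normal(mu, sd, size=n_samples)
    z_samples = stage2_mean(y_samples)
    return z_samples.mean(), z_samples.std()

mean_z, sd_z = linked_prediction(1.0)
# Analytically, E[y^2] = sin(1)^2 + 0.05^2 for y ~ N(sin(1), 0.05^2).
```

Because both stages are cheap surrogates, the Monte Carlo propagation costs far less than running the coupled simulators, which is the motivation for linking emulators rather than emulating the composed system directly.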
Item Open Access Optimal processing of photoreceptor signals is required to maximize behavioural sensitivity. (The Journal of Physiology, 2010-06) Okawa, Haruhisa; Miyagishima, K Joshua; Arman, A Cyrus; Hurley, James B; Field, Greg D; Sampath, Alapakkam P
The sensitivity of receptor cells places a fundamental limit upon the sensitivity of sensory systems. For example, the signal-to-noise ratio of sensory receptors has been suggested to limit absolute thresholds in the visual and auditory systems. However, the necessity of optimally processing sensory receptor signals for behaviour to approach this limit has received less attention. We investigated the behavioural consequences of increasing the signal-to-noise ratio of the rod photoreceptor single-photon response in a transgenic mouse, the GCAPs-/- knockout. The loss of fast Ca2+ feedback to cGMP synthesis in phototransduction for GCAPs-/- mice increases the magnitude of the rod single-photon response and dark noise, with the increase in size of the single-photon response outweighing the increase in noise. Surprisingly, despite the increased rod signal-to-noise ratio, behavioural performance for GCAPs-/- mice was diminished near absolute visual threshold. We demonstrate in electrophysiological recordings that the diminished performance compared to wild-type mice is explained by poorly tuned postsynaptic processing of the rod single-photon response at the rod bipolar cell. In particular, the level of postsynaptic saturation in GCAPs-/- rod bipolar cells is not sufficient to eliminate rod noise, and degrades the single-photon response signal-to-noise ratio.
Thus, it is critical for retinal processing to be optimally tuned near absolute threshold; otherwise the visual system fails to fully utilize the signals present in the rods.

Item Open Access Plate-specific gain map correction for the improvement of detective quantum efficiency in computed radiography. (2010) Schnell, Erich A.
The purpose of this work is to improve the noise power spectrum (NPS), and thus the detective quantum efficiency (DQE), of CR images by correcting for pixel-to-pixel gain variations specific to each plate. Ten high-exposure open-field images were taken with an RQA5 spectrum, with a sixth-generation CR plate suspended in air without a cassette. Image values were converted to exposure, the plates were registered using fiducial dots on the plate, the ten images were averaged, and the average was then high-pass filtered to remove low-frequency contributions from field inhomogeneity. A gain map was then produced by converting all pixel values in the average into fractions with a mean of one. The resulting gain map of the plate was used to normalize subsequent single images to correct for pixel-to-pixel gain fluctuation. The normalized NPS (NNPS) for all images was calculated both with and without the gain-map correction. The NNPS with correction showed improvement over the uncorrected case over the range of frequencies from 0.15-2.5 mm-1. At high exposure (40 mR), the NNPS was 50-90% better with gain-map correction than without. A small further improvement in NNPS was seen from carefully registering the gain map with subsequent images using small fiducial dots, because of slight misregistration during scanning. CR devices have not traditionally employed the gain-map corrections common with DR detectors because of the multiplicity of plates used with each reader. This study demonstrates that a simple gain map can be used to correct for fixed-pattern noise and thus improve the DQE of CR imaging.
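The gain-map pipeline described above can be sketched as follows. This is a simplified stand-in (plates assumed registered, and a global mean substitutes for the high-pass filtering step), not the study's exact processing:

```python
import numpy as np

# Simplified gain-map pipeline: average registered flood images, divide out a
# low-frequency field estimate, rescale to mean one, then divide later images.

def make_gain_map(flat_images):
    """Build a pixel-gain map (mean one) from a stack of flood-field images."""
    avg = np.mean(flat_images, axis=0)
    low_freq = np.full_like(avg, avg.mean())   # stand-in for the filtered field
    gain = avg / low_freq
    return gain / gain.mean()

def correct_image(image, gain_map):
    """Remove fixed-pattern pixel-gain variation from one image."""
    return image / gain_map

rng = np.random.default_rng(1)
true_gain = 1.0 + 0.05 * rng.standard_normal((32, 32))     # fixed pattern
flats = np.stack([1000.0 * true_gain for _ in range(10)])  # ten flood images
gain_map = make_gain_map(flats)
raw = 500.0 * true_gain            # a later image with the same fixed pattern
corrected = correct_image(raw, gain_map)
```

In this noiseless toy case the fixed-pattern variation divides out almost exactly; with real quantum noise, averaging many flood images is what keeps the gain map itself from adding noise.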
Such a method could easily be implemented by manufacturers because each plate has a unique bar code, and the gain map could be stored for retrieval after plate reading. These experiments indicate that an improvement in NPS (and hence DQE) is possible, depending on exposure level, over all frequencies with this technique.

Item Open Access PRODUCING GROWTH ESTIMATES OF DUKE FOREST PINE STANDS USING USDA'S FOREST VEGETATION SIMULATOR (2021-04-28) Bowman, Hunter
Duke Forest manages its loblolly pine stands for timber revenues and seeks to construct a management plan informed by an optimized harvest schedule. This project aims to produce a reliable growth and yield model in order to generate the volume yield estimates necessary to compute the optimized harvest schedule. This was accomplished by testing and calibrating USDA's Forest Vegetation Simulator (FVS) using Duke Forest Continuous Forest Inventory data. FVS was tested using different site index inputs, and the diameter growth modifiers of FVS were then applied to reproduce current loblolly pine stand characteristics. It was found that the observed site index of a Duke Forest loblolly pine stand produces a better estimate of Duke Forest basal areas than do the Natural Resources Conservation Service's Web Soil Survey site indices. Despite the use of the more accurate site index numbers, FVS needed further calibration in order to produce statistically significant estimates of Duke Forest basal areas. Diameter growth modifiers of 1.25, 2.6, and 2.6 were applied to stands with low, average, and high site indices, respectively, which calibrated the model. When calibrated, FVS can provide Duke Forest with a workable growth and yield model. In the future, even more precise calibrations will be possible as the Continuous Forest Inventory process continues and the plots sampled for this project are re-sampled.
This will inform the diameter and height growth increments FVS uses to grow the input trees into the future.

Item Open Access Separating DNA with different topologies by atomic force microscopy in comparison with gel electrophoresis. (J Phys Chem B, 2010-09-23) Jiang, Yong; Rabbi, Mahir; Mieczkowski, Piotr A; Marszalek, Piotr E
Atomic force microscopy, which is normally used for DNA imaging to gain qualitative results, can also be used for quantitative DNA research at the single-molecule level. Here, we evaluate the performance of AFM imaging specifically for quantifying supercoiled and relaxed plasmid DNA fractions within a mixture, and compare the results with the bulk material analysis method, gel electrophoresis. The advantages and shortcomings of both methods are discussed in detail. Gel electrophoresis is a quick and well-established quantification method. However, it requires a large amount of DNA and needs to be carefully calibrated for even slightly different experimental conditions to give accurate quantification. AFM imaging is accurate, in that single DNA molecules in different conformations can be seen and counted. When used carefully, with the necessary corrections, both methods provide consistent results. Thus, AFM imaging can be used for DNA quantification as an alternative to gel electrophoresis.

Item Open Access Spectral diffusion: an algorithm for robust material decomposition of spectral CT data. (Phys Med Biol, 2014-11-07) Clark, Darin P; Badea, Cristian T
Clinical successes with dual energy CT, aggressive development of energy-discriminating x-ray detectors, and novel, target-specific nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm that is robust in the presence of noise.
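For context, the baseline that robust algorithms improve upon is a per-voxel linear decomposition with a calibrated sensitivity matrix. A hedged sketch with made-up sensitivity values (only the concentrations match the phantom values used in this study):

```python
import numpy as np

# Baseline per-voxel material decomposition: solve S @ c = m, where S maps
# material concentrations to signal in each energy channel. The entries of S
# are invented for illustration; real values come from calibration scans.

# Rows: three energy channels; columns: iodine, gold, gadolinium
# (invented signal change per mg/mL in each channel).
S = np.array([[30.0,  8.0, 20.0],
              [12.0, 28.0, 10.0],
              [18.0, 14.0, 32.0]])

def decompose(measurements, sensitivity):
    """Per-voxel concentrations from spectral measurements (noise-free case)."""
    return np.linalg.solve(sensitivity, measurements)

c_true = np.array([3.1, 0.9, 2.9])   # mg/mL: iodine, gold, gadolinium
m = S @ c_true                       # noiseless three-channel measurement
c_est = decompose(m, S)
```

With noise in `m`, this per-voxel inversion amplifies errors, which is precisely what regularized approaches such as the one described here are designed to suppress.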
Here, we develop such an algorithm, which uses spectrally joint, piecewise-constant kernel regression and the split Bregman method to iteratively solve for a material decomposition that is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg mL(-1)), gold (0.9 mg mL(-1)), and gadolinium (2.9 mg mL(-1)) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen.

Item Open Access STIFFNESS CALIBRATION OF ATOMIC FORCE MICROSCOPY PROBES UNDER HEAVY FLUID LOADING (2010) Kennedy, Scott Joseph
This research presents new calibration techniques for the characterization of atomic force microscopy cantilevers. Atomic force microscopy cantilevers are sensors that detect forces on the order of pico- to nanonewtons and displacements on the order of nano- to micrometers. Several calibration techniques exist, with a variety of strengths and weaknesses. This research presents techniques that enable the noncontact calibration of the output sensor voltage-to-displacement sensitivity and the cantilever stiffness through analysis of the unscaled thermal vibration of a cantilever in a liquid environment.
A noncontact stiffness calibration method is presented that identifies cantilever characteristics by fitting a dynamic model of the cantilever reaction to a thermal bath according to the fluctuation-dissipation theorem. The fitting algorithm incorporates an assumption of heavy fluid loading, which is present in liquid environments.
The use of the Lorentzian line function and a variable-slope noise model as an alternate approach to the thermal noise method was found to reduce the difference between calibrations performed on the same cantilever in air and in water relative to existing techniques. This alternate approach was used in combination with the new stiffness calibration technique to determine the voltage-to-displacement sensitivity without requiring contact loading of the cantilever.
Additionally, computational techniques are presented in the investigation of alternate cantilever geometries, including V-shaped cantilevers and warped cantilevers. These techniques offer opportunities for future research to further reduce the uncertainty of atomic force microscopy calibration.
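The simplest noncontact stiffness estimate, which methods like those above refine, follows from equipartition: k⟨x²⟩ = kB·T. A hedged sketch on synthetic data, ignoring the mode-shape, detector-sensitivity, and fluid-loading corrections that this work addresses:

```python
import numpy as np

# Equipartition ("thermal noise") stiffness estimate: k * <x^2> = kB * T, so
# the spring constant follows from the thermal deflection variance alone.

KB = 1.380649e-23  # Boltzmann constant, J/K

def stiffness_from_thermal_noise(deflection_m, temp_K=298.0):
    """k = kB * T / Var(x) from a calibrated thermal deflection record."""
    return KB * temp_K / np.var(deflection_m)

# Synthetic thermal record for a k = 0.05 N/m lever at room temperature:
k_true = 0.05
rng = np.random.default_rng(2)
x = rng.normal(0.0, np.sqrt(KB * 298.0 / k_true), size=200_000)
k_est = stiffness_from_thermal_noise(x)
```

Note that this requires deflection in meters, i.e. the voltage-to-displacement sensitivity must already be known, which is exactly the coupling the noncontact methods above are designed to break.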
Item Open Access Strategic planning to reduce the burden of stroke among veterans: using simulation modeling to inform decision making. (Stroke, 2014-07) Lich, Kristen Hassmiller; Tian, Yuan; Beadles, Christopher A; Williams, Linda S; Bravata, Dawn M; Cheng, Eric M; Bosworth, Hayden B; Homer, Jack B; Matchar, David B
BACKGROUND AND PURPOSE: Reducing the burden of stroke is a priority for the Veterans Affairs Health System, reflected by the creation of the Veterans Affairs Stroke Quality Enhancement Research Initiative. To inform the initiative's strategic planning, we estimated the relative population-level impact and efficiency of distinct approaches to improving stroke care in the US Veteran population to inform policy and practice.

METHODS: A System Dynamics stroke model of the Veteran population was constructed to evaluate the relative impact of 15 intervention scenarios, including both broad and targeted primary and secondary prevention and acute care/rehabilitation, on cumulative (20-year) outcomes including quality-adjusted life years (QALYs) gained, strokes prevented, stroke fatalities prevented, and the number-needed-to-treat per QALY gained.

RESULTS: At the population level, a broad hypertension control effort yielded the largest increase in QALYs (35,517), followed by targeted prevention addressing hypertension and anticoagulation among Veterans with prior cardiovascular disease (27,856) and hypertension control among diabetics (23,100). Adjusting QALYs gained by the number of Veterans needed to treat, thrombolytic therapy with tissue-type plasminogen activator was most efficient, needing 3.1 Veterans to be treated per QALY gained. This was followed by rehabilitation (3.9) and targeted prevention addressing hypertension and anticoagulation among those with prior cardiovascular disease (5.1). Probabilistic sensitivity analysis showed that the ranking of interventions was robust to uncertainty in input parameter values.
CONCLUSIONS: Prevention strategies tend to have larger population impacts, though interventions targeting specific high-risk groups tend to be more efficient in terms of number-needed-to-treat per QALY gained.
Item Open Access Strategies for Temporal and Spectral Imaging with X-ray Computed Tomography(2012) Johnston, Samuel Morris. X-ray micro-CT is widely used for small animal imaging in preclinical studies of cardiopulmonary disease, but further development is needed to improve spatial resolution, temporal resolution, and material contrast. This study presents a set of tools that achieve these improvements: the mathematical formulation and computational implementation of algorithms for calibration, image reconstruction, and image analysis with our custom micro-CT system. The tools are tested in simulations and in experiments with live animals. With them, it is possible to visualize the distribution of a contrast agent throughout the body of a mouse as it changes over time, and to produce 5-dimensional images (3 spatial dimensions + time + energy) of the cardiac cycle.
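Producing the 5-dimensional images described above requires grouping projections by cardiac phase and energy before any volume is reconstructed. A minimal sketch of that gating step is shown below; the function name, array layout, and binning scheme are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def bin_projections(projections, phases, energies, n_phase, n_energy):
    """Group 2-D projections into (phase, energy) bins for gated reconstruction.

    projections: array of shape (n_proj, rows, cols)
    phases: cardiac phase in [0, 1) for each projection
    energies: integer energy-bin index for each projection
    Returns a dict mapping (phase_bin, energy_bin) -> list of projection indices.
    """
    phase_idx = np.minimum((np.asarray(phases) * n_phase).astype(int), n_phase - 1)
    bins = {}
    for i, (p, e) in enumerate(zip(phase_idx, energies)):
        bins.setdefault((int(p), int(e)), []).append(i)
    return bins
```

Each (phase, energy) bin would then be reconstructed independently into a 3-D volume, giving the time and energy axes of the 5-D image.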
Item Open Access Topics in Bayesian Computer Model Emulation and Calibration, with Applications to High-Energy Particle Collisions(2019) Coleman, Jacob Ryan. Problems in computer model emulation arise when scientists replace expensive physical experiments with computationally expensive computer models. To more quickly probe the experimental design space, statisticians build emulators that act as fast surrogates for these computer models. The emulators are typically Gaussian processes, chosen to induce spatial correlation in the input space. Often the main scientific interest lies in inference on one or more input parameters of the computer model that do not vary in nature. Inference on these input parameters is referred to as "calibration," and the inputs themselves are referred to as "calibration parameters." We first detail our emulation and calibration model for an application in high-energy particle physics; this model brings together existing ideas in the literature on handling multivariate output, and lays a foundation for the remainder of the thesis.
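The emulation-then-calibration workflow described above can be sketched in a few lines: fit a Gaussian-process surrogate to simulator runs over (design input, calibration input) pairs, then find the calibration value whose emulated output best matches field data. The kernel, toy simulator, noise levels, and grid search below are illustrative assumptions, not the thesis's actual model.

```python
import numpy as np

def sq_exp(A, B, ls=0.3, var=1.0):
    """Squared-exponential kernel between rows of two input matrices."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / ls**2)

def gp_predict(X, y, Xs, noise=1e-4):
    """Posterior mean of a zero-mean GP emulator at test inputs Xs."""
    K = sq_exp(X, X) + noise * np.eye(len(X))
    return sq_exp(Xs, X) @ np.linalg.solve(K, y)

# Toy simulator: output depends on a design input x and a calibration input theta.
def simulator(x, theta):
    return np.sin(3 * x) + theta * x

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(40, 2))          # columns: (x, theta) design points
y = simulator(X[:, 0], X[:, 1])

# "Field" data generated at a true theta of 0.5, observed with small noise.
x_obs = np.linspace(0, 1, 10)
y_obs = simulator(x_obs, 0.5) + rng.normal(0, 0.01, 10)

# Calibration: grid log-likelihood over theta using the emulator's mean.
thetas = np.linspace(0, 1, 101)
loglik = []
for t in thetas:
    Xs = np.column_stack([x_obs, np.full_like(x_obs, t)])
    mu = gp_predict(X, y, Xs)
    loglik.append(-0.5 * np.sum((y_obs - mu) ** 2) / 0.01**2)
theta_hat = thetas[int(np.argmax(loglik))]
```

A full treatment would place a prior on theta and propagate emulator uncertainty rather than using only the posterior mean, but the structure (surrogate plus likelihood over calibration inputs) is the same.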
In the next two chapters, we introduce novel ideas in the field of computer model emulation and calibration. The first addresses the problem of model comparison in this context, and how to simultaneously compare competing computer models while performing calibration. Using a mixture model to facilitate the comparison, we demonstrate that by conditioning on the mixture parameter we can recover the calibration parameter posterior from an independent calibration model. This mixture is then extended to the case of correlated data, a crucial innovation for this comparison framework to be useful in the particle collision setting. Lastly, we explore two possible non-exchangeable mixture models, where model preference changes over the input space.
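The mixture idea for model comparison can be illustrated with a toy discrete posterior over the mixture weight: the data's likelihood is a convex combination of the two models' likelihoods, and the weight's posterior reflects which model the data prefers. The predictions, noise level, and uniform prior below are invented for illustration and do not reproduce the thesis's model.

```python
import numpy as np

# Hypothetical predictions from two competing models at shared design points,
# plus field observations with known noise standard deviation.
pred1 = np.array([1.0, 2.0, 3.0])
pred2 = np.array([1.2, 2.4, 3.6])
y_obs = np.array([1.01, 2.02, 2.98])
sd = 0.1

def lik(pred):
    """Gaussian likelihood of the observations under one model's predictions."""
    return np.exp(-0.5 * np.sum((y_obs - pred) ** 2) / sd**2)

# Mixture likelihood w * L1 + (1 - w) * L2 with a uniform prior on w,
# evaluated on a grid and normalized discretely.
w_grid = np.linspace(0, 1, 201)
L1, L2 = lik(pred1), lik(pred2)
post = w_grid * L1 + (1.0 - w_grid) * L2
post /= post.sum()
w_mean = float((w_grid * post).sum())
```

Here the observations sit close to the first model, so the posterior mass of the weight shifts toward 1; conditioning on a fixed weight reduces the mixture to a single-model likelihood, which is the mechanism behind recovering the independent calibration posterior.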
The second novel idea addresses density estimation when only coarse bin counts are available. We develop an estimation method that avoids costly numerical integration and maintains plausible correlation for nearby bins. Additionally, we extend the method to density regression so that a full density can be predicted from an input parameter, having been trained only on coarse histograms. This enables inference on the input parameter, and we develop an importance sampling method that compares favorably to the foundational calibration method detailed earlier.
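Inferring an input parameter from coarse bin counts alone can be sketched with a multinomial likelihood over histogram bins. The Gaussian family, bin edges, and counts below are hypothetical, and this toy does not model between-bin correlation or avoid integration the way the thesis's method does; it only shows the basic likelihood structure.

```python
import math
import numpy as np

def bin_probs(mu, edges, sigma=1.0):
    """Probability mass of a N(mu, sigma^2) density in each histogram bin."""
    cdf = [0.5 * (1 + math.erf((e - mu) / (sigma * math.sqrt(2)))) for e in edges]
    p = np.diff(cdf)
    return p / p.sum()            # renormalize to the binned support

# Observed coarse counts on bins [-3,-1), [-1,1), [1,3) -- a made-up example.
edges = [-3, -1, 1, 3]
counts = np.array([20, 120, 60])

# Grid posterior (flat prior) over the location parameter under a
# multinomial likelihood of the counts.
mus = np.linspace(-1, 1, 201)
logpost = [counts @ np.log(bin_probs(m, edges)) for m in mus]
mu_hat = mus[int(np.argmax(logpost))]
```

With more counts in the right-hand bin than the left, the estimate lands at a positive location, as expected.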
Item Open Access Towards a field-compatible optical spectroscopic device for cervical cancer screening in resource-limited settings: effects of calibration and pressure.(Opt Express, 2011-09-12) Chang, Vivide Tuan-Chyan; Merisier, Delson; Yu, Bing; Walmer, David K; Ramanujam, Nirmala. Quantitative optical spectroscopy has the potential to provide an effective, low-cost, and portable solution for cervical pre-cancer screening in resource-limited communities. However, clinical studies to validate the use of this technology in resource-limited settings require low power consumption and good quality control that is minimally influenced by the operator or variable environmental conditions in the field. The goal of this study was to evaluate the effects of two sources of potential error, calibration and pressure, on the extraction of absorption and scattering properties of normal cervical tissues in a resource-limited setting in Leogane, Haiti. Our results show that self-calibrated measurements improved scattering measurements through real-time correction of system drift, in addition to minimizing the time required for post-calibration. Variations in pressure (tested without the potential confounding effects of calibration error) caused local changes in vasculature and scatterer density that significantly impacted the tissue absorption and scattering properties. Future spectroscopic systems intended for clinical use, particularly where operator training is not viable and environmental conditions are unpredictable, should incorporate a real-time self-calibration channel and collect diffuse reflectance spectra at a consistent pressure to maximize data integrity.
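The real-time self-calibration channel described above works by measuring a stable reference concurrently with the tissue, so that per-wavelength system drift (lamp output, detector response) can be divided out. A minimal sketch of such a ratio correction, with hypothetical names and not the authors' instrument code:

```python
import numpy as np

def self_calibrate(sample_spectrum, calib_now, calib_ref):
    """Correct a measured spectrum for system drift using a concurrent
    calibration-channel measurement.

    sample_spectrum: spectrum measured from tissue (per wavelength)
    calib_now: calibration-channel spectrum acquired at the same time
    calib_ref: calibration-channel spectrum from a baseline acquisition
    """
    drift = calib_now / calib_ref        # per-wavelength drift factor
    return sample_spectrum / drift
```

If the source dims uniformly, both channels scale together and the correction recovers the baseline-referenced spectrum without any post-hoc calibration step.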