Browsing by Subject "Monte Carlo Method"
Now showing 1 - 19 of 19
Item Open Access
Competition between monomeric and dimeric crystals in schematic models for globular proteins. (J Phys Chem B, 2014-07-17)
Fusco, Diana; Charbonneau, Patrick
Advances in experimental techniques and in theoretical models have improved our understanding of protein crystallization. However, they have also left open questions regarding protein phase behavior and self-assembly kinetics, such as why (nearly) identical crystallization conditions can sometimes result in the formation of different crystal forms. Here, we develop a patchy particle model with competing sets of patches that provides a microscopic explanation of this phenomenon. We identify different regimes in which one or two crystal forms can coexist with a low-density fluid. Using analytical approximations, we extend our findings to different crystal phases, providing a general framework for treating protein crystallization when multiple crystal forms compete. Our results also suggest different experimental routes for targeting a specific crystal form, and for reducing the dynamical competition between the two forms, thus facilitating protein crystal assembly.

Item Open Access
Computed tomography dose index and dose length product for cone-beam CT: Monte Carlo simulations. (Journal of applied clinical medical physics, 2011-01-19)
Kim, Sangroh; Song, Haijun; Samei, Ehsan; Yin, Fang-Fang; Yoshizumi, Terry T
Dosimetry in kilovoltage cone beam computed tomography (CBCT) is a challenge due to the limitations of physical measurements. To address this, we used a Monte Carlo (MC) method to estimate the CT dose index (CTDI) and the dose length product (DLP) for a commercial CBCT system. As Dixon and Boone showed that the CTDI concept is applicable to both CBCT and conventional CT, we evaluated the weighted CT dose index (CTDI(w)) and DLP for a commercial CBCT system. Two extended CT phantoms were created in our BEAMnrc/EGSnrc MC system.
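The weighted CTDI mentioned above combines dose at the center and periphery of the phantom; in standard CT dosimetry the weighting is one-third center plus two-thirds periphery, and each CTDI is a dose profile integral normalized by the nominal beam width. A minimal sketch of that arithmetic (function names are ours, not from the paper):

```python
def ctdi_from_profile(dose_profile, dz, beam_width):
    """CTDI: the dose profile integral (DPI) along z, normalized by the
    nominal beam width (all lengths in the same units, dose in cGy)."""
    dpi = sum(d * dz for d in dose_profile)  # rectangle-rule integral of D(z)
    return dpi / beam_width

def weighted_ctdi(ctdi_center, ctdi_periphery):
    """Standard weighted CTDI: 1/3 center + 2/3 periphery."""
    return ctdi_center / 3.0 + 2.0 * ctdi_periphery / 3.0
```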
Before the simulations, the beam collimation of a Varian On-Board Imager (OBI) system was measured with radiochromic films (model: XR-QA). The MC model of the OBI X-ray tube, validated in a previous study, was used to acquire the phase space files of the full-fan and half-fan cone beams. Then, the DOSXYZnrc user code simulated a total of 20 CBCT scans for nominal beam widths from 1 cm to 10 cm. After the simulations, CBCT dose profiles at the center and peripheral locations were extracted and integrated (dose profile integral, DPI) to calculate the CTDI for each beam width. The weighted cone-beam CTDI (CTDI(w,l)) was calculated from the DPI values, and the mean CTDI(w,l) and DLP were derived. We also evaluated the differences in CTDI(w) values between MC simulations and point dose measurements using standard CT phantoms. It was found that CTDI(w,600) was 8.74 ± 0.01 cGy for the head scan and CTDI(w,900) was 4.26 ± 0.01 cGy for the body scan. The DLP was found to be proportional to the beam collimation. We also found that point dose measurements with standard CT phantoms can estimate the CTDI within a 3% difference compared to the fully integrated CTDI from the MC method. This study showed the usability of CTDI as a dose index and DLP as a total dose descriptor in CBCT scans.

Item Open Access
Crystallization of asymmetric patchy models for globular proteins in solution. (Phys Rev E Stat Nonlin Soft Matter Phys, 2013-07)
Fusco, Diana; Charbonneau, Patrick
Asymmetric patchy particle models have recently been shown to describe the crystallization of small globular proteins with near-quantitative accuracy. Here, we investigate how asymmetry in patch geometry and bond energy generally impacts the phase diagram and nucleation dynamics of this family of soft matter models. We find the role of the geometry asymmetry to be weak, but the energy asymmetry to markedly interfere with the crystallization thermodynamics and kinetics.
These results provide a rationale for the success and occasional failure of the George and Wilson proposal for protein crystallization conditions, as well as physical guidance for developing more effective protein crystallization strategies.

Item Open Access
Emergence of limit-periodic order in tiling models. (Phys Rev E Stat Nonlin Soft Matter Phys, 2014-07)
Marcoux, Catherine; Byington, Travis W; Qian, Zongjin; Charbonneau, Patrick; Socolar, Joshua ES
A two-dimensional (2D) lattice model defined on a triangular lattice with nearest- and next-nearest-neighbor interactions based on the Taylor-Socolar monotile is known to have a limit-periodic ground state. The system reaches that state during a slow quench through an infinite sequence of phase transitions. We study the model as a function of the strength of the next-nearest-neighbor interactions and introduce closely related 3D models with only nearest-neighbor interactions that exhibit limit-periodic phases. For models with no next-nearest-neighbor interactions of the Taylor-Socolar type, there is a large degenerate class of ground states, including crystalline patterns and limit-periodic ones, but a slow quench still yields the limit-periodic state. For the Taylor-Socolar lattice model, we present calculations of the diffraction pattern for a particular decoration of the tile that permits exact expressions for the amplitudes, and we identify domain walls that slow the relaxation times in the ordered phases.
For one of the 3D models, we show that the phase transitions are first order, with equilibrium structures that can be more complex than in the 2D case, and we include a proof of aperiodicity for a geometrically simple tile with only nearest-neighbor matching rules.

Item Open Access
Evaluation of an electron Monte Carlo dose calculation algorithm for electron beam. (Journal of applied clinical medical physics, 2008-06-23)
Hu, Ye Angela; Song, Haijun; Chen, Zhe; Zhou, Sumin; Yin, Fang-Fang
The electron Monte Carlo (eMC) dose calculation algorithm of the Eclipse treatment planning system is based heavily upon Monte Carlo simulation of the linac head and modeling of the linac beam characteristics, with minimal measurement of beam data. Commissioning of the eMC algorithm on multiple identical linacs provided a unique opportunity to systematically evaluate the algorithm against actual measurements of clinically relevant beam and dose parameters. In this study, measured and eMC-calculated dose distributions were compared both along and perpendicular to the electron beam direction for each electron energy/applicator/depth combination, using measurement data from four Varian 21EX CLINAC linear accelerators (Varian Medical Systems, Palo Alto, CA). Cutout factors for sizes down to 3 x 3 cm were also compared. Comparisons between the measured and eMC-calculated values show that the R90, R80, R50, and R10 values mostly agree within 3 mm. Measured and calculated bremsstrahlung dose Dx correlate well statistically, although the eMC-calculated Dx values are consistently smaller than the measured ones, with a maximum discrepancy of 1% for the 20 MeV electron beams. Surface dose agrees mostly within 2%. Field width and penumbra agree mostly within 3 mm. Calculation grid size is found to have a significant effect on the dose calculation. A grid size of 5 mm can produce erroneous dose distributions.
Using a grid size of 2.5 mm and a 3% accuracy specified for the eMC to stop the calculation iteration, the absolute output agrees with measurements within 3% for field sizes of 5 x 5 cm or larger. For the 3 x 3 cm cutout, however, the output disagreement can reach 8%. Our results indicate that the eMC algorithm in Eclipse provides acceptable agreement with measurement data for most clinical situations. A calculation grid size of 2.5 mm or smaller is recommended.

Item Open Access
Importance sampling for the infinite sites model. (Statistical applications in genetics and molecular biology, 2008-01)
Hobolth, Asger; Uyenoyama, Marcy K; Wiuf, Carsten
Importance sampling or Markov chain Monte Carlo sampling is required for state-of-the-art statistical analysis of population genetics data. The applicability of these sampling-based inference techniques depends crucially on the proposal distribution. In this paper, we discuss importance sampling for the infinite sites model. The infinite sites assumption is attractive because it constrains the number of possible genealogies, thereby allowing for the analysis of larger data sets. We recall the Griffiths-Tavaré and Stephens-Donnelly proposals and emphasize the relation between the latter proposal and exact sampling from the infinite alleles model. We also introduce a new proposal that takes knowledge of the ancestral state into account. The new proposal is derived from a new result on exact sampling from a single site.
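In its generic form, the importance sampling idea underlying such proposals is to draw from a tractable proposal distribution q and reweight by p/q. The self-normalized estimator below is a minimal illustration; the toy densities and proposal here are stand-ins, not the genealogical proposals discussed in the paper:

```python
import math
import random

def importance_estimate(f, log_p, log_q, draw_q, n=100_000, seed=1):
    """Self-normalized importance sampling estimate of E_p[f(X)]:
    draw x ~ q, weight by w = p(x)/q(x), return sum(w*f)/sum(w)."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        x = draw_q(rng)
        w = math.exp(log_p(x) - log_q(x))  # unnormalized importance weight
        num += w * f(x)
        den += w
    return num / den

# Toy example: estimate E[X] = 0 under N(0,1) using a mismatched N(1,1) proposal.
log_norm = lambda m: (lambda x: -0.5 * (x - m) ** 2)  # log-density up to a constant
est = importance_estimate(lambda x: x, log_norm(0.0), log_norm(1.0),
                          lambda rng: rng.gauss(1.0, 1.0))
```

Because both densities are unit-variance normals, their normalizing constants cancel in the self-normalized weights, so working with log-densities up to a constant is exact here.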
The methods are illustrated on simulated data sets and on the data considered in Griffiths and Tavaré (1994).

Item Open Access
Inference for nonlinear epidemiological models using genealogies and time series. (PLoS Comput Biol, 2011-08)
Rasmussen, David A; Ratmann, Oliver; Koelle, Katia
Phylodynamics - the field aiming to quantitatively integrate the ecological and evolutionary dynamics of rapidly evolving populations like those of RNA viruses - increasingly relies upon coalescent approaches to infer past population dynamics from reconstructed genealogies. As sequence data have become more abundant, these approaches are beginning to be used on populations undergoing rapid and rather complex dynamics. In such cases, the simple demographic models that current phylodynamic methods employ can be limiting. First, these models are not ideal for yielding biological insight into the processes that drive the dynamics of the populations of interest. Second, these models differ in form from the mechanistic, and often stochastic, population dynamic models that are currently widely used when fitting models to time series data. As such, their use does not allow both genealogical data and time series data to be considered in tandem when conducting inference. Here, we present a flexible statistical framework for phylodynamic inference that goes beyond these current limitations. The framework we present employs a recently developed method known as particle MCMC to fit stochastic, nonlinear mechanistic models for complex population dynamics to gene genealogies and time series data in a Bayesian framework.
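A minimal discrete-time stochastic (chain-binomial) SIR simulation, of the general kind such particle methods repeatedly run as their state-propagation step, might look like the following sketch; parameter values and function names are illustrative only:

```python
import math
import random

def stochastic_sir(S, I, R, beta, gamma, steps, seed=0):
    """Chain-binomial SIR: per step, each susceptible is infected with
    probability 1 - exp(-beta * I / N) and each infected recovers with
    probability 1 - exp(-gamma). Returns the (S, I, R) trajectory."""
    rng = random.Random(seed)
    N = S + I + R
    path = [(S, I, R)]
    for _ in range(steps):
        p_inf = 1.0 - math.exp(-beta * I / N)
        p_rec = 1.0 - math.exp(-gamma)
        new_inf = sum(rng.random() < p_inf for _ in range(S))  # binomial draw
        new_rec = sum(rng.random() < p_rec for _ in range(I))
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        path.append((S, I, R))
    return path

traj = stochastic_sir(990, 10, 0, beta=0.6, gamma=0.2, steps=100)
```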
We demonstrate our approach using a nonlinear Susceptible-Infected-Recovered (SIR) model for the transmission dynamics of an infectious disease and show through simulations that it provides accurate estimates of past disease dynamics and key epidemiological parameters from genealogies with or without accompanying time series data.

Item Open Access
Likelihoods from summary statistics: recent divergence between species. (Genetics, 2005-11)
Leman, Scotland C; Chen, Yuguo; Stajich, Jason E; Noor, Mohamed AF; Uyenoyama, Marcy K
We describe an importance-sampling method for approximating likelihoods of population parameters based on multiple summary statistics. In this first application, we address the demographic history of closely related members of the Drosophila pseudoobscura group. We base the maximum-likelihood estimation of the time since speciation and of the effective population sizes of the extant and ancestral populations on the pattern of nucleotide variation at DPS2002, a noncoding region tightly linked to a paracentric inversion that strongly contributes to reproductive isolation. Consideration of summary statistics rather than entire nucleotide sequences permits a compact description of the genealogy of the sample. We use importance sampling first to propose a genealogical and mutational history consistent with the observed array of summary statistics and then to correct the likelihood with the exact probability of the history, determined from a system of recursions. Analysis of a subset of the data, for which recursive computation of the exact likelihood was feasible, indicated close agreement between the approximate and exact likelihoods.
Our results for the complete data set also compare well with those obtained through Metropolis-Hastings sampling of fully resolved genealogies of entire nucleotide sequences.

Item Open Access
Linked Sensitivity Analysis, Calibration, and Uncertainty Analysis Using a System Dynamics Model for Stroke Comparative Effectiveness Research. (Medical decision making: an international journal of the Society for Medical Decision Making, 2016-11)
Tian, Yuan; Hassmiller Lich, Kristen; Osgood, Nathaniel D; Eom, Kirsten; Matchar, David B
Background
As health services researchers and decision makers tackle more difficult problems using simulation models, the number of parameters and the corresponding degree of uncertainty have increased. This often results in reduced confidence in such complex models to guide decision making.
Objective
To demonstrate a systematic approach of linked sensitivity analysis, calibration, and uncertainty analysis to improve confidence in complex models.
Methods
Four techniques were integrated and applied to a System Dynamics stroke model of US veterans, which was developed to inform systemwide intervention and research planning: the Morris method (sensitivity analysis), a multistart Powell hill-climbing algorithm and generalized likelihood uncertainty estimation (calibration), and Monte Carlo simulation (uncertainty analysis).
Results
Of 60 uncertain parameters, sensitivity analysis identified 29 needing calibration, 7 that did not need calibration but significantly influenced key stroke outcomes, and 24 that influenced neither calibration nor stroke outcomes and were fixed at their best-guess values. One thousand alternative well-calibrated baselines were obtained to reflect calibration uncertainty and carried into the uncertainty analysis. The initial stroke incidence rate among veterans was identified as the most influential uncertain parameter, for which further data should be collected. That said, accounting for current uncertainty, the analysis of 15 distinct prevention and treatment interventions provided a robust conclusion that hypertension control for all veterans would yield the largest gain in quality-adjusted life years.
Conclusions
For complex health care models, a mixed approach was applied to examine the uncertainty surrounding key stroke outcomes and the robustness of conclusions. We demonstrate that this rigorous approach can be practical, and we advocate for such analysis to promote understanding of the limits of certainty in applying models to current decisions and to guide future data collection.

Item Open Access
Monte Carlo methods for localization of cones given multielectrode retinal ganglion cell recordings. (Network (Bristol, England), 2013-01)
Sadeghi, K; Gauthier, JL; Field, GD; Greschner, M; Agne, M; Chichilnisky, EJ; Paninski, L
It has recently become possible to identify cone photoreceptors in primate retina from multi-electrode recordings of ganglion cell spiking driven by visual stimuli of sufficiently high spatial resolution. In this paper we present a statistical approach to the problem of identifying the number, locations, and color types of the cones observed in this type of experiment. We develop an adaptive Markov chain Monte Carlo (MCMC) method that explores the space of cone configurations, using a Linear-Nonlinear-Poisson (LNP) encoding model of ganglion cell spiking output, while analytically integrating out the functional weights between cones and ganglion cells. This method provides information about our posterior certainty about the inferred cone properties and additionally leads to improvements in both the speed and quality of the inferred cone maps, compared to earlier "greedy" computational approaches.

Item Open Access
Optimal management of Riata leads with no known electrical abnormalities or externalization: a decision analysis. (Journal of cardiovascular electrophysiology, 2015-02)
Pokorney, Sean D; Zhou, Ke; Matchar, David B; Love, Sean; Zeitler, Emily P; Lewis, Robert; Piccini, Jonathan P
Introduction
Riata and Riata ST implantable cardioverter-defibrillator (ICD) leads (St. Jude Medical, Sylmar, CA, USA) can develop conductor cable externalization and/or electrical failure. Optimal management of these leads remains unknown.
Methods and results
A Markov model compared 4 lead management strategies: (1) routine device interrogation for electrical failure, (2) systematic yearly fluoroscopic screening plus routine device interrogation, (3) implantation of a new ICD lead with capping of the in situ lead, and (4) implantation of a new ICD lead with extraction of the in situ lead. The base case was a 64-year-old primary prevention ICD patient. Modeling demonstrated the following average life expectancies: capping with a new lead implanted, 134.5 months; extraction with a new lead implanted, 134.0 months; fluoroscopy with routine interrogation, 133.9 months; and routine interrogation, 133.5 months. One-way sensitivity analyses identified capping as the preferred strategy, with only one parameter having a threshold value: when the risk of nonarrhythmic death associated with lead abandonment is greater than 0.05% per year, lead extraction is preferred over capping. A second-order Monte Carlo simulation (n = 10,000), as a probabilistic sensitivity analysis, found that lead revision was favored with 100% certainty (extraction 76% and capping 24%).
Conclusions
Overall, there were minimal differences in survival with monitoring versus active lead management approaches. There is no evidence to support fluoroscopic screening for externalization of Riata or Riata ST leads.

Item Open Access
Performance metrics of an optical spectral imaging system for intra-operative assessment of breast tumor margins. (Opt Express, 2010-04-12)
Bydlon, TM; Kennedy, SA; Richards, LM; Brown, JQ; Yu, B; Junker, MS; Gallagher, J; Geradts, J; Wilke, LG; Ramanujam, N
As many as 20-70% of patients undergoing breast conserving surgery require repeat surgeries due to a close or positive surgical margin diagnosed post-operatively [1]. Currently there are no widely accepted tools for intra-operative margin assessment, which is a significant unmet clinical need. Our group has developed a first-generation optical visible spectral imaging platform to image the molecular composition of breast tumor margins and has tested it clinically in 48 patients in a previously published study [2]. The goal of this paper is to report on the performance metrics of the system and compare them to clinical criteria for intra-operative tumor margin assessment. The system was found to have an average signal-to-noise ratio (SNR) >100 and <15% error in the extraction of optical properties, indicating that there is sufficient SNR to leverage the differences in optical properties between negative and close/positive margins. The probe had a sensing depth of 0.5-2.2 mm over the wavelength range of 450-600 nm, which is consistent with the pathologic criterion for clear margins of 0-2 mm. There was <1% cross-talk between adjacent channels of the multi-channel probe, which shows that multiple sites can be measured simultaneously with negligible cross-talk between adjacent sites. Lastly, the system and measurement procedure were found to be reproducible when evaluated with repeated measures, with a low coefficient of variation (<0.11).
The only aspect of the system not optimized for intra-operative use was the imaging time. The manuscript includes a discussion of how the speed of the system can be improved to work within the time constraints of an intra-operative setting.

Item Open Access
Phylodynamic inference for structured epidemiological models. (PLoS Comput Biol, 2014-04)
Rasmussen, David A; Volz, Erik M; Koelle, Katia
Coalescent theory is routinely used to estimate past population dynamics and demographic parameters from genealogies. While early work in coalescent theory only considered simple demographic models, advances in theory have allowed for increasingly complex demographic scenarios to be considered. The success of this approach has led to coalescent-based inference methods being applied to populations with rapidly changing population dynamics, including pathogens like RNA viruses. However, fitting epidemiological models to genealogies via coalescent models remains a challenging task, because pathogen populations often exhibit complex, nonlinear dynamics and are structured by multiple factors. Moreover, it often becomes necessary to consider stochastic variation in population dynamics when fitting such complex models to real data. Using recently developed structured coalescent models that accommodate complex population dynamics and population structure, we develop a statistical framework for fitting stochastic epidemiological models to genealogies. By combining particle filtering methods with Bayesian Markov chain Monte Carlo methods, we are able to fit a wide class of stochastic, nonlinear epidemiological models with different forms of population structure to genealogies. We demonstrate our framework using two structured epidemiological models: a model with disease progression between multiple stages of infection and a two-population model reflecting spatial structure.
We apply the multi-stage model to HIV genealogies and show that the proposed method can be used to estimate the stage-specific transmission rates and prevalence of HIV. Finally, using the two-population model, we explore how much information about population structure is contained in genealogies and what sample sizes are necessary to reliably infer parameters like migration rates.

Item Open Access
Power and sample size calculations for the Wilcoxon-Mann-Whitney test in the presence of death-censored observations. (Stat Med, 2015-02-10)
Matsouaka, Roland A; Betensky, Rebecca A
We consider a clinical trial of a potentially lethal disease in which patients are randomly assigned to two treatment groups and are followed for a fixed period of time; a continuous endpoint is measured at the end of follow-up. For some patients, however, death (or severe disease progression) may preclude measurement of the endpoint. A statistical analysis that includes only patients with endpoint measurements may be biased. An alternative analysis includes all randomized patients, with rank scores assigned to the patients who are available for the endpoint measurement on the basis of the magnitude of their responses and with 'worst-rank' scores assigned to those patients whose death precluded the measurement of the continuous endpoint. The worst-rank scores are worse than all observed rank scores. The treatment effect is then evaluated using the Wilcoxon-Mann-Whitney test. In this paper, we derive closed-form formulae for the power and sample size of the Wilcoxon-Mann-Whitney test when missing measurements of the continuous endpoints due to death are replaced by worst-rank scores. We distinguish two approaches for assigning the worst-rank scores. In the tied worst-rank approach, all deaths are weighted equally, and the worst-rank scores are set to a single value that is worse than all measured responses.
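As a sketch, the tied worst-rank scoring just described can be implemented by giving every death the same midrank below all measured responses (function and variable names are ours; ties among measured responses are ignored for simplicity):

```python
def tied_worst_rank_scores(values, died):
    """Rank scores for the tied worst-rank approach: all deaths share the
    midrank of ranks 1..n_dead, and survivors occupy ranks n_dead+1..n in
    order of their measured response (higher response = higher rank)."""
    n, n_dead = len(values), sum(died)
    death_rank = (1 + n_dead) / 2.0  # midrank of the tied worst block
    ranks = [death_rank] * n
    # sort survivors by measured endpoint; values of the dead are ignored
    alive = sorted((v, i) for i, (v, d) in enumerate(zip(values, died)) if not d)
    for offset, (_, i) in enumerate(alive):
        ranks[i] = n_dead + 1 + offset
    return ranks
```

The resulting scores can be fed directly into a Wilcoxon-Mann-Whitney rank-sum comparison of the two treatment groups.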
In the untied worst-rank approach, the worst-rank scores further rank patients according to their time of death, so that an earlier death is considered worse than a later death, which in turn is worse than all measured responses. In addition, we propose four methods for implementing the sample size formulae for a trial with expected early deaths. We conduct Monte Carlo simulation studies to evaluate the accuracy of our power and sample size formulae and to compare the four sample size estimation methods.

Item Open Access
Rapid ratiometric determination of hemoglobin concentration using UV-VIS diffuse reflectance at isosbestic wavelengths. (Opt Express, 2010-08-30)
Phelps, Janelle E; Vishwanath, Karthik; Chang, Vivide TC; Ramanujam, Nirmala
We developed a ratiometric method capable of estimating total hemoglobin concentration from optically measured diffuse reflectance spectra. The three isosbestic wavelength ratio pairs that best correlated with total hemoglobin concentration, independent of saturation and scattering, were 545/390, 452/390, and 529/390 nm. These wavelength pairs were selected using forward Monte Carlo simulations, which were then used to extract hemoglobin concentration from experimental phantom measurements. Linear regression coefficients from the simulated data were directly applied to the phantom data by calibrating for instrument throughput using a single phantom. Phantoms with variable scattering and hemoglobin saturation were tested with two different instruments, and the average percent errors between the expected and ratiometrically extracted hemoglobin concentrations were as low as 6.3%. A correlation of r = 0.88 between hemoglobin concentration extracted using the 529/390 nm isosbestic ratio and a scalable inverse Monte Carlo model was achieved for in vivo dysplastic cervical measurements (hemoglobin concentrations have been shown to be diagnostic for the detection of cervical pre-cancer by our group).
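The ratiometric approach above is, at its core, a simple calibration: regress known hemoglobin concentrations on a reflectance ratio at an isosbestic pair (e.g. 529/390 nm), then invert the fit for new measurements. The linear model below mirrors the "linear regression coefficients" the abstract mentions, but the functional form, numbers, and names are our illustrative assumptions, not the paper's:

```python
def fit_line(ratios, hb_concs):
    """Ordinary least-squares fit hb = a * ratio + b from calibration data."""
    n = len(ratios)
    mx, my = sum(ratios) / n, sum(hb_concs) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(ratios, hb_concs))
         / sum((x - mx) ** 2 for x in ratios))
    return a, my - a * mx

def hb_from_ratio(ratio, a, b):
    """Estimate total hemoglobin from a single isosbestic reflectance ratio."""
    return a * ratio + b
```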
These results indicate that such a simple ratiometric method has the potential to be used in clinical applications where tissue hemoglobin concentrations need to be rapidly quantified in vivo.

Item Open Access
The chronic kidney disease model: a general purpose model of disease progression and treatment. (BMC medical informatics and decision making, 2011-06-16)
Orlando, Lori A; Belasco, Eric J; Patel, Uptal D; Matchar, David B
Background
Chronic kidney disease (CKD) is the focus of recent national policy efforts; however, decision makers must account for multiple therapeutic options, comorbidities and complications. The objective of the Chronic Kidney Disease model is to provide guidance to decision makers. We describe this model and give an example of how it can inform clinical and policy decisions.
Methods
Monte Carlo simulation of CKD natural history and treatment. Health states include myocardial infarction, stroke with and without disability, congestive heart failure, CKD stages 1-5, bone disease, dialysis, transplant, and death. Each cycle is 1 month. Projections account for race, age, gender, diabetes, proteinuria, hypertension, cardiac disease, and CKD stage. Treatment strategies include hypertension control, diabetes control, use of HMG-CoA reductase inhibitors, use of angiotensin converting enzyme inhibitors, nephrology specialty care, CKD screening, and a combination of these. The model architecture is flexible, permitting updates as new data become available. The primary outcome is quality-adjusted life years (QALYs). Secondary outcomes include health state events and CKD progression rate.
Results
The model was validated against GFR change/year, -3.0 ± 1.9 vs. -1.7 ± 3.4 (in the AASK trial), and against annual myocardial infarction and mortality rates, 3.6 ± 0.9% and 1.6 ± 0.5% vs. 4.4% and 1.6% in the Go study. To illustrate the model's utility, we estimated the lifetime impact of a hypothetical treatment for primary prevention of vascular disease. As vascular risk declined, QALYs improved but the risk of dialysis increased. At baseline and at 20% and 60% risk reduction, respectively: QALYs = 17.6, 18.2, and 19.0; dialysis = 7.7%, 8.1%, and 10.4%.
Conclusions
The CKD Model is a valid, general purpose model intended as a resource to inform clinical and policy decisions improving CKD care. Its value as a tool is illustrated by our example, which projects a relationship between decreasing cardiac disease and increasing ESRD.

Item Open Access
Towards a field-compatible optical spectroscopic device for cervical cancer screening in resource-limited settings: effects of calibration and pressure. (Opt Express, 2011-09-12)
Chang, Vivide Tuan-Chyan; Merisier, Delson; Yu, Bing; Walmer, David K; Ramanujam, Nirmala
Quantitative optical spectroscopy has the potential to provide an effective, low-cost, and portable solution for cervical pre-cancer screening in resource-limited communities. However, clinical studies to validate the use of this technology in resource-limited settings require low power consumption and good quality control that is minimally influenced by the operator or by variable environmental conditions in the field. The goal of this study was to evaluate the effects of two sources of potential error, calibration and pressure, on the extraction of absorption and scattering properties of normal cervical tissues in a resource-limited setting in Leogane, Haiti. Our results show that self-calibrated measurements improved scattering measurements through real-time correction of system drift, in addition to minimizing the time required for post-calibration.
Variations in pressure (tested without the potential confounding effects of calibration error) caused local changes in vasculature and scatterer density that significantly impacted the tissue absorption and scattering properties. Future spectroscopic systems intended for clinical use, particularly where operator training is not viable and environmental conditions are unpredictable, should incorporate a real-time self-calibration channel and collect diffuse reflectance spectra at a consistent pressure to maximize data integrity.

Item Open Access
Tricuspid regurgitation and right ventricular function after mitral valve surgery with or without concomitant tricuspid valve procedure. (J Thorac Cardiovasc Surg, 2013-11)
Desai, Ravi R; Vargas Abello, Lina Maria; Klein, Allan L; Marwick, Thomas H; Krasuski, Richard A; Ye, Ying; Nowicki, Edward R; Rajeswaran, Jeevanantham; Blackstone, Eugene H; Pettersson, Gösta B
OBJECTIVES: To study the effect of mitral valve repair with or without concomitant tricuspid valve repair on functional tricuspid regurgitation and right ventricular function. METHODS: From 2001 to 2007, 1833 patients with degenerative mitral valve disease, a structurally normal tricuspid valve, and no coronary artery disease underwent mitral valve repair, and 67 underwent concomitant tricuspid valve repair. Right ventricular function (myocardial performance index and tricuspid annular plane systolic excursion) was measured before and after surgery using transthoracic echocardiography for randomly selected patients with tricuspid regurgitation grades 0, 1+, and 2+ (100 patients for each grade) and 93 with grade 3+/4+, 393 patients in total. RESULTS: In patients with mild (<3+) preoperative tricuspid regurgitation, mitral valve repair alone was associated with reduced tricuspid regurgitation and mild worsening of right ventricular function.
Tricuspid regurgitation of 2+ or greater developed in fewer than 20% of patients, and right ventricular function had improved, but not to preoperative levels, at 3 years. In patients with severe (3+/4+) preoperative tricuspid regurgitation, mitral valve repair alone reduced tricuspid regurgitation and improved right ventricular function; however, tricuspid regurgitation of 2+ or greater returned and right ventricular function worsened toward preoperative levels within 3 years. Concomitant tricuspid valve repair effectively eliminated severe tricuspid regurgitation and improved right ventricular function. Also, over time, tricuspid regurgitation did not return, and right ventricular function continued to improve to levels comparable to those of patients with lower grades of preoperative tricuspid regurgitation. CONCLUSIONS: In patients with mitral valve disease and severe tricuspid regurgitation, mitral valve repair alone was associated with improved tricuspid regurgitation and right ventricular function. However, the improvements were incomplete and temporary. In contrast, concomitant tricuspid valve repair effectively and durably eliminated severe tricuspid regurgitation and improved right ventricular function toward normal, supporting an aggressive approach to important functional tricuspid regurgitation.

Item Open Access
Wavelength optimization for quantitative spectral imaging of breast tumor margins. (PloS one, 2013-01)
Lo, Justin Y; Brown, J Quincy; Dhar, Sulochana; Yu, Bing; Palmer, Gregory M; Jokerst, Nan M; Ramanujam, Nirmala
A wavelength selection method that combines an inverse Monte Carlo model of reflectance and a genetic algorithm for global optimization was developed for the application of spectral imaging of breast tumor margins. The selection of wavelengths impacts system design in cost, size, and accuracy of tissue quantitation.
The minimum number of wavelengths required for accurate quantitation of tissue optical properties is 8, with diminishing gains for additional wavelengths. The resulting wavelength choices for the specific probe geometry used in the breast tumor margin spectral imaging application were tested in an independent, pathology-confirmed ex vivo breast tissue data set and in tissue-mimicking phantoms. In breast tissue, the optical endpoints (hemoglobin, β-carotene, and scattering) that provide the contrast between normal and malignant tissue specimens are extracted with the optimized 8-wavelength set with <9% error compared to the full spectrum (450-600 nm). A multi-absorber liquid phantom study was also performed to show the extraction accuracy with and without optimization. This technique for selecting wavelengths can be used for designing spectral imaging systems for other clinical applications.
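A genetic algorithm for wavelength subset selection, in the spirit of (but much simpler than) the one described above, can be sketched as follows. In a real system the cost function would be the optical-property extraction error from the inverse Monte Carlo model; here it is an arbitrary placeholder, and all names are ours:

```python
import random

def ga_select(wavelengths, k, cost, pop_size=30, generations=40, seed=0):
    """Toy genetic algorithm: evolve k-element wavelength subsets to
    minimize cost(subset), using truncation selection plus point mutation."""
    rng = random.Random(seed)

    def mutate(ind):
        s = set(ind)
        s.discard(rng.choice(ind))  # drop one wavelength
        s.add(rng.choice([w for w in wavelengths if w not in s]))  # add another
        return tuple(sorted(s))

    population = [tuple(sorted(rng.sample(wavelengths, k)))
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=cost)            # best (lowest cost) first
        survivors = population[:pop_size // 2]
        population = survivors + [mutate(rng.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return min(population, key=cost)

# Placeholder cost: prefer subsets whose wavelengths sum to a target value.
grid = list(range(450, 601, 10))  # candidate wavelengths, nm
best = ga_select(grid, 3, lambda sub: abs(sum(sub) - 1570))
```

Truncation selection with mutation only (no crossover) is the simplest GA variant; it keeps the sketch short while preserving the evolve-and-select structure.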