Browsing by Subject "Software"
Item Open Access
A cloud-compatible bioinformatics pipeline for ultrarapid pathogen identification from next-generation sequencing of clinical samples. (Genome Res, 2014-07)
Naccache, Samia N; Federman, Scot; Veeraraghavan, Narayanan; Zaharia, Matei; Lee, Deanna; Samayoa, Erik; Bouquet, Jerome; Greninger, Alexander L; Luk, Ka-Cheung; Enge, Barryett; Wadford, Debra A; Messenger, Sharon L; Genrich, Gillian L; Pellegrino, Kristen; Grard, Gilda; Leroy, Eric; Schneider, Bradley S; Fair, Joseph N; Martínez, Miguel A; Isa, Pavel; Crump, John A; DeRisi, Joseph L; Sittler, Taylor; Hackett, John; Miller, Steve; Chiu, Charles Y
Unbiased next-generation sequencing (NGS) approaches enable comprehensive pathogen detection in the clinical microbiology laboratory and have numerous applications for public health surveillance, outbreak investigation, and the diagnosis of infectious diseases. However, practical deployment of the technology is hindered by the bioinformatics challenge of analyzing results accurately and in a clinically relevant timeframe. Here we describe SURPI ("sequence-based ultrarapid pathogen identification"), a computational pipeline for pathogen identification from complex metagenomic NGS data generated from clinical samples, and demonstrate use of the pipeline in the analysis of 237 clinical samples comprising more than 1.1 billion sequences. Deployable on both cloud-based and standalone servers, SURPI leverages two state-of-the-art aligners for accelerated analyses, SNAP and RAPSearch, which are as accurate as existing bioinformatics tools but orders of magnitude faster in performance. In fast mode, SURPI detects viruses and bacteria by scanning data sets of 7-500 million reads in 11 min to 5 h, while in comprehensive mode, all known microorganisms are identified, followed by de novo assembly and protein homology searches for divergent viruses in 50 min to 16 h. SURPI has also directly contributed to real-time microbial diagnosis in acutely ill patients, underscoring its potential key role in the development of unbiased NGS-based clinical assays in infectious diseases that demand rapid turnaround times.

Item Open Access
A functional analysis of the spacer of V(D)J recombination signal sequences. (PLoS Biol, 2003-10)
Lee, Alfred Ian; Fugmann, Sebastian D; Cowell, Lindsay G; Ptaszek, Leon M; Kelsoe, Garnett; Schatz, David G
During lymphocyte development, V(D)J recombination assembles antigen receptor genes from component V, D, and J gene segments. These gene segments are flanked by a recombination signal sequence (RSS), which serves as the binding site for the recombination machinery. The murine Jbeta2.6 gene segment is a recombinationally inactive pseudogene, but examination of its RSS reveals no obvious reason for its failure to recombine. Mutagenesis of the Jbeta2.6 RSS demonstrates that the sequences of the heptamer, nonamer, and spacer are all important. Strikingly, changes solely in the spacer sequence can result in dramatic differences in the level of recombination. The subsequent analysis of a library of more than 4,000 spacer variants revealed that spacer residues of particular functional importance are correlated with their degree of conservation. Biochemical assays indicate distinct cooperation between the spacer and heptamer/nonamer along each step of the reaction pathway. The results suggest that the spacer serves not only to ensure the appropriate distance between the heptamer and nonamer but also regulates RSS activity by providing additional RAG:RSS interaction surfaces. We conclude that while RSSs are defined by a "digital" requirement for absolutely conserved nucleotides, the quality of RSS function is determined in an "analog" manner by numerous complex interactions between the RAG proteins and the less-well conserved nucleotides in the heptamer, the nonamer, and, importantly, the spacer. Those modulatory effects are accurately predicted by a new computational algorithm for "RSS information content." The interplay between such binary and multiplicative modes of interactions provides a general model for analyzing protein-DNA interactions in various biological systems.
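The abstract names but does not spell out the "RSS information content" algorithm. As a rough, hypothetical illustration of the general idea — scoring a candidate signal sequence against position-specific nucleotide frequencies learned from functional RSSs — here is a minimal position-weight-matrix sketch in Python; the sequences and pseudocount are invented for the example.

```python
import math

# Hypothetical aligned heptamer sequences from known functional RSSs.
known_heptamers = ["CACAGTG", "CACTGTG", "CACAGTG", "CACAATG"]
BASES = "ACGT"
PSEUDOCOUNT = 0.25  # assumed smoothing constant

length = len(known_heptamers[0])
# Position-specific base counts with pseudocounts.
counts = [{b: PSEUDOCOUNT for b in BASES} for _ in range(length)]
for seq in known_heptamers:
    for i, base in enumerate(seq):
        counts[i][base] += 1

def log_odds_score(candidate):
    """Sum of per-position log-odds of candidate vs. a uniform background."""
    total = len(known_heptamers) + 4 * PSEUDOCOUNT
    score = 0.0
    for i, base in enumerate(candidate):
        freq = counts[i][base] / total
        score += math.log2(freq / 0.25)  # 0.25 = uniform background frequency
    return score

print(log_odds_score("CACAGTG"))  # consensus-like signal scores high
print(log_odds_score("GTGTCAC"))  # scrambled signal scores low
```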
Item Open Access
A new fully automated approach for aligning and comparing shapes. (Anatomical record (Hoboken, N.J. : 2007), 2015-01)
Boyer, Doug M; Puente, Jesus; Gladman, Justin T; Glynn, Chris; Mukherjee, Sayan; Yapuncich, Gabriel S; Daubechies, Ingrid
Three-dimensional geometric morphometric (3DGM) methods for placing landmarks on digitized bones have become increasingly sophisticated in the last 20 years, including greater degrees of automation. One aspect shared by all 3DGM methods is that the researcher must designate initial landmarks. Thus, researcher interpretations of homology and correspondence are required for and influence representations of shape. We present an algorithm allowing fully automatic placement of correspondence points on samples of 3D digital models representing bones of different individuals/species, which can then be input into standard 3DGM software and analyzed with dimension reduction techniques. We test this algorithm against several samples, primarily a dataset of 106 primate calcanei represented by 1,024 correspondence points per bone. Results of our automated analysis of these samples are compared to a published study using a traditional 3DGM approach with 27 landmarks on each bone. Data were analyzed with morphologika 2.5 and PAST. Our analyses returned strong correlations between principal component scores, similar variance partitioning among components, and similarities between the shape spaces generated by the automatic and traditional methods. While cluster analyses of both automatically generated and traditional datasets produced broadly similar patterns, there were also differences. Overall these results suggest to us that automatic quantifications can lead to shape spaces that are as meaningful as those based on observer landmarks, thereby presenting potential to save time in data collection, increase completeness of morphological quantification, eliminate observer error, and allow comparisons of shape diversity between different types of bones. We provide an R package for implementing this analysis.
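The authors distribute their method as an R package; purely as an illustration of the downstream 3DGM steps the abstract describes — superimposing correspondence-point configurations and reducing them to principal component scores — here is a hedged NumPy sketch using the standard orthogonal Procrustes solution. The point sets are random stand-ins, not real calcaneus data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in data: 10 specimens x 1,024 correspondence points x 3 coordinates.
shapes = rng.normal(size=(10, 1024, 3))

def align_to(reference, shape):
    """Ordinary Procrustes: center, scale, then rotate shape onto reference."""
    A = reference - reference.mean(axis=0)
    B = shape - shape.mean(axis=0)
    A /= np.linalg.norm(A)
    B /= np.linalg.norm(B)
    U, _, Vt = np.linalg.svd(B.T @ A)
    return B @ (U @ Vt)  # optimal rotation from the SVD

reference = shapes[0]
aligned = np.array([align_to(reference, s) for s in shapes])

# PCA on the flattened aligned coordinates (the "shape space").
X = aligned.reshape(len(aligned), -1)
X -= X.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
pc_scores = X @ Vt[:2].T  # scores on the first two components
print(pc_scores.shape)    # (10, 2)
```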
Item Open Access
A new phylogenetic data standard for computable clade definitions: the Phyloreference Exchange Format (Phyx). (PeerJ, 2022-01)
Vaidya, Gaurav; Cellinese, Nico; Lapp, Hilmar
To be computationally reproducible and efficient, integration of disparate data depends on shared entities whose matching meaning (semantics) can be computationally assessed. For biodiversity data, one of the most prevalent shared entities for linking data records is the associated taxon concept. Unlike Linnaean taxon names, the traditional way in which taxon concepts are provided, phylogenetic definitions are native to phylogenetic trees and offer well-defined semantics that can be transformed into formal, computationally evaluable logic expressions. These attributes make them highly suitable for phylogeny-driven comparative biology by allowing computationally verifiable and reproducible integration of taxon-linked data against Tree of Life-scale phylogenies. To achieve this, the first step is transforming phylogenetic definitions from the natural language text in which they are published to a structured, interoperable data format that maintains strong ties to semantics and lends itself well to sharing, reuse, and long-term archival. To this end, we developed the Phyloreference Exchange Format (Phyx), a JSON-LD-based text format encompassing rich metadata for all elements of a phylogenetic definition, and we created a supporting software library, phyx.js, to streamline computational management of such files. Together they form a foundation layer for digitizing and computing with phylogenetic definitions of clades.
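The authoritative Phyx schema is defined by the project itself; the field names in the sketch below are guesses, not the real vocabulary. The intent is only to show the general shape of a JSON-LD document that pairs a clade name with machine-readable specifiers.

```python
import json

# Illustrative only: field names are hypothetical, not the actual Phyx vocabulary.
phyloreference = {
    "@context": "https://example.org/phyx-context.jsonld",  # placeholder URL
    "phylorefs": [
        {
            "label": "Testudines",
            "cladeDefinition": "The least inclusive clade containing "
                               "specifier A and specifier B",
            "internalSpecifiers": [
                {"scientificName": "Chelonia mydas"},
                {"scientificName": "Chelus fimbriata"},
            ],
        }
    ],
}
print(json.dumps(phyloreference, indent=2))
```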
Item Open Access
A robotics platform for automated batch fabrication of high density, microfluidics-based DNA microarrays, with applications to single cell, multiplex assays of secreted proteins. (The Review of scientific instruments, 2011-09)
Ahmad, Habib; Sutherland, Alex; Shin, Young Shik; Hwang, Kiwook; Qin, Lidong; Krom, Russell-John; Heath, James R
Microfluidics flow-patterning has been utilized for the construction of chip-scale miniaturized DNA and protein barcode arrays. Such arrays have been used for specific clinical and fundamental investigations in which many proteins are assayed from single cells or other small sample sizes. However, flow-patterned arrays are hand-prepared, and so are impractical for broad applications. We describe an integrated robotics/microfluidics platform for the automated preparation of such arrays, and we apply it to the batch fabrication of up to eighteen chips of flow-patterned DNA barcodes. The resulting substrates are comparable in quality with hand-made arrays and exhibit excellent substrate-to-substrate consistency. We demonstrate the utility and reproducibility of robotics-patterned barcodes by utilizing two flow-patterned chips for highly parallel assays of a panel of secreted proteins from single macrophage cells.

Item Open Access
A tutorial on Bayesian multi-model linear regression with BAS and JASP. (Behavior research methods, 2021-12)
Bergh, Don van den; Clyde, Merlise A; Gupta, Akash R Komarlu Narendra; de Jong, Tim; Gronau, Quentin F; Marsman, Maarten; Ly, Alexander; Wagenmakers, Eric-Jan
Linear regression analyses commonly involve two consecutive stages of statistical inquiry. In the first stage, a single 'best' model is defined by a specific selection of relevant predictors; in the second stage, the regression coefficients of the winning model are used for prediction and for inference concerning the importance of the predictors. However, such second-stage inference ignores the model uncertainty from the first stage, resulting in overconfident parameter estimates that generalize poorly. These drawbacks can be overcome by model averaging, a technique that retains all models for inference, weighting each model's contribution by its posterior probability. Although conceptually straightforward, model averaging is rarely used in applied research, possibly due to the lack of easily accessible software. To bridge the gap between theory and practice, we provide a tutorial on linear regression using Bayesian model averaging in JASP, based on the BAS package in R. Firstly, we provide theoretical background on linear regression, Bayesian inference, and Bayesian model averaging. Secondly, we demonstrate the method on an example data set from the World Happiness Report. Lastly, we discuss limitations of model averaging and directions for dealing with violations of model assumptions.
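The BAS package implements Bayesian adaptive sampling with proper model priors; the sketch below substitutes the cruder but standard BIC approximation to posterior model probabilities, simply to make the model-averaging arithmetic concrete. The data are simulated and the model prior is uniform, both assumptions of the example.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, names = 100, ["x1", "x2", "x3"]
X = rng.normal(size=(n, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)

def bic(columns):
    """BIC of the least-squares fit using the given predictor columns."""
    design = np.column_stack([np.ones(n)] + [X[:, j] for j in columns])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    rss = np.sum((y - design @ beta) ** 2)
    return n * np.log(rss / n) + design.shape[1] * np.log(n)

models = [c for k in range(4) for c in itertools.combinations(range(3), k)]
bics = np.array([bic(m) for m in models])
# BIC -> approximate posterior model probabilities (uniform model prior).
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()
for m, p in zip(models, w):
    print([names[j] for j in m], round(float(p), 3))
# Posterior inclusion probability of each predictor:
for j, name in enumerate(names):
    print(name, round(float(sum(p for m, p in zip(models, w) if j in m)), 3))
```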
Item Open Access
Advances to Bayesian network inference for generating causal networks from observational biological data. (Bioinformatics, 2004-12-12)
Yu, Jing; Smith, V Anne; Wang, Paul P; Hartemink, Alexander J; Jarvis, Erich D
MOTIVATION: Network inference algorithms are powerful computational tools for identifying putative causal interactions among variables from observational data. Bayesian network inference algorithms hold particular promise in that they can capture linear, non-linear, combinatorial, stochastic and other types of relationships among variables across multiple levels of biological organization. However, challenges remain when applying these algorithms to limited quantities of experimental data collected from biological systems. Here, we use a simulation approach to make advances in our dynamic Bayesian network (DBN) inference algorithm, especially in the context of limited quantities of biological data.
RESULTS: We test a range of scoring metrics and search heuristics to find an effective algorithm configuration for evaluating our methodological advances. We also identify sampling intervals and levels of data discretization that allow the best recovery of the simulated networks. We develop a novel influence score for DBNs that attempts to estimate both the sign (activation or repression) and relative magnitude of interactions among variables. When faced with limited quantities of observational data, combining our influence score with moderate data interpolation reduces a significant portion of false positive interactions in the recovered networks. Together, our advances allow DBN inference algorithms to be more effective in recovering biological networks from experimentally collected data.
AVAILABILITY: Source code and simulated data are available upon request.
SUPPLEMENTARY INFORMATION: http://www.jarvislab.net/Bioinformatics/BNAdvances/
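The published influence score is derived from the conditional probability tables of the learned network; the toy sketch below captures only the flavor of the idea — estimating the sign and relative magnitude of a time-lagged interaction — using a plain lagged correlation in place of the actual score. All data are simulated.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 200
x = rng.normal(size=T)
y = np.empty(T)
y[0] = 0.0
for t in range(1, T):          # y is activated by x with a one-step lag
    y[t] = 0.8 * x[t - 1] + 0.2 * rng.normal()

def lagged_influence(parent, child, lag=1):
    """Signed strength of parent(t-lag) -> child(t), via Pearson correlation."""
    a, b = parent[:-lag], child[lag:]
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

print(lagged_influence(x, y))   # strongly positive: activation
print(lagged_influence(-x, y))  # strongly negative: repression
```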
Item Open Access
American Association of Physicists in Medicine Task Group 263: Standardizing Nomenclatures in Radiation Oncology. (International journal of radiation oncology, biology, physics, 2018-03)
Mayo, Charles S; Moran, Jean M; Bosch, Walter; Xiao, Ying; McNutt, Todd; Popple, Richard; Michalski, Jeff; Feng, Mary; Marks, Lawrence B; Fuller, Clifton D; Yorke, Ellen; Palta, Jatinder; Gabriel, Peter E; Molineu, Andrea; Matuszak, Martha M; Covington, Elizabeth; Masi, Kathryn; Richardson, Susan L; Ritter, Timothy; Morgas, Tomasz; Flampouri, Stella; Santanam, Lakshmi; Moore, Joseph A; Purdie, Thomas G; Miller, Robert C; Hurkmans, Coen; Adams, Judy; Jackie Wu, Qing-Rong; Fox, Colleen J; Siochi, Ramon Alfredo; Brown, Norman L; Verbakel, Wilko; Archambault, Yves; Chmura, Steven J; Dekker, Andre L; Eagle, Don G; Fitzgerald, Thomas J; Hong, Theodore; Kapoor, Rishabh; Lansing, Beth; Jolly, Shruti; Napolitano, Mary E; Percy, James; Rose, Mark S; Siddiqui, Salim; Schadt, Christof; Simon, William E; Straube, William L; St James, Sara T; Ulin, Kenneth; Yom, Sue S; Yock, Torunn I
A substantial barrier to the single- and multi-institutional aggregation of data to support clinical trials, practice quality improvement efforts, and the development of big data analytics resource systems is the lack of standardized nomenclatures for expressing dosimetric data. To address this issue, the American Association of Physicists in Medicine (AAPM) Task Group 263 was charged with providing nomenclature guidelines and values in radiation oncology for use in clinical trials, data-pooling initiatives, population-based studies, and routine clinical care by standardizing: (1) structure names across image processing and treatment planning system platforms; (2) nomenclature for dosimetric data (eg, dose-volume histogram [DVH]-based metrics); (3) templates for clinical trial groups and users of an initial subset of software platforms to facilitate adoption of the standards; and (4) a formalism for the nomenclature schema, which can accommodate the addition of other structures defined in the future. A multisociety, multidisciplinary, multinational group of 57 members representing stakeholders ranging from large academic centers to community clinics and vendors was assembled, including physicists, physicians, dosimetrists, and vendors. The stakeholder groups represented in the membership included the AAPM, American Society for Radiation Oncology (ASTRO), NRG Oncology, European Society for Radiation Oncology (ESTRO), Radiation Therapy Oncology Group (RTOG), Children's Oncology Group (COG), Integrating Healthcare Enterprise in Radiation Oncology (IHE-RO), and the Digital Imaging and Communications in Medicine working group (DICOM WG). A nomenclature system for target and organ-at-risk volumes and DVH nomenclature was developed and piloted to demonstrate viability across a range of clinics and within the framework of clinical trials. The final report was approved by AAPM in October 2017. The approval process included review by 8 AAPM committees, with additional review by ASTRO, ESTRO, and the American Association of Medical Dosimetrists (AAMD). This Executive Summary of the report highlights the key recommendations for clinical practice, research, and trials.
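The authoritative nomenclature tables live in the TG-263 report itself; the structure names and laterality rules below are a reduced, partly hypothetical subset, meant only to show how a planning-system export might be checked against a standard list.

```python
import re

# Reduced, illustrative subset; the authoritative tables are in the TG-263 report.
STANDARD_NAMES = {"SpinalCord", "Brainstem", "Parotid_L", "Parotid_R", "PTV_High"}
LOOKUP = {s.lower(): s for s in STANDARD_NAMES}

def check_structure_name(name):
    """Flag names absent from the standard set, suggesting a near-miss if found."""
    if name in STANDARD_NAMES:
        return f"{name}: OK"
    candidate = re.sub(r"\s+", "", name)                   # drop stray whitespace
    candidate = re.sub(r"(?i)_?(lt|left)$", "_L", candidate)   # assumed suffix rule
    candidate = re.sub(r"(?i)_?(rt|right)$", "_R", candidate)
    if candidate.lower() in LOOKUP:
        return f"{name}: nonstandard, did you mean {LOOKUP[candidate.lower()]}?"
    return f"{name}: not recognized"

for n in ["SpinalCord", "parotid lt", "Cord"]:
    print(check_structure_name(n))
```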
Item Open Access
apex: phylogenetics with multiple genes. (Mol Ecol Resour, 2017-01)
Jombart, Thibaut; Archer, Frederick; Schliep, Klaus; Kamvar, Zhian; Harris, Rebecca; Paradis, Emmanuel; Goudet, Jérome; Lapp, Hilmar
Genetic sequences of multiple genes are becoming increasingly common for a wide range of organisms including viruses, bacteria and eukaryotes. While such data may sometimes be treated as a single locus, in practice a number of biological and statistical phenomena can lead to phylogenetic incongruence. In such cases, different loci should, at least as a preliminary step, be examined and analysed separately. The R software has become a popular platform for phylogenetics, with several packages implementing distance-based, parsimony and likelihood-based phylogenetic reconstruction, and an even greater number of packages implementing phylogenetic comparative methods. Unfortunately, basic data structures and tools for analysing multiple genes have so far been lacking, thereby limiting the potential for investigating phylogenetic incongruence. We introduce the new R package apex to fill this gap. apex implements new object classes, which extend existing standards for storing DNA and amino acid sequences, and provides a number of convenient tools for handling, visualizing and analysing these data. We describe the main features of the package and illustrate its functionalities through the analysis of a simple data set.

Item Restricted
Application description and policy model in collaborative environment for sharing of information on epidemiological and clinical research data sets. (PLoS One, 2010-02-19)
de Carvalho, EC; Batilana, AP; Simkins, J; Martins, H; Shah, J; Rajgor, D; Shah, A; Rockart, S; Pietrobon, R
BACKGROUND: Sharing of epidemiological and clinical data sets among researchers is poor at best, to the detriment of science and the community at large. The purpose of this paper is therefore to (1) describe a novel Web application designed to share information on study data sets focusing on epidemiological clinical research in a collaborative environment and (2) create a policy model placing this collaborative environment into the current scientific social context.
METHODOLOGY: The Database of Databases application was developed based on feedback from epidemiologists and clinical researchers requiring a Web-based platform that would allow for sharing of information about epidemiological and clinical study data sets in a collaborative environment. This platform should ensure that researchers can modify the information. Model-based predictions of the number of publications and funding resulting from combinations of different policy implementation strategies (for metadata and data sharing) were generated using System Dynamics modeling.
PRINCIPAL FINDINGS: The application allows researchers to easily upload information about clinical study data sets, which is searchable and modifiable by other users in a wiki environment. All modifications are filtered by the database principal investigator in order to maintain quality control. The application has been extensively tested and currently contains 130 clinical study data sets from the United States, Australia, China and Singapore. Model results indicated that any policy implementation would be better than the current strategy, that metadata sharing is better than data sharing, and that combined policies achieve the best results in terms of publications.
CONCLUSIONS: Based on our empirical observations and resulting model, the social network environment surrounding the application can assist epidemiologists and clinical researchers in contributing and searching for metadata in a collaborative environment, thus potentially facilitating collaboration efforts among research communities distributed around the globe.

Item Open Access
Assessment of LD matrix measures for the analysis of biological pathway association. (Stat Appl Genet Mol Biol, 2010)
Crosslin, David R; Qin, Xuejun; Hauser, Elizabeth R
Complex diseases will have multiple functional sites, and it will be invaluable to understand the cross-locus interaction in terms of linkage disequilibrium (LD) between those sites (epistasis) in addition to the haplotype-LD effects. We investigated the statistical properties of a class of matrix-based statistics to assess this epistasis. These statistical methods include two LD contrast tests (Zaykin et al., 2006) and partial least squares regression (Wang et al., 2008). To estimate Type 1 error rates and power, we simulated multiple two-variant disease models using the SIMLA software package. SIMLA allows for the joint action of up to two disease genes in the simulated data, with all possible multiplicative interaction effects between them. Our goal was to detect an interaction between multiple disease-causing variants by means of their linkage disequilibrium (LD) patterns with other markers. We measured the effects that marginal disease effect size, haplotype LD, disease prevalence, and minor allele frequency have on cross-locus interaction (epistasis). In the setting of strong allele effects and strong interaction, the correlation between the two disease genes was weak (r=0.2). In a complex system with multiple correlations (both marginal and interaction), it was difficult to determine the source of a significant result. Despite these complications, the partial least squares and modified LD contrast methods maintained adequate power to detect the epistatic effects; however, for many of the analyses we often could not separate interaction from a strong marginal effect. While we did not exhaust the entire parameter space of possible models, we do provide guidance on the effects that population parameters have on cross-locus interaction.
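As a sketch of what an LD contrast statistic computes, the following compares pairwise genotype-correlation matrices between cases and controls using a naive Frobenius-norm contrast; the permutation step that would calibrate significance, and the composite-LD estimator the actual tests use, are omitted. Genotypes are simulated.

```python
import numpy as np

rng = np.random.default_rng(3)
# Simulated 0/1/2 genotypes at 5 markers for cases and controls.
cases = rng.integers(0, 3, size=(300, 5)).astype(float)
controls = rng.integers(0, 3, size=(300, 5)).astype(float)

def ld_matrix(genotypes):
    """Pairwise genotype correlation matrix (a simple stand-in for composite LD)."""
    return np.corrcoef(genotypes, rowvar=False)

contrast = ld_matrix(cases) - ld_matrix(controls)
statistic = float(np.linalg.norm(contrast, "fro") ** 2)
print(round(statistic, 4))
# In practice the null distribution would come from permuting case/control labels.
```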
Item Open Access
Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation. (Opt Express, 2010-08-30)
Chiu, SJ; Li, XT; Nicholas, P; Toth, CA; Izatt, JA; Farsiu, S
Segmentation of anatomical and pathological structures in ophthalmic images is crucial for the diagnosis and study of ocular diseases. However, manual segmentation is often a time-consuming and subjective process. This paper presents an automatic approach for segmenting retinal layers in spectral domain optical coherence tomography images using graph theory and dynamic programming. Results show that this method accurately segments eight retinal layer boundaries in normal adult eyes, agreeing with an expert grader more closely than a second expert grader did.
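The paper's core machinery — representing boundary positions as a graph and extracting each layer boundary as a minimum-weight path — can be sketched with a small dynamic program over a toy cost image; the real method adds gradient-based edge weights, automatic endpoint initialization, and search-region limitation.

```python
import numpy as np

rng = np.random.default_rng(4)
rows, cols = 20, 30
# Toy cost image: low cost along row 8 mimics a dark-to-light layer boundary.
cost = rng.random((rows, cols))
cost[8, :] *= 0.05

def min_cost_path(cost):
    """Column-to-column DP: each step moves right and at most one row up/down."""
    rows, cols = cost.shape
    acc = cost.copy()
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - 1), min(rows, r + 2)
            prev = np.argmin(acc[lo:hi, c - 1]) + lo
            acc[r, c] += acc[prev, c - 1]
            back[r, c] = prev
    path = [int(np.argmin(acc[:, -1]))]     # best end row in the last column
    for c in range(cols - 1, 0, -1):        # backtrack to the first column
        path.append(int(back[path[-1], c]))
    return path[::-1]

print(min_cost_path(cost))  # hovers around row 8
```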
Item Open Access
CMOS-based carbon nanotube pass-transistor logic integrated circuits. (Nature communications, 2012-02)
Ding, Li; Zhang, Zhiyong; Liang, Shibo; Pei, Tian; Wang, Sheng; Li, Yan; Zhou, Weiwei; Liu, Jie; Peng, Lian-Mao
Field-effect transistors based on carbon nanotubes have been shown to be faster and less energy consuming than their silicon counterparts. However, ensuring these advantages are maintained for integrated circuits is a challenge. Here we demonstrate that a significant reduction in the use of field-effect transistors can be achieved by constructing carbon nanotube-based integrated circuits based on a pass-transistor logic configuration, rather than a complementary metal-oxide semiconductor configuration. Logic gates are constructed on individual carbon nanotubes via a doping-free approach and with a single power supply at voltages as low as 0.4 V. The pass-transistor logic configuration provides a significant simplification of the carbon nanotube-based circuit design, a higher potential circuit speed and a significant reduction in power consumption. In particular, a full adder, which requires a total of 28 field-effect transistors to construct in the usual complementary metal-oxide semiconductor circuit, uses only three pairs of n- and p-type field-effect transistors in the pass-transistor logic configuration.

Item Open Access
Compressive holography. (2012)
Lim, Se Hoon
Compressive holography estimates images from incomplete data by using sparsity priors. It combines digital holography and compressive sensing. Digital holography consists of computational image estimation from data captured by an electronic focal plane array. Compressive sensing enables accurate reconstruction from such data through prior knowledge of the desired signal. Computational and optical co-design optimally supports compressive holography in the joint computational and optical domain. This dissertation explores two examples of compressive holography: estimation of 3D tomographic images from 2D data and estimation of images from undersampled apertures. Compressive holography achieves single-shot holographic tomography using decompressive inference. In general, 3D image reconstruction suffers from underdetermined measurements with a 2D detector. Specifically, single-shot holographic tomography shows a uniqueness problem in the axial direction because the inversion is ill-posed. Compressive sensing alleviates the ill-posed problem by enforcing sparsity constraints. Holographic tomography is applied to video-rate microscopic imaging and diffuse object imaging. In diffuse object imaging, sparsity priors are not valid in a coherent image basis due to speckle, so incoherent image estimation is designed to preserve sparsity in an incoherent image basis with the support of multiple speckle realizations. High pixel count holography achieves high-resolution, wide field-of-view imaging. Coherent aperture synthesis is one method to increase the aperture size of a detector. Scanning-based synthetic aperture confronts a multivariable global optimization problem due to time-space measurement errors. A hierarchical estimation strategy divides the global problem into multiple local problems with the support of computational and optical co-design. Compressive sparse aperture holography is another method. Compressive sparse sampling collects most of the significant field information with a small fill factor because object-scattered fields are locally redundant. Incoherent image estimation is adopted for the expanded modulation transfer function and compressive reconstruction.
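The dissertation's reconstructions pose imaging as regularized inversion with sparsity priors. The generic building block, shown below on a random sensing matrix rather than an actual holographic propagation model, is iterative soft-thresholding for an underdetermined linear system.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, k = 200, 80, 8               # signal length, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true                     # underdetermined measurements (m < n)

def ista(A, y, lam=0.01, iters=500):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

x_hat = ista(A, y)
print(round(float(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)), 3))
```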
Item Open Access
Computational Methods for RNA Structure Validation and Improvement. (Methods Enzymol, 2015)
Jain, Swati; Richardson, David C; Richardson, Jane S
With increasing recognition of the roles RNA molecules and RNA/protein complexes play in an unexpected variety of biological processes, understanding of RNA structure-function relationships is of high current importance. To make clean biological interpretations from three-dimensional structures, it is imperative to have high-quality, accurate RNA crystal structures available, and the community has thoroughly embraced that goal. However, due to the many degrees of freedom inherent in RNA structure (especially for the backbone), it is a significant challenge to succeed in building accurate experimental models for RNA structures. This chapter describes the tools and techniques our research group and our collaborators have developed over the years to help RNA structural biologists both evaluate and achieve better accuracy. Expert analysis of large, high-resolution, quality-conscious RNA datasets provides the fundamental information that enables automated methods for robust and efficient error diagnosis in validating RNA structures at all resolutions. The even more crucial goal of correcting the diagnosed outliers has steadily developed toward highly effective, computationally based techniques. Automation enables solving complex issues in large RNA structures, but cannot circumvent the need for thoughtful examination of local details, and so we also provide some guidance for interpreting and acting on the results of current structure validation for RNA.

Item Embargo
Computational Tools to Improve Stereo-EEG Implantation and Resection Surgery for Patients with Epilepsy (2024)
Thio, Brandon
Approximately 1 million Americans live with drug-resistant epilepsy. Surgical resection of the brain areas where seizures originate can be curative. However, successful surgical outcomes require delineation of the epileptogenic zone (EZ), the minimum amount of tissue that needs to be resected to eliminate a patient's seizures. EZ localization is often accomplished using stereo-EEG, where 5-30 wires are implanted into the brain through small holes drilled through the skull to map widespread regions of the epileptic network. However, despite the technical advances in surgical planning and epilepsy monitoring, seizure freedom rates following epilepsy surgery have remained at ~60% for decades. In part, seizure freedom rates have not increased because epilepsy neurologists do not have appropriate software tools to optimize stereo-EEG. In this dissertation, we report on the development and analysis of foundational models and software tools to improve the use of stereo-EEG technology and ultimately increase seizure freedom rates following epilepsy surgery.
We developed an automated image-based head-modeling pipeline to generate patient-specific models for stereo-EEG analysis. We assessed the key dipole source model assumption, which holds that the voltages generated by a population of active neurons can be simplified to a single dipole. We found that the dipole source model is appropriate for reproducing the spatial voltage distribution generated by neurons and for source localization applications. Our findings validate a key model parameter for stereo-EEG head models, which are foundational to all computational tools developed to optimize stereo-EEG. Using the dipole source model, we systematically assessed the origin of recorded brain electrophysiological signals using computational models. We found that, counter to dogma, action potentials contribute appreciably to brain electrophysiological signals. Our findings reshape the cellular interpretation of brain electrophysiological signals and should impact modeling efforts to reproduce neural recordings. We also developed a recording sensitivity metric, which quantifies the cortical areas that are recordable by a set of stereo-EEG electrodes. We used the recording sensitivity metric to develop two software tools: one to visualize the recording sensitivity on patient-specific brain geometry and one to optimize the trajectories of stereo-EEG electrodes. Using the same number of electrodes, our optimization approach identified trajectories that had greater recording sensitivity than clinician-defined trajectories. Using the same target recording sensitivity, our optimization approach found trajectories that mapped the same amount of cortex with fewer electrodes than the clinician-defined trajectories. Thus, our optimization approach can improve outcomes following epilepsy surgery by increasing the chances that an electrode records from the EZ, or reduce the risk of surgery by minimizing the number of implanted electrodes.
We finally developed a propagating source reconstruction algorithm using a novel TEmporally Dependent Iterative Expansion approach (TEDIE). TEDIE takes as inputs stereo-EEG recordings and patient-specific anatomical images, produces movies of dynamic (moving) neural activity displayed on patient-specific anatomy, and distills the immense intracranial stereo-EEG dataset into an objective reconstruction of the EZ. We validated TEDIE using seizure recordings from 40 patients from two centers. TEDIE consistently localized the EZ closer to the resected regions for patients who are currently seizure-free. Further, TEDIE identified new EZs in 13 of the 23 patients who are currently not seizure-free. Therefore, TEDIE is expected to improve the accuracy of the evaluation of surgical epilepsy candidates, result in increased numbers of patients advancing to surgery, and increase the proportion of patients who achieve seizure freedom through surgery. Together, our suite of software tools constitutes important advances to optimize stereo-EEG implantation and analysis, which should lead to more patients achieving seizure freedom following epilepsy surgery.
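As minimal background for the dipole source model the dissertation evaluates, here is the textbook expression for the extracellular potential of a current dipole in an infinite homogeneous volume conductor; the conductivity and geometry values below are illustrative, not taken from the work.

```python
import numpy as np

SIGMA = 0.33  # assumed brain conductivity, S/m (a commonly used nominal value)

def dipole_potential(p, dipole_pos, electrode_pos, sigma=SIGMA):
    """Potential (V) of a current dipole p (A*m): V = p.r / (4*pi*sigma*r^3)."""
    r_vec = np.asarray(electrode_pos, float) - np.asarray(dipole_pos, float)
    r = np.linalg.norm(r_vec)
    return float(np.dot(p, r_vec) / (4.0 * np.pi * sigma * r**3))

p = np.array([0.0, 0.0, 1e-8])                          # 10 nA*m dipole along z
print(dipole_potential(p, [0, 0, 0], [0, 0, 0.005]))    # 5 mm above: positive
print(dipole_potential(p, [0, 0, 0], [0, 0, -0.005]))   # 5 mm below: negative
```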
Item Open Access
CPAG: software for leveraging pleiotropy in GWAS to reveal similarity between human traits links plasma fatty acids and intestinal inflammation. (Genome Biol, 2015-09-15)
Wang, L; Oehlers, SH; Espenschied, ST; Rawls, JF; Tobin, DM; Ko, DC
Meta-analyses of genome-wide association studies (GWAS) have demonstrated that the same genetic variants can be associated with multiple diseases and other complex traits. We present software called CPAG (Cross-Phenotype Analysis of GWAS) to look for similarities between 700 traits, build trees with informative clusters, and highlight underlying pathways. Clusters are consistent with pre-defined groups and literature-based validation but also reveal novel connections. We report similarity between plasma palmitoleic acid and Crohn's disease and find that specific fatty acids exacerbate enterocolitis in zebrafish. CPAG will become increasingly powerful as more genetic variants are uncovered, leading to a deeper understanding of complex traits. CPAG is freely available at www.sourceforge.net/projects/CPAG/.
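CPAG scores pairwise trait similarity from shared GWAS associations. The sketch below uses a plain Jaccard index over associated-variant sets as a stand-in for the software's actual similarity measures and clustering; the traits and rs IDs are placeholders.

```python
from itertools import combinations

# Hypothetical trait -> associated-variant sets (rs IDs are placeholders).
trait_snps = {
    "Crohn's disease": {"rs1", "rs2", "rs3", "rs4"},
    "plasma palmitoleic acid": {"rs3", "rs4", "rs5"},
    "height": {"rs6", "rs7"},
}

def jaccard(a, b):
    """Shared associations as a fraction of all associations in either trait."""
    return len(a & b) / len(a | b)

for t1, t2 in combinations(trait_snps, 2):
    print(f"{t1} vs {t2}: {jaccard(trait_snps[t1], trait_snps[t2]):.2f}")
```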
Item Open Access
Creating and parameterizing patient-specific deep brain stimulation pathway-activation models using the hyperdirect pathway as an example. (PloS one, 2017-01)
Gunalan, Kabilar; Chaturvedi, Ashutosh; Howell, Bryan; Duchin, Yuval; Lempka, Scott F; Patriat, Remi; Sapiro, Guillermo; Harel, Noam; McIntyre, Cameron C
BACKGROUND: Deep brain stimulation (DBS) is an established clinical therapy and computational models have played an important role in advancing the technology. Patient-specific DBS models are now common tools in both academic and industrial research, as well as clinical software systems. However, the exact methodology for creating patient-specific DBS models can vary substantially and important technical details are often missing from published reports.
OBJECTIVE: Provide a detailed description of the assembly workflow and parameterization of a patient-specific DBS pathway-activation model (PAM) and predict the response of the hyperdirect pathway to clinical stimulation.
METHODS: Integration of multiple software tools (e.g. COMSOL, MATLAB, FSL, NEURON, Python) enables the creation and visualization of a DBS PAM. An example DBS PAM was developed using 7T magnetic resonance imaging data from a single unilaterally implanted patient with Parkinson's disease (PD). This detailed description implements our best computational practices and most elaborate parameterization steps, as defined from over a decade of technical evolution.
RESULTS: Pathway recruitment curves and strength-duration relationships highlight the non-linear response of axons to changes in the DBS parameter settings.
CONCLUSION: Parameterization of patient-specific DBS models can be highly detailed and constrained, thereby providing confidence in the simulation predictions, but at the expense of time-demanding technical implementation steps. DBS PAMs represent new tools for investigating possible correlations between brain pathway activation patterns and clinical symptom modulation.
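For readers unfamiliar with the strength-duration relationships reported in the Results, the classical Weiss form relates threshold current to pulse width through rheobase and chronaxie; the parameter values below are illustrative, not taken from the paper.

```python
RHEOBASE_MA = 0.5   # illustrative rheobase, mA
CHRONAXIE_US = 150  # illustrative chronaxie, microseconds

def threshold_current(pulse_width_us):
    """Weiss strength-duration law: I_th = I_rh * (1 + t_ch / PW)."""
    return RHEOBASE_MA * (1.0 + CHRONAXIE_US / pulse_width_us)

for pw in (30, 60, 90, 150, 450):
    print(f"{pw:4d} us -> {threshold_current(pw):.2f} mA")
# Shorter pulses need more current; at PW = chronaxie, threshold = 2x rheobase.
```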
Item Open Access
Detecting structure of haplotypes and local ancestry. (Genetics, 2014-03)
Guan, Yongtao
We present a two-layer hidden Markov model to detect the structure of haplotypes for unrelated individuals. This allows us to model two scales of linkage disequilibrium (one within a group of haplotypes and one between groups), thereby taking advantage of rich haplotype information to infer local ancestry of admixed individuals. Our method outperforms competing state-of-the-art methods, particularly for regions of small ancestral track lengths. Applying our method to Mexican samples in HapMap3, we found two regions on chromosomes 6 and 8 that show significant departure of local ancestry from the genome-wide average. A software package implementing the methods described in this article is freely available at http://bcm.edu/cnrc/mcmcmc.
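The paper's model uses two layers of hidden states; the sketch below is a much-reduced single-layer analogue — a normalized forward pass over two ancestry states with panel-style emission probabilities — just to make the HMM machinery concrete. Every number in it is invented.

```python
import numpy as np

# Two hidden ancestry states; emissions = P(observed allele | ancestry).
states = ["ancestry A", "ancestry B"]
switch = 0.02                                # per-marker ancestry switch probability
trans = np.array([[1 - switch, switch],
                  [switch, 1 - switch]])
emit = np.array([[0.9, 0.1],                 # P(allele=0 | A), P(allele=1 | A)
                 [0.3, 0.7]])                # P(allele=0 | B), P(allele=1 | B)
alleles = [0, 0, 1, 1, 1, 0, 1, 1]

# Forward algorithm with per-step normalization: filtered ancestry probabilities.
alpha = np.full(2, 0.5) * emit[:, alleles[0]]
alpha /= alpha.sum()
posteriors = [alpha]
for a in alleles[1:]:
    alpha = (trans.T @ alpha) * emit[:, a]
    alpha /= alpha.sum()
    posteriors.append(alpha)

for a, post in zip(alleles, posteriors):
    print(a, dict(zip(states, np.round(post, 2))))
```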
Item Open Access
f-treeGC: a questionnaire-based family tree-creation software for genetic counseling and genome cohort studies. (BMC medical genetics, 2017-07-14)
Tokutomi, Tomoharu; Fukushima, Akimune; Yamamoto, Kayono; Bansho, Yasushi; Hachiya, Tsuyoshi; Shimizu, Atsushi
The Tohoku Medical Megabank project aims to create a next-generation personalized healthcare system by conducting large-scale genome-cohort studies involving three generations of local residents in the areas affected by the Great East Japan Earthquake. We collected medical and genomic information for developing a biobank to be used for this healthcare system. We designed a questionnaire-based pedigree-creation software program named "f-treeGC," which enables even less experienced medical practitioners to accurately and rapidly collect family health history and create pedigree charts.
f-treeGC runs on Adobe AIR. Pedigree charts are created in the following manner:
1) At system startup, the client is prompted to provide required information on the presence or absence of children; f-treeGC is capable of creating a pedigree up to three generations.
2) An interviewer fills out a multiple-choice questionnaire on genealogical information.
3) The information requested includes name, age, gender, general status, infertility status, pregnancy status, fetal status, and physical features or health conditions of individuals over three generations. In addition, information regarding the client and the proband, and birth order information, including multiple gestation, custody, multiple individuals, donor or surrogate, adoption, and consanguinity, may be included.
4) f-treeGC shows only marriages between first cousins via the overlay function.
5) f-treeGC automatically creates a pedigree chart, and the chart-creation process is visible for inspection on the screen in real time.
6) The genealogical data may be saved as a file in the original format. The created/modified date and time may be changed as required, and the file may be password-protected and/or saved in read-only format. To enable sorting or searching from the database, the file name automatically contains the terms typed into the entry fields, including physical features or health conditions, by default.
7) Alternatively, family histories are collected using a completed foldable interview paper sheet named "f-sheet," which is identical to the questionnaire in f-treeGC.
We developed a questionnaire-based family tree-creation software program, named f-treeGC, which is fully compliant with international recommendations for standardized human pedigree nomenclature. The present software simplifies the process of collecting family histories and pedigrees and has a variety of uses, from genome cohort studies or primary care to genetic counseling.
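To make the three-generation pedigree structure concrete, here is a minimal sketch of how questionnaire records could map to a pedigree graph; the class and field names are hypothetical and are not f-treeGC's file format.

```python
from dataclasses import dataclass, field

@dataclass
class Person:
    """One pedigree member; field names are illustrative, not f-treeGC's format."""
    name: str
    gender: str                      # "M", "F", or "U"
    affected: bool = False
    mother: "Person | None" = None
    father: "Person | None" = None
    children: list = field(default_factory=list)

def add_child(mother, father, child):
    """Link a child to both parents, keeping the graph consistent."""
    child.mother, child.father = mother, father
    mother.children.append(child)
    father.children.append(child)

# Three generations, as f-treeGC supports.
grandma = Person("Hanako", "F")
grandpa = Person("Taro", "M")
mother = Person("Yuki", "F", affected=True)
add_child(grandma, grandpa, mother)
father = Person("Ken", "M")
proband = Person("Aiko", "F", affected=True)
add_child(mother, father, proband)

print([c.name for c in mother.children])  # ['Aiko']
```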