Browsing by Subject "Automation"
Item Open Access: A Cloud-Based Infrastructure for Cancer Genomics (2020). Panea, Razvan Ioan.
The advent of new genomic approaches, particularly next-generation sequencing (NGS), has resulted in explosive growth of biological data. As the size of biological data keeps growing at exponential rates, new methods for data management and data processing are becoming essential in bioinformatics and computational biology. Indeed, data analysis has now become the central challenge in genomics.
NGS has provided rich tools for defining genomic alterations that cause cancer. The processing time and computing requirements have now become a serious bottleneck to the characterization and analysis of these genomic alterations. Moreover, as the adoption of NGS continues to increase, the computing power required often exceeds what any single institution can provide, leading to major restraints in the type and number of analyses that can be performed.
Cloud computing represents a potential solution to this problem. On a cloud platform, computing resources can be available on-demand, thus allowing users to implement scalable and highly parallel methods. However, few centralized frameworks exist that allow the average researcher to apply bioinformatics workflows using cloud resources. Moreover, bioinformatics approaches are associated with multiple processing challenges, such as the variability in the methods or data used and the reproducibility requirements of the research analysis.
Here, we present CloudConductor, a software system that is specifically designed to harness the power of cloud computing to perform complex analysis pipelines on large biological datasets. CloudConductor was designed with five central features in mind: scalability, modularity, parallelism, reproducibility and platform agnosticism.
We demonstrate the processing power afforded by CloudConductor on a real-world genomics problem. Using CloudConductor, we processed and analyzed 101 whole genome tumor-normal paired samples from Burkitt lymphoma subtypes to identify novel genomic alterations. We identified a total of 72 driver genes associated with the disease. Somatic events were identified in both coding and non-coding regions of nearly all driver genes, notably in genes IGLL5, BACH2, SIN3A, and DNMT1. We have developed the analysis framework by implementing a graphical user interface, a back-end database system, a data loader and a workflow management system.
In this thesis, we develop the concepts behind, and describe an implementation of, an automated cloud-based infrastructure to analyze genomics data, creating a fast and efficient analysis resource for genomics researchers.
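The features named above (modularity, parallelism, scalability) can be illustrated with a small sketch. This is not CloudConductor's actual API or code; it is a minimal, hypothetical example of running independent per-sample pipeline steps in parallel, the pattern such a workflow engine schedules at cloud scale. The function and sample names are placeholders.

```python
# Hypothetical sketch: run independent per-sample pipeline modules in parallel.
# This is NOT CloudConductor's API; it only illustrates the parallelism a
# cloud workflow manager distributes across many machines.
from concurrent.futures import ProcessPoolExecutor

def align_and_call(sample_id: str) -> str:
    """Placeholder for one pipeline module (e.g., alignment + variant calling)."""
    # A real module would launch a containerized tool here.
    return f"{sample_id}: done"

if __name__ == "__main__":
    samples = [f"tumor_normal_pair_{i:03d}" for i in range(1, 102)]  # 101 WGS pairs
    with ProcessPoolExecutor(max_workers=8) as pool:
        for result in pool.map(align_and_call, samples):
            print(result)
```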
Item Open Access: A new fully automated approach for aligning and comparing shapes. (Anatomical Record, 2015-01) Boyer, Doug M; Puente, Jesus; Gladman, Justin T; Glynn, Chris; Mukherjee, Sayan; Yapuncich, Gabriel S; Daubechies, Ingrid.
Three-dimensional geometric morphometric (3DGM) methods for placing landmarks on digitized bones have become increasingly sophisticated in the last 20 years, including greater degrees of automation. One aspect shared by all 3DGM methods is that the researcher must designate initial landmarks. Thus, researcher interpretations of homology and correspondence are required for and influence representations of shape. We present an algorithm allowing fully automatic placement of correspondence points on samples of 3D digital models representing bones of different individuals/species, which can then be input into standard 3DGM software and analyzed with dimension reduction techniques. We test this algorithm against several samples, primarily a dataset of 106 primate calcanei represented by 1,024 correspondence points per bone. Results of our automated analysis of these samples are compared to a published study using a traditional 3DGM approach with 27 landmarks on each bone. Data were analyzed with morphologika 2.5 and PAST. Our analyses returned strong correlations between principal component scores, similar variance partitioning among components, and similarities between the shape spaces generated by the automatic and traditional methods. While cluster analyses of both automatically generated and traditional datasets produced broadly similar patterns, there were also differences. Overall, these results suggest to us that automatic quantifications can lead to shape spaces that are as meaningful as those based on observer landmarks, thereby presenting potential to save time in data collection, increase completeness of morphological quantification, eliminate observer error, and allow comparisons of shape diversity between different types of bones. We provide an R package for implementing this analysis.
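For readers who want to see the downstream analysis step in code form, the sketch below shows one conventional way to take per-bone correspondence points into a shape space: Procrustes alignment followed by principal component analysis. It is a simplified stand-in (ordinary Procrustes against a single reference, not the paper's full pipeline or its R package), and the data are random placeholders.

```python
# Hedged sketch: build a simple shape space from correspondence points.
# Not the authors' algorithm; standard Procrustes + PCA on placeholder data
# shaped (n_specimens, n_points, 3).
import numpy as np
from scipy.spatial import procrustes
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
coords = rng.normal(size=(106, 1024, 3))  # placeholder for 106 calcanei x 1,024 points

reference = coords[0]
aligned = []
for specimen in coords:
    # scipy's procrustes standardizes both inputs and returns the aligned copies
    _, aligned_specimen, _ = procrustes(reference, specimen)
    aligned.append(aligned_specimen.ravel())

scores = PCA(n_components=10).fit_transform(np.array(aligned))
print(scores.shape)  # (106, 10): principal component scores per bone
```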
Item Open Access: A robotics platform for automated batch fabrication of high density, microfluidics-based DNA microarrays, with applications to single cell, multiplex assays of secreted proteins. (The Review of Scientific Instruments, 2011-09) Ahmad, Habib; Sutherland, Alex; Shin, Young Shik; Hwang, Kiwook; Qin, Lidong; Krom, Russell-John; Heath, James R.
Microfluidics flow-patterning has been utilized for the construction of chip-scale miniaturized DNA and protein barcode arrays. Such arrays have been used for specific clinical and fundamental investigations in which many proteins are assayed from single cells or other small sample sizes. However, flow-patterned arrays are hand-prepared, and so are impractical for broad applications. We describe an integrated robotics/microfluidics platform for the automated preparation of such arrays, and we apply it to the batch fabrication of up to eighteen chips of flow-patterned DNA barcodes. The resulting substrates are comparable in quality with hand-made arrays and exhibit excellent substrate-to-substrate consistency. We demonstrate the utility and reproducibility of robotics-patterned barcodes by utilizing two flow-patterned chips for highly parallel assays of a panel of secreted proteins from single macrophage cells.
Item Embargo: Algorithmic Dispossession: Automating Warfare in Israel and Palestine (2024). Goodfriend, Sophia Louise.
This dissertation offers an anthropological portrait of how algorithms are transforming what it means to wage and live with war across Israel and Palestine. My findings emerge from three years of ethnographic research with Israeli intelligence veterans, Palestinian advocates and influencers, and ordinary civilians living at the cross-hairs of regional conflict. I begin in the early 2000s, as Israel's surveillance apparatus across Palestine proliferated amidst the violence of the Second Intifada and receding visions of regional peace. I conclude more than two decades later, as AI-powered surveillance and weapons systems intensify warfare across the region. I argue that the imperatives of a globalized information economy tangle with violent forms of dispossession across the occupied Palestinian territories to entrench warfare, a process I call algorithmic dispossession. Bringing critical algorithm studies to bear on an anthropological portrait of warfare in Israel and Palestine, I show how the buildup of algorithmic systems embedded the Israeli army into the most intimate domains of Palestinians' lives. As new technologies drove up arrests, displacement, and death for Palestinians, the economic value placed on algorithmic development cleaved Israeli soldiers and military strategy writ large off from the imperatives of reducing bloodshed, ensuring warfare continued at a profound human cost to Israelis and Palestinians across the region. By placing ethnographic evidence gathered through years of fieldwork in Israel/Palestine alongside urgent debates surrounding the ethics and impact of new technologies, this dissertation ultimately foregrounds the iterative relationship between war and automation today.
Item Open Access: An automated method for comparing motion artifacts in cine four-dimensional computed tomography images. (Journal of Applied Clinical Medical Physics, 2012-11-08) Cui, Guoqiang; Jew, Brian; Hong, Julian C; Johnston, Eric W; Loo, Billy W; Maxim, Peter G.
The aim of this study is to develop an automated method to objectively compare motion artifacts in two four-dimensional computed tomography (4D CT) image sets, and identify the one that would appear to human observers with fewer or smaller artifacts. Our proposed method is based on the difference of the normalized correlation coefficients between edge slices at couch transitions, which we hypothesize may be a suitable metric to identify motion artifacts. We evaluated our method using ten pairs of 4D CT image sets that showed subtle differences in artifacts between images in a pair, which were identifiable by human observers. One set of 4D CT images was sorted using breathing traces in which our clinically implemented 4D CT sorting software miscalculated the respiratory phase, which expectedly led to artifacts in the images. The other set of images consisted of the same images; however, these were sorted using the same breathing traces but with corrected phases. Next, we calculated the normalized correlation coefficients between edge slices at all couch transitions for all respiratory phases in both image sets to evaluate for motion artifacts. For nine image set pairs, our method identified the 4D CT sets sorted using the breathing traces with the corrected respiratory phase to result in images with fewer or smaller artifacts, whereas for one image pair, no difference was noted. Two observers independently assessed the accuracy of our method. Both observers identified 9 image sets that were sorted using the breathing traces with corrected respiratory phase as having fewer or smaller artifacts. In summary, using the ten pairs of 4D CT image sets, we have demonstrated proof of principle that our method is able to replicate the results of two human observers in identifying the image set with fewer or smaller artifacts.
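A minimal sketch of the comparison metric described above: the normalized correlation coefficient between the two CT slices that meet at a couch transition, computed for each candidate 4D CT sorting, where higher correlation suggests a smoother transition and fewer artifacts. The arrays below are random placeholders, not the authors' implementation.

```python
# Hedged sketch of an edge-slice similarity check at a couch transition.
# Placeholder data; not the published implementation.
import numpy as np

def normalized_correlation(slice_a: np.ndarray, slice_b: np.ndarray) -> float:
    """Pearson-style normalized correlation coefficient of two 2D CT slices."""
    a = slice_a.astype(float).ravel()
    b = slice_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
upper_edge = rng.normal(size=(512, 512))                           # last slice of one couch position
lower_edge = upper_edge + rng.normal(scale=0.1, size=(512, 512))   # first slice of the next

ncc = normalized_correlation(upper_edge, lower_edge)
print(f"edge-slice NCC: {ncc:.3f}")  # compare this value between the two sortings
```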
Item Open Access: Analysis and Error Correction in Structures of Macromolecular Interiors and Interfaces (2009). Headd, Jeffrey John.
As of late 2009, the Protein Data Bank (PDB) has grown to contain over 70,000 models. This recent increase in the amount of structural data allows for more extensive explication of the governing principles of macromolecular folding and association to complement traditional studies focused on a single molecule or complex. PDB-wide characterization of structural features yields insights that are useful in prediction and validation of the 3D structure of macromolecules and their complexes. Here, these insights lead to a deeper understanding of protein-protein interfaces, full-atom critical assessment of increasingly more accurate structure predictions, a better defined library of RNA backbone conformers for validation and building 3D models, and knowledge-based automatic correction of errors in protein sidechain rotamers.
My study of protein-protein interfaces identifies amino acid pairing preferences in a set of 146 transient interfaces. Using a geometric interface surface definition devoid of arbitrary cutoffs common to previous studies of interface composition, I calculate inter- and intrachain amino acid pairing preferences. As expected, salt bridges and hydrophobic patches are prevalent, but likelihood correction of observed pairing frequencies reveals some surprising pairing preferences, such as Cys-His interchain pairs and Met-Met intrachain pairs. To complement my statistical observations, I introduce a 2D visualization of the 3D interface surface that can display a variety of interface characteristics, including residue type, atomic distance, and backbone/sidechain composition.
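The "likelihood correction" step can be illustrated with a small, hypothetical contact count table: observed pair counts are compared against the counts expected from each residue type's marginal frequency, and the log-ratio is reported as a pairing preference. This is a generic log-odds sketch, not the dissertation's exact statistic, and the counts are toy data.

```python
# Hedged sketch: likelihood-corrected pairing preferences from a toy count table.
# The real analysis uses contacts from 146 transient interfaces.
import numpy as np

residues = ["CYS", "HIS", "MET", "LEU"]
# observed[i, j] = number of interchain contacts between residue types i and j (toy data)
observed = np.array([
    [2.0, 9.0, 1.0, 4.0],
    [9.0, 3.0, 2.0, 6.0],
    [1.0, 2.0, 8.0, 5.0],
    [4.0, 6.0, 5.0, 20.0],
])

total = observed.sum()
marginal = observed.sum(axis=1) / total          # frequency of each residue type among contacts
expected = np.outer(marginal, marginal) * total  # counts expected if pairing were random

preference = np.log2(observed / expected)        # > 0 means enriched, < 0 means depleted
for i, a in enumerate(residues):
    for j, b in enumerate(residues[i:], start=i):
        print(f"{a}-{b}: {preference[i, j]:+.2f}")
```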
My study of protein interiors finds that 3D structure prediction from sequence (as part of the CASP experiment) is very close to full-atom accuracy. Validation of structure prediction should therefore consider all atom positions instead of the traditional Calpha-only evaluation. I introduce six new full-model quality criteria to assess the accuracy of CASP predictions, which demonstrate that groups who use structural knowledge culled from the PDB to inform their prediction protocols produce the most accurate results.
My study of RNA backbone introduces a set of rotamer-like "suite" conformers. Initially hand-identified by the Richardson laboratory, these 7D conformers represent backbone segments that are found to be genuine and favorable. X-ray crystallographers can use backbone conformers for model building in often poor backbone density and in validation after refinement. Increasing amounts of high quality RNA data allow for improved conformer identification, but also complicate hand-curation. I demonstrate that affinity propagation successfully differentiates between two related but distinct suite conformers, and is a useful tool for automated conformer clustering.
My study of protein sidechain rotamers in X-ray structures identifies a class of systematic errors that results in sidechains misfit by approximately 180 degrees. I introduce Autofix, a method for automated detection and correction of such errors. Autofix corrects over 40% of errors for Leu, Thr, and Val residues, and a significant number of Arg residues. On average, Autofix made four corrections per PDB file in 945 X-ray structures. Autofix will be implemented into MolProbity and PHENIX for easy integration into X-ray crystallography workflows.
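As a rough illustration of the kind of check Autofix performs, the sketch below scores a sidechain at its modeled chi angle and at the chi angle rotated by 180 degrees, flagging the residue if the flipped orientation fits clearly better. The scoring function is a named placeholder (`density_fit_score`), not the real MolProbity/PHENIX machinery, and the threshold is arbitrary.

```python
# Hedged sketch of a 180-degree sidechain flip check.
# `density_fit_score` is a hypothetical placeholder for a real-space fit metric.
from typing import Callable

FLIP_CANDIDATES = {"LEU", "THR", "VAL", "ARG"}

def check_flip(res_name: str,
               chi: float,
               density_fit_score: Callable[[float], float],
               margin: float = 0.05) -> bool:
    """Return True if rotating chi by 180 degrees clearly improves the fit score."""
    if res_name not in FLIP_CANDIDATES:
        return False
    original = density_fit_score(chi)
    flipped = density_fit_score((chi + 180.0) % 360.0)
    return flipped > original + margin

# Toy usage with a made-up scoring function peaked near chi = 60 degrees
toy_score = lambda chi: 1.0 - abs(((chi - 60.0 + 180.0) % 360.0) - 180.0) / 180.0
print(check_flip("LEU", chi=240.0, density_fit_score=toy_score))  # True: 60 fits better than 240
```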
Item Open Access: Automated Detection of P. falciparum Using Machine Learning Algorithms with Quantitative Phase Images of Unstained Cells. (PLoS ONE, 2016-01) Park, Han Sang; Rinehart, Matthew T; Walzer, Katelyn A; Chi, Jen-Tsan Ashley; Wax, Adam.
Malaria detection through microscopic examination of stained blood smears is a diagnostic challenge that heavily relies on the expertise of trained microscopists. This paper presents an automated analysis method for detection and staging of red blood cells infected by the malaria parasite Plasmodium falciparum at trophozoite or schizont stage. Unlike previous efforts in this area, this study uses quantitative phase images of unstained cells. Erythrocytes are automatically segmented using thresholds of optical phase and refocused to enable quantitative comparison of phase images. Refocused images are analyzed to extract 23 morphological descriptors based on the phase information. While all individual descriptors are highly statistically different between infected and uninfected cells, no single descriptor separates the populations at a level satisfactory for clinical utility. To improve the diagnostic capacity, we applied various machine learning techniques, including linear discriminant classification (LDC), logistic regression (LR), and k-nearest neighbor classification (NNC), to formulate algorithms that combine all of the calculated physical parameters to distinguish cells more effectively. Results show that LDC provides the highest accuracy of up to 99.7% in detecting schizont stage infected cells compared to uninfected RBCs. NNC showed slightly better accuracy (99.5%) than either LDC (99.0%) or LR (99.1%) for discriminating late trophozoites from uninfected RBCs. However, for early trophozoites, LDC produced the best accuracy of 98%. Discrimination of infection stage was less accurate, producing high specificity (99.8%) but only 45.0%-66.8% sensitivity, with early trophozoites most often mistaken for late trophozoite or schizont stage, and late trophozoite and schizont stage most often confused for each other. Overall, this methodology points to a significant clinical potential of using quantitative phase imaging to detect and stage malaria infection without staining or expert analysis.
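A hedged sketch of the classification step: combine the per-cell morphological descriptors into feature vectors and compare linear discriminant, logistic regression, and k-nearest neighbor classifiers by cross-validation. The feature matrix below is random placeholder data with 23 descriptors per cell; it is not the study's dataset or code.

```python
# Hedged sketch: compare the three classifier families named in the abstract
# on placeholder data (n_cells x 23 phase-derived descriptors).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 23))        # placeholder morphological descriptors
y = rng.integers(0, 2, size=400)      # 0 = uninfected, 1 = schizont-stage (toy labels)

classifiers = {
    "LDC": LinearDiscriminantAnalysis(),
    "LR": LogisticRegression(max_iter=1000),
    "NNC": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in classifiers.items():
    model = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean cross-validated accuracy {scores.mean():.3f}")
```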
Item Open Access: Automated quality control in nuclear medicine using the structured noise index. (Journal of Applied Clinical Medical Physics, 2020-04) Nelson, Jeffrey S; Samei, Ehsan.
Purpose: Daily flood-field uniformity evaluation serves as the central element of nuclear medicine (NM) quality control (QC) programs. Uniformity images are traditionally analyzed using pixel value-based metrics, that is, integral uniformity (IU), which often fail to capture subtle structure and patterns caused by changes in gamma camera performance, requiring visual inspections which are subjective and time demanding. The goal of this project was to implement an advanced QC metrology for NM to effectively identify nonuniformity issues, and report issues in a timely manner for efficient correction prior to clinical use. The project involved the implementation of the program over a 2-year period at a multisite major medical institution.
Methods: Using a previously developed quantitative uniformity analysis metric, the structured noise index (SNI) [Nelson et al. (2014), J Nucl Med., 55:169-174], an automated QC process was developed to analyze, archive, and report on daily NM QC uniformity images. Clinical implementation of the newly developed program ran in parallel with the manufacturer's reported IU-based QC program. The effectiveness of the SNI program was evaluated over a 21-month period using sensitivity and coefficient of variation statistics.
Results: A total of 7365 uniformity QC images were analyzed. Lower level SNI alerts were generated in 12.5% of images and upper level alerts in 1.7%. Intervention due to image quality issues occurred on 26 instances; the SNI metric identified 24, while the IU metric identified eight. The SNI metric reported five upper level alerts where no clinical engineering intervention was deemed necessary.
Conclusion: An SNI-based QC program provides a robust quantification of the performance of gamma camera uniformity. It operates seamlessly across a fleet of multiple camera models and, additionally, provides effective workflow among the clinical staff. The reliability of this process could eliminate the need for visual inspection of each image, saving valuable time, while enabling quantitative analysis of inter- and intrasystem performance.
Item Open Access: Automation and the Fate of Young Workers: Evidence from Telephone Operation in the Early 20th Century (2020-10-31). Feigenbaum, James; Gross, Daniel P.
Item Open Access: Beam Angle Optimization for Automated Coplanar IMRT Lung Plans (2016). Hedrick, Kathryn Marie.
Purpose: To investigate the effect of incorporating a beam spreading parameter in a beam angle optimization algorithm and to evaluate its efficacy for creating coplanar IMRT lung plans in conjunction with machine learning-generated dose objectives.
Methods: Fifteen anonymized patient cases were each re-planned with ten values over the range of the beam spreading parameter, k, and analyzed with a Wilcoxon signed-rank test to determine whether any particular value resulted in significant improvement over the initially treated plan created by a trained dosimetrist. Dose constraints were generated by a machine learning algorithm and kept constant for each case across all k values. Parameters investigated for potential improvement included mean lung dose, V20 lung, V40 heart, 80% conformity index, and 90% conformity index.
Results: At a 5% significance level, treatment plans created with this method resulted in significantly better conformity indices. Dose coverage to the PTV was improved by an average of 12% over the initial plans. At the same time, these treatment plans showed no significant difference in mean lung dose, V20 lung, or V40 heart when compared to the initial plans; however, it should be noted that these results could be influenced by the small sample size of patient cases.
Conclusions: The beam angle optimization algorithm, with the inclusion of the beam spreading parameter k, increases the dose conformity of the automatically generated treatment plans over that of the initial plans without adversely affecting the dose to organs at risk. This parameter can be varied according to physician preference in order to control the tradeoff between dose conformity and OAR sparing without compromising the integrity of the plan.
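The plan-comparison step described in the Methods (paired Wilcoxon signed-rank tests across the fifteen re-planned cases) can be sketched as follows; the metric values are random placeholders, not the study's data, and the conformity index is used only as an example metric.

```python
# Hedged sketch of the paired, nonparametric comparison described in Methods:
# initial plan vs. re-planned plan for one dosimetric metric across 15 cases.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(3)
initial_ci90 = rng.uniform(0.6, 0.9, size=15)                       # placeholder 90% conformity indices
replanned_ci90 = np.clip(initial_ci90 + rng.normal(0.05, 0.03, 15), 0.0, 1.0)

statistic, p_value = wilcoxon(replanned_ci90, initial_ci90)         # paired signed-rank test
print(f"Wilcoxon statistic = {statistic:.1f}, p = {p_value:.4f}")
# p < 0.05 would indicate a significant difference at the 5% level
```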
Item Open Access: Characterizing and predicting the interaction of proteins with nanoparticles (2023). Poulsen, Karsten.
Nanoparticles are being used or implemented in a wide array of applications including health care, cosmetics, automotive, food, beverage, consumer electronics, and coatings. Despite this widespread use, we are unable to predict how nanoparticles will interact with biological systems. This is important for both nanotoxicity resulting from human exposure to nanomaterials and the development of new nano-based biotechnologies. The first step in the interaction of nanoparticles with biological systems is often with blood, for biomedical applications, or lung fluid, when nanoparticles are inhaled. In both cases, the nanoparticles are exposed to proteins that form a "corona" by adsorbing on the nanoparticle surface. The subsequent cellular response is determined by this protein corona rather than the bare nanoparticle.
Our goal is to develop a predictive capability for protein-nanoparticle interactions. This requires lab automation, large datasets, machine learning, and mechanistic studies. We first developed and validated a semi-automated approach to generate, purify, and characterize protein coronas using a liquid handling robot and low-cost proteomics. Using this semi-automated approach, we characterized the formation of protein coronas with increasing incubation time and serum concentration. Incubation time was found to be an important parameter for corona composition and concentration at high incubation concentrations but yielded only a small effect at low serum incubation concentrations. To better understand how the protein corona affects biological responses, we investigated the response of macrophage cells to titanium dioxide nanoparticles as a function of the protein corona. As in our previous work with serum proteins, we measured the concentration and composition of murine lung fluid proteins adsorbed on the surface of titanium dioxide nanoparticles. We found that a low concentration of lung fluid corona, relative to fetal bovine serum and bovine serum albumin coronas, led to an increased expression of cytokines (interleukin 6 (IL-6), tumor necrosis factor-alpha (TNF-α), and macrophage inflammatory protein 2 (MIP-2)), indicating an inflammation response. This underscores the importance of understanding how the composition and concentration of the protein corona governs organism responses to nanoparticle exposures.
Our validated high-throughput lab automation work allowed us to synthesize a library of magnetic nanoparticles and characterize their resulting protein coronas. The resulting nanoparticle dataset has 12 unique NP surfaces, seven surface charges, two core sizes, and two core materials. We used this dataset to generate and characterize, via proteomics, a variety of protein coronas, varying incubation concentration and purification methods. We used the resulting proteomic dataset in conjunction with a database of protein physicochemical properties to build a machine learning model that identifies the most important nanoparticle and protein properties for protein corona formation. The model was tested using external datasets and found to predict human serum and lung fluid coronas on varying nanoparticle surfaces. Overall, this combination of lab automation, machine learning, and mechanistic analysis suggests that a generalizable understanding of protein corona formation and its effects is forthcoming.
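The abstract does not name the model class, so the sketch below uses a random forest purely as an illustrative stand-in for "a machine learning model that identifies the most important nanoparticle and protein properties"; the feature names, target, and data are all hypothetical.

```python
# Hedged sketch: rank nanoparticle/protein features by importance for predicting
# corona enrichment. The random forest and all data are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
features = ["np_surface_charge", "np_core_size_nm", "protein_pI",
            "protein_mass_kda", "protein_hydrophobicity"]
X = pd.DataFrame(rng.normal(size=(500, len(features))), columns=features)
# Toy target: relative enrichment of a protein in the corona
y = 0.6 * X["np_surface_charge"] - 0.4 * X["protein_pI"] + rng.normal(scale=0.3, size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for name, importance in sorted(zip(features, model.feature_importances_),
                               key=lambda t: -t[1]):
    print(f"{name:25s} {importance:.2f}")
```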
Item Embargo: Ethics of Artificial Intelligence, Robotics and Supra-Intelligence (2020). Kasbe, Timothy D.
All things were created by Him and for Him:
Ethics of Artificial Intelligence, Robotics and Supra-Intelligence
Fascination with automation has captured the human imagination for thousands of years. As far back as 800 CE, when Baghdad was at its height as one of the world’s most cultured cities, its House of Wisdom produced a remarkable text, “The Book of Ingenious Devices.” In it were beautiful schematic drawings of machines years ahead of anything in Europe—clocks, hydraulic instruments, even a water-powered organ with swappable pin-cylinders that was effectively a programmable device.
The fascination with automation has come a long way since then. Technological advancements in the last seventy years have provided unprecedented opportunities for humans to explore not only automation, but now also the creation of intelligent and superintelligent machines. These machines promise to mimic human qualities and even supersede humanity in every manner of task and intelligence. The explosion of, and ready access to, information through the internet has proved to be challenging in some regards but has also eased other aspects of life. An example of this would be the way long-lost friends can be reunited through the click of a mouse. Similarly, news accompanied by pictures and videos is now readily available in real-time. These conveniences have also brought unintended consequences. Despite this newfound connectivity, social challenges such as loneliness and suicide are on the rise. Technology has also opened the door to problems such as cyberbullying, election manipulation, and fake news. Information, whether it be accurate or not, spreads across the world at unprecedented speeds, carrying with it change, sometimes for the better, but not always. This is all happening before the anticipated age of superintelligence.
This thesis examines the distinct nature of humanity and God in view of the emergence of superintelligence. Can we see this “new creation” as an addition to God’s creation of humans, angels, and Satan? If that be the case, then questions of ethics and theology need to be addressed. For instance, who gets to program these new superintelligent “beings?” As things stand today, the individuals and corporations with the deepest pockets are racing to be the first to produce superintelligent beings. The so-called “technology horse” has already bolted, with government policy struggling to keep up. Unseen in this race is the prophetic and ethical voice of the church, regarding the meaning of life, and what living in this new reality will look like.
More questions are raised than can be answered in this paper. How does the Church stay true to its message of hope in a world where robots will likely take over everyday jobs? Where will humanity find meaning and contentment? What are we to think about the idea of a basic universal wage? How will such a shift impact migrants and the poor? In this paper I establish a framework for the church to consider different aspects of these challenges, even as people are welcomed weekly into the community of faith.
This thesis represents extensive research into the philosophy and practice of safety engineering, paired with personal experiences as a professional in the technology industry who is also deeply committed to being a disciple of Christ. Primary sources I have drawn on extensively include Hauerwas and Wells' Blackwell Companion to Christian Ethics, along with Jungian archetypes used to compare and contrast biological beings with technological creations. The paper starts with creation accounts from Genesis and the Enuma Elish as a way of exploring the "being" category as it appears on this planet. Personal insights gained working in both enterprise and startup businesses, as well as in my own professional development, have contributed to this work and may be found throughout. This thesis represents a labor of love through which I have learned a great deal about my own profession and faith. However, it is my sincere hope that it will be much more. Through this dissertation I hope to see companies both big and small taking note of the ethical issues discussed here, even as they find themselves unleashing artificial intelligence in the marketplace. At the same time, I expect churches and religious organizations will benefit from this discussion and will, I hope, move to engage more deeply with culture and the marketplace as new opportunities and risks emerge from the implementation of artificial intelligence. If the observations that I have made and the recommendations that I have set forth can inspire even one person to carefully examine his or her identity in Christ, then this work will be successful beyond its original purpose as an academic work.
Item Open Access: Hardware Design and Fault-Tolerant Synthesis for Digital Acoustofluidic Biochips. (IEEE Transactions on Biomedical Circuits and Systems, 2020-10) Zhong, Zhanwei; Zhu, Haodong; Zhang, Peiran; Morizio, James; Huang, Tony Jun; Chakrabarty, Krishnendu.
A digital microfluidic biochip (DMB) is an attractive platform for automating laboratory procedures in microbiology. To overcome the problem of cross-contamination due to fouling of the electrode surface in traditional DMBs, a contactless liquid-handling biochip technology, referred to as acoustofluidics, has recently been proposed. A major challenge in operating this platform is the need for a control signal of frequency 24 MHz and voltage range ±10/±20 V to activate the IDT units in the biochip. In this paper, we present a hardware design that can efficiently activate/deactivate each IDT and can fully automate a bioprotocol. We also present a fault-tolerant synthesis technique that allows us to automatically map biomolecular protocols to acoustofluidic biochips. We develop and experimentally validate a velocity model, and use it to guide co-optimization of operation scheduling, module placement, and droplet routing in the presence of IDT faults. Simulation results demonstrate the effectiveness of the proposed synthesis method. Our results are expected to open new research directions on design automation of digital acoustofluidic biochips.
Item Open Access: Inhibition of the futalosine pathway for menaquinone biosynthesis suppresses Chlamydia trachomatis infection. (FEBS Letters, 2021-12) Dudiak, Brianne M; Nguyen, Tri M; Needham, David; Outlaw, Taylor C; McCafferty, Dewey G.
Chlamydia trachomatis, an obligate intracellular bacterium with limited metabolic capabilities, possesses the futalosine pathway for menaquinone biosynthesis. Futalosine pathway enzymes have promise as narrow-spectrum antibiotic targets, but the activity and essentiality of chlamydial menaquinone biosynthesis have yet to be established. In this work, menaquinone-7 (MK-7) was identified as a C. trachomatis-produced quinone through liquid chromatography-tandem mass spectrometry. An immunofluorescence-based assay revealed that treatment of C. trachomatis-infected HeLa cells with the futalosine pathway inhibitor docosahexaenoic acid (DHA) reduced inclusion number, inclusion size, and infectious progeny. Supplementation with MK-7 nanoparticles rescued the effect of DHA on inclusion number, indicating that the futalosine pathway is a target of DHA in this system. These results open the door for menaquinone biosynthesis inhibitors to be pursued in antichlamydial development.
Item Open Access: Microfluidic platform versus conventional real-time polymerase chain reaction for the detection of Mycoplasma pneumoniae in respiratory specimens. (Diagn Microbiol Infect Dis, 2010-05) Wulff-Burchfield, Elizabeth; Schell, Wiley A; Eckhardt, Allen E; Pollack, Michael G; Hua, Zhishan; Rouse, Jeremy L; Pamula, Vamsee K; Srinivasan, Vijay; Benton, Jonathan L; Alexander, Barbara D; Wilfret, David A; Kraft, Monica; Cairns, Charles B; Perfect, John R; Mitchell, Thomas G.
Rapid, accurate diagnosis of community-acquired pneumonia (CAP) due to Mycoplasma pneumoniae is compromised by low sensitivity of culture and serology. Polymerase chain reaction (PCR) has emerged as a sensitive method to detect M. pneumoniae DNA in clinical specimens. However, conventional real-time PCR is not cost-effective for routine or outpatient implementation.
Here, we evaluate a novel microfluidic real-time PCR platform (Advanced Liquid Logic, Research Triangle Park, NC) that is rapid, portable, and fully automated. We enrolled patients with CAP and extracted DNA from nasopharyngeal wash (NPW) specimens using a biotinylated capture probe and streptavidin-coupled magnetic beads. Each extract was tested for M. pneumoniae-specific DNA by real-time PCR on both conventional and microfluidic platforms using TaqMan probe and primers. Three of 59 (5.0%) NPWs were positive, and agreement between the methods was 98%. The microfluidic platform was equally sensitive but 3 times faster, and offers an inexpensive and convenient diagnostic test for microbial DNA.
Item Open Access: Quantitative comparison of automatic and manual IMRT optimization for prostate cancer: the benefits of DVH prediction. (Journal of Applied Clinical Medical Physics, 2015-03-08) Yang, Yun; Li, Taoran; Yuan, Lunlin; Ge, Yaorong; Yin, Fang-Fang; Lee, W Robert; Wu, Q Jackie.
A recent publication indicated that the patient anatomical feature (PAF) model was capable of predicting optimal objectives based on past experience. In this study, the benefits of IMRT optimization using PAF-predicted objectives as guidance for prostate were evaluated. Three different optimization methods were compared. (1) Expert Plan: ten prostate cases (16 plans) were planned by an expert planner using a conventional trial-and-error approach, starting with institution-modified OAR and PTV constraints; optimization was stopped at 150 iterations and that plan was saved as the Expert Plan. (2) Clinical Plan: the planner kept working on the Expert Plan until satisfied with the dosimetric quality, and the final plan was referred to as the Clinical Plan. (3) PAF Plan: a third set of plans for the same ten patients was generated fully automatically using predicted DVHs as guidance; the optimization was based on PAF-predicted objectives and was continued to 150 iterations without human interaction. DMAX and D98% for PTV, DMAX for femoral heads, and DMAX, D10cc, D25%/D17%, and D40% for bladder/rectum were compared. Clinical Plans were further optimized with more iterations and adjustments, but in general provided limited dosimetric benefits over Expert Plans. PTV D98% agreed within 2.31% among Expert, Clinical, and PAF plans. Between Clinical and PAF Plans, differences for DMAX of PTV, bladder, and rectum were within 2.65%, 2.46%, and 2.20%, respectively. Bladder D10cc was higher for PAF but < 1.54% in general. Bladder D25% and D40% were lower for PAF, by up to 7.71% and 6.81%, respectively. Rectum D10cc, D17%, and D40% were 2.11%, 2.72%, and 0.27% lower for PAF, respectively. DMAX for femoral heads was comparable (< 35 Gy on average). Compared to the Clinical Plan (Primary + Boost), the average optimization time for the PAF Plan was reduced by 5.2 min, with a maximum reduction of 7.1 min. Total numbers of MUs per plan for PAF Plans were lower than for Clinical Plans, indicating better delivery efficiency. The PAF-guided planning process is capable of generating clinical-quality prostate IMRT plans with no human intervention. Compared to manual optimization, this automatic optimization increases planning and delivery efficiency while maintaining plan quality.
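The dose metrics compared above can be computed directly from a structure's voxel doses; the sketch below shows generic definitions for DMAX, D98%, and a Vx-type metric on placeholder data. Conventions for Dx%/Dxcc differ between planning systems, so this is illustrative rather than the study's code.

```python
# Hedged sketch: generic DVH-style metrics from per-voxel dose values (Gy).
# Placeholder data; Dx%/Vx conventions vary between planning systems.
import numpy as np

rng = np.random.default_rng(5)
ptv_dose = rng.normal(loc=78.0, scale=1.5, size=20000)   # toy PTV voxel doses
oar_dose = rng.gamma(shape=2.0, scale=6.0, size=50000)   # toy OAR voxel doses

def d_max(dose: np.ndarray) -> float:
    return float(dose.max())

def d_coverage(dose: np.ndarray, percent: float) -> float:
    """Dose received by at least `percent`% of the volume (e.g., D98%)."""
    return float(np.percentile(dose, 100.0 - percent))

def v_dose(dose: np.ndarray, threshold_gy: float) -> float:
    """Percent of the volume receiving at least `threshold_gy` (e.g., V20)."""
    return float((dose >= threshold_gy).mean() * 100.0)

print(f"PTV DMAX = {d_max(ptv_dose):.1f} Gy")
print(f"PTV D98% = {d_coverage(ptv_dose, 98):.1f} Gy")
print(f"OAR V20  = {v_dose(oar_dose, 20.0):.1f} %")
```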
Item Open Access: SplicerEX: a tool for the automated detection and classification of mRNA changes from conventional and splice-sensitive microarray expression data. (RNA, 2012-08) Robinson, Timothy J; Forte, Eleonora; Salinas, Raul E; Puri, Shaan; Marengo, Matthew; Garcia-Blanco, Mariano A; Luftig, Micah A.
The key postulate that one gene encodes one protein has been overhauled with the discovery that one gene can generate multiple RNA transcripts through alternative mRNA processing. In this study, we describe SplicerEX, a novel and uniquely motivated algorithm designed for experimental biologists that (1) detects widespread changes in mRNA isoforms from both conventional and splice-sensitive microarray data, (2) automatically categorizes mechanistic changes in mRNA processing, and (3) mitigates known technological artifacts of exon array-based detection of alternative splicing resulting from 5' and 3' signal attenuation, background detection limits, and saturation of probe set signal intensity. Here, we used SplicerEX to compare conventional and exon-based Affymetrix microarray data in a model of EBV transformation of primary human B cells. We demonstrated superior detection of 3'-located changes in mRNA processing by the Affymetrix U133 GeneChip relative to the Human Exon Array. SplicerEX-identified exon-level changes in the EBV infection model were confirmed by RT-PCR and revealed a novel set of EBV-regulated mRNA isoform changes in caspases 6, 7, and 8. Finally, SplicerEX, as compared with MiDAS analysis of publicly available microarray data, provided more efficiently categorized mRNA isoform changes with a significantly higher proportion of hits supported by previously annotated alternative processing events. Therefore, SplicerEX provides an important tool for the biologist interested in studying changes in mRNA isoform usage from conventional or splice-sensitive microarray platforms, especially considering the expansive amount of archival microarray data generated over the past decade. SplicerEX is freely available upon request.
Item Open Access: The Impact of Skill-based Training Across Different Levels of Autonomy for Drone Inspection Tasks (2018). Kim, Minwoo.
Given their low operating costs and flight capabilities, unmanned aerial vehicles (UAVs), especially small UAVs, have a wide range of applications, from civilian rescue missions to military surveillance. Easy control from a highly automated system has made these compact UAVs particularly efficient and effective devices by alleviating human operator workload. However, whether or not automation can lead to increased performance is not just a matter of system design but also requires the operator's thorough understanding of the behavior of the system. A question then arises: which type of training and which level of automation help UAV operators perform best?
To address this problem, an experiment was designed and conducted to compare performance among three groups of UAV operators. Two different interfaces were first developed: Manual Control, representing a low-LOA (level of automation) interface, and Supervisory Control, representing a high-LOA interface. Participants were recruited and randomly divided into three groups. Group 1 was trained using Manual Control, Group 3 was trained using Supervisory Control, and Group 2 was trained using both Manual and Supervisory Control. Participants then flew a drone in the Test Mission stage to compare performance.
The results of the experiment were rather surprising. Although Group 3 outperformed Group 1, as expected, the poor performance of Group 2 was unexpected and gave us new perspectives on additional training. That is, additional training could lead not just to a mere surplus of extra skills but also to a degradation of existing skills. An extended work using a more mathematical approach should allow for a more precise, quantitative description of the relation between extra training and performance.