Scientific Program

Conference Series Ltd invites participants from across the globe to attend the 4th International Conference and Exhibition on Biometrics & Biostatistics in San Antonio, USA.

Day 3 :

Keynote Forum

Ajit Kumar Roy

Central Agricultural University-Tripura, India

Keynote: Impact of big data analytics on business, economy, health care and society

Time : 09:00-09:30

Biography:

Ajit Kumar Roy obtained his MSc degree in Statistics and joined the Agricultural Research Service (ARS) of the Indian Council of Agricultural Research (ICAR) as a Scientist in 1976. He has edited eighteen books and several conference proceedings covering statistics, bioinformatics, economics, and analytics applications in aquaculture, fisheries, agriculture and allied fields, and has published over 100 articles in refereed journals and conference proceedings. He is a highly acclaimed researcher and consultant. His recent popular books are ‘Applied Big Data Analytics’; ‘Self Learning of Bioinformatics Online’; ‘Applied Bioinformatics, Statistics and Economics in Fisheries Research’ and ‘Applied Computational Biology and Statistics in Biotechnology and Bioinformatics’. He is widely recognized as an expert research scientist, teacher, author and hands-on leader in advanced analytics. He has served as National Consultant (Impact Assessment), Consultant (Statistics), Computer Specialist and Principal Scientist at various organizations at national and international levels. Presently he is a Visiting Professor at four Indian universities.

Abstract:

Big data exists in a wide variety of data-intensive areas such as atmospheric science, genome research, astronomical studies and network traffic monitoring. Huge volumes of data are created every day by the interactions of billions of people using computers, GPS devices, cell phones, sensors and medical devices. Because of the tremendous amount of data generated daily from business, research and the sciences, big data is everywhere and represents huge opportunities for those who can use it effectively. In the past, this information was simply ignored and opportunities were missed. In the big data era, realizing its great importance, many analytical organizations are moving beyond process improvements to find hidden information buried in big data and trying to make the best use of it. The growing technological ability to collect and analyze massive sets of information, known as big data, could lead to revolutionary changes in business, political and social enterprises, according to a new survey of internet experts. To date, a great deal of work has been done on big data covering tools, software, platforms, analytics and more, and many companies are successfully using these for their benefit. National and international organisations are entering the areas of application of big data analytics for development, education, disaster management, health care and natural resource management for the benefit of society. Therefore, this work attempts to compile and document real use cases, benefits, advantages, impacts and future challenges of big data. To evaluate the effectiveness of harnessing big data for development, UN Global Pulse has worked on several research projects in collaboration with public and private partners, demonstrating how big data analytics can benefit the work of policy makers in different contexts, from monitoring early indicators of unemployment hikes to tracking fluctuations of commodity prices before they are recorded in official statistics. According to thought leaders, big data is already showing that potential in areas as far-ranging as genetic mapping and personalized e-commerce, and big data, backed by the exponential growth in processing power and software technologies such as Hadoop, is allowing organizations “to make decisions that simply could not be made before, to handle all sorts of data questions.” And that will have a resounding impact. Big data will have an impact on all industries and every process. Its influence will be felt in business planning, research, sales, production and elsewhere, and this amounts to nothing less than a new industrial revolution. The advances in capturing and analyzing big data allow us to decode human DNA in minutes, find cures for cancer, accurately predict human behaviour, foil terrorist attacks, pinpoint marketing efforts, prevent diseases and much more. Finally, there are increasing concerns about privacy, and many have been expressed about how retailers, credit card companies, search engine providers and mail or social media companies use our private information. The presentation focuses on real-life implementations of big data analytics, discusses and describes their impact in detail, and will provide a bold vision from leading innovators across the data-driven spectrum and help the audience gain fresh insights.

  • Track 7: Modern Data Analysis; Track 8: Adaptive Biometric Systems; Track 9: Biometric Security; Track 10: Emerging and Future Applications of Biometrics; Track 11: Cyber Security
Speaker
Biography:

Xing Li completed his PhD in Bioinformatics at The University of Michigan, Ann Arbor. He is an Assistant Professor in the Division of Biomedical Statistics and Informatics, Department of Health Science Research at Mayo Clinic, which was recognized as the best hospital for 2014-2015 by U.S. News & World Report. He has published more than 17 papers in reputed journals and has been serving as a reviewer for many journals, such as Genomics, BMC Bioinformatics, Stem Cell Research, PLOS ONE, Physiological Genomics, etc.

Abstract:

Applying high-throughput next-generation sequencing technology - RNA-seq and whole genome sequencing (WGS) - provides an unprecedented opportunity to investigate disease-specific transcription profiles linked to potential genetic causes in hypoplastic left heart syndrome (HLHS). Bioengineered HLHS patient-specific iPSCs and differentiated cardiac tissues offer a platform to recapitulate the individual developmental process and study the molecular causes of the disease. In this study, transcriptome profiling using RNA-seq was performed on iPSCs and differentiated cardiomyocytes. The RNA-seq data revealed over 4000 and 6000 differential genes between the family members in iPSCs and differentiated cells, respectively. WGS was performed on blood samples of the proband and parents to identify millions of variants. Variant filtering according to rarity, predicted damage and mode of inheritance pinpointed 34 genes with uncommon variants potentially involved in the pathogenesis of the disease. Ten of the 34 mutated genes displayed transcriptional differences in iPSCs, while 16 of the 34 showed significantly differential expression in differentiated cells. Expression profiles for genes fulfilling both criteria (9 in total) were further characterized in iPSCs from the proband and controls in a guided time-course cardiac differentiation. Two genes, ELF4 and HSPG2, displayed significantly different profiles. Of note, none of these genes had been previously linked to HLHS. In summary, by integrating the data from WGS, RNA-seq and a naturally expressed time-course developmental roadmap, we triangulated a list of prioritized candidate genes that may contribute to HLHS and could be targets for future mechanistic studies for disease-specific clinical applications.
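
The kind of integration described above (rare, damaging variants intersected with differentially expressed genes) can be illustrated with a minimal pandas sketch; the file names, column names and thresholds below are hypothetical placeholders, not the authors' pipeline.

```python
# Illustrative sketch of intersecting WGS variant filtering with RNA-seq
# differential expression; all file and column names are assumed placeholders.
import pandas as pd

variants = pd.read_csv("proband_trio_variants.csv")       # hypothetical WGS annotation table
de_genes = pd.read_csv("rnaseq_differential_genes.csv")   # hypothetical DE results table

# Filter variants by rarity and predicted damage (thresholds are assumptions).
rare_damaging = variants[
    (variants["population_af"] < 0.01)
    & (variants["damage_prediction"] == "damaging")
]

# Keep genes whose variants also show differential expression (adjusted p < 0.05).
significant_de = de_genes[de_genes["adj_pvalue"] < 0.05]
candidates = sorted(set(rare_damaging["gene"]) & set(significant_de["gene"]))
print(f"{len(candidates)} candidate genes fulfilling both criteria")
```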

Speaker
Biography:

Huda Al-Ghaib is an Assistant Professor. She received her undergraduate degree in Computer Engineering from the University of Technology, Baghdad, Iraq, in 2006. She was a recipient of a Fulbright Scholarship in 2009. She earned her Master's and PhD degrees in Electrical Engineering in 2011 and 2015, respectively, from the University of Alabama in Huntsville (UAH). During her graduate studies, she was awarded Outstanding Graduate Student in Engineering in 2014. Her research interests are in the area of pattern recognition and data mining with applications in medical imaging. She is the author or co-author of more than 10 journal and conference articles. She is a Member of IEEE.

Abstract:

The American Cancer Society documented that around 240,000 women were diagnosed with breast cancer in the United States during 2015, with around 40,000 deaths. One out of eight women will develop breast cancer during her lifetime. Unfortunately, breast cancer often shows no symptoms in its early stages. Screening mammography is a widely used procedure in developed countries to fight breast cancer. In this procedure, women within a certain age range are recommended to undergo screening mammography regularly to search for abnormalities such as masses, calcifications and architectural distortions. One challenge is detecting subtle malignant abnormalities in consecutive, temporal images acquired for the same patient over time. Currently, radiologists visually compare temporal mammograms to search for these subtle changes. This is a time-consuming procedure. Also, limitations of the human visual system can lead to misinterpretation of the temporal mammograms and hence produce false negatives and false positives. One method to increase the accuracy of temporal mammogram registration is to apply automated algorithms to detect these subtle changes. In this research, the structural similarity index measure (SSIM) is applied to register temporal mammograms for that purpose. Factors such as image rotation and translation are taken into consideration in this algorithm. The algorithm is compared with two well-known algorithms, i.e., Mutual Information (MI) and Correlation Coefficients (CORR). Based on the radiologist outcome, our algorithm provided better results than the other two algorithms. Using metric measurements, SSIM was found to reduce the error rate by 59.3%, compared with 61.1% and 63.2% for CORR and MI, respectively.
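
As a minimal sketch of the similarity measure involved (not the authors' full registration algorithm, which also handles rotation and translation), SSIM between two registered grayscale mammogram arrays can be computed with scikit-image:

```python
# Minimal sketch: SSIM between two registered temporal mammograms,
# assuming 2-D grayscale numpy arrays of equal size.
import numpy as np
from skimage.metrics import structural_similarity

def ssim_score(prior_image: np.ndarray, current_image: np.ndarray) -> float:
    """Return the structural similarity between two grayscale image arrays."""
    data_range = current_image.max() - current_image.min()
    return structural_similarity(prior_image, current_image, data_range=data_range)

# A registration search could, for example, evaluate candidate rotations and
# translations of the prior image and keep the transform that maximizes SSIM.
```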

Adeola Stephen Oladele

Federal University of Technology
Nigeria

Title: Fuzzy-based multi-modal architecture for electioneering in Nigeria

Time : 10:15-10:35

Speaker
Biography:

Oladele Stephen Adeola is a certified Digital & Computer Forensic Investigator (CFI) as well as an Oracle Certified Professional (OCP). He holds a PhD degree in Computer Science and is a member of professional bodies such as the Computer Forensic Institute, Nigeria (CFIN), Computer Professionals of Nigeria (CPN), Nigeria Computer Society (NCS), Institute of Electrical and Electronics Engineers (IEEE), International Association of Engineers (IAENG) and Association for Computing Machinery (ACM). He has worked in different companies as a Network Engineer as well as a Programmer. His research interests include Intelligent Systems, Computer Imaging, Biometrics, Land Information Systems, Digital & Computer Forensics and Database Systems. Adeola has published works in a number of local and international journals. He also reviews for local and international journals such as the Journal of Information Technology and General Studies (Rufus Giwa Polytechnic, Nigeria), Net Journal of Social Sciences (Nigeria), Information Technology & People (Emerald, United Kingdom), and the Journal of Educational Research and Reviews (JERR, United Kingdom). He has served, at various times, as an Information Technology consultant to a number of establishments in Nigeria, including ALCATEL Nigeria, the Nigeria Police Force Information Technology Unit and the Ondo State Property and Development Corporation.

Abstract:

The 2015 election has been adjudged the best in the history of the electioneering process in Nigeria, thanks to the deployment of information technology. But in reality the election was not, after all, as flawless as many would want to believe. There were many problems associated with it, particularly in the areas of verification and authentication of eligible voters by the fingerprint reader. This paper examines these problems and proposes a fuzzy-based multi-modal architecture for future elections in Nigeria. The architecture is based on the extraction of fingerprint and iris features from prospective voters. Also examined are the features of fingerprints and irises as they relate to authentication for electioneering purposes. Further discussed are the advantages of the proposed architecture over the present Independent National Electoral Commission (INEC) method of voter authentication.

Biography:

Abstract:

Data were collected on the morphological features of Tilapia cabrea. The weights and lengths were measured in grams and millimeters, respectively. The data were subjected to multivariate analysis using principal component analysis, which showed that four principal components accounted for about 88% of the total variability: body weight (X3), body depth (X8), snout length (X6) and standard length (X2). The analysis also identified total length (X1) as one of the least contributors to the size of the fish. This could be justified, as it is known that the fins and the distance between the anterior and posterior extremities of the mouth of the fish are like chaff and contribute little weight.
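
A short sketch of this kind of PCA variance summary is shown below, using scikit-learn; the data file and trait layout are hypothetical placeholders, not the study's data.

```python
# Sketch: how many principal components are needed to explain ~88% of the
# variability in a standardized trait matrix (rows = fish, columns = traits).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = np.loadtxt("tilapia_traits.csv", delimiter=",")   # assumed file: trait measurements in g and mm
X_std = StandardScaler().fit_transform(X)             # standardize traits on different scales

pca = PCA().fit(X_std)
cum_var = np.cumsum(pca.explained_variance_ratio_)
n_components = int(np.searchsorted(cum_var, 0.88) + 1)  # components covering ~88% of total variance
print(n_components, cum_var[:n_components])
```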

Break: Coffee Break 10:55-11:25 @ Foyer

Ali Alkhalifah

Qassim University College of Computer
Saudi Arabia

Title: Online Identity: The trajectory of the web-based identity management systems migration

Time : 11:10-11:30

Speaker
Biography:

Ali Alkhalifah received a BS in Computer Science from Qassim University in 2007, a Master's degree (with honors) in IT from the University of Newcastle in 2010, and a PhD in Information Systems from the University of New South Wales, Australia, in 2013. He is an Assistant Professor in the Computer College at Qassim University. Until recently he was the Head of the IT Department. He has been involved in several program committees and serves as a reviewer for different international conferences and journals. Ali has a number of research interests including e-business, identity management systems, evaluation of the World Wide Web, and the Semantic Web.

Abstract:

Web-based identity management systems (IdMS), a new and innovative information technology (IT) artefact, involve the integration of emerging technologies and business processes to create identity-centric approaches for the management of users, their attributes, authentication factors and security privileges across the Internet within multiple websites. With the growth of online identities on the Internet, IdMS enable the use of the same user data, managed identifiers and authentication credentials across multiple websites, reducing the number of identifiers (e.g., passwords) and profiles with which a user has to deal. As digital identity becomes more and more important in the online world, the emergence of IdMS has brought about primary changes to different online contexts. The trajectory of the IdMS migration can be understood in relation to the proprietary system and the openness of the system to exchanging identity information. This study distinguishes and classifies three types of IdMS models (the isolated model, the centralized model and the decentralized model) because of their differences in architecture and standards and their different impacts on security, privacy and usability issues. We develop guidelines for IdMS designers and provide for more targeted implementation efforts. We also discuss some implications and highlight some opportunities for creating and enhancing new IdMS.

Speaker
Biography:

Abdullah K. Alqallaf is an Assistant Professor at the Electrical Engineering Department, College of Engineering and Petroleum, Kuwait University. His research interests include microwave and microwave imaging techniques and analysis for tumor detection; genomics signal processing and bioinformatics; statistical and wavelet signal processing; speech and multimedia signal processing; signal processing for communications and networking; machine learning for signal processing; design and implementation of signal processing; image and multidimensional signal processing; and medical image analysis, including feature extraction, feature selection, segmentation, detection/estimation, and classification.

Abstract:

Data on genome structural and functional features for various organisms are being accumulated and analyzed with the aim of exploring the biological information in depth and converting data into meaningful biological knowledge. Recent developments in experimental technologies and approaches, such as microarrays and DNA sequencing, generate high-resolution genetic data and make it possible to enhance our understanding of the complex dynamic interactions between complex diseases and biological systems through computational models. My talk will address how the choice of statistics-based algorithms for processing big genomic data may affect the findings and lead to better diagnostic directions.

Speaker
Biography:

Rosa V Dacosta has completed her MPH from Drexel University School of Public Health, Philadelphia, USA. She has been involved with over 4 publications in the fields of vascular biology and public health.

Abstract:

There is limited published research about the dietary impacts of farmers' markets. We sought to understand whether market managers collect data about markets and to examine the instruments and strategies used. Of the 359 market managers contacted across the United States, representing 543 markets, 185 managers participated in a telephone survey. A subset supplied copies of data collection tools for further analysis. Ninety-three percent of market managers collect data such as customer surveys, vendor applications, customer counts, or demographics. The potential utility of the data collected by managers and suggestions for studying the dietary impacts of farmers' markets are discussed.

Vu Thi Kim Ngoc

Center of Analytical Services and
Experimentation of HCMC (CASE)
Vietnam

Title: Statistical models for colorectal cancer screening using urine NMR spectra
Speaker
Biography:

Vu Thi Kim Ngoc is Vice Director of the Center of Analytical Services and Experimentation of Ho Chi Minh City, Vietnam. She is an NMR specialist in biomolecular structures and interactions (protein, DNA…) with 18 years of experience. She is currently developing statistical and computing methods in analytical chemistry in Vietnam.

Abstract:

Colorectal cancer (CRC) is one of the most common types of cancer. Detecting CRC at an early stage improves survival rates dramatically. Statistical models for CRC identification were built using metabolomics, based on 1H NMR data of urine. Principal component analysis (PCA) and partial least squares (PLS) were applied to urine NMR data from 64 cases and 76 controls collected at the MEDIC Centre (HCMC, Vietnam). Specific differences were observed, in particular in the spectral ranges corresponding to some metabolites. This analysis compared and verified important metabolic alterations between CRC patients and healthy persons and could be extended to the diagnosis of colorectal cancer based on the profile of common and abundant metabolites. Keywords: colorectal cancer, metabolomics, 1H NMR, multivariate statistics, PCA, PLS.
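
A hedged sketch of a PCA plus PLS workflow on binned 1H NMR spectra is given below; it assumes a matrix of spectral bins per urine sample and a 0/1 case-control label vector, and it is not the authors' exact MEDIC Centre analysis.

```python
# Sketch: unsupervised PCA overview followed by cross-validated PLS prediction
# of case/control status from binned urine NMR spectra (hypothetical files).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_predict

X = np.load("urine_nmr_bins.npy")   # shape (n_samples, n_bins), assumed file
y = np.load("crc_labels.npy")       # 1 = CRC case, 0 = control, assumed file

scores = PCA(n_components=2).fit_transform(X)        # 2-D overview of the spectra

pls = PLSRegression(n_components=2)
y_pred = cross_val_predict(pls, X, y, cv=10).ravel()  # cross-validated PLS predictions
accuracy = np.mean((y_pred > 0.5) == y)
print(f"Cross-validated classification accuracy: {accuracy:.2f}")
```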

K Muralidharan

The Maharajah Sayajirao University of Baroda
India

Title: Theory of inliers: Modeling and applications
Speaker
Biography:

K Muralidharan is currently working as a Professor and Head of the Department of Statistics, Faculty of Science, The Maharajah Sayajirao University of Baroda, India. He is also the Director of the Population Research Centre, MSU Baroda. He obtained his MSc degree in Statistics from Calicut University, Kerala; his MPhil and PhD in Statistics from Sardar Patel University; and completed a Post-Doctoral Fellowship at the Institute of Statistical Science, Academia Sinica, Taiwan. He is an internationally qualified Six Sigma Master Black Belt. He has won a number of awards and fellowships, including the Commonwealth Academic Fellowship sponsored by the British Council, UK. Recently, he was awarded the “Young Emerging Future Leader of Quality and Reliability” award by the Society for Reliability Engineering, Quality and Operations Management (SREQOM), New Delhi. He is a nominated Principal Member of the Bureau of Indian Standards (BIS), New Delhi. He is currently the Secretary of the Indian Society for Probability and Statistics.

Abstract:

An inlier in a set of data is an observation or subset of observations, not necessarily all zeroes, which appears to be inconsistent with the remaining data set. Inliers result from instantaneous or early failures usually encountered in life testing and reliability, financial, management, clinical trial and many other studies. Unlike in outlier theory, here inliers form a group of observations which are defined by the model itself. With the inclusion of inliers, the model becomes either a non-standard distribution or one having more than two modes, and hence the usual methods of statistical inference may not be appropriate. We discuss some inlier-prone models under certain assumptions to study the estimation of inliers in the exponential distribution. Various inlier-prone models and estimation procedures are discussed. The detection of inliers and the problems associated with detection are presented. An illustration and a real-life example are also discussed.
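
A very simple illustration of the idea (under assumed notation, not the authors' estimators) is to split lifetimes into early failures below a small threshold and fit an exponential model to the remaining observations:

```python
# Toy sketch: estimate the proportion of instantaneous/early failures ("inliers")
# and the exponential mean of the remaining lifetimes; delta is an assumed cut-off.
import numpy as np

def fit_inlier_prone_exponential(lifetimes: np.ndarray, delta: float = 0.05):
    """Return (estimated inlier proportion, MLE of the exponential mean)."""
    lifetimes = np.asarray(lifetimes, dtype=float)
    inliers = lifetimes <= delta                 # instantaneous / early failures
    p_hat = inliers.mean()                       # proportion of inliers
    theta_hat = lifetimes[~inliers].mean()       # exponential mean from the rest
    return p_hat, theta_hat

rng = np.random.default_rng(0)
data = np.concatenate([np.zeros(10), rng.exponential(scale=2.0, size=90)])
print(fit_inlier_prone_exponential(data))
```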

Speaker
Biography:

Ramón Santana Fernández is a PhD student who graduated as an Engineer in Informatics Sciences from the University of Informatics Sciences in 2011. He is a researcher in the biometrics field with 7 years of experience, having started as a research assistant in the third year of his degree on the Dactilab project in 2009. He has worked, as both a student and a staff member, on biometric software development and process research, obtaining awards for his participation in investigations and software development in the field of automatic fingerprint identification systems at the Identification and Digital Security Center. He has published articles in journals and events at the University of Informatics Sciences.

Abstract:

The implementation of biometric solutions for user authentication in daily tasks has caused great concern about the safety and privacy of biometric data. Different vulnerabilities detected in automated fingerprint identification systems could expose the minutiae if the templates are stored in plain text. To solve this security issue, several minutiae template protection models have been proposed, such as fuzzy vault, biohashing and cancelable templates; however, a minutiae alignment process is required before template matching is executed in order to increase the probability of finding a positive match. To protect biometric data efficiently it is necessary to meet three basic requirements: cryptographic security, revocability and performance; however, most of the models described to date fail in this task. A fingerprint minutiae template protection scheme must capture as much identifying information from the fingerprint as possible and solve the problem of template alignment before the matching process is executed in the protected domain. A study of fingerprint minutiae template protection models, specifically those that start the process from features derived from the minutiae using minutiae structures, and of their main strengths, weaknesses and vulnerabilities, was conducted in this work. Analyzing the types of attacks described in the bibliography to obtain the original biometric data from protected templates, and the attacks performed on minutiae triplets such as minutiae vicinity decomposition, is the main objective of this research. As a result, the vulnerabilities of each minutiae structure are identified, the elements needed to propose a new minutiae structure are analyzed and initial results are discussed.
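
As a generic illustration of the kind of minutiae structure discussed (not any of the specific protection schemes reviewed), a minutiae triplet can be reduced to rotation- and translation-invariant features such as the side lengths of the triangle it forms:

```python
# Generic sketch: alignment-free features from a minutiae triplet.
import numpy as np

def triplet_invariants(minutiae: np.ndarray) -> np.ndarray:
    """minutiae: array of shape (3, 2) with the (x, y) coordinates of three minutiae."""
    a, b, c = minutiae
    sides = np.array([np.linalg.norm(a - b), np.linalg.norm(b - c), np.linalg.norm(c - a)])
    return np.sort(sides)  # sorting makes the feature independent of point order

print(triplet_invariants(np.array([[10.0, 20.0], [35.0, 22.0], [18.0, 40.0]])))
```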

Mikhail Moshkov

King Abdullah University of Science and Technology
Saudi Arabia

Title: Extensions of dynamic programming for decision tree study
Speaker
Biography:

Mikhail Moshkov has been a Professor in the CEMSE Division at King Abdullah University of Science and Technology, Saudi Arabia, since October 1, 2008. He earned his Master's degree from Nizhni Novgorod State University, received his doctorate from Saratov State University, and his habilitation from Moscow State University. From 1977 to 2004, Dr. Moshkov was with Nizhni Novgorod State University. From 2003 he worked in Poland at the Institute of Computer Science, University of Silesia, and from 2006 also at the Katowice Institute of Information Technologies. His main areas of research are complexity of algorithms, combinatorial optimization, and machine learning. Dr. Moshkov is author or coauthor of five research monographs published by Springer.

Abstract:

In the presentation, we consider extensions of the dynamic programming approach to the study of decision trees as algorithms for problem solving, as a means of knowledge extraction and representation, and as classifiers which, for a new object given by values of conditional attributes, define a value of the decision attribute. These extensions allow us (i) to describe the set of optimal decision trees, (ii) to count the number of these trees, (iii) to perform sequential optimization of decision trees relative to different criteria, (iv) to find the set of Pareto optimal points for two criteria, and (v) to describe relationships between two criteria. The results include the minimization of average depth for decision trees sorting eight elements (a question that had been open since 1968), improved upper bounds on the depth of decision trees for diagnosis of 0-1 faults in read-once combinatorial circuits, the existence of totally optimal (with minimum depth and minimum number of nodes) decision trees for monotone Boolean functions with at most six variables, a study of the time-memory tradeoff for decision trees for corner point detection, a study of relationships between the number and maximum length of decision rules derived from decision trees, and a study of the accuracy-size tradeoff for decision trees, which allows us to construct sufficiently small and accurate decision trees for knowledge representation, as well as decision trees that, as classifiers, often outperform decision trees constructed by CART. The end of the presentation is devoted to an introduction to KAUST.
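
The depth/size trade-off mentioned above can be sketched roughly with scikit-learn's CART implementation as a stand-in; the dynamic-programming extensions described in the presentation are not available in standard libraries.

```python
# Rough sketch: how tree depth, node count and training accuracy trade off
# when a CART tree is grown with different depth limits.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
for max_depth in (2, 4, 6, None):
    tree = DecisionTreeClassifier(max_depth=max_depth, random_state=0).fit(X, y)
    print(max_depth, tree.get_depth(), tree.tree_.node_count, tree.score(X, y))
```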

Speaker
Biography:

Imran Mahmood is currently working as an Assistant Professor at the College of Computer Science, University of Dammam, and is involved in research activities in the area of healthcare information technology, with a specialty in epidemiological informatics. He previously worked as an Assistant Professor at the University of Engineering & Technology, Lahore, and as Lead Researcher at the Center for Visual Analytics Research, where he supervised research and development of different epidemic surveillance projects. He earned his Doctoral degree in Computer Systems at the School of Information and Communication Technology (ICT), KTH Royal Institute of Technology, Sweden, in 2013, and his Master's degree in Software Engineering of Distributed Systems at the same school in 2007, along with comprehensive knowledge of modeling & simulation and visual analytics. He worked in collaboration with the Swedish Defense Research Agency (FOI) during his Master's and Doctoral research. He has delivered workshops, lectures and invited talks on his topics of research interest.

Abstract:

In this talk, we will discuss fundamental concepts of Epidemiological Surveillance (ES). ES is the ongoing systematic collection, visualization, analysis and interpretation of health data, collected for the purpose of timely dissemination of outbreak forecasts. It is an investigational approach in which health experts are provided with an automated set of tools for real-time data collection from various health departments and for monitoring of disease indicators to detect outbreaks earlier than would otherwise be possible with traditional diagnosis-based methods. Hence the detection of adverse effects can be made at the earliest possible time, possibly even before disease diagnoses can be confirmed through clinical procedures and laboratory tests. We will highlight key challenges faced in the development and operation of epidemiological surveillance systems, arising mainly from: (A) the complex characteristics and diverse nature of infectious diseases, (B) the distinct nature of population dynamics, mobility and demographic factors, and (C) the geography, environment and weather conditions of the area under study. We will discuss evolutionary developments in the trends, methods and technologies of surveillance systems and discuss how this progress is addressing the key challenges. In the end, we will argue how a sophisticated health surveillance system helps to alleviate potential health risks, minimize the threats of natural or man-made disasters and eventually support effective decision making in emergency management.

Speaker
Biography:

Mohammad Imran received his PhD in Computer Science in 2012; the title of his thesis is ‘Some Issues Concerning Biometric System’, completed under the guidance of Professor G Hemantha Kumar. During 2012-13, he was a Post-Doctoral Fellow under a TRC (The Research Council, Oman) sponsored program. Currently, he is working as an Assistant Professor at King Faisal University, Saudi Arabia. Prior to this, he was working as an Associate Consultant at WIPRO Technologies, Bangalore. His areas of research interest include machine learning, pattern recognition, computer vision, biometrics, image processing, predictive analysis, algorithms, data structures and linear algebra. He has authored 25 international publications, including journals and peer-reviewed conferences.

Abstract:

There is a global concern to implement accurate person verification in various facets of social and professional life. This includes banking, travel, medical services and secure access to social security services. While biometrics has been deployed with various choices such as face, fingerprint, iris, etc., the demand for a higher level of security has influenced two main developments: one is finding newer, more universal biometric traits, and the other is multimodal options. Most of the biometric systems employed in real-world applications are unimodal; they rely on the evidence of a single source of information for authentication, which is easier to install and computationally less demanding. However, unimodal systems have to contend with a variety of problems, which in turn increases the False Acceptance Rate (FAR) and False Rejection Rate (FRR). A good system needs a very low FAR and a very low FRR, and this can be achieved with a multimodal system. A multimodal system is a subset of multi-biometric systems which establishes identity based on the evidence of multiple biometric traits. Thus, in this presentation, we address critical issues in designing a multimodal biometric system, i.e., the choice of biometric modalities, feature extraction algorithms and fusion strategies. Complementary and supplementary information acquired by the feature extraction algorithms is addressed in our work for its contribution towards improving the recognition rate. A fundamental issue in designing a multimodal system lies in fusing the information from sensed data. Fusion methodologies at four different levels, viz., sensor, feature, score and decision level, have been evaluated for performance with appropriate fusion rules, and have been exploited for addressing different combinations of multimodal systems.
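
A hedged sketch of one of the fusion levels mentioned (score-level fusion for a bimodal system, e.g. face plus iris) is shown below; the weights, score arrays and threshold are illustrative assumptions rather than the presenter's configuration.

```python
# Sketch: min-max normalize each matcher's scores, then combine them with a
# weighted-sum rule and apply a decision threshold.
import numpy as np

def min_max_normalize(scores: np.ndarray) -> np.ndarray:
    return (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)

def fuse_scores(face_scores: np.ndarray, iris_scores: np.ndarray, w_face: float = 0.4) -> np.ndarray:
    """Weighted-sum fusion of two matchers' similarity scores."""
    return w_face * min_max_normalize(face_scores) + (1 - w_face) * min_max_normalize(iris_scores)

fused = fuse_scores(np.array([0.2, 0.7, 0.9]), np.array([35.0, 80.0, 60.0]))
accepted = fused >= 0.5   # threshold chosen to trade FAR off against FRR
print(fused, accepted)
```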

Ong’ala J

Kenya Agricultural and Livestock Research Organization
Kenya

Title: The use of principal component analysis in sugarcane clone selection
Speaker
Biography:

Ong’ala J has BSc and MSc degrees in Applied Statistics. He is the Head of Sub-Unit, Research Methods and Analytics, Kenya Agricultural and Livestock Research Organization.

Abstract:

In the process of phenotypic evaluation of sugarcane, many traits are evaluated simultaneously. These traits are often highly interrelated. Evaluation of all these traits is costly and may not enhance selection response. In this study, we aim to use Principal Component Analysis (PCA) to identify representative traits for the phenotypic characterization of sugarcane, and thereby to select superior clones in the breeding process. The results indicate that when PCA is used, only 10 out of 19 traits are significant in identifying the superior clones, and their contributions to the selected traits are quantified.
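
One way to sketch trait selection from PCA, under assumptions about the data layout (rows = clones, 19 trait columns, hypothetical file name), is to keep the components explaining most of the variance and flag the trait with the largest absolute loading on each:

```python
# Sketch: identify representative traits from PCA loadings.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = np.loadtxt("sugarcane_traits.csv", delimiter=",")                 # assumed trait matrix
pca = PCA(n_components=0.80).fit(StandardScaler().fit_transform(X))   # components covering ~80% variance

loadings = np.abs(pca.components_)                   # shape (n_components, n_traits)
representative = np.unique(loadings.argmax(axis=1))  # trait with the largest loading per component
print("Representative trait indices:", representative)
```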

Speaker
Biography:

Tintu Thomas completed her Master's degree in Biostatistics at Mahatma Gandhi University and a Postgraduate Diploma in Epidemiology at the Indian Institute of Public Health, India. She is currently working as a Lecturer in Biostatistics in a Department of Community Medicine. She has more than 7 years of experience as a Lecturer in Biostatistics, teaching paramedical and postgraduate medical students, and has 6 research publications to her name. She was one of the invited speakers for a national seminar on stochastic medicine in Kerala, India. She has also served as Course-in-Charge for Master's-level Biostatistics. She has five years of research experience in applied statistics, especially in gene expression data analysis.

Abstract:

Microarray technology is used to measure the expression of many thousands of genes simultaneously. Identification and classification of a set of genes out of these thousands is a complex process. In microarray data, a major challenge is the need for a robust method for proper identification of differentially expressed genes. In this paper, we made a comparative study of the genomic classification performance of a conventional neural network method and a fuzzy inference method. We used fuzzy inference based classification rules for extracting significant genes from the gene expression data set. Fuzzy rules were utilized to train the Fuzzy Inference System (FIS) and classify the gene expression levels into useful output categories, namely expressed and non-expressed genes. It was found that the adaptive neural network fuzzy inference method worked better in classifying differentially expressed genes than other conventional methods.
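
A toy sketch of the fuzzy-inference idea is given below: triangular membership functions on a normalized expression score and a simple rule that labels a gene "expressed" when the high-expression membership dominates. The membership ranges are illustrative assumptions, not the authors' trained FIS.

```python
# Toy fuzzy classification of normalized gene expression scores.
import numpy as np

def triangular(x: np.ndarray, left: float, peak: float, right: float) -> np.ndarray:
    """Triangular membership function evaluated at x."""
    rising = np.clip((x - left) / (peak - left), 0.0, 1.0)
    falling = np.clip((right - x) / (right - peak), 0.0, 1.0)
    return np.minimum(rising, falling)

scores = np.array([0.1, 0.45, 0.8, 0.95])            # normalized expression scores (example values)
mu_low = triangular(scores, -0.5, 0.0, 0.5)          # membership in "low expression"
mu_high = triangular(scores, 0.5, 1.0, 1.5)          # membership in "high expression"
labels = np.where(mu_high > mu_low, "expressed", "non-expressed")
print(list(zip(scores, labels)))
```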

Speaker
Biography:

Michael A Idowu earned both his PhD and MSc qualifications in Complex Systems Modelling (Systems Biology) and Software Engineering (Computer Games Technology), respectively with distinction from Abertay University. As a Software Engineer and Theoretician, his research focuses on the invention and further development of model theory for instantaneous development of new models of complex systems, including biological systems. Working at the interface between mathematics and computer science, his expertise lies in mathematical and data analysis, the development of novel analytical methods for time series data and inference of interaction networks among measurables in time series data.

Abstract:

Dynamic processes in complex systems may be profiled by measuring system properties over time. One way of capturing and representing such complex processes or phenomena is through ODE models of measured time series data. However, construction of ODE models purely from time series data is extremely difficult. First, the system targeted must be identified. Second, the parameters of the model must be estimated in a data consistent manner. Lastly, the constructed model must be capable of exact simulation of the measured historical data as though the constructed model was the means (source) of the acquired data. Hence, intelligent modelling of exact data may be a necessity in modelling systems that are not well-studied or well-known. The requirement to achieve the above-mentioned objectives within a short period of time, i.e., in order to cope with occasional or necessary demands of rapid data utilisation, makes both model construction and complex systems identification a modeller’s nightmare. In this presentation, a novel dynamic modelling technique (framework), invented and currently being further developed by the author, is proposed and presented as an effective computational method for reconstructing data-consistent ODE models, which adequately addresses the challenges of instantaneous systems identification and automated parameter estimation, under limited data and under-determined conditions. These dynamic modelling techniques (algorithms) enable data-consistent models of complex systems to be automatically constructed, with or without making a priori assumptions about the underlying network, which guarantees successful construction of feasible models in a matter of seconds. These claims are then justified with applications and examples.
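
A minimal sketch of the general idea of data-consistent ODE reconstruction (simulate a candidate ODE and adjust its parameters until the simulated trajectory reproduces the measured series) is shown below; the logistic model and synthetic data are placeholders, not the author's framework.

```python
# Sketch: fit ODE parameters to a measured time series with scipy.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

t = np.linspace(0.0, 10.0, 25)
measured = 10.0 / (1.0 + 9.0 * np.exp(-0.8 * t))          # stand-in "measured" data

def logistic_rhs(x, t, r, k):
    return r * x * (1.0 - x / k)

def residuals(params):
    r, k = params
    simulated = odeint(logistic_rhs, measured[0], t, args=(r, k)).ravel()
    return simulated - measured

fit = least_squares(residuals, x0=[0.5, 5.0])
print("Estimated (r, k):", fit.x)
```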

  • Young Researchers Forum

Session Introduction

Hissah Alzahrani

Florida State University School
USA

Title: A comparison of three models in multivariate binary longitudinal analysis

Time : 11:25-11:35

Speaker
Biography:

Hissah Alzahrani has studied in computer science and statistics departments. She completed her Master's degree in 2009 in the Statistics Department at King Abdul-Aziz University and started the joint Master's and PhD program in Biostatistics at Florida State University in 2012. Her research interests include multivariate longitudinal data analysis and survival analysis applied to biomedical applications and clinical trials. She is working on improving her skills in SAS and R software to accommodate advanced statistical analysis in different biostatistics fields.

Abstract:

Multivariate longitudinal data analysis plays an important role in many biomedical and social problems. In this article, we present three methods for analyzing multiple and correlated binary outcomes; each one can be beneficial for particular aims. We review methods one and two, and we propose method three. The three methods estimate the marginal means using the GEE approach for multivariate binary longitudinal data. The first method addresses the question of estimating one group of covariate parameters for many binary outcomes while accounting for their multivariate structure. The second method addresses the question of estimating the covariate parameters for each binary outcome separately. The third method estimates the covariate parameters for each combination of outcomes. Our goal is to investigate the differences among the parameter estimates of the three methods. In the simulation part, we present many scenarios related to different correlation structures. In the application part, we present a follow-up study (the Florida Dental Care Study) that measured three binary outcomes and five covariates at four intervals. That particular study is a useful illustration of the variation between outcomes, since the outcomes were highly correlated.
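
A hedged sketch of a GEE marginal model for a single binary outcome over repeated visits is shown below using statsmodels; it illustrates the general approach rather than the three specific estimators compared in the abstract, and the variable names and simulated data are placeholders.

```python
# Sketch: logistic GEE with an exchangeable working correlation on synthetic
# longitudinal binary data (one outcome, repeated visits per subject).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subjects, n_visits = 100, 4
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_visits),
    "visit": np.tile(np.arange(n_visits), n_subjects),
    "covariate": rng.normal(size=n_subjects * n_visits),
})
logit = -0.5 + 0.8 * df["covariate"] + 0.1 * df["visit"]
df["outcome"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

model = smf.gee("outcome ~ covariate + visit", groups="subject", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```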

Wilmar López Oviedo

Universidad Nacional de Colombia
Colombia

Title: Study of the growth rate of neo-tropical trees via non-linear mixed models

Time : 11:35-11:45

Speaker
Biography:

Wilmar López Oviedo completed his education at the National University of Colombia, including Master's degree studies at the National University of Colombia School of Statistics, and holds a PhD in Statistical Science. His director is Raúl Alberto Pérez Agámez, a Professor in the School of Statistics. He has published more than 10 papers in national and international journals and has participated in various research projects at the national level.

Abstract:

The growth rate of trees is essential information for understanding the dynamics of tropical forests and for ecological restoration plans. However, this information is limited for tropical tree species. Growth is measured as the change in the diameter of the trunk (mm/year). The growth rate changes over time, which results in a relationship between size and time; this is a basic assumption of models that use initial diameter as a time indicator. However, the size of a tree is not an age indicator, so this assumption needs to be evaluated. In this sense, we applied non-linear mixed models to analyze growth as a function of initial diameter. We included three different models and compared them using AICw. The data were obtained from permanent plots, where individuals of different sizes co-exist. We added species-specific characteristics that influence performance and biomass storage capacity. We applied a transformation process appropriate to the ecology of the system to normalize the data. Splitting by diameter class, we iteratively eliminated data to obtain symmetric distributions in each class, in order to remove the effect of sick trees. Finally, the generated models were evaluated using data from measurements of trees on the same plots, measured over 20 years. If size indicates time, our spatial-data models should adequately predict the increase observed in the temporal data; otherwise the assumption is invalid. We found that only 25% of the diameter variation was accounted for, that wood density affects growth, and that models using the initial diameter as a time indicator are biased, as the temporal data were only weakly predicted by the resulting models.
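
A simplified sketch of fitting a nonlinear growth curve of annual diameter increment against initial diameter is shown below with scipy; a full analysis like the one in the abstract would use nonlinear mixed models with species- or plot-level random effects, which this sketch omits, and the functional form and synthetic data are assumptions.

```python
# Sketch: nonlinear least-squares fit of diameter increment vs. initial diameter.
import numpy as np
from scipy.optimize import curve_fit

def growth_rate(initial_diameter, a, b, c):
    """Hump-shaped growth curve: rises with size, then declines (illustrative form)."""
    return a * initial_diameter**b * np.exp(-c * initial_diameter)

rng = np.random.default_rng(2)
d0 = rng.uniform(10, 400, size=200)                           # initial diameters (mm)
increment = growth_rate(d0, 0.8, 0.9, 0.01) + rng.normal(0, 0.3, size=200)

params, _ = curve_fit(growth_rate, d0, increment, p0=[1.0, 1.0, 0.01])
print("Fitted (a, b, c):", params)
```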

Biography:

Abstract:

Introduction: In Burkina Faso, very little is known about the quality of life of people living with HIV in their routine follow-up. The aim of the study was to measure quality of life in the routine follow-up of people living with HIV and its change over time. Methods & Materials: 424 people living with HIV were followed up for 12 months in Ouagadougou, Burkina Faso. Quality of life was measured through three interviews over time, using the World Health Organization Quality of Life assessment brief tool for patients with Human Immunodeficiency Virus infection (WHOQOL HIV-BREF). The Friedman test was used to assess significant differences in quantitative variables across the three follow-up interviews. Groups at baseline, at month 6 and at month 12 were compared using the Wilcoxon signed rank test for quantitative data and the McNemar test for qualitative variables. The Pearson chi-squared test was used when needed. Multivariable logistic regression models were fitted to estimate adjusted odds ratios (ORs) and 95% confidence intervals (95% CIs). Trends in the global quality-of-life score in subgroups (status related to HAART) were assessed using repeated-measures univariate analysis of variance. A p-value of less than 0.05 was considered significant. Results: At baseline, the highest quality-of-life scores were recorded in the domain of spirituality, religion and personal beliefs, and the lowest scores were recorded in the environmental domain. This trend was maintained during the 12-month follow-up. The overall score increased significantly over time. Over the twelve months of follow-up, not having family support for medical care, being under Highly Active Anti-Retroviral Treatment (HAART), self-perception as healthy, and having a global quality-of-life score of less than 77 were the baseline factors likely to predict an increase in the overall quality-of-life score. Conclusions: Our findings suggest conducting interventions linked to the environmental domain to enhance the quality of life of people living with HIV/AIDS in Burkina Faso. Particular attention could be paid to people without family support, those not yet under HAART, and those who perceive themselves as ill.

Abdul Basit

National College of Business Administration and Economics (NCBA&E)
Pakistan

Title: Entropy of size-biased moment exponential distribution

Time : 11:45-11:55

Speaker
Biography:

Abdul Basit completed his MS in Social Sciences at the age of 31 from SAZABIST Karachi, Pakistan. Currently he is a PhD research scholar in Statistics at the National College of Business Administration & Economics, Lahore, Pakistan, and Assistant Director in the Statistics & DWH Department of the State Bank of Pakistan. He has published 4 research papers in journals and has presented many articles at national and international conferences.

Abstract:

In this article we consider the entropies of some lifetime distributions and compare them for the exponential distribution and the size-biased moment exponential distribution. For this purpose, a mathematical expression of the entropy has been derived for each. We consider the two different distributions rather than the truncated distributions studied in the literature. A new estimator of entropy has also been introduced and its properties derived. We calculate the relative loss of entropy for the size-biased moment exponential distribution. An empirical study and graphical presentation have been conducted to illustrate which entropy measure has advantages in terms of relative loss of entropy.
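
A numerical sketch of this kind of comparison is given below with scipy; treating the moment exponential and its size-biased version as gamma distributions with shapes 2 and 3 is an assumption made here for illustration only, not the paper's derivation.

```python
# Sketch: differential entropy of an exponential distribution versus assumed
# gamma-shaped analogues, and a crude relative-loss comparison.
from scipy.stats import expon, gamma

scale = 2.0
h_exponential = expon(scale=scale).entropy()
h_moment_exp = gamma(a=2, scale=scale).entropy()    # assumed moment-exponential analogue
h_size_biased = gamma(a=3, scale=scale).entropy()   # assumed size-biased analogue

relative_loss = (h_size_biased - h_exponential) / h_exponential
print(h_exponential, h_moment_exp, h_size_biased, relative_loss)
```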

Break: Lunch Break @ Texas E 11:55-13:30
Closing Ceremony
Speaker
Biography:

Nnadozie C Dandy completed an HND in Mechanical Engineering at the Institute of Management & Technology (IMT), Enugu, Nigeria. He also attended Regional Maritime University, Accra, Ghana, where he obtained his marine mandatory certification as a Marine Engineer. He is an American Sign Language Instructor. He is also a member of the Special Heart Royal Foundation, a non-governmental organization in Rivers State, Nigeria.

Abstract:

The CSA Technology Research Database is a bibliographic platform that is updated monthly with temporal coverage from 1962 to the present. It combines a number of secondary databases: the Materials Research Database with METADEX, the CSA High Technology Research Database with Aerospace, and the CSA Engineering Research Database. Mechanical Engineering Abstracts is a continuation of the formerly named "ISMEC Bulletin" (v.1, no.1, July 10, 1973) (ISSN: 0306-0039), which appears to have ceased under this title in December 1987 (v.20, no.6). ISMEC Bulletin was published by Cambridge Scientific Abstracts. Now, Mechanical Engineering Abstracts is also known as "ISMEC, Mechanical Engineering Abstracts"; another title is "Information Service in Mechanical Engineering Bulletin". The oldest record in the database has a publication date of 1939; however, about 50% of its records have publication dates of 1986 or later. METADEX is updated once a month, and approximately 45,000 new records are added per year. As of June 2010, this database contained more than 7,058,162 records. Temporal coverage for Mechanical Engineering Abstracts is from 1981 to 2002; current information is located in Mechanical & Transportation Engineering. As of May 2010, this database contained more than 6,000 methods and it will be updated regularly. Further subject coverage includes all aspects of mechanical engineering. Other subjects covered in this database are aerospace engineering, automotive engineering, naval architecture and marine engineering, railroad engineering, and materials handling. Nuclear technology is also part of this database, covering fluid flow, hydraulics, pneumatics, and vacuum technology. Heat and thermodynamics covers industrial furnaces, process heating, space heating, air conditioning, refrigeration, and cryogenics.

Cosmas Chidiebere Nsirim

University of Canberra
Australia

Title: Big data information system: Challenges and analysis

Time : 12:50-13:00

Speaker
Biography:

Abstract:

Books have been written and research is constantly ongoing concerning big data. This has created prospects for researchers to achieve high significance in information systems. With the emergence of new data collection technologies and highly developed data mining and analytics support, many changes are occurring in the research methodologies we apply. Given these promises, there are still many unanswered questions and research challenges that need to be investigated at greater length. The contexts include political discourse, digital journalism, social networks and blogs, financial services, corporate announcements, mobile telephony, home entertainment, online gaming, online shopping, social advertising, and social commerce. The ongoing progress and implementation of big data will, at the end of the day, provide clarity on whether big data is a fad or whether it represents substantial progress in information systems research. Three theses also show how future technological developments can be used to advance the discipline of information systems. Technological progress should be used for a cumulative supplement of existing models, tools, and methods; scientific revolutions, by contrast, are independent of technological progress.

Break: Lunch Break @ Texas E 13:00-13:30
Speaker
Biography:

Tahira is an MPhil scholar in Biostatistics and an active contributor to medical research in Lahore, Pakistan. She secured the top position, with a CGPA of 3.96/4, in her MSc in Biostatistics from the University of the Punjab, Lahore. She is currently serving as a Research Associate in one of the leading research organizations. She has more than 10 scientific publications and is presently working on a self-financed project on domestic violence against women.

Abstract:

The objective of this study was to highlight the determinants of domestic violence and its consequences in the metropolitan city of Lahore, Pakistan. This cross-sectional survey is ongoing, and by the final presentation in November 2015 we will have compiled the results of at least 500 victims. From the available data we found that 60% of the victims were married and the remaining 40% were unmarried or divorced females. Their education level was very low and none of them were graduates. According to their socio-economic status, 35% were living below the poverty line and the remaining 65% were of lower to middle class. Minor to major injuries were seen in 75%, 20% had bone fractures, and the remaining 5% had blunt traumas. Among the assaulters, 55% were husbands, 20% were brothers, 10% were sisters, and 15% were fathers-in-law or mothers-in-law. A total of 50% of the assaulters were drug abusers and 35% used weapons for violence. According to their current physical condition, 15% of the subjects were critical and had been admitted to an intensive care unit.

Redeat Belaineh

National Animal Health Diagnostic and Investigation Center (NAHDIC)
Ethiopia

Title: Characterization of Newcastle disease virus and poultry-handling practices in live poultry markets, Ethiopia
Speaker
Biography:

Redeat Belaineh obtained a Doctor of Veterinary Medicine degree in Ukraine and her Master's degree in Microbiology from Addis Ababa University. Currently, she is an employee of the Ministry of Agriculture working for the National Animal Health Diagnostic and Investigation Center (NAHDIC), where she is Head of the Molecular Biology Laboratory. The center is currently implementing the ISO/IEC 17025 international quality management system in its laboratories and has a biosafety level 3 facility which has been approved as East Africa's supporting laboratory for the diagnosis of trans-boundary animal diseases.

Abstract:

Newcastle disease represents the most severe poultry disease responsible for marked economic losses in Ethiopia. To provide a molecular characterization of Newcastle disease viruses circulating in our country, and classified, our phylogenetic analysis of the 260 fragment of the fusion gene of all the 29 sequenced isolates. A cross sectional survey was conducted at five selected live poultry market sites in Addis Ababa. In addition, baseline data on the live poultry market system were acquired through a detailed questionnaire submitted to poultry traders. We identified 44/146 positive samples, 65.9% of which was virulent strains belonging to sub-genotype VIf. The very poor biosecurity practices, which have resulted from responses of the participants, suggest that they might have had a heavy impact in the spread of the disease. This study provides important information on the epidemiology and control of NDV in Ethiopia and highlights the importance of implementing surveillance and biosecurity practices in live poultry markets.