Journal articles: 'Data-driven Bayesian methods' – Grafiati (2024)




To see the other types of publications on this topic, follow the link: Data-driven Bayesian methods.

Author: Grafiati

Published: 4 June 2021

Last updated: 11 February 2022

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Data-driven Bayesian methods.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Jirasek, Fabian, Robert Bamler, and Stephan Mandt. "Hybridizing physical and data-driven prediction methods for physicochemical properties." Chemical Communications 56, no. 82 (2020): 12407–10. http://dx.doi.org/10.1039/d0cc05258b.

2

Zhao, Jianjun, and Shen-Shyang Ho. "Improving Bayesian network local structure learning via data-driven symmetry correction methods." International Journal of Approximate Reasoning 107 (April 2019): 101–21. http://dx.doi.org/10.1016/j.ijar.2019.02.004.

3

Moradi, Rizan, Khalil Taheri, and Maryam S. Mirian. "Data-Driven Methods to Create Knowledge Maps for Decision Making in Academic Contexts." Journal of Information & Knowledge Management 16, no. 1 (March 2017): 1750008. http://dx.doi.org/10.1142/s0219649217500083.

Abstract:

Knowledge is the primary asset of today’s organisations; thus, knowledge management has been focused on discovery, representation, modification, transformation, and creation of knowledge within an enterprise. A knowledge map is a knowledge management tool that makes organisational processes more visible, feasible, and practicable. It is a graphical representation of decision-related information. What happens, how various events can be managed, and why they happened: all can be demonstrated very precisely by a well-designed knowledge map. There are diverse knowledge-related roles; for example, each university dean’s office, as an instance of a knowledge-based organisation, usually relies upon its institutional memory to make daily decisions. However, utilising a knowledge map greatly facilitates any individual’s or group’s decision-making process by proposing or establishing key required information. In this study, two important managerial roles (Associate Deans of Research and Education) were selected; we then reviewed their key managerial decisions and proposed three different techniques for supporting those decisions. The chief strength of the approach offered here was the creation of role-based knowledge maps, including an expertness map and a collaboration map for the Associate Dean of Research, which were formed using clustering, taxonomy formation, and information retrieval methods. A third map was created for the Associate Dean of Education: a Bayesian reasoning map based on an Improved PC (IPC) algorithm, which learned the structure and the parameters of a Bayesian network to describe decision-making in the domain of education. To evaluate the proposed approaches, structural and functional evaluation measures and standard datasets (where available) were chosen. The results showed that the approaches were comparable to the selected benchmarks on the real data, even given its challenging nature, which included problems such as incomplete and unclean data extracted from the University of Tehran’s education and research management information systems.

4

Gao, Tianhong, Yuxiong Li, Xianzhen Huang, and Changli Wang. "Data-Driven Method for Predicting Remaining Useful Life of Bearing Based on Bayesian Theory." Sensors 21, no. 1 (December 29, 2020): 182. http://dx.doi.org/10.3390/s21010182.

Abstract:

Bearings are some of the most critical industrial parts and are widely used in various types of mechanical equipment. Bearing health status can have a significant impact on the overall equipment performance, and bearing failures often cause serious economic losses and even casualties. Thus, estimating the remaining useful life (RUL) of bearings in real time is of utmost importance. This paper proposes a data-driven RUL prediction method for bearings based on Bayesian theory. First, time-domain features are extracted from the bearing vibration signal and data are fused to build a health indicator (HI) and a state model of bearing degradation. Then, according to Bayesian theory, a Bayesian model of state parameters and bearing life is established. The parameters of the Bayesian model are updated and bearing RUL is predicted by the Metropolis–Hastings algorithm. The method was validated by the XJTU-SY bearing open datasets and the prediction results are compared with the existing methods. Accuracy of the proposed method was demonstrated.
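The pipeline this abstract describes (build a health indicator, posit a degradation state model, update its parameters by the Metropolis–Hastings algorithm, then read off the RUL) can be sketched in a few lines. Everything below is an illustrative assumption rather than the paper's actual formulation: a linear-drift health indicator, Gaussian noise, a flat positive prior, and a fixed failure threshold.

```python
import math
import random

def metropolis_hastings(log_post, x0, n_samples=2000, step=0.1, seed=0):
    """Generic 1-D random-walk Metropolis-Hastings sampler."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        cand = x + rng.gauss(0.0, step)
        lp_cand = log_post(cand)
        if math.log(rng.random()) < lp_cand - lp:  # accept or reject
            x, lp = cand, lp_cand
        samples.append(x)
    return samples

# Toy health indicator: HI(t) = drift * t + noise (assumed degradation model).
times = [1, 2, 3, 4, 5]
hi = [0.9, 2.1, 2.9, 4.2, 5.1]
sigma = 0.3       # assumed measurement-noise standard deviation
threshold = 10.0  # assumed failure threshold on the HI

def log_posterior(drift):
    # Flat prior on drift > 0 with a Gaussian likelihood for the HI samples.
    if drift <= 0:
        return float("-inf")
    return -sum((y - drift * t) ** 2 for t, y in zip(times, hi)) / (2 * sigma ** 2)

draws = metropolis_hastings(log_posterior, x0=1.0)[500:]  # drop burn-in
mean_drift = sum(draws) / len(draws)
rul = threshold / mean_drift - times[-1]  # remaining-useful-life estimate
```

A full implementation would replace the linear drift with the paper's fitted state model and propagate the whole posterior over the drift into a RUL distribution rather than a point estimate.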

5

Bang, Jung-Wook, Derek J. Crockford, Elaine Holmes, Florencio Pazos, Michael J. E. Sternberg, Stephen H. Muggleton, and Jeremy K. Nicholson. "Integrative Top-Down System Metabolic Modeling in Experimental Disease States via Data-Driven Bayesian Methods." Journal of Proteome Research 7, no. 3 (March 2008): 1352. http://dx.doi.org/10.1021/pr800098n.

6

Bang, Jung-Wook, Derek J. Crockford, Elaine Holmes, Florencio Pazos, Michael J. E. Sternberg, Stephen H. Muggleton, and Jeremy K. Nicholson. "Integrative Top-Down System Metabolic Modeling in Experimental Disease States via Data-Driven Bayesian Methods." Journal of Proteome Research 7, no. 2 (February 2008): 497–503. http://dx.doi.org/10.1021/pr070350l.

7

Bassamzadeh, Nastaran, and Roger Ghanem. "Probabilistic Data-Driven Prediction of Wellbore Signatures in High-Dimensional Data Using Bayesian Networks." SPE Journal 23, no. 4 (February 6, 2018): 1090–104. http://dx.doi.org/10.2118/189966-pa.

Abstract:

Accurate, data-driven, stochastic models for fluid-flow prediction in hydrocarbon reservoirs are of particular interest to reservoir engineers. Being computationally less costly than conventional physical simulations, such predictive models can serve as rapid-risk-assessment tools. In this research, we seek to probabilistically predict the oil-production rate at locations where limited data are observed using the available data at other spatial points in the oil field. To do so, we use the Bayesian network (BN), which is a modeling framework for capturing dependencies between uncertain variables in a high-dimensional system. The model is applied to a real data set from the Gulf of Mexico (GOM) and it is shown that BN is able to predict the production rate with 86% accuracy. The results are compared with neural-network and co-Kriging methods. Moreover, BN structure enables us to select the most-relevant variables for prediction, and thus we managed to reduce the input dimension from 36 to 17 variables while preserving the same prediction accuracy. Similarly, we use the local-linear-embedding (LLE) method as a feature-extraction tool to nonlinearly reduce the input dimension from 36 to 10 variables with negligible loss in accuracy. Accordingly, we claim that BN is a valuable modeling tool that can be efficiently used for probabilistic prediction and dimension reduction in the oil industry.

8

Wu, Yuqiang, Qinhui Wang, Ge Li, and Jidong Li. "Data-driven runoff forecasting for Minjiang River: a case study." Water Supply 20, no. 6 (June 26, 2020): 2284–95. http://dx.doi.org/10.2166/ws.2020.134.

Abstract:

Long-term runoff forecasting has the characteristic of a long forecast period, which can be widely applied in environmental protection, hydropower operation, flood prevention and waterlogging management, water transport management, and optimal allocation of water resources. Many models and methods are currently used for runoff prediction, and data-driven models are now the mainstream methods, but their prediction accuracy cannot meet the needs of production departments. To this end, the present research starts from this method and, based on a support vector machine (SVM), introduces ant colony optimization (ACO) to optimize its penalty coefficient C, kernel function parameter g, and insensitivity coefficient p, to construct a data-driven ACO-SVM model. The validity of the method is confirmed by taking the Minjiang River Basin as an example. The results show that the runoff predicted by ACO-SVM is more accurate than that of the default-parameter SVM and the Bayesian method.
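The tuning loop described here (ACO searching over the SVM's penalty coefficient C, kernel parameter g, and insensitivity coefficient p) can be sketched with a simplified pheromone-style continuous search. Everything below is an assumption for illustration: the objective is a stand-in for cross-validated SVM error, and the parameter bounds are invented; a real run would train and score an SVM (e.g. an SVR) at each candidate.

```python
import random

def aco_search(objective, bounds, n_ants=8, n_iters=20, evap=0.5, seed=1):
    """Simplified ant-colony-style continuous search: ants sample around the
    best-known point (the 'pheromone peak') with a radius that shrinks as
    pheromone evaporates, concentrating the search over time."""
    rng = random.Random(seed)
    best = [rng.uniform(lo, hi) for lo, hi in bounds]
    best_err = objective(best)
    radius = 1.0
    for _ in range(n_iters):
        for _ in range(n_ants):
            cand = [min(hi, max(lo, b + rng.gauss(0.0, radius * (hi - lo))))
                    for b, (lo, hi) in zip(best, bounds)]
            err = objective(cand)
            if err < best_err:
                best, best_err = cand, err
        radius *= evap ** 0.25  # evaporation tightens the search radius
    return best, best_err

# Stand-in for cross-validated SVM error as a function of (C, g, p);
# a real run would train and score an SVM here instead.
def cv_error_stub(params):
    C, g, p = params
    return (C - 10.0) ** 2 / 100.0 + (g - 0.5) ** 2 + (p - 0.1) ** 2

bounds = [(0.1, 100.0), (0.01, 2.0), (0.001, 1.0)]  # assumed ranges for C, g, p
(best_C, best_g, best_p), err = aco_search(cv_error_stub, bounds)
```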

9

Tian, Xingda, Handong Huang, Jun Gao, Yaneng Luo, Jing Zeng, Gang Cui, and Tong Zhu. "Pre-Stack Seismic Data-Driven Pre-Salt Carbonate Reef Reservoirs Characterization Methods and Application." Minerals 11, no. 9 (September 7, 2021): 973. http://dx.doi.org/10.3390/min11090973.

Abstract:

Carbonate reservoirs have significant reserves globally, but the substantial heterogeneity brings intractable difficulties to exploration. In the work area, the thick salt rock reduces the resolution of pre-salt seismic signals and increases the difficulty of reservoir characterization. Therefore, this paper proposes to utilize wavelet frequency decomposition technology to depict the seismic blank reflection area’s signal and improve the pre-salt signal’s resolution. The high-precision pre-stack inversion based on Bayesian theory makes full use of information from various angles and simultaneously inverts multiple elastic parameters, effectively depicting reservoirs with substantial heterogeneity. Integrating the high-precision inversion results and the Kuster-Toksöz model, a porosity prediction method is proposed. The inversion results are consistent with the drilling rock samples and well-logging porosity results. Moreover, the reef’s accumulation and growth, which conform to the geological information, proves the accuracy of the above methods. This paper also discusses the seismic reflection characteristics of reefs and the influence of different lithological reservoirs on the seismic waveform response characteristics through forward modeling, which better proves the rationality of porosity inversion results. It provides a new set of ideas for future pre-salt carbonate reef reservoirs’ prediction and characterization methods.

10

Cartocci, Nicholas, Marcello R. Napolitano, Gabriele Costante, and Mario L. Fravolini. "A Comprehensive Case Study of Data-Driven Methods for Robust Aircraft Sensor Fault Isolation." Sensors 21, no. 5 (February 26, 2021): 1645. http://dx.doi.org/10.3390/s21051645.

Abstract:

Recent catastrophic events in aviation have shown that current fault diagnosis schemes may not be enough to ensure a reliable and prompt sensor fault diagnosis. This paper describes a comparative analysis of consolidated data-driven sensor Fault Isolation (FI) and Fault Estimation (FE) techniques using flight data. Linear regression models, identified from data, are derived to build primary and transformed residuals. These residuals are then implemented to develop fault isolation schemes for 14 sensors of a semi-autonomous aircraft. Specifically, directional Mahalanobis distance-based and fault reconstruction-based techniques are compared in terms of their FI and FE performance. Then, a bank of Bayesian filters is proposed to compute, in flight, the fault belief for each sensor. Both the training and the validation of the schemes are performed using data from multiple flights. Artificial faults are injected into the fault-free sensor measurements to reproduce the occurrence of failures. A detailed evaluation of the techniques in terms of FI and FE performance is presented for failures on the air-data sensors, with special emphasis on the True Air Speed (TAS), Angle of Attack (AoA), and Angle of Sideslip (AoS) sensors.
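The residual-based isolation idea in this abstract can be illustrated with a minimal Mahalanobis-distance decision rule on a scalar residual. The nominal statistics, the threshold, and the residual samples below are invented for illustration, not flight data; the paper's directional, multivariate version generalizes this distance to residual vectors.

```python
import math

def mahalanobis(residual, mean, var):
    """Mahalanobis distance of a scalar residual from its nominal model."""
    return abs(residual - mean) / math.sqrt(var)

nominal_mean, nominal_var = 0.0, 0.04  # fault-free residual statistics (assumed)
threshold = 3.0                        # roughly a 3-sigma decision rule

# Five residual samples for one sensor, with one injected fault (0.9).
residuals = [0.05, -0.1, 0.02, 0.9, 0.03]
fault_flags = [mahalanobis(r, nominal_mean, nominal_var) > threshold
               for r in residuals]
# Only the injected fault exceeds the threshold (distance 4.5 vs. <= 0.5).
```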

11

Freestone, Dean R., Kelvin J. Layton, Levin Kuhlmann, and Mark J. Cook. "Statistical Performance Analysis of Data-Driven Neural Models." International Journal of Neural Systems 27, no. 1 (November 8, 2016): 1650045. http://dx.doi.org/10.1142/s0129065716500453.

Abstract:

Data-driven model-based analysis of electrophysiological data is an emerging technique for understanding the mechanisms of seizures. Model-based analysis enables tracking of hidden brain states that are represented by the dynamics of neural mass models. Neural mass models describe the mean firing rates and mean membrane potentials of populations of neurons. Various neural mass models exist with different levels of complexity and realism. An ideal data-driven model-based analysis framework will incorporate the most realistic model possible, enabling accurate imaging of the physiological variables. However, models must be sufficiently parsimonious to enable tracking of important variables using data. This paper provides tools to inform the realism versus parsimony trade-off, the Bayesian Cramer-Rao (lower) Bound (BCRB). We demonstrate how the BCRB can be used to assess the feasibility of using various popular neural mass models to track epilepsy-related dynamics via stochastic filtering methods. A series of simulations show how optimal state estimates relate to measurement noise, model error and initial state uncertainty. We also demonstrate that state estimation accuracy will vary between seizure-like and normal rhythms. The performance of the extended Kalman filter (EKF) is assessed against the BCRB. This work lays a foundation for assessing feasibility of model-based analysis. We discuss how the framework can be used to design experiments to better understand epilepsy.
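For a linear-Gaussian state-space model the Bayesian Cramer-Rao bound can be computed with a simple information recursion, and the Kalman filter attains it; that is the flavor of the filter-versus-bound comparison the abstract describes for the EKF and neural mass models. The scalar model and its parameters below are a toy stand-in, not a neural mass model.

```python
# Scalar linear-Gaussian model: x_k = a*x_{k-1} + w_k,  y_k = h*x_k + v_k.
a, h = 0.9, 1.0   # state-transition and observation gains (assumed)
q, r = 0.05, 0.2  # process and measurement noise variances (assumed)

j = 1.0 / 0.5  # Bayesian information, starting from prior variance 0.5
p = 0.5        # Kalman filter posterior variance, same prior

for _ in range(50):
    # BCRB information recursion for the scalar linear-Gaussian case.
    j = 1.0 / (a * a / j + q) + h * h / r
    # Kalman filter variance recursion (predict, then measurement update).
    p_pred = a * a * p + q
    p = p_pred - (p_pred * h) ** 2 / (h * h * p_pred + r)

bcrb = 1.0 / j  # lower bound on the posterior state variance
# In the linear-Gaussian case the filter variance converges onto the bound.
```

For nonlinear neural mass models the recursion instead uses expected Jacobians, and the EKF generally sits above (not on) the bound, which is exactly the gap the paper uses to assess feasibility.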

12

Md Nor, Norazwan, Che Rosmani Che Hassan, and Mohd Azlan Hussain. "A review of data-driven fault detection and diagnosis methods: applications in chemical process systems." Reviews in Chemical Engineering 36, no. 4 (May 26, 2020): 513–53. http://dx.doi.org/10.1515/revce-2017-0069.

Abstract:

Fault detection and diagnosis (FDD) systems are developed to characterize normal variations and detect abnormal changes in a process plant. It is always important for early detection and diagnosis, especially in chemical process systems to prevent process disruptions, shutdowns, or even process failures. However, there have been only limited reviews of data-driven FDD methods published in the literature. Therefore, the aim of this review is to provide the state-of-the-art reference for chemical engineers and to promote the application of data-driven FDD methods in chemical process systems. In general, there are two different groups of data-driven FDD methods: the multivariate statistical analysis and the machine learning approaches, which are widely accepted and applied in various industrial processes, including chemicals, pharmaceuticals, and polymers. Many different multivariate statistical analysis methods have been proposed in the literature, such as principal component analysis, partial least squares, independent component analysis, and Fisher discriminant analysis, while the machine learning approaches include artificial neural networks, neuro-fuzzy methods, support vector machine, Gaussian mixture model, K-nearest neighbor, and Bayesian network. In the first part, this review intends to provide a comprehensive literature review on applications of data-driven methods in FDD systems for chemical process systems. In addition, the hybrid FDD frameworks have also been reviewed by discussing the distinct advantages and various constraints, with some applications as examples. However, the choice for the data-driven FDD methods is not a straightforward issue. Thus, in the second part, this paper provides a guideline for selecting the best possible data-driven method for FDD systems based on their faults. Finally, future directions of data-driven FDD methods are summarized with the intent to expand the use for the process monitoring community.

13

He, Hua-Feng, Juan Li, Qing-Hua Zhang, and Guoxi Sun. "A Data-Driven Reliability Estimation Approach for Phased-Mission Systems." Mathematical Problems in Engineering 2014 (2014): 1–13. http://dx.doi.org/10.1155/2014/283740.

Abstract:

We attempt to address the issues associated with reliability estimation for phased-mission systems (PMS) and present a novel data-driven approach to reliability estimation for PMS using condition monitoring information and degradation data of such systems under dynamic operating scenarios. In this sense, this paper differs from the existing methods, which consider only the static scenario without using real-time information and aim to estimate reliability for a population rather than an individual. In the presented approach, to establish a linkage between the historical data and real-time information of the individual PMS, we adopt a stochastic filtering model for the phase duration and obtain an updated estimate of the mission time by Bayes' law at each phase. Meanwhile, the lifetime of the PMS is estimated from degradation data, which are modeled by an adaptive Brownian motion. As such, the mission reliability can be obtained in real time through the estimated distribution of the mission time in conjunction with the estimated lifetime distribution. We demonstrate the usefulness of the developed approach via a numerical example.
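The per-phase Bayesian update of the mission-time estimate can be sketched with a discrete prior. The candidate durations, the prior weights, and the Gaussian likelihood tying an observed phase pace to an implied total mission time are all assumptions for illustration, not the paper's stochastic filtering model.

```python
import math

durations = [8, 9, 10, 11, 12]     # candidate mission times (hours, assumed)
prior = [0.1, 0.2, 0.4, 0.2, 0.1]  # prior belief over mission time

def bayes_update(prior, durations, implied_time, sigma=1.0):
    """Posterior over mission time after a phase observation whose pace
    implies a total mission time of `implied_time` (Gaussian likelihood)."""
    likes = [math.exp(-(d - implied_time) ** 2 / (2 * sigma ** 2))
             for d in durations]
    unnorm = [p * l for p, l in zip(prior, likes)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# A slow first phase shifts belief toward longer missions.
posterior = bayes_update(prior, durations, implied_time=11.2)
expected_mission_time = sum(d * p for d, p in zip(durations, posterior))
```

Repeating this update at each phase, and combining the resulting mission-time distribution with a lifetime distribution from degradation data, gives a real-time mission reliability in the spirit of the abstract.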

14

Kuparinen, Anna, Samu Mäntyniemi, Jeffrey A. Hutchings, and Sakari Kuikka. "Increasing biological realism of fisheries stock assessment: towards hierarchical Bayesian methods." Environmental Reviews 20, no. 2 (June 2012): 135–51. http://dx.doi.org/10.1139/a2012-006.

Abstract:

Excessively high rates of fishing mortality have led to rapid declines of several commercially important fish stocks. To harvest fish stocks sustainably, fisheries management requires accurate information about population dynamics, but the generation of this information, known as fisheries stock assessment, traditionally relies on conservative and rather narrowly data-driven modelling approaches. To improve the information available for fisheries management, there is a demand to increase the biological realism of stock-assessment practices and to better incorporate the available biological knowledge and theory. Here, we explore the development of fisheries stock-assessment models with an aim to increasing their biological realism, and focus particular attention on the possibilities provided by the hierarchical Bayesian modelling framework and ways to develop this approach as a means of efficiently incorporating different sources of information to construct more biologically realistic stock-assessment models. The main message emerging from our review is that to be able to efficiently improve the biological realism of stock-assessment models, fisheries scientists must go beyond the traditional stock-assessment data and explore the resources available in other fields of biological research, such as ecology, life-history theory and evolutionary biology, in addition to utilizing data available from other stocks of the same or comparable species. The hierarchical Bayesian framework provides a way of formally integrating these sources of knowledge into the stock-assessment protocol and to accumulate information from multiple sources and over time.

15

Sedighi, Tabassom. "Using Dynamic and Hybrid Bayesian Network for Policy Decision Making." International Journal of Strategic Engineering 2, no. 2 (July 2019): 22–34. http://dx.doi.org/10.4018/ijose.2019070103.

Abstract:

The Bayesian network (BN) method is one of the data-driven methods which have been successfully used to assist problem-solving in a wide range of disciplines including policy making, information technology, engineering, medicine, and more recently biology and ecology. BNs are particularly useful for diverse problems of varying size and complexity, where uncertainties are inherent in the system. BNs engage directly with subjective data in a transparent way and have become a state-of-the-art technology to support decision-making under uncertainty.
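The kind of reasoning under uncertainty a BN supports can be shown with inference by enumeration on a tiny network. The three-node policy example (Funding and Training as parents of Adoption), the priors, and the conditional probability table are invented for illustration.

```python
# Conditional probability table for P(Adoption=true | Funding, Training).
P_ADOPT = {(True, True): 0.9, (True, False): 0.6,
           (False, True): 0.5, (False, False): 0.1}
P_FUNDING, P_TRAINING = 0.7, 0.4  # priors on the root nodes (assumed)

def posterior_funding_given_adoption():
    """P(Funding=true | Adoption=true), by enumerating the joint."""
    num = den = 0.0
    for f in (True, False):
        for t in (True, False):
            joint = ((P_FUNDING if f else 1.0 - P_FUNDING)
                     * (P_TRAINING if t else 1.0 - P_TRAINING)
                     * P_ADOPT[(f, t)])
            den += joint
            if f:
                num += joint
    return num / den

p_funding = posterior_funding_given_adoption()
```

Observing that adoption succeeded raises the belief that funding was in place from the prior of 0.7 to roughly 0.87; exact enumeration like this only scales to small networks, which is why real BN tools use factored inference.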

16

May Lee, Kim, and J. Jack Lee. "Evaluating Bayesian adaptive randomization procedures with adaptive clip methods for multi-arm trials." Statistical Methods in Medical Research 30, no. 5 (March 10, 2021): 1273–87. http://dx.doi.org/10.1177/0962280221995961.

Abstract:

Bayesian adaptive randomization is a heuristic approach that aims to randomize more patients to the putatively superior arms based on the trend of the accrued data in a trial. Many statistical aspects of this approach have been explored and compared with other approaches; yet only a limited number of works have focused on improving its performance and providing guidance on its application to real trials. An undesirable property of this approach is that the procedure would randomize patients to an inferior arm in some circumstances, which has raised concerns in its application. Here, we propose an adaptive clip method to rectify the problem by incorporating a data-driven function to be used in conjunction with the Bayesian adaptive randomization procedure. This function aims to minimize the chance of assigning patients to inferior arms during the early part of the trial. Moreover, we propose a utility approach to facilitate the selection of a randomization procedure. A cost that reflects the penalty of assigning patients to the inferior arm(s) is incorporated into our utility function, along with the benefit to all patients, both within and beyond the trial. We illustrate the selection strategy for a wide range of scenarios.
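The clip idea can be sketched as a floor and ceiling on posterior-based allocation probabilities for a two-arm binary-outcome trial. The Beta-Bernoulli arms, the Monte Carlo estimate of "probability each arm is best", and the fixed clip level below are assumptions for illustration; the paper's clip function is adaptive and data-driven rather than a constant.

```python
import random

def allocation_probs(successes, failures, n_draws=4000, seed=0):
    """Estimate P(arm is best) by Monte Carlo over Beta(1+s, 1+f) posteriors."""
    rng = random.Random(seed)
    wins = [0] * len(successes)
    for _ in range(n_draws):
        draws = [rng.betavariate(1 + s, 1 + f)
                 for s, f in zip(successes, failures)]
        wins[draws.index(max(draws))] += 1
    return [w / n_draws for w in wins]

def clip(probs, floor=0.2):
    """Keep each arm's allocation probability in [floor, 1 - floor*(k-1)],
    then renormalize, so no arm is starved or monopolized."""
    k = len(probs)
    clipped = [min(max(p, floor), 1.0 - floor * (k - 1)) for p in probs]
    z = sum(clipped)
    return [c / z for c in clipped]

# Arm 0: 8/10 responses so far; arm 1: 3/10.
raw = allocation_probs(successes=[8, 3], failures=[2, 7])
alloc = clip(raw)  # favors arm 0 but keeps arm 1 above the floor
```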

17

Azizi, Elham, Sandhya Prabhakaran, Ambrose Carr, and Dana Pe'er. "Bayesian Inference for Single-cell Clustering and Imputing." Genomics and Computational Biology 3, no. 1 (January 26, 2017): 46. http://dx.doi.org/10.18547/gcb.2017.vol3.iss1.e46.

Abstract:

Single-cell RNA-seq gives access to gene expression measurements for thousands of cells, allowing discovery and characterization of cell types. However, the data is noise-prone due to experimental errors and cell type-specific biases. Current computational approaches for analyzing single-cell data involve a global normalization step which introduces incorrect biases and spurious noise and does not resolve missing data (dropouts). This can lead to misleading conclusions in downstream analyses. Moreover, a single normalization removes important cell type-specific information. We propose a data-driven model, BISCUIT, that iteratively normalizes and clusters cells, thereby separating noise from interesting biological signals. BISCUIT is a Bayesian probabilistic model that learns cell-specific parameters to intelligently drive normalization. This approach displays superior performance to global normalization followed by clustering in both synthetic and real single-cell data compared with previous methods, and allows easy interpretation and recovery of the underlying structure and cell types.

18

Munch, Stephan B., Athanasios Kottas, and Marc Mangel. "Bayesian nonparametric analysis of stock–recruitment relationships." Canadian Journal of Fisheries and Aquatic Sciences 62, no. 8 (August 1, 2005): 1808–21. http://dx.doi.org/10.1139/f05-073.

Abstract:

The relationship between current abundance and future recruitment to the stock is fundamental to managing fish populations. However, many different recruitment models are plausible and the data are insufficient to distinguish among them. Although nonparametric methods may be used to circumvent this problem, these are devoid of biological underpinnings. Here, we present a Bayesian nonparametric approach that allows straightforward incorporation of prior biological information and use it to estimate several fishery reference points. We applied this method to artificial data sets generated from a variety of parametric models and compare the results with the fit of Ricker and Beverton–Holt models. We found that the Bayesian nonparametric method fit the data nearly as well as the true parametric model and always performed better than incorrect parametric alternatives. The estimated reference points agree closely with true values calculated for the underlying parametric model. Finally, we apply the method to empirical data for lingcod (Ophiodon elongatus) and several salmonids. Since this method is capable of reproducing the behavior of any of the parametric models and provides flexible, data-driven estimates of stock–recruitment relationships, it should be of great value in fisheries applications where the true functional relationship is always unknown.

19

Liu, Ke, Zhu Liang Yu, Wei Wu, Zhenghui Gu, and Yuanqing Li. "STRAPS: A Fully Data-Driven Spatio-Temporally Regularized Algorithm for M/EEG Patch Source Imaging." International Journal of Neural Systems 25, no. 4 (May 25, 2015): 1550016. http://dx.doi.org/10.1142/s0129065715500161.

Abstract:

For M/EEG-based distributed source imaging, it has been established that the L2-norm-based methods are effective in imaging spatially extended sources, whereas the L1-norm-based methods are more suited for estimating focal and sparse sources. However, when the spatial extents of the sources are unknown a priori, the rationale for using either type of methods is not adequately supported. Bayesian inference by exploiting the spatio-temporal information of the patch sources holds great promise as a tool for adaptive source imaging, but both computational and methodological limitations remain to be overcome. In this paper, based on state-space modeling of the M/EEG data, we propose a fully data-driven and scalable algorithm, termed STRAPS, for M/EEG patch source imaging on high-resolution cortices. Unlike the existing algorithms, the recursive penalized least squares (RPLS) procedure is employed to efficiently estimate the source activities as opposed to the computationally demanding Kalman filtering/smoothing. Furthermore, the coefficients of the multivariate autoregressive (MVAR) model characterizing the spatial-temporal dynamics of the patch sources are estimated in a principled manner via empirical Bayes. Extensive numerical experiments demonstrate STRAPS's excellent performance in the estimation of locations, spatial extents and amplitudes of the patch sources with varying spatial extents.

20

Chi, Zhexiang, Taotao Zhou, Simin Huang, and Yan-Fu Li. "A data-driven approach for the health prognosis of high-speed train wheels." Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability 234, no. 6 (June 19, 2020): 735–47. http://dx.doi.org/10.1177/1748006x20929158.

Abstract:

Polygonal wear is one of the most critical failure modes of high-speed train wheels that would significantly compromise the safety and reliability of high-speed train operation. However, the mechanism underpinning wheel polygon is complex and still not fully understood, which makes it challenging to track its evolution of the polygonal wheel. The large amount of data gathered through regular inspection and maintenance of Chinese high-speed trains provides a promising way to tackle this challenge with data-driven methods. This article proposes a data-driven approach to predict the degree of the polygonal wear, assess the reliability of individual wheels and the health index of all wheels of a high-speed train for maintenance priority ranking. The synthetic minority over-sampling technique—nominal continuous is adopted to augment the maintenance dataset of imbalanced and mixed features. The autoencoder is used to learn abstract features to represent the original datasets, which are then fed into a support vector machine classifier. The approach is coherently optimized by tuning the model hyper-parameters based on Bayesian optimization. The effectiveness of our proposed approach is demonstrated by the wheel maintenance data obtained from the year 2016 to 2017. The results can also be used to support practical maintenance priority allocation.

21

Yang, Yibo, Mohamed Aziz Bhouri, and Paris Perdikaris. "Bayesian differential programming for robust systems identification under uncertainty." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 476, no. 2243 (November 2020): 20200290. http://dx.doi.org/10.1098/rspa.2020.0290.

Abstract:

This paper presents a machine learning framework for Bayesian systems identification from noisy, sparse and irregular observations of nonlinear dynamical systems. The proposed method takes advantage of recent developments in differentiable programming to propagate gradient information through ordinary differential equation solvers and perform Bayesian inference with respect to unknown model parameters using Hamiltonian Monte Carlo sampling. This allows an efficient inference of the posterior distributions over plausible models with quantified uncertainty, while the use of sparsity-promoting priors enables the discovery of interpretable and parsimonious representations for the underlying latent dynamics. A series of numerical studies is presented to demonstrate the effectiveness of the proposed methods, including nonlinear oscillators, predator–prey systems and examples from systems biology. Taken together, our findings put forth a flexible and robust workflow for data-driven model discovery under uncertainty. All codes and data accompanying this article are available at https://bit.ly/34FOJMj .

22

Lu, Feng, Jipeng Jiang, and Jinquan Huang. "Gas Turbine Engine Gas-path Fault Diagnosis Based on Improved SBELM Architecture." International Journal of Turbo & Jet-Engines 35, no. 4 (December 19, 2018): 351–63. http://dx.doi.org/10.1515/tjj-2016-0050.

Abstract:

Various model-based methods are widely applied to aircraft engine fault diagnosis, and an accurate engine model is used in these approaches. However, it is difficult to obtain a general engine model with high accuracy due to engine individual differences, lifecycle performance deterioration, and modeling uncertainty. Recently, data-driven diagnostic approaches for aircraft engines have become more popular with the development of machine learning technologies. However, these data-driven methods tend to ignore the sparsity of and uncertainty in experimental data, which makes fast fault diagnosis for multiple fault patterns hard to achieve. This paper presents a novel data-driven diagnostic approach using the Sparse Bayesian Extreme Learning Machine (SBELM) for engine fault diagnosis. This methodology achieves fast fault diagnosis without relying on an engine model. To enhance the reliability of fast fault diagnosis and enlarge the number of detectable faults, a SBELM-based multi-output classifier framework is designed. The reduced sparse topology of ELM is presented and extended from a single classifier to a multi-output classifier for fault diagnosis. The effects of noise and measurement uncertainty are taken into consideration. Simulation results show the SBELM-based multi-output classifier for engine fault diagnosis is superior to existing data-driven ones with regard to accuracy and computational effort.

APA, Harvard, Vancouver, ISO, and other styles

23

Sükei, Emese, Agnes Norbury, M. Mercedes Perez-Rodriguez, Pablo M. Olmos, and Antonio Artés. "Predicting Emotional States Using Behavioral Markers Derived From Passively Sensed Data: Data-Driven Machine Learning Approach." JMIR mHealth and uHealth 9, no. 3 (March 22, 2021): e24465. http://dx.doi.org/10.2196/24465.

Full text

Abstract:

Background Mental health disorders affect multiple aspects of patients’ lives, including mood, cognition, and behavior. eHealth and mobile health (mHealth) technologies enable rich sets of information to be collected noninvasively, representing a promising opportunity to construct behavioral markers of mental health. Combining such data with self-reported information about psychological symptoms may provide a more comprehensive and contextualized view of a patient’s mental state than questionnaire data alone. However, mobile sensed data are usually noisy and incomplete, with significant amounts of missing observations. Therefore, realizing the clinical potential of mHealth tools depends critically on developing methods to cope with such data issues. Objective This study aims to present a machine learning–based approach for emotional state prediction that uses passively collected data from mobile phones and wearable devices and self-reported emotions. The proposed methods must cope with high-dimensional and heterogeneous time-series data with a large percentage of missing observations. Methods Passively sensed behavior and self-reported emotional state data from a cohort of 943 individuals (outpatients recruited from community clinics) were available for analysis. All patients had at least 30 days’ worth of naturally occurring behavior observations, including information about physical activity, geolocation, sleep, and smartphone app use. These regularly sampled but frequently missing and heterogeneous time series were analyzed with the following probabilistic latent variable models for data averaging and feature extraction: mixture model (MM) and hidden Markov model (HMM). The extracted features were then combined with a classifier to predict emotional state. A variety of classical machine learning methods and recurrent neural networks were compared.
Finally, a personalized Bayesian model was proposed to improve performance by considering the individual differences in the data and applying a different classifier bias term for each patient. Results Probabilistic generative models proved to be good preprocessing and feature extractor tools for data with large percentages of missing observations. Models that took into account the posterior probabilities of the MM and HMM latent states outperformed those that did not by more than 20%, suggesting that the underlying behavioral patterns identified were meaningful for individuals’ overall emotional state. The best performing generalized models achieved a 0.81 area under the curve of the receiver operating characteristic and 0.71 area under the precision-recall curve when predicting self-reported emotional valence from behavior in held-out test data. Moreover, the proposed personalized models demonstrated that accounting for individual differences through a simple hierarchical model can substantially improve emotional state prediction performance without relying on previous days’ data. Conclusions These findings demonstrate the feasibility of designing machine learning models for predicting emotional states from mobile sensing data capable of dealing with heterogeneous data with large numbers of missing observations. Such models may represent valuable tools for clinicians to monitor patients’ mood states.
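The MM feature-extraction step can be sketched in miniature: for a fitted two-component Gaussian mixture, the E-step posterior responsibilities of the latent states are exactly the kind of per-observation features the abstract describes feeding to a classifier. The components and the observation below are hypothetical, not the study's fitted model.

```python
import math

# E-step of a two-component 1-D Gaussian mixture: posterior responsibilities
# of each latent state, usable as classifier features as in the abstract.
def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def responsibilities(x, weights, mus, sigmas):
    joint = [w * normal_pdf(x, m, s) for w, m, s in zip(weights, mus, sigmas)]
    total = sum(joint)
    return [j / total for j in joint]

# Hypothetical "low activity" vs "high activity" behavioural states.
r = responsibilities(x=5.8, weights=[0.5, 0.5], mus=[2.0, 6.0], sigmas=[1.0, 1.0])
print([round(v, 3) for v in r])
```

An observation near the second component's mean yields a responsibility vector dominated by that state; these soft assignments, rather than hard labels, are what downstream classifiers consume.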

APA, Harvard, Vancouver, ISO, and other styles

24

Soibam, Jerol, Achref Rabhi, Ioanna Aslanidou, Konstantinos Kyprianidis, and Rebei Bel Fdhila. "Derivation and Uncertainty Quantification of a Data-Driven Subcooled Boiling Model." Energies 13, no. 22 (November 16, 2020): 5987. http://dx.doi.org/10.3390/en13225987.

Full text

Abstract:

Subcooled flow boiling occurs in many industrial applications where enormous heat transfer is needed. Boiling is a complex physical process that involves phase change, two-phase flow, and interactions between heated surfaces and fluids. In general, boiling heat transfer is usually predicted by empirical or semiempirical models, which are prone to uncertainty. In this work, a data-driven method based on artificial neural networks has been implemented to study the heat transfer behavior of a subcooled boiling model. The proposed method considers the near local flow behavior to predict wall temperature and void fraction of a subcooled minichannel. The input of the network consists of pressure gradients, momentum convection, energy convection, turbulent viscosity, liquid and gas velocities, and surface information. The outputs of the models are the quantities of interest in a boiling system: wall temperature and void fraction. To train the network, high-fidelity simulations based on the Eulerian two-fluid approach are carried out for varying heat flux and inlet velocity in the minichannel. Two classes of deep learning model have been investigated for this work. The first one focuses on predicting the deterministic value of the quantities of interest. The second one focuses on predicting the uncertainty present in the deep learning model while estimating the quantities of interest. Deep ensembles and Monte Carlo Dropout methods are close representatives of the maximum likelihood and Bayesian inference approaches, respectively, and they are used to derive the uncertainty present in the model. The results of this study show that the models used here accurately predict the quantities of interest and estimate the uncertainty present. The models accurately reproduce the physics on unseen data and show the degree of uncertainty when there is a shift of physics in the boiling regime.
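Monte Carlo Dropout, named above as the Bayesian-flavored uncertainty estimate, can be sketched without any framework: keep dropout active at prediction time and read the spread of repeated stochastic forward passes as model uncertainty. The tiny fixed network below is purely illustrative, not the paper's model.

```python
import random, statistics

random.seed(1)

# Hypothetical one-hidden-layer net with fixed, arbitrary weights.
W1 = [[0.4, -0.3], [0.2, 0.7], [-0.5, 0.1], [0.3, 0.3]]
W2 = [0.6, -0.4, 0.5, 0.2]

def predict_with_dropout(x, p_drop=0.5):
    """One stochastic forward pass with dropout left ON at test time."""
    h = []
    for w in W1:
        pre = w[0] * x[0] + w[1] * x[1]
        act = max(0.0, pre)  # ReLU
        keep = random.random() > p_drop
        h.append(act / (1 - p_drop) if keep else 0.0)  # inverted dropout
    return sum(wi * hi for wi, hi in zip(W2, h))

x = [1.0, 2.0]
draws = [predict_with_dropout(x) for _ in range(2000)]
mean = statistics.fmean(draws)   # MC-Dropout predictive mean
std = statistics.pstdev(draws)   # spread read as predictive uncertainty
print(round(mean, 2), round(std, 2))
```

The mean of the stochastic passes recovers the deterministic prediction, while the standard deviation gives the uncertainty estimate the abstract refers to.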

APA, Harvard, Vancouver, ISO, and other styles

25

Das, Priyam, Christine B. Peterson, Kim-Anh Do, Rehan Akbani, and Veerabhadran Baladandayuthapani. "NExUS: Bayesian simultaneous network estimation across unequal sample sizes." Bioinformatics 36, no. 3 (August 28, 2019): 798–804. http://dx.doi.org/10.1093/bioinformatics/btz636.

Full text

Abstract:

Abstract Motivation Network-based analyses of high-throughput genomics data provide a holistic, systems-level understanding of various biological mechanisms for a common population. However, when estimating multiple networks across heterogeneous sub-populations, varying sample sizes pose a challenge in the estimation and inference, as network differences may be driven by differences in power. We are particularly interested in addressing this challenge in the context of proteomic networks for related cancers, as the number of subjects available for rare cancer (sub-)types is often limited. Results We develop NExUS (Network Estimation across Unequal Sample sizes), a Bayesian method that enables joint learning of multiple networks while avoiding artefactual relationship between sample size and network sparsity. We demonstrate through simulations that NExUS outperforms existing network estimation methods in this context, and apply it to learn network similarity and shared pathway activity for groups of cancers with related origins represented in The Cancer Genome Atlas (TCGA) proteomic data. Availability and implementation The NExUS source code is freely available for download at https://github.com/priyamdas2/NExUS. Supplementary information Supplementary data are available at Bioinformatics online.

APA, Harvard, Vancouver, ISO, and other styles

26

Daly, Aidan C., David J. Gavaghan, Chris Holmes, and Jonathan Cooper. "Hodgkin–Huxley revisited: reparametrization and identifiability analysis of the classic action potential model with approximate Bayesian methods." Royal Society Open Science 2, no. 12 (December 2015): 150499. http://dx.doi.org/10.1098/rsos.150499.

Full text

Abstract:

As cardiac cell models become increasingly complex, a correspondingly complex ‘genealogy’ of inherited parameter values has also emerged. The result has been the loss of a direct link between model parameters and experimental data, limiting both reproducibility and the ability to re-fit to new data. We examine the ability of approximate Bayesian computation (ABC) to infer parameter distributions in the seminal action potential model of Hodgkin and Huxley, for which an immediate and documented connection to experimental results exists. The ability of ABC to produce tight posteriors around the reported values for the gating rates of sodium and potassium ion channels validates the precision of this early work, while the highly variable posteriors around certain voltage dependency parameters suggests that voltage clamp experiments alone are insufficient to constrain the full model. Despite this, Hodgkin and Huxley's estimates are shown to be competitive with those produced by ABC, and the variable behaviour of posterior parametrized models under complex voltage protocols suggests that with additional data the model could be fully constrained. This work will provide the starting point for a full identifiability analysis of commonly used cardiac models, as well as a template for informative, data-driven parametrization of newly proposed models.
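Approximate Bayesian computation itself is simple enough to sketch in a few lines. The rejection sampler below is a toy stand-in (not the Hodgkin–Huxley setting): it draws parameters from a flat prior, simulates a summary statistic, and keeps only draws whose simulated summary lands within a tolerance of the observed one.

```python
import random, statistics

random.seed(2)

# Pretend summary statistic of an experiment (e.g. a mean response level).
observed_mean = 3.0

def simulate_summary(theta, n=50):
    """Simulate data under parameter theta and return its summary."""
    return statistics.fmean(random.gauss(theta, 1.0) for _ in range(n))

accepted = []
while len(accepted) < 300:
    theta = random.uniform(0.0, 6.0)  # flat prior over the parameter
    # ABC rejection step: accept if simulated summary is close enough.
    if abs(simulate_summary(theta) - observed_mean) < 0.2:
        accepted.append(theta)

posterior_mean = statistics.fmean(accepted)
print(round(posterior_mean, 1))
```

The accepted draws approximate the posterior; tightening the tolerance trades acceptance rate for accuracy, which is the central tuning decision in ABC studies like the one above.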

APA, Harvard, Vancouver, ISO, and other styles

27

Li, Taifu, and Zhiqiang Liao. "Robust Optimization of Industrial Process Operation Parameters Based on Data-Driven Model and Parameter Fluctuation Analysis." Mathematical Problems in Engineering 2019 (October 8, 2019): 1–9. http://dx.doi.org/10.1155/2019/2474909.

Full text

Abstract:

The fluctuation of industrial process operation parameters can severely influence the production process. Finding robust optimal process operation parameters is an effective way to address this problem. In this paper, a scheme based on a data-driven model and parameter fluctuation analysis is proposed to obtain the robust optimal operation parameters of an industrial process. The data-driven modelling method, multivariate Gaussian process regression (MGPR) based on Bayesian statistical learning theory, maps the process operation parameters to objective performance, offering flexible nonparametric inference and self-adaptive hyperparameter determination. According to the minimum variance criterion, the parameter fluctuation analysis can be performed through a multiobjective evolutionary algorithm based on the MGPR model. To analyze the robustness influence of a single parameter, cross validation is applied to evaluate the model output under 2% fluctuation. After that, the robust optimal process operation parameters can be obtained and applied to guide production. The effectiveness and reliability of the proposed method have been verified on the hydrogen cyanide production process and compared with other modelling methods and a single-objective optimization method.
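Plain single-output Gaussian process regression, the building block behind MGPR, can be sketched directly from its closed-form posterior. The kernel, training data, and noise level below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# RBF kernel between two 1-D point sets.
def rbf(a, b, length=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

# Toy mapping from an "operation parameter" to "objective performance".
x_train = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_train = np.sin(x_train)
noise = 1e-4

K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
x_test = np.array([1.5])
k_star = rbf(x_test, x_train)

# GP posterior mean and variance at the test point.
alpha = np.linalg.solve(K, y_train)
mean = k_star @ alpha
var = rbf(x_test, x_test) - k_star @ np.linalg.solve(K, k_star.T)
print(round(float(mean[0]), 2))
```

The posterior variance is what makes a GP useful for the paper's fluctuation analysis: it quantifies how confident the surrogate is away from the training parameters.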

APA, Harvard, Vancouver, ISO, and other styles

28

Kenett, Ron S. "Bayesian networks: Theory, applications and sensitivity issues." Encyclopedia with Semantic Computing and Robotic Intelligence 01, no. 01 (March 2017): 1630014. http://dx.doi.org/10.1142/s2425038416300147.

Full text

Abstract:

This chapter is about an important tool in the data science workbench, Bayesian networks (BNs). Data science is about generating information from a given data set using applications of statistical methods. The quality of the information derived from data analysis is dependent on various dimensions, including the communication of results, the ability to translate results into actionable tasks and the capability to integrate various data sources [R. S. Kenett and G. Shmueli, On information quality, J. R. Stat. Soc. A 177(1), 3 (2014).] This paper demonstrates, with three examples, how the application of BNs provides a high level of information quality. It expands the treatment of BNs as a statistical tool and provides a wider scope of statistical analysis that matches current trends in data science. For more examples on deriving high information quality with BNs see [R. S. Kenett and G. Shmueli, Information Quality: The Potential of Data and Analytics to Generate Knowledge (John Wiley and Sons, 2016), www.wiley.com/go/information_quality.] The three examples used in the chapter are complementary in scope. The first example is based on expert opinion assessments of risks in the operation of health care monitoring systems in a hospital environment. The second example is from the monitoring of an open source community and is a data rich application that combines expert opinion, social network analysis and continuous operational variables. The third example is totally data driven and is based on an extensive customer satisfaction survey of airline customers. The first section is an introduction to BNs, Sec. 2 provides a theoretical background on BN. Examples are provided in Sec. 3. Section 4 discusses sensitivity analysis of BNs, Sec. 5 lists a range of software applications implementing BNs. Section 6 concludes the chapter.

APA, Harvard, Vancouver, ISO, and other styles

29

Sun, Congcong, Benjamí Parellada, Vicenç Puig, and Gabriela Cembrano. "Leak Localization in Water Distribution Networks Using Pressure and Data-Driven Classifier Approach." Water 12, no. 1 (December 21, 2019): 54. http://dx.doi.org/10.3390/w12010054.

Full text

Abstract:

Leaks in water distribution networks (WDNs) are one of the main reasons for water loss during fluid transportation. Considering the worldwide problem of water scarcity, added to the challenges that a growing population brings, minimizing water losses by detecting and localizing leaks in a timely and efficient manner using advanced techniques is an urgent need. There are numerous methods for localizing water leaks in WDNs that construct hydraulic models or analyze flow/pressure deviations between the observed data and the estimated values. However, from the application perspective, it is very practical to implement an approach which does not rely too heavily on measurements or complex models and has a reasonable computation demand. In this context, this paper presents a novel method for leak localization which uses a data-driven approach based on limited pressure measurements in WDNs, with two stages included: (1) two different machine learning classifiers based on linear discriminant analysis (LDA) and neural networks (NNET) are developed to determine the probability of each node having a leak inside a WDN; (2) Bayesian temporal reasoning is applied afterwards to rescale the probabilities of each possible leak location at each time step after a leak is detected, with the aim of improving the localization accuracy. As an initial illustration, the hypothetical benchmark Hanoi district metered area (DMA) is used as the case study to test the performance of the proposed approach. Using the fitting accuracy and average topological distance (ATD) as performance indicators, the preliminary results reach more than 80% accuracy in the best cases.

APA, Harvard, Vancouver, ISO, and other styles

30

King, Benedict. "Which morphological characters are influential in a Bayesian phylogenetic analysis? Examples from the earliest osteichthyans." Biology Letters 15, no. 7 (July 2019): 20190288. http://dx.doi.org/10.1098/rsbl.2019.0288.

Full text

Abstract:

There has been much recent debate about which method is best for reconstructing the tree of life from morphological datasets. However, little attention has been paid to which characters, if any, are responsible for topological differences between trees recovered from competing methods on empirical datasets. Indeed, a simple procedure for finding characters supporting conflicting tree topologies is available in a parsimony framework, but an equivalent procedure in a model-based framework is lacking. Here, I introduce such a procedure and apply it to the problem of the ‘psarolepid’ osteichthyans. The ‘psarolepids’, which include the earliest known osteichthyans, are weakly supported as stem osteichthyans under parsimony but strongly supported as sarcopterygians in Bayesian analysis. The Bayesian result is driven by just two characters, both of which relate to the intracranial joint of sarcopterygians. Important characters that support a stem osteichthyan affinity for ‘psarolepids’, such as the absence of tooth enamel, have virtually no effect in a Bayesian framework. This is because of a bias towards characters with relatively complete sampling, a bias that has previously been reported for molecular data. This has important implications for Bayesian analysis of morphological datasets in general, as characters from different body parts commonly have different levels of coding completeness. Methods to critically appraise character support for conflicting phylogenetic hypotheses, such as that used here, should form an important part of phylogenetic analyses.

APA, Harvard, Vancouver, ISO, and other styles

31

Cai, Chunhui, Lujia Chen, Xia Jiang, and Xinghua Lu. "Modeling Signal Transduction from Protein Phosphorylation to Gene Expression." Cancer Informatics 13s1 (January 2014): CIN.S13883. http://dx.doi.org/10.4137/cin.s13883.

Full text

Abstract:

Background Signaling networks are of great importance for us to understand the cell's regulatory mechanism. The rise of large-scale genomic and proteomic data, and prior biological knowledge has paved the way for the reconstruction and discovery of novel signaling pathways in a data-driven manner. In this study, we investigate computational methods that integrate proteomics and transcriptomic data to identify signaling pathways transmitting signals in response to specific stimuli. Such methods can be applied to cancer genomic data to infer perturbed signaling pathways. Method We proposed a novel Bayesian Network (BN) framework to integrate transcriptomic data with proteomic data reflecting protein phosphorylation states for the purpose of identifying the pathways transmitting the signal of diverse stimuli in rat and human cells. We represented the proteins and genes as nodes in a BN in which edges reflect the regulatory relationship between signaling proteins. We designed an efficient inference algorithm that incorporated the prior knowledge of pathways and searched for a network structure in a data-driven manner. Results We applied our method to infer rat and human specific networks given gene expression and proteomic datasets. We were able to effectively identify sparse signaling networks that modeled the observed transcriptomic and proteomic data. Our methods were able to identify distinct signaling pathways for rat and human cells in a data-driven manner, based on the facts that rat and human cells exhibited distinct transcriptomic and proteomics responses to a common set of stimuli. Our model performed well in the SBV IMPROVER challenge in comparison to other models addressing the same task. The capability of inferring signaling pathways in a data-driven fashion may contribute to cancer research by identifying distinct aberrations in signaling pathways underlying heterogeneous cancers subtypes.

APA, Harvard, Vancouver, ISO, and other styles

32

Nazir, Hafiza Mamona, Ijaz Hussain, Ishfaq Ahmad, Muhammad Faisal, and Ibrahim M. Almanjahie. "An improved framework to predict river flow time series data." PeerJ 7 (July 1, 2019): e7183. http://dx.doi.org/10.7717/peerj.7183.

Full text

Abstract:

Due to the non-stationary and noisy characteristics of river flow time series data, some pre-processing methods are adopted to address the multi-scale and noise complexity. In this paper, we propose an improved framework comprising Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Empirical Bayesian Threshold (CEEMDAN-EBT). The CEEMDAN-EBT is employed to decompose non-stationary river flow time series data into Intrinsic Mode Functions (IMFs). The derived IMFs are divided into two parts: noise-dominant IMFs and noise-free IMFs. Firstly, the noise-dominant IMFs are denoised using an empirical Bayesian threshold to integrate the noise and sparsity of the IMFs. Secondly, the denoised IMFs and noise-free IMFs are used as inputs in data-driven and simple stochastic models, respectively, to predict the river flow time series data. Finally, the predicted IMFs are aggregated to get the final prediction. The proposed framework is illustrated using four rivers of the Indus Basin System. The prediction performance is evaluated with Mean Square Error, Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE). Our proposed method, CEEMDAN-EBT-MM, produced the smallest MAPE for all four case studies as compared with other methods. This suggests that our proposed hybrid model can serve as an efficient tool for providing reliable predictions of non-stationary and noisy time series data to policymakers, such as for planning power generation and water resource management.

APA, Harvard, Vancouver, ISO, and other styles

33

Pirot, Guillaume, Tipaluck Krityakierne, David Ginsbourger, and Philippe Renard. "Contaminant source localization via Bayesian global optimization." Hydrology and Earth System Sciences 23, no. 1 (January 21, 2019): 351–69. http://dx.doi.org/10.5194/hess-23-351-2019.

Full text

Abstract:

Abstract. Contaminant source localization problems require efficient and robust methods that can account for geological heterogeneities and accommodate relatively small data sets of noisy observations. As realism commands hi-fidelity simulations, computation costs call for global optimization algorithms under parsimonious evaluation budgets. Bayesian optimization approaches are well adapted to such settings as they allow the exploration of parameter spaces in a principled way so as to iteratively locate the point(s) of global optimum while maintaining an approximation of the objective function with an instrumental quantification of prediction uncertainty. Here, we adapt a Bayesian optimization approach to localize a contaminant source in a discretized spatial domain. We thus demonstrate the potential of such a method for hydrogeological applications and also provide test cases for the optimization community. The localization problem is illustrated for cases where the geology is assumed to be perfectly known. Two 2-D synthetic cases that display sharp hydraulic conductivity contrasts and specific connectivity patterns are investigated. These cases generate highly nonlinear objective functions that present multiple local minima. A derivative-free global optimization algorithm relying on a Gaussian process model and on the expected improvement criterion is used to efficiently localize the point of minimum of the objective functions, which corresponds to the contaminant source location. Even though concentration measurements contain a significant level of proportional noise, the algorithm efficiently localizes the contaminant source location. The variations of the objective function are essentially driven by the geology, followed by the design of the monitoring well network. The data and scripts used to generate objective functions are shared to favor reproducible research. 
This contribution is important because the functions present multiple local minima and are inspired from a practical field application. Sharing these complex objective functions provides a source of test cases for global optimization benchmarks and should help with designing new and efficient methods to solve this type of problem.
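The expected improvement criterion mentioned above has a closed form under a Gaussian predictive distribution. The sketch below (minimization convention, illustrative numbers only) shows how a candidate location predicted to beat the incumbent minimum scores higher than one that is not expected to.

```python
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def expected_improvement(mu, sigma, best):
    """EI for minimization, given the GP's predictive mean/std at a point
    and the best (lowest) objective value observed so far."""
    if sigma <= 0:
        return max(best - mu, 0.0)
    z = (best - mu) / sigma
    return (best - mu) * norm_cdf(z) + sigma * norm_pdf(z)

best_so_far = 1.0
ei_good = expected_improvement(mu=0.5, sigma=0.2, best=best_so_far)  # promising
ei_poor = expected_improvement(mu=2.0, sigma=0.2, best=best_so_far)  # unpromising
print(ei_good > ei_poor)
```

Each Bayesian optimization iteration evaluates the objective where EI is maximal, which is how the algorithm balances exploring uncertain regions against exploiting the current minimum.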

APA, Harvard, Vancouver, ISO, and other styles

34

Rannala, Bruce. "The art and science of species delimitation." Current Zoology 61, no. 5 (October 1, 2015): 846–53. http://dx.doi.org/10.1093/czoolo/61.5.846.

Full text

Abstract:

Abstract DNA-based approaches to systematics have changed dramatically during the last two decades with the rise of DNA barcoding methods and newer multi-locus methods for species delimitation. During the last half-decade, partly driven by the new sequencing technologies, the focus has shifted to multi-locus sequence data and the identification of species within the framework of the multi-species coalescent (MSC). In this paper, I discuss model-based Bayesian methods for species delimitation that have been developed in recent years using the MSC. Several approximate methods for species delimitation (and their limitations) are also discussed. Explicit species delimitation models have the advantage of clarifying more precisely what is being delimited and what assumptions we are making in doing so. Moreover, the methods can be very powerful when applied to large multi-locus datasets and thus take full advantage of data generated using today’s technologies.

APA, Harvard, Vancouver, ISO, and other styles

35

Song, Rongjia, Lei Huang, Weiping Cui, María Óskarsdóttir, and Jan Vanthienen. "Fraud Detection of Bulk Cargo Theft in Port Using Bayesian Network Models." Applied Sciences 10, no. 3 (February 5, 2020): 1056. http://dx.doi.org/10.3390/app10031056.

Full text

Abstract:

The fraud detection of cargo theft has been a serious issue in ports for a long time. Traditional research in detecting theft risk is expert- and survey-based, which is not optimal for proactive prediction. As we move into a pervasive and ubiquitous paradigm, the implications of external environment and system behavior are continuously captured as multi-source data. Therefore, we propose a novel data-driven approach for formulating predictive models for detecting bulk cargo theft in ports. More specifically, we apply various feature-ranking methods and classification algorithms for selecting an effective feature set of relevant risk elements. Then, implicit Bayesian networks are derived with the features to graphically present the relationship with the risk elements of fraud. Thus, various binary classifiers are compared to derive a suitable predictive model, and Bayesian network performs best overall. The resulting Bayesian networks are then comparatively analyzed based on the outcomes of model validation and testing, as well as essential domain knowledge. The experimental results show that predictive models are effective, with both accuracy and recall values greater than 0.8. These predictive models are not only useful for understanding the dependency between relevant risk elements, but also for supporting the strategy optimization of risk management.

APA, Harvard, Vancouver, ISO, and other styles

36

Soldevila, Adrià, Joaquim Blesa, Rosa M. Fernandez-Canti, Sebastian Tornil-Sin, and Vicenç Puig. "Data-Driven Approach for Leak Localization in Water Distribution Networks Using Pressure Sensors and Spatial Interpolation." Water 11, no. 7 (July 19, 2019): 1500. http://dx.doi.org/10.3390/w11071500.

Full text

Abstract:

This paper presents a new data-driven method for leak localization in water distribution networks. The proposed method relies on the use of available pressure measurements in some selected internal network nodes and on the estimation of the pressure at the remaining nodes using Kriging spatial interpolation. Online leak localization is attained by comparing current pressure values with their reference values. Supported by Kriging, this comparison can be performed for all the network nodes, not only for those equipped with pressure sensors. On the one hand, reference pressure values in all nodes are obtained by applying Kriging to measurement data previously recorded under network operation without leaks. On the other hand, current pressure values at all nodes are obtained by applying Kriging to the current measured pressure values. The node that presents the maximum difference (residual) between current and reference pressure values is proposed as a leaky node candidate. Thereafter, a time horizon computation based on Bayesian reasoning is applied to consider the residual time evolution, resulting in an improved leak localization accuracy. As a data-driven approach, the proposed method does not need a hydraulic model; only historical data from normal operation is required. This is an advantage with respect to most data-driven methods that need historical data for the considered leak scenarios. Since, in practice, the obtained leak localization results will strongly depend on the number of available pressure measurements and their location, an optimal sensor placement procedure is also proposed in the paper. Three different case studies illustrate the performance of the proposed methodologies.
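The residual comparison plus Bayesian time-horizon reasoning can be sketched as a recursive update: treat each node's residual at each step as evidence and accumulate it over the horizon. The residual values and the exponential likelihood below are purely illustrative, not the paper's model.

```python
import math

nodes = ["n1", "n2", "n3"]

# Hypothetical residuals (current minus reference pressure) for three
# nodes over four time steps; the leak is near n2.
residuals = [
    [0.1, 0.8, 0.2],
    [0.2, 0.9, 0.1],
    [0.0, 0.7, 0.3],
    [0.1, 1.0, 0.2],
]

posterior = {n: 1.0 / len(nodes) for n in nodes}  # uniform prior over nodes
for step in residuals:
    # Ad hoc likelihood: larger residual -> more plausible leak location.
    likes = {n: math.exp(r) for n, r in zip(nodes, step)}
    total = sum(posterior[n] * likes[n] for n in nodes)
    posterior = {n: posterior[n] * likes[n] / total for n in nodes}

leak_node = max(posterior, key=posterior.get)
print(leak_node, round(posterior[leak_node], 2))
```

A single noisy snapshot might point at the wrong node; accumulating evidence over the time horizon is what sharpens the localization, as the abstract argues.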

APA, Harvard, Vancouver, ISO, and other styles

37

Loskutov, E. M., D. N. Mukhin, A. S. Gavrilov, J. Kurths, and A. M. Feigin. "An Empirical Study of the Critical Transition in the Pleistocene Climate Based on Nonlinear Dynamic Reconstruction." XXII workshop of the Council of nonlinear dynamics of the Russian Academy of Sciences 47, no. 1 (April 30, 2019): 83–84. http://dx.doi.org/10.29006/1564-2291.jor-2019.47(1).24.

Full text

Abstract:

The cause of the Mid-Pleistocene Transition (MPT), when the dominant periodicity of climate cycles changed from 41,000 to 100,000 years in the absence of significant change in orbital forcing, is still an open question in paleoclimatology. Here we show how Bayesian data analysis and nonlinear dynamical reconstruction methods can help to reveal the main mechanisms underlying Pleistocene variability. Our Bayesian data-driven model, built from benthic d18O records (LR04 stack), accounts for the main factors which may potentially impact the climate of the Pleistocene: internal climate dynamics, gradual trends, variations of insolation, and millennial variability. In contrast to some theories, we uncover that, under long-term trends in climate, the strong glacial cycles appeared due to internal nonlinear oscillations induced by millennial noise. We find that while the orbital Milankovitch forcing does not matter for the MPT onset, the obliquity oscillation phase-locks the climate cycles through the meridional gradient of insolation. The research was supported by the RAS Presidium Program «Nonlinear dynamics: fundamental problems and applications».

APA, Harvard, Vancouver, ISO, and other styles

38

Jun, Sunghae. "Machines Imitating Human Thinking Using Bayesian Learning and Bootstrap." Symmetry 13, no. 3 (February 27, 2021): 389. http://dx.doi.org/10.3390/sym13030389.

Full text

Abstract:

In the field of cognitive science, much research has been conducted on the diverse applications of artificial intelligence (AI). One important area of study is machines imitating human thinking. Although there are various approaches to the development of thinking machines, we assume in this paper that human thinking is not always optimal. Sometimes, humans are driven by emotions to make decisions that are not optimal. Recently, deep learning has been dominating most machine learning tasks in AI. In the area of optimal decision-making involving AI, many traditional machine learning methods are rapidly being replaced by deep learning. Therefore, because of deep learning, we can expect faster growth of AI technology such as AlphaGo in optimal decision-making. However, humans sometimes think and act not optimally but emotionally. In this paper, we propose a method for building thinking machines that imitate humans using Bayesian decision theory and learning. Bayesian statistics involves a learning process based on prior and posterior distributions. The prior represents an initial belief in a specific domain. This is updated to a posterior through the likelihood of observed data. The posterior refers to the updated belief based on observations. When new observed data are added, the current posterior is used as a new prior for the updated posterior. Bayesian learning such as this also provides an optimal decision; thus, it is not well suited to the modeling of thinking machines. Therefore, we study a new Bayesian approach to developing thinking machines using Bayesian decision theory. In our research, we do not use the single optimal value expected under the posterior; instead, we generate random values from the last updated posterior to be used by thinking machines that imitate human thinking.
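The paper's central move, acting on a random draw from the posterior rather than on its single optimal summary, can be sketched with a conjugate Beta-Bernoulli update. The model choice and data here are illustrative, not the paper's experiment.

```python
import random

random.seed(3)

# Uniform Beta(1, 1) prior belief about a success probability.
alpha, beta = 1.0, 1.0

observations = [1, 1, 0, 1, 1, 1, 0, 1]  # observed binary outcomes
for obs in observations:
    alpha += obs       # Bayesian update: prior -> posterior
    beta += 1 - obs    # the posterior becomes the next prior

# "Optimal" behaviour: act on the single posterior-mean estimate.
optimal_estimate = alpha / (alpha + beta)

# "Human-like" behaviour per the abstract: act on random posterior draws,
# so repeated decisions vary instead of always being the optimum.
humanlike_draws = [random.betavariate(alpha, beta) for _ in range(5)]
print(round(optimal_estimate, 2), [round(d, 2) for d in humanlike_draws])
```

With six successes in eight trials the posterior is Beta(7, 3), so the optimal estimate is 0.7 while each draw scatters around it, mimicking non-optimal, variable decisions.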

APA, Harvard, Vancouver, ISO, and other styles

39

Lawler, Andrew J., and Viviana Acquaviva. "Detecting episodes of star formation using Bayesian model selection." Monthly Notices of the Royal Astronomical Society 502, no. 3 (January 19, 2021): 3993–4008. http://dx.doi.org/10.1093/mnras/stab138.

Full text

Abstract:

Bayesian model comparison frameworks can be used when fitting models to data in order to infer the appropriate model complexity in a data-driven manner. We aim to use them to detect the correct number of major episodes of star formation from the analysis of the spectral energy distributions (SEDs) of galaxies, modelled after 3D-HST galaxies at z ∼ 1. Starting from the published stellar population properties of these galaxies, we use kernel density estimates to build multivariate input parameter distributions to obtain realistic simulations. We create simulated sets of spectra of varying degrees of complexity (identified by the number of parameters), and derive SED fitting results and evidence values for pairs of nested models, including the correct model as well as more simplistic ones, using the bagpipes codebase with the nested sampling algorithm multinest. We then ask the question: is it true – as expected in Bayesian model comparison frameworks – that the correct model has larger evidence? Our results indicate that the ratio of evidence values (the Bayes factor) is able to identify the correct underlying model in the vast majority of cases. The quality of the results improves primarily as a function of the total S/N in the SED. We also compare the Bayes factors obtained using the evidence to those obtained via the Savage–Dickey density ratio (SDDR), an analytic approximation that can be calculated using samples from regular Markov chain Monte Carlo methods. We show that the SDDR can satisfactorily replace a full evidence calculation provided that the sampling density is sufficient.
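The Savage–Dickey density ratio mentioned in the abstract can be sketched for a Gaussian toy model where the posterior is available in closed form. This is a hypothetical illustration, not the paper's SED-fitting setup: the nested hypothesis is theta = 0, and the Bayes factor in its favour is the posterior density at 0 divided by the prior density at 0.

```python
import math

def normal_pdf(x, mean, var):
    """Density of N(mean, var) at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def sddr_bayes_factor(ybar, n, sigma2, tau2):
    """SDDR Bayes factor BF01 for theta = 0, with prior theta ~ N(0, tau2)
    and sample mean ybar of n observations with known variance sigma2."""
    post_var = 1.0 / (1.0 / tau2 + n / sigma2)      # conjugate posterior variance
    post_mean = post_var * (n * ybar / sigma2)      # conjugate posterior mean
    return normal_pdf(0.0, post_mean, post_var) / normal_pdf(0.0, 0.0, tau2)

# Data far from 0 should favour the more complex model (BF01 << 1) ...
bf01_signal = sddr_bayes_factor(ybar=2.0, n=50, sigma2=1.0, tau2=1.0)
# ... while data at 0 should favour the simpler nested model (BF01 > 1).
bf01_null = sddr_bayes_factor(ybar=0.0, n=50, sigma2=1.0, tau2=1.0)
```

In practice the posterior density at the null value is estimated from MCMC samples rather than computed analytically, which is exactly why the SDDR is cheaper than a full evidence integral.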

APA, Harvard, Vancouver, ISO, and other styles

40

Xie, Wenyi, Xiankui Zeng, Dongwei Gui, Jichun Wu, and Dong Wang. "Modeling the Snowmelt Runoff Process of the Tizinafu River Basin, Northwest China, with GLDAS Data and Bayesian Uncertainty Analysis." Journal of Hydrometeorology 22, no. 1 (January 2021): 169–82. http://dx.doi.org/10.1175/jhm-d-20-0162.1.

Full text

Abstract:

The climate of the Tizinafu River basin is characterized by low temperature and sparse precipitation, and snow and glacier melt serve as the main water resource in this area. Modeling the snowmelt runoff process has great significance for local ecosystems and residents. The total streamflow of the Tizinafu River basin was divided into surface streamflow and baseflow. The surface streamflow was estimated using the routing model (RM) with Noah runoff data from the Global Land Data Assimilation System (GLDAS), and the parameter uncertainty of the RM was quantified through Markov chain Monte Carlo simulation. Additionally, the 10 commonly used baseflow separation methods of four categories [digital filter, hydrograph separation program (HYSEP), baseflow index, and Kalinlin methods] were used to generate the baseflow and were then evaluated by their performance in total streamflow simulation. The results demonstrated that the RM driven by GLDAS runoff data could reproduce the runoff process of the Tizinafu River basin. RM-Hl (local minimum HYSEP method) achieved the best performance in the total streamflow simulation, with Nash–Sutcliffe efficiency (NSE) coefficients of 0.82 and 0.93, relative errors of −0.40% and 10.50%, and observation inclusion ratios C of 62.07% and 68.52% for the calibration and verification periods, respectively. The local minimum HYSEP method was most suitable for describing the baseflow of the Tizinafu River basin among the 10 baseflow separation methods. However, digital filter methods exhibited weak performance in baseflow separation.
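Of the four method categories the abstract compares, the digital filter family is the simplest to sketch. Below is a minimal one-parameter recursive filter of the Lyne–Hollick type; the filter coefficient, initialization, and flow values are illustrative assumptions, not the paper's data.

```python
def digital_filter_baseflow(streamflow, alpha=0.925):
    """Split total streamflow into baseflow by recursively filtering out
    quickflow, then subtracting it from the total."""
    # Arbitrary warm-up assumption: half of the first value is quickflow.
    quickflow = [streamflow[0] * 0.5]
    for t in range(1, len(streamflow)):
        f = (alpha * quickflow[-1]
             + 0.5 * (1 + alpha) * (streamflow[t] - streamflow[t - 1]))
        # Quickflow is physically constrained to [0, total flow].
        quickflow.append(min(max(f, 0.0), streamflow[t]))
    return [q - f for q, f in zip(streamflow, quickflow)]

# A toy hydrograph: a storm peak followed by recession (arbitrary units).
flows = [5.0, 20.0, 60.0, 35.0, 18.0, 10.0, 7.0, 6.0]
baseflow = digital_filter_baseflow(flows)
```

The clipping step guarantees the separated baseflow never exceeds the observed total, which is the basic sanity check applied to all such filters.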

APA, Harvard, Vancouver, ISO, and other styles

41

Dahmen, Jessamyn, and Diane J. Cook. "Indirectly Supervised Anomaly Detection of Clinically Meaningful Health Events from Smart Home Data." ACM Transactions on Intelligent Systems and Technology 12, no. 2 (March 2021): 1–18. http://dx.doi.org/10.1145/3439870.

Full text

Abstract:

Anomaly detection techniques can extract a wealth of information about unusual events. Unfortunately, these methods yield an abundance of findings that are not of interest, obscuring relevant anomalies. In this work, we improve upon traditional anomaly detection methods by introducing Isudra, an Indirectly Supervised Detector of Relevant Anomalies from time series data. Isudra employs Bayesian optimization to select time scales, features, base detector algorithms, and algorithm hyperparameters that increase true positive and decrease false positive detection. This optimization is driven by a small number of example anomalies, yielding an indirectly supervised approach to anomaly detection. Additionally, we enhance the approach by introducing a warm-start method that reduces optimization time between similar problems. We validate the feasibility of Isudra to detect clinically relevant behavior anomalies from over 2M sensor readings collected in five smart homes, reflecting 26 health events. Results indicate that indirectly supervised anomaly detection outperforms both supervised and unsupervised algorithms at detecting instances of health-related anomalies such as falls, nocturia, depression, and weakness.

APA, Harvard, Vancouver, ISO, and other styles

42

Amene, E., L. A. Hanson, E. A. Zahn, S. R. Wild, and D. Döpfer. "Variable selection and regression analysis for the prediction of mortality rates associated with foodborne diseases." Epidemiology and Infection 144, no. 9 (January 20, 2016): 1959–73. http://dx.doi.org/10.1017/s0950268815003234.

Full text

Abstract:

The purpose of this study was to apply a novel statistical method for variable selection and a model-based approach for filling data gaps in mortality rates associated with foodborne diseases using the WHO Vital Registration mortality dataset. Correlation analysis and elastic net regularization methods were applied to drop redundant variables and to select the most meaningful subset of predictors. Whenever predictor data were missing, multiple imputation was used to fill in plausible values. Cluster analysis was applied to identify similar groups of countries based on the values of the predictors. Finally, a Bayesian hierarchical regression model was fit to the final dataset for predicting mortality rates. From 113 potential predictors, 32 were retained after correlation analysis. Out of these 32 predictors, eight with non-zero coefficients were selected using the elastic net regularization method. Based on the values of these variables, four clusters of countries were identified. The uncertainty of predictions was large for countries within clusters lacking mortality rates, and it was low for a cluster that had mortality rate information. Our results demonstrated that, using Bayesian hierarchical regression models, a data-driven clustering of countries and a meaningful subset of predictors can be used to fill data gaps in foodborne disease mortality.
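The elastic net step used above to shrink a large predictor set down to a few non-zero coefficients can be sketched with proximal gradient descent on synthetic data. This is a minimal illustration of the penalty, not the paper's analysis; the data, penalty weights, and solver settings are all assumed.

```python
import numpy as np

def elastic_net(X, y, lam=5.0, l1_ratio=0.5, n_iter=500):
    """Minimize 0.5*||y - Xb||^2 + lam*(l1_ratio*||b||_1
    + 0.5*(1 - l1_ratio)*||b||_2^2) by proximal gradient (ISTA)."""
    _, p = X.shape
    # Step size from the Lipschitz constant of the smooth part.
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 + lam * (1 - l1_ratio))
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) + lam * (1 - l1_ratio) * b  # smooth terms
        z = b - step * grad
        thr = step * lam * l1_ratio
        b = np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)    # l1 prox (soft-threshold)
    return b

# Synthetic data: 10 candidate predictors, only the first 3 matter.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
true_b = np.zeros(10)
true_b[:3] = [3.0, -2.0, 1.5]
y = X @ true_b + 0.1 * rng.standard_normal(100)
b_hat = elastic_net(X, y)
```

The soft-thresholding step is what sets irrelevant coefficients exactly to zero, mirroring how the study reduced 32 candidates to eight predictors.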

APA, Harvard, Vancouver, ISO, and other styles

43

Boulet, Sandrine, Moreno Ursino, Peter Thall, Bruno Landi, Céline Lepère, Simon Pernot, Anita Burgun, et al. "Integration of elicited expert information via a power prior in Bayesian variable selection: Application to colon cancer data." Statistical Methods in Medical Research 29, no. 2 (April 9, 2019): 541–67. http://dx.doi.org/10.1177/0962280219841082.

Full text

Abstract:

Background: Building tools to support personalized medicine requires modeling medical decision-making. For this purpose, both expert knowledge and real-world data provide rich sources of information. Machine learning techniques are currently being developed to select variables relevant to decision-making. Rather than relying on data-driven analysis alone, eliciting prior information from physicians about their medical decision-making processes can aid variable selection. Our setting is electronic health records data on repeated dose adjustment of irinotecan for the treatment of metastatic colorectal cancer. We propose a method that incorporates elicited expert weights on the variables involved in dose-reduction decisions into Stochastic Search Variable Selection (SSVS), a Bayesian variable selection method, via a power prior. Methods: Clinician experts were first asked to provide numerical clinical relevance weights expressing their beliefs about the importance of each variable in their medical decision-making. We then modeled the link between repeated dose reduction, patient characteristics, and toxicities with a logistic mixed-effects model. Simulated data were generated from the elicited weights and combined with the observed dose-reduction data via a power prior. We compared the performance of the power prior-based SSVS to the usual SSVS in our case study, including a sensitivity analysis over the power prior parameter. Results: The selected variables differ when using only expert knowledge, only the usual SSVS, or both combined. Our method can select rare variables that would be missed using the observed data alone and discard variables that appear relevant in the data but not from the expert perspective. Conclusion: We introduce an innovative Bayesian variable selection method that adaptively combines elicited expert information and real-world data, selecting a set of variables relevant to modeling the medical decision process.
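The power prior mechanism in the abstract can be illustrated with a minimal conjugate sketch that is not the paper's actual model: when the expert pseudo-data's Binomial likelihood is raised to a power a0 in [0, 1], the posterior simply adds the expert pseudo-counts scaled by a0. All counts below are made up for illustration.

```python
def power_prior_posterior(a, b, obs_s, obs_f, expert_s, expert_f, a0):
    """Beta(a, b) prior, observed Binomial counts, and expert pseudo-counts
    whose likelihood is raised to the power a0: the posterior is
    Beta(a + obs_s + a0*expert_s, b + obs_f + a0*expert_f)."""
    return (a + obs_s + a0 * expert_s,
            b + obs_f + a0 * expert_f)

# a0 = 0 ignores the expert information; a0 = 1 pools it fully;
# intermediate values discount it, as in the paper's sensitivity analysis.
ignore = power_prior_posterior(1, 1, 10, 10, 40, 0, a0=0.0)
pooled = power_prior_posterior(1, 1, 10, 10, 40, 0, a0=1.0)
partial = power_prior_posterior(1, 1, 10, 10, 40, 0, a0=0.3)
```

The single parameter a0 thus arbitrates between purely data-driven and expert-dominated selection, which is the trade-off the sensitivity analysis explores.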

APA, Harvard, Vancouver, ISO, and other styles

44

Carlisle, Aaron B., Kenneth J. Goldman, Steven Y. Litvin, Daniel J. Madigan, Jennifer S. Bigman, Alan M. Swithenbank, Thomas C. Kline, and Barbara A. Block. "Stable isotope analysis of vertebrae reveals ontogenetic changes in habitat in an endothermic pelagic shark." Proceedings of the Royal Society B: Biological Sciences 282, no. 1799 (January 22, 2015): 20141446. http://dx.doi.org/10.1098/rspb.2014.1446.

Full text

Abstract:

Ontogenetic changes in habitat are driven by shifting life-history requirements and play an important role in population dynamics. However, large portions of the life history of many pelagic species are still poorly understood or unknown. We used a novel combination of stable isotope analysis of vertebral annuli, Bayesian mixing models, isoscapes and electronic tag data to reconstruct ontogenetic patterns of habitat and resource use in a pelagic apex predator, the salmon shark (Lamna ditropis). Results identified the North Pacific Transition Zone as the major nursery area for salmon sharks and revealed an ontogenetic shift around the age of maturity from oceanic to increased use of neritic habitats. The nursery habitat may reflect trade-offs between prey availability, predation pressure and thermal constraints on juvenile endothermic sharks. The ontogenetic shift in habitat coincided with a reduction of isotopic niche, possibly reflecting specialization upon particular prey or habitats. Using tagging data to inform Bayesian isotopic mixing models revealed that adult sharks primarily use neritic habitats of Alaska yet receive a trophic subsidy from oceanic habitats. Integrating the multiple methods used here provides a powerful approach to retrospectively study the ecology and life history of migratory species throughout their ontogeny.
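The deterministic core of the Bayesian mixing models used above is a mass-balance equation: a consumer's isotope value is a proportion-weighted average of the source values. The two-source sketch below omits trophic fractionation and uncertainty, and the source values are illustrative, not the paper's data.

```python
def two_source_proportion(delta_mix, delta_a, delta_b):
    """Solve delta_mix = p*delta_a + (1 - p)*delta_b for the
    proportion p of source A in the consumer's diet/habitat use."""
    return (delta_mix - delta_b) / (delta_a - delta_b)

# Hypothetical example: neritic source at -14 permil, oceanic at -19.
p_neritic = two_source_proportion(delta_mix=-15.0, delta_a=-14.0, delta_b=-19.0)
```

Bayesian mixing models place priors on p and propagate source and fractionation uncertainty, which is how the tagging data can inform the isotope-based estimates.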

APA, Harvard, Vancouver, ISO, and other styles

45

Galagali, Nikhil, and Youssef M. Marzouk. "Exploiting network topology for large-scale inference of nonlinear reaction models." Journal of The Royal Society Interface 16, no. 152 (March 2019): 20180766. http://dx.doi.org/10.1098/rsif.2018.0766.

Full text

Abstract:

The development of chemical reaction models aids understanding and prediction in areas ranging from biology to electrochemistry and combustion. A systematic approach to building reaction network models uses observational data not only to estimate unknown parameters but also to learn model structure. Bayesian inference provides a natural approach to this data-driven construction of models. Yet traditional Bayesian model inference methodologies that numerically evaluate the evidence for each model are often infeasible for nonlinear reaction network inference, as the number of plausible models can be combinatorially large. Alternative approaches based on model-space sampling can enable large-scale network inference, but their realization presents many challenges. In this paper, we present new computational methods that make large-scale nonlinear network inference tractable. First, we exploit the topology of networks describing potential interactions among chemical species to design improved ‘between-model’ proposals for reversible-jump Markov chain Monte Carlo. Second, we introduce a sensitivity-based determination of move types which, when combined with network-aware proposals, yields significant additional gains in sampling performance. These algorithms are demonstrated on inference problems drawn from systems biology, with nonlinear differential equation models of species interactions.

APA, Harvard, Vancouver, ISO, and other styles

46

Becker, Keith, Jim Sprigg, and Alex Cosmas. "Estimating individual promotional campaign impacts through Bayesian inference." Journal of Consumer Marketing 31, no. 6/7 (November 4, 2014): 541–52. http://dx.doi.org/10.1108/jcm-06-2014-1006.

Full text

Abstract:

Purpose – The purpose of this paper is to estimate individual promotional campaign impacts through Bayesian inference. Conventional statistics have worked well for analyzing the impact of direct marketing promotions on purchase behavior. However, many modern marketing programs must drive multiple purchase objectives, requiring more precise arbitration between multiple offers and collection of more data with which to differentiate individuals. This often results in datasets that are highly dimensional, yet also sparse, straining the power of statistical methods to properly estimate the effect of promotional treatments. Design/methodology/approach – Improvements in computing power have enabled new techniques for predicting individual behavior. This work investigates a probabilistic machine-learned Bayesian approach to predict individual impacts driven by promotional campaign offers for a leading global travel and hospitality chain. Comparisons were made to a linear regression, representative of the current state of practice. Findings – The findings of this work focus on comparing a machine-learned Bayesian approach with linear regression (representative of the current state of practice among industry practitioners) in the analysis of a promotional campaign across three key areas: highly dimensional data, sparse data and likelihood matching. Research limitations/implications – Because the findings are based on a single campaign, future work includes generalizing results across multiple promotional campaigns. Also of interest for future work are comparisons of the technique developed here with other techniques from academia. Practical implications – Because the Bayesian approach allows estimation of the influence of the promotion for each hypothetical customer's set of promotional attributes, even when no exact look-alikes exist in the control group, a number of possible applications exist. These include optimal campaign design (given the ability to estimate the promotional attributes that are likely to drive the greatest incremental spend in a hypothetical deployment) and operationalizing efficient audience selection given the model's individualized estimates, reducing the risk of marketing overcommunication, which can prompt costly unsubscriptions. Originality/value – The original contribution is the application of machine learning to Bayesian belief network construction in the context of analyzing a multi-channel promotional campaign's impact on individual customers. This is of value to practitioners seeking alternatives for campaign analysis in applications where more commonly used models are not well suited, such as the three key areas this paper highlights: highly dimensional data, sparse data and likelihood matching.
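A toy sketch of the kind of incremental-impact question the paper studies — not its machine-learned Bayesian belief network — is to place Beta posteriors over treated and control response rates and estimate the probability of lift by Monte Carlo. All counts below are invented.

```python
import random

random.seed(42)

def posterior_samples(successes, trials, n=10_000):
    """Draws from the Beta(1 + s, 1 + f) posterior under a uniform prior."""
    return [random.betavariate(1 + successes, 1 + trials - successes)
            for _ in range(n)]

treated = posterior_samples(120, 1000)   # 12% response with the offer
control = posterior_samples(80, 1000)    # 8% response without it

# Probability the promotion lifted response, and the expected uplift.
prob_lift = sum(t > c for t, c in zip(treated, control)) / len(treated)
mean_uplift = sum(t - c for t, c in zip(treated, control)) / len(treated)
```

Unlike a single regression coefficient, the posterior draws give a full distribution over the uplift, which is what enables per-customer audience-selection decisions of the kind described above.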

APA, Harvard, Vancouver, ISO, and other styles

47

Pan, Yuangang, Ivor W. Tsang, Yueming Lyu, Avinash K. Singh, and Chin-Teng Lin. "Online Mental Fatigue Monitoring via Indirect Brain Dynamics Evaluation." Neural Computation 33, no. 6 (May 13, 2021): 1616–55. http://dx.doi.org/10.1162/neco_a_01382.

Full text

Abstract:

Driver mental fatigue leads to thousands of traffic accidents. The increasing quality and availability of low-cost electroencephalogram (EEG) systems offer possibilities for practical fatigue monitoring. However, non-data-driven methods, designed for practical, complex situations, usually rely on handcrafted data statistics of EEG signals. To reduce human involvement, we introduce a data-driven methodology for online mental fatigue detection: self-weight ordinal regression (SWORE). Reaction time (RT), referring to the length of time people take to react to an emergency, is widely considered an objective behavioral measure for mental fatigue state. Since regression methods are sensitive to extreme RTs, we propose an indirect RT estimation based on preferences to explore the relationship between EEG and RT, which generalizes to any scenario where an objective fatigue indicator is available. In particular, SWORE evaluates the noisy EEG signals from multiple channels in terms of two states: shaking state and steady state. Modeling the shaking state can discriminate the reliable channels from the uninformative ones, while modeling the steady state can suppress the task-nonrelevant fluctuation within each channel. In addition, an online generalized Bayesian moment matching (online GBMM) algorithm is proposed to online-calibrate SWORE efficiently per participant. Experimental results with 40 participants show that SWORE achieves predictions consistent with RT, demonstrating the feasibility and adaptability of our proposed framework for practical mental fatigue estimation.

APA, Harvard, Vancouver, ISO, and other styles

48

Pal, Ratnabali, Arif Ahmed Sekh, Samarjit Kar, and Dilip K. Prasad. "Neural Network Based Country Wise Risk Prediction of COVID-19." Applied Sciences 10, no. 18 (September 16, 2020): 6448. http://dx.doi.org/10.3390/app10186448.

Full text

Abstract:

The recent worldwide outbreak of the novel coronavirus (COVID-19) has opened up new challenges to the research community. Artificial intelligence (AI) driven methods can be useful to predict the parameters, risks, and effects of such an epidemic. Such predictions can be helpful in controlling and preventing the spread of such diseases. The main challenges of applying AI are the small volume of data and its uncertain nature. Here, we propose a shallow long short-term memory (LSTM) based neural network to predict the risk category of a country. We have used a Bayesian optimization framework to optimize and automatically design country-specific networks. The results show that the proposed pipeline outperforms state-of-the-art methods for data of 180 countries and can be a useful tool for such risk categorization. We have also experimented with the trend data and weather data combined for the prediction, and the outcome shows that the weather does not have a significant role. The tool can be used to predict the long-duration outbreak of such an epidemic so that preventive steps can be taken earlier.

APA, Harvard, Vancouver, ISO, and other styles

49

Li, Qiaofeng, and Qiuhai Lu. "Time domain force identification based on adaptive ℓq regularization." Journal of Vibration and Control 24, no. 23 (March 19, 2018): 5610–26. http://dx.doi.org/10.1177/1077546318761968.

Full text

Abstract:

Traditional time domain force identification methods require prior knowledge about the force profile to apply the appropriate regularization term. Generally speaking, ℓ1 and ℓ2 regularization are applied for sparse-type and continuous-type forces, respectively. However, prior knowledge about the force type may be unavailable in engineering practice. It is then necessary to incorporate the determination of q (as in ℓq regularization) into the identification process. In this paper, we propose two methods to address the problem: the joint and marginal posterior modes of the force history. The identification problem is formulated within the Bayesian framework. The force history, precision parameters, and q are all treated as unknown random parameters, and estimated based on vibration measurements only. The proposed methods are numerically validated on a mass–spring system and an engineering-scale support structure, and experimentally validated on a cantilever beam. It is shown that, by determining q in a data-driven manner, the proposed methods adapt to the force profile and consistently provide satisfactory results.
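Why the choice of q matters can be seen from the proximal operators of the two classical penalties the abstract contrasts: the ℓ1 operator (soft-thresholding) zeroes small coefficients, suiting sparse impact-type forces, while the ℓ2 operator only shrinks them, suiting continuous forces. The values below are illustrative, not from the paper.

```python
def prox_l1(x, lam):
    """Proximal operator of lam*|x|: soft-thresholding."""
    return max(abs(x) - lam, 0.0) * (1 if x >= 0 else -1)

def prox_l2(x, lam):
    """Proximal operator of 0.5*lam*x^2: uniform shrinkage."""
    return x / (1.0 + lam)

coeffs = [0.05, -0.08, 2.0]                  # two small entries, one large
sparse = [prox_l1(c, 0.1) for c in coeffs]   # small entries become exactly 0
smooth = [prox_l2(c, 0.1) for c in coeffs]   # every entry merely shrinks
```

An adaptive ℓq scheme effectively interpolates between these two behaviors, which is why inferring q from the data removes the need to know the force type in advance.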

APA, Harvard, Vancouver, ISO, and other styles

50

Li, Fayin, and Harry Wechsler. "Face Authentication Using Recognition-by-Parts, Boosting and Transduction." International Journal of Pattern Recognition and Artificial Intelligence 23, no. 3 (May 2009): 545–73. http://dx.doi.org/10.1142/s0218001409007193.

Full text

Abstract:

The paper describes an integrated recognition-by-parts architecture for reliable and robust face recognition. Reliability and robustness are characteristic of the ability to deploy full-fledged and operational biometric engines, and handling adverse image conditions that include among others uncooperative subjects, occlusion, and temporal variability, respectively. The architecture proposed is model-free and non-parametric. The conceptual framework draws support from discriminative methods using likelihood ratios. At the conceptual level it links forensics and biometrics, while at the implementation level it links the Bayesian framework and statistical learning theory (SLT). Layered categorization starts with face detection using implicit rather than explicit segmentation. It proceeds with face authentication that involves feature selection of local patch instances including dimensionality reduction, exemplar-based clustering of patches into parts, and data fusion for matching using boosting driven by parts that play the role of weak-learners. Face authentication shares the same implementation with face detection. The implementation, driven by transduction, employs proximity and typicality (ranking) realized using strangeness and p-values, respectively. The feasibility and reliability of the proposed architecture are illustrated using FRGC data. The paper concludes with suggestions for augmenting and enhancing the scope and utility of the proposed architecture.

APA, Harvard, Vancouver, ISO, and other styles

