Climate change poses a major challenge to humanity, especially in its impact on agriculture, a challenge that a responsible AI should meet. In this paper, we examine a CBR system (PBI-CBR) designed to aid sustainable dairy farming by supporting grassland management through accurate crop growth prediction. As the climate changes, PBI-CBR's historical cases become less useful for predicting future grass growth. Hence, we extend PBI-CBR with data augmentation, using a counterfactual method (from XAI) to specifically handle disruptive climate events. Study 1 shows that historical extreme climate events (climate-outlier cases) tend to be used by PBI-CBR to predict grass growth during climate-disrupted periods. Study 2 shows that synthetic outliers, generated as counterfactuals on an outlier boundary, improve the predictive accuracy of PBI-CBR during the drought of 2018. This study also shows that an instance-based counterfactual method does better than a benchmark, constraint-guided method.
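
As a rough illustration of the kind of counterfactual data augmentation described above, the sketch below synthesizes new outlier-like cases by nudging ordinary cases toward their nearest climate-outlier neighbour; the function name, the interpolation step and the NumPy representation of cases are illustrative assumptions, not the PBI-CBR implementation.

```python
import numpy as np

def synthesize_outlier_cases(cases, is_outlier, n_new=50, alpha=0.5, seed=None):
    """Hypothetical sketch of instance-based counterfactual augmentation:
    move an ordinary case part of the way toward its nearest outlier case,
    creating a synthetic case near the outlier boundary."""
    rng = np.random.default_rng(seed)
    normal, outliers = cases[~is_outlier], cases[is_outlier]
    synthetic = []
    for _ in range(n_new):
        x = normal[rng.integers(len(normal))]          # a randomly chosen ordinary case
        nun = outliers[np.argmin(np.linalg.norm(outliers - x, axis=1))]  # nearest unlike neighbour
        synthetic.append(x + alpha * (nun - x))        # step toward the outlier boundary
    return np.vstack(synthetic)
```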


Related content

In machine learning, data augmentation generally refers to techniques (such as data distillation or balancing positive and negative samples) used to improve the quality of a model's training data and to enrich the dataset.
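
As a toy illustration of one of the techniques just mentioned (balancing positive and negative samples), the snippet below oversamples minority classes; it is a generic recipe, not code from any of the papers listed here.

```python
import numpy as np

def oversample_minority(X, y, seed=None):
    """Resample every class up to the size of the largest class."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=counts.max(), replace=True)
        for c in classes
    ])
    return X[idx], y[idx]
```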

Zero inflation is a common nuisance when monitoring disease progression over time. This article proposes a new observation-driven model for zero-inflated and over-dispersed count time series. The counts, given the past history of the process and available covariate information, are assumed to follow a mixture of a Poisson distribution and a distribution degenerate at zero, with a time-dependent mixing probability $\pi_t$. Since count data usually suffer from overdispersion, a Gamma distribution is used to model the excess variation, resulting in a zero-inflated Negative Binomial (NB) regression model with mean parameter $\lambda_t$. Linear predictors with autoregressive and moving average (ARMA)-type terms, covariates, seasonality, and trend are fitted to $\lambda_t$ and $\pi_t$ through canonical-link generalized linear models. Estimation is done by maximum likelihood aided by iterative algorithms such as Newton-Raphson (NR) and Expectation-Maximization (EM). Theoretical results on the consistency and asymptotic normality of the estimators are given. The proposed model is illustrated using in-depth simulation studies and a dengue data set.
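
A minimal sketch of the kind of model described, written in my own notation (the paper's exact linear predictors and parameterization may differ):

```latex
\begin{align*}
  Y_t \mid \mathcal{F}_{t-1} &\sim \pi_t\,\delta_{\{0\}} + (1-\pi_t)\,\mathrm{NB}(\lambda_t, r),\\
  \log \lambda_t &= \mathbf{x}_t^{\top}\boldsymbol{\beta}
     + \textstyle\sum_{i=1}^{p}\phi_i \log(Y_{t-i}+1)
     + \textstyle\sum_{j=1}^{q}\theta_j e_{t-j},\\
  \operatorname{logit}\pi_t &= \mathbf{z}_t^{\top}\boldsymbol{\gamma},
\end{align*}
```

where $e_t$ is a past residual-type term, $r$ is the NB dispersion parameter, and the log and logit canonical links match those stated in the abstract.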


All pandemics are local, so learning about the impacts of pandemics on public health and related societal issues at granular levels is of great interest. COVID-19 is affecting everyone around the globe, and mask wearing is one of the few precautions against it. To quantify people's perception of mask effectiveness in preventing the spread of COVID-19 for small areas, we use Understanding America Study's (UAS) survey data on COVID-19 as our primary data source. Our data analysis shows that direct survey-weighted estimates for small areas can be highly unreliable. In this paper we develop a synthetic estimation method to estimate proportions of mask effectiveness for small areas using a logistic model that combines information from multiple data sources. We select our working model using an extensive data analysis facilitated by a new variable selection criterion for survey data and benchmarking ratios. We propose a Jackknife method to estimate the variance of our proposed estimator. From our data analysis, it is evident that our proposed synthetic method outperforms the direct survey-weighted estimator with respect to commonly used evaluation measures.
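
For the variance estimation step, a generic delete-one-group jackknife looks like the sketch below; the grouping variable and estimator interface are assumptions, and the paper's Jackknife may be tailored to its survey weights.

```python
import numpy as np

def jackknife_variance(estimator, data, group_ids):
    """Delete-one-group jackknife variance of estimator(data)."""
    groups = np.unique(group_ids)
    # leave-one-group-out estimates
    loo = np.array([estimator(data[group_ids != g]) for g in groups])
    k = len(groups)
    return (k - 1) / k * np.sum((loo - loo.mean()) ** 2)
```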


Understanding predictions made by deep neural networks is notoriously difficult, but also crucial to their dissemination. Like all machine learning based methods, they are only as good as their training data and can also capture unwanted biases. While there are tools that can help us understand whether such biases exist, they do not distinguish between correlation and causation and may be ill-suited for text-based models and for reasoning about high-level language concepts. A key problem in estimating the causal effect of a concept of interest on a given model is that this estimation requires the generation of counterfactual examples, which is challenging with existing generation technology. To bridge that gap, we propose CausaLM, a framework for producing causal model explanations using counterfactual language representation models. Our approach is based on fine-tuning deep contextualized embedding models with auxiliary adversarial tasks derived from the causal graph of the problem. Concretely, we show that by carefully choosing auxiliary adversarial pre-training tasks, language representation models such as BERT can effectively learn a counterfactual representation for a given concept of interest and be used to estimate its true causal effect on model performance. A byproduct of our method is a language representation model that is unaffected by the tested concept, which can be useful in mitigating unwanted bias ingrained in the data.
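
The generic idea of an adversarial auxiliary task that pushes a concept out of a representation can be sketched with a gradient reversal layer, as below; CausaLM's actual training stages and BERT-specific heads differ, so treat this only as an illustration of the adversarial mechanism, not the paper's method.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, sign-flipped gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

class ConceptAdversary(nn.Module):
    """Adversarial head: trained to predict the concept from the encoder's
    representation, while the reversed gradient trains the encoder to hide it."""
    def __init__(self, hidden_dim, n_concept_classes):
        super().__init__()
        self.clf = nn.Linear(hidden_dim, n_concept_classes)

    def forward(self, hidden_state):
        return self.clf(GradReverse.apply(hidden_state))
```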


Estimation of heterogeneous treatment effects is an essential component of precision medicine. Model- and algorithm-based methods have been developed within the causal inference framework to achieve valid estimation and inference. Existing methods such as the A-learner, R-learner, modified covariates method (with and without efficiency augmentation), inverse propensity score weighting, and augmented inverse propensity score weighting have been proposed mostly under the squared error loss function. The performance of these methods in the presence of data irregularity and high dimensionality, such as that encountered in electronic health record (EHR) data analysis, has been less studied. In this research, we describe a general formulation that unifies many of the existing learners through a common score function. The new formulation allows the incorporation of least absolute deviation (LAD) regression and dimension reduction techniques to counter the challenges in EHR data analysis. We show that under a set of mild regularity conditions, the resulting estimator has an asymptotically normal distribution. Within this framework, we propose two specific estimators for EHR analysis based on weighted LAD with simultaneous penalties for sparsity and smoothness. Our simulation studies show that the proposed methods are more robust to outliers under various circumstances. We use these methods to assess the blood pressure-lowering effects of two commonly used antihypertensive therapies.
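
In generic form (my notation, not the paper's exact score function), a doubly penalized weighted LAD criterion of the kind mentioned above can be written as:

```latex
\[
  \hat{\boldsymbol{\beta}}
  = \arg\min_{\boldsymbol{\beta}}
    \sum_{i=1}^{n} w_i \left| \tilde{y}_i - \mathbf{x}_i^{\top}\boldsymbol{\beta} \right|
    + \lambda_1 \lVert \boldsymbol{\beta} \rVert_1
    + \lambda_2\, \boldsymbol{\beta}^{\top}\mathbf{D}^{\top}\mathbf{D}\,\boldsymbol{\beta},
\]
```

where $w_i$ are (for example, propensity-based) weights, the $\ell_1$ term induces sparsity, and the difference matrix $\mathbf{D}$ penalizes roughness to encourage smoothness.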


Wildfires have become one of the biggest natural hazards for environments worldwide. The effects of wildfires are heterogeneous, meaning that their magnitude depends on many factors such as geographical region, climate and land cover/vegetation type. Yet, which areas are more affected by these events remains unclear. Here we present a novel application of the Generalised Synthetic Control (GSC) method that enables quantification and prediction of vegetation changes due to wildfires through a time-series analysis of in situ and satellite remote sensing data. We apply this method to medium to large wildfires ($>$ 1000 acres) in California over a time span of two decades (1996--2016). The method's ability to estimate counterfactual vegetation characteristics for burned regions is explored in order to quantify abrupt system changes. We find that the GSC method is better at predicting vegetation changes than the more traditional approach of using nearby regions to assess wildfire impacts. We evaluate the GSC method by comparing its predictions of spectral vegetation indices to observations during pre-wildfire periods, and find improvements from $R^2 = 0.66$ to $R^2 = 0.93$ for the Normalised Difference Vegetation Index (NDVI), from $R^2 = 0.48$ to $R^2 = 0.81$ for the Normalised Burn Ratio (NBR), and from $R^2 = 0.49$ to $R^2 = 0.85$ for the Normalised Difference Moisture Index (NDMI). Results show greater post-fire changes in NDVI, NBR, and NDMI in regions classified as having a lower Burning Index. The GSC method also reveals that wildfire effects on vegetation can last for more than a decade post-wildfire, and that in some cases vegetation never returns to its previous cycles within our study period. Lastly, we discuss the usefulness of GSC in remote sensing analyses.
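
For reference, the three spectral indices compared above have standard band-ratio definitions; the sketch below uses Landsat-style band names and is not code from the paper.

```python
import numpy as np

def spectral_indices(nir, red, swir1, swir2, eps=1e-9):
    """NDVI, NBR and NDMI from reflectance bands (arrays of equal shape)."""
    ndvi = (nir - red) / (nir + red + eps)      # greenness
    nbr  = (nir - swir2) / (nir + swir2 + eps)  # burn severity
    ndmi = (nir - swir1) / (nir + swir1 + eps)  # canopy moisture
    return ndvi, nbr, ndmi
```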


The rapid growth of GPS technology and mobile devices has led to a massive accumulation of location data, bringing considerable benefits to individuals and society. One of the major usages of such data is travel time prediction, a typical service provided by GPS navigation devices and apps. Meanwhile, the constant collection and analysis of the individual location data also pose unprecedented privacy threats. We leverage the notion of geo-indistinguishability, an extension of differential privacy to the location privacy setting, and propose a procedure for privacy-preserving travel time prediction without collecting actual individual GPS trace data. We propose new concepts to examine the impact of geo-indistinguishability-based sanitization on the usefulness of GPS traces and provide analytical and experimental utility analysis for privacy-preserving travel time prediction. We also propose new metrics to measure the adversary error in learning individual GPS traces from the collected sanitized data. Our experiment results suggest that the proposed procedure provides travel time prediction with satisfactory accuracy at reasonably small privacy costs.
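
Geo-indistinguishability is commonly achieved with the planar Laplace mechanism of Andrés et al. (2013); a minimal sketch is given below, offered as the standard sanitizer rather than the paper's exact procedure.

```python
import numpy as np
from scipy.special import lambertw

def planar_laplace(x, y, epsilon, seed=None):
    """Perturb a location with 2D Laplace noise calibrated to epsilon."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi)                      # random direction
    p = rng.uniform(0.0, 1.0)
    # inverse CDF of the radial distribution, via the Lambert W function
    r = -(1.0 / epsilon) * (lambertw((p - 1.0) / np.e, k=-1).real + 1.0)
    return x + r * np.cos(theta), y + r * np.sin(theta)
```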


Evaluating treatment effect heterogeneity widely informs treatment decision making. At the moment, much emphasis is placed on the estimation of the conditional average treatment effect via flexible machine learning algorithms. While these methods enjoy some theoretical appeal in terms of consistency and convergence rates, they generally perform poorly in terms of uncertainty quantification. This is troubling since assessing risk is crucial for reliable decision-making in sensitive and uncertain environments. In this work, we propose a conformal inference-based approach that can produce reliable interval estimates for counterfactuals and individual treatment effects under the potential outcome framework. For completely randomized or stratified randomized experiments with perfect compliance, the intervals have guaranteed average coverage in finite samples regardless of the unknown data generating mechanism. For randomized experiments with ignorable compliance and general observational studies obeying the strong ignorability assumption, the intervals satisfy a doubly robust property which states the following: the average coverage is approximately controlled if either the propensity score or the conditional quantiles of potential outcomes can be estimated accurately. Numerical studies on both synthetic and real datasets empirically demonstrate that existing methods suffer from a significant coverage deficit even in simple models. In contrast, our methods achieve the desired coverage with reasonably short intervals.
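
To make the conformal machinery concrete, here is plain split-conformal prediction for a single outcome; the paper's approach additionally handles counterfactuals and observational data through weighting and conformalized quantile regression, which this sketch omits.

```python
import numpy as np

def split_conformal_intervals(model, X_fit, y_fit, X_cal, y_cal, X_test, alpha=0.1):
    """Symmetric split-conformal intervals from absolute-residual scores."""
    model.fit(X_fit, y_fit)
    scores = np.abs(y_cal - model.predict(X_cal))           # conformity scores
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)    # finite-sample correction
    q = np.quantile(scores, level, method="higher")
    pred = model.predict(X_test)
    return pred - q, pred + q
```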


We address the use of selfie ocular images captured with smartphones to estimate age and gender. Partial face occlusion has become an issue due to the mandatory use of face masks. Also, the use of mobile devices has exploded, with the pandemic further accelerating the migration to digital services. However, state-of-the-art solutions in related tasks such as identity or expression recognition employ large Convolutional Neural Networks, whose use in mobile devices is infeasible due to hardware limitations and size restrictions on downloadable applications. To counteract this, we adapt two existing lightweight CNNs proposed in the context of the ImageNet Challenge, and two additional architectures proposed for mobile face recognition. Since datasets for soft-biometrics prediction using selfie images are limited, we counteract over-fitting by using networks pre-trained on ImageNet. Furthermore, some networks are further pre-trained for face recognition, for which very large training databases are available. Since both tasks employ similar input data, we hypothesize that such a strategy can be beneficial for soft-biometrics estimation. A comprehensive study of the effects of different pre-training strategies on the employed architectures is carried out, showing that, in most cases, better accuracy is obtained after the networks have been fine-tuned for face recognition.
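
A minimal transfer-learning setup of the kind described (ImageNet-pretrained lightweight backbone plus small task heads) might look like the Keras sketch below; the input size, head sizes and the choice of MobileNetV2 are assumptions for illustration, not the paper's exact networks.

```python
import tensorflow as tf

def build_soft_biometrics_model(n_age_groups=8, input_shape=(224, 224, 3)):
    """ImageNet-pretrained lightweight backbone with age and gender heads."""
    backbone = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights="imagenet", pooling="avg")
    features = backbone.output
    age = tf.keras.layers.Dense(n_age_groups, activation="softmax", name="age")(features)
    gender = tf.keras.layers.Dense(1, activation="sigmoid", name="gender")(features)
    return tf.keras.Model(backbone.input, [age, gender])
```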


Machine learning plays a role in many deployed decision systems, often in ways that are difficult or impossible for human stakeholders to understand. Explaining, in a human-understandable way, the relationship between the input and output of machine learning models is essential to the development of trustworthy machine-learning-based systems. A burgeoning body of research seeks to define the goals and methods of explainability in machine learning. In this paper, we review and categorize research on counterfactual explanations, a specific class of explanation that describes how a model's output would have changed had its input been altered in a particular way. Modern approaches to counterfactual explainability in machine learning draw connections to established legal doctrine in many countries, making them appealing to fielded systems in high-impact areas such as finance and healthcare. Thus, we design a rubric with desirable properties of counterfactual explanation algorithms and comprehensively evaluate all currently proposed algorithms against that rubric. Our rubric provides easy comparison and comprehension of the advantages and disadvantages of different approaches and serves as an introduction to major research themes in this field. We also identify gaps and discuss promising research directions in the space of counterfactual explainability.
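
One of the algorithm families such a review covers is optimization-based counterfactual search in the style of Wachter et al.; a minimal sketch is below (the objective, optimizer and step count are illustrative choices, not any particular paper's settings).

```python
import torch

def counterfactual_search(model, x, y_target, lam=1.0, steps=300, lr=0.05):
    """Find x_cf close to x (L1 distance) whose prediction approaches y_target."""
    x_cf = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_cf], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = lam * (model(x_cf) - y_target).pow(2).sum() + (x_cf - x).abs().sum()
        loss.backward()
        opt.step()
    return x_cf.detach()
```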


Data augmentation has been widely used for training deep learning systems for medical image segmentation and plays an important role in obtaining robust and transformation-invariant predictions. However, it has seldom been used at test time for segmentation and has not been formulated in a consistent mathematical framework. In this paper, we first propose a theoretical formulation of test-time augmentation for deep learning in image recognition, where the prediction is obtained by estimating its expectation via Monte Carlo simulation with prior distributions of parameters in an image acquisition model that involves image transformations and noise. We then propose a novel uncertainty estimation method based on the formulated test-time augmentation. Experiments with segmentation of fetal brains and brain tumors from 2D and 3D Magnetic Resonance Images (MRI) showed that 1) our test-time augmentation outperforms a single-prediction baseline and dropout-based multiple predictions, and 2) it provides better uncertainty estimation than calculating the model-based uncertainty alone and helps to reduce overconfident incorrect predictions.
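
A stripped-down version of Monte Carlo test-time augmentation for a 2D segmentation model is sketched below; it uses only random flips, whereas the paper's acquisition model covers a richer family of transforms and noise (the predict interface here is an assumption).

```python
import numpy as np

def tta_predict(predict, image, n_samples=20, seed=None):
    """Average flip-augmented predictions; the per-pixel variance serves as uncertainty."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_samples):
        flip_h, flip_w = rng.integers(0, 2, size=2).astype(bool)
        aug = image[::-1 if flip_h else 1, ::-1 if flip_w else 1]
        out = predict(aug)                                        # per-pixel prediction map
        out = out[::-1 if flip_h else 1, ::-1 if flip_w else 1]   # undo the flip
        preds.append(out)
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.var(axis=0)
```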

Related papers
ARMA Models for Zero Inflated Count Time Series. Vurukonda Sathish, Siuli Mukhopadhyay, Rashmi Tiwari
Amir Feder, Nadav Oved, Uri Shalit, Roi Reichart
Feliu Serra-Burriel, Pedro Delicado, Andrew T. Prata, Fernando M. Cucchietti
Fang Liu, Dong Wang, Zhengquan Xu
Lihua Lei, Emmanuel J. Candès
Fernando Alonso-Fernandez, Kevin Hernandez Diaz, Silvia Ramis, Francisco J. Perales, Josef Bigun
Sahil Verma, John Dickerson, Keegan Hines
Test-time augmentation with uncertainty estimation for deep learning-based medical image segmentation. Guotai Wang, Wenqi Li, Michael Aertsen, Jan Deprest, Sebastien Ourselin, Tom Vercauteren