COVID-19, which began spreading in Iran on February 19, 2020, had infected 202,584 people and killed 9,507 by June 20, 2020. The immediate recommendation to curb the spread of the virus was to limit travel. In this study, the correlation between inter-city travel and new confirmed COVID-19 cases in Iran is demonstrated. The data used in the study consisted of daily inter-state traffic, air traffic data, and daily new confirmed COVID-19 cases. These data were used to train a regression model, and voting was applied to identify the strongest correlation between inter-city travel and new COVID-19 cases. Although the available data were coarse and contained no detail on intra-city commuting, an accuracy of 81% was achieved, showing a positive correlation between the number of inter-state trips and new COVID-19 cases. Consequently, the results suggest that one of the most effective ways to curb the spread of the virus is to limit or eliminate travel.
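As a rough illustration of the pipeline described above, the sketch below uses synthetic traffic and case counts standing in for the Iranian data and an assumed scikit-learn `VotingRegressor` as the voting step; it fits a regression of daily new cases on daily inter-state trips and reports the correlation.

```python
# Minimal sketch (not the authors' exact pipeline): hypothetical series
# "daily_trips" and "new_cases" stand in for the road/air traffic and
# case-count data described above.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor, VotingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
daily_trips = rng.integers(1_000, 50_000, size=200).astype(float)
new_cases = 0.02 * daily_trips + rng.normal(0, 150, size=200)  # synthetic stand-in

X = daily_trips.reshape(-1, 1)
X_train, X_test, y_train, y_test = train_test_split(X, new_cases, random_state=0)

# Voting over several regressors, loosely mirroring the "voting" step above.
model = VotingRegressor([
    ("lin", LinearRegression()),
    ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
])
model.fit(X_train, y_train)
print("R^2 on held-out days:", model.score(X_test, y_test))
print("Pearson r:", np.corrcoef(daily_trips, new_cases)[0, 1])
```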
Facing the worldwide coronavirus disease 2019 (COVID-19) pandemic, a new fitting method (QDF, quasi-distribution fitting), which can be used to analyze COVID-19 data, is developed based on piecewise quasi-uniform B-spline curves. For any given country or district, it fits the distribution histogram built from the daily confirmed cases (or other series, including daily recovery cases and daily fatality cases) of COVID-19 with piecewise quasi-uniform B-spline curves. After area normalization, the fitted curve can be regarded as a probability density function (PDF), whose mathematical expectation and variance can be used to analyze the state of the coronavirus pandemic. Numerical experiments on the data of several countries indicate that the QDF method captures the intrinsic characteristics of the COVID-19 data of the given country or district. Because the data interval used in this paper spans more than one year (500 days), it also reveals that after multiple waves of transmission the case fatality rate has declined markedly. The results show that, as an appraisal method, QDF is effective and feasible.
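The core computation can be sketched as follows; a standard smoothing B-spline from scipy stands in for the paper's piecewise quasi-uniform B-spline construction, and the daily-case series is synthetic.

```python
# Minimal sketch of the QDF idea under simplifying assumptions: fit a B-spline
# to a daily-case series, area-normalize it into a PDF, and compute its
# expectation and variance.
import numpy as np
from scipy.interpolate import splrep, BSpline

days = np.arange(500)
daily_cases = 1000 * np.exp(-((days - 150) / 40.0) ** 2) \
            + 1500 * np.exp(-((days - 380) / 60.0) ** 2)   # two synthetic waves

tck = splrep(days, daily_cases, s=len(days))    # smoothing B-spline fit
spline = BSpline(*tck)

grid = np.linspace(0, 499, 2000)
values = np.clip(spline(grid), 0, None)         # keep the curve non-negative
area = np.trapz(values, grid)
pdf = values / area                             # area normalization -> PDF

mean = np.trapz(grid * pdf, grid)               # mathematical expectation
var = np.trapz((grid - mean) ** 2 * pdf, grid)  # variance
print(f"expectation = {mean:.1f} days, std = {np.sqrt(var):.1f} days")
```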
The recent COVID-19 pandemic has prompted vigorous scientific activity in an effort to understand, advise, and control the pandemic. Data are now freely available at a staggering rate worldwide. Unfortunately, this unprecedented level of information comes in a variety of sources and formats, and models do not always conform to the description of the data. Health officials have recognized the need for more accurate models that can adjust to sudden changes, such as those produced by changes in behavior or social restrictions. In this work we formulate a ``SIR''-type model that is fitted concurrently with a statistical change-detection test on the data. The result is a piecewise autonomous ordinary differential equation, whose parameters change at various points in time (automatically learned from the data). The main contributions of our model are: (a) providing interpretation of the parameters, (b) determining which parameters of the model are more important in producing changes in the spread of the disease, and (c) data-driven discovery of sudden changes in the evolution of the pandemic. Together, these characteristics provide a new model that better describes the situation and thus provides better-quality information for decision making.
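A minimal sketch of a piecewise SIR integration is given below; the change points and per-regime transmission rates are hard-coded here, whereas the paper learns them from the data via the change-detection test.

```python
# Minimal sketch (assumed setup, not the authors' exact model): an SIR system
# integrated piecewise, with beta switching at fixed change points.
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    S, I, R = y
    N = S + I + R
    return [-beta * S * I / N, beta * S * I / N - gamma * I, gamma * I]

change_points = [0, 40, 120, 200]     # assumed here; data-driven in the paper
betas = [0.40, 0.15, 0.25]            # one transmission rate per regime
gamma = 0.1
y0 = [1e6 - 10, 10, 0]

segments = []
for (t0, t1), beta in zip(zip(change_points[:-1], change_points[1:]), betas):
    sol = solve_ivp(sir, (t0, t1), y0, args=(beta, gamma))
    y0 = sol.y[:, -1]                 # continue from the end state of this regime
    segments.append(sol)

print("infected at final time:", segments[-1].y[1, -1])
```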
We conducted a systematic literature review on the ethical considerations of contact tracing app technology, which was extensively implemented during the COVID-19 pandemic. The rapid and extensive use of this technology during the pandemic, while benefiting public well-being by providing information about people's mobility and movements to control the spread of the virus, raised several ethical concerns for the post-COVID-19 era. To investigate these concerns for the post-pandemic situation and provide direction for future events, we analyzed the current ethical frameworks, research, and case studies on the ethical usage of tracing app technology. The results suggest there are seven essential ethical considerations, namely privacy, security, acceptability, government surveillance, transparency, justice, and voluntariness, in the ethical use of contact tracing technology. In this paper, we explain and discuss these considerations and why they are needed for the ethical usage of this technology. The findings also highlight the importance of developing integrated guidelines and frameworks for the implementation of such technology in the post-COVID-19 world.
We develop a Bayesian spatio-temporal model to study pre-industrial grain market integration during the Finnish famine of the 1860s. Our model takes into account several problematic features often present when analysing multiple spatially interdependent time series. For example, compared with the error correction methodology commonly applied in econometrics, our approach allows simultaneous modeling of multiple interdependent time series avoiding cumbersome statistical testing needed to predetermine the market leader as a point of reference. Furthermore, introducing a flexible spatio-temporal structure enables analysing detailed regional and temporal dynamics of the market mechanisms. Applying the proposed method, we detected spatially asymmetric "price ripples" that spread out from the shock origin. We corroborated the existing literature on the speedier adjustment to emerging price differentials during the famine, but we observed this principally in urban markets. This hastened return to long-run equilibrium means faster and longer travel of price shocks, implying prolonged out-of-equilibrium dynamics, proliferated influence of market shocks, and, importantly, a wider spread of famine conditions.
Data sharing is important for accelerating scientific research, fostering business innovation, and informing individuals. Yet concerns over data privacy, cost, and the lack of secure data-sharing solutions have prevented data owners from sharing their data. To overcome these issues, several research works have proposed blockchain-based data-sharing solutions for their ability to add transparency and control to the data-sharing process. However, while models for decentralized data sharing exist, how to incentivize these structures to enable data sharing at scale remains largely unexplored. In this paper, we propose incentive mechanisms for decentralized data-sharing platforms. We use smart contracts to automate different payment options between data owners and data requesters. We discuss multiple cost-pricing scenarios for data owners to monetize their data. Moreover, we simulate the incentive mechanisms on a blockchain-based data-sharing platform. The evaluation of our simulation indicates that a cost compensation model for the data owner can rapidly cover the cost of data sharing and balance the overall incentives for all the actors in the platform.
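The cost-compensation idea can be illustrated with a toy simulation (plain Python rather than smart-contract code, with made-up fee and cost values): each data request pays a fee, and we track the day on which the owner's sharing cost is recovered.

```python
# Illustrative simulation only; the fee, cost, and request-rate values are
# assumptions, not the paper's evaluation setup.
from dataclasses import dataclass

@dataclass
class Owner:
    sharing_cost: float
    earned: float = 0.0

def simulate(owner: Owner, fee_per_request: float, requests_per_day: int, days: int) -> int:
    """Return the first day on which the owner's cost is fully compensated, or -1."""
    for day in range(1, days + 1):
        owner.earned += fee_per_request * requests_per_day
        if owner.earned >= owner.sharing_cost:
            return day
    return -1

owner = Owner(sharing_cost=500.0)
print("cost covered on day:", simulate(owner, fee_per_request=2.0, requests_per_day=30, days=90))
```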
This paper presents a model for COVID-19 in Mexico City. The data analyzed cover the period from the appearance of the first case in Mexico until July 2021. In this first approximation, the states considered were Susceptible, Infected, Hospitalized, Intensive Care Unit, Intubated, and Dead. As a consequence of the lack of coronavirus testing, the numbers of infected and dead people are underestimated, although the results obtained give a good approximation to the evolution of the pandemic in Mexico City. The model is based on a discrete-time Markov chain built on data provided by the Mexican government; the main objective is to estimate the transition probabilities from one state to another for the Mexico City case.
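A minimal sketch of the transition-probability estimation is shown below; the six states follow the abstract, while the patient trajectories are hypothetical.

```python
# Maximum-likelihood estimation of a discrete-time Markov chain's transition
# matrix by counting observed state-to-state moves (toy trajectories).
import numpy as np

states = ["S", "I", "H", "ICU", "Intubated", "D"]
idx = {s: i for i, s in enumerate(states)}

# Hypothetical per-patient trajectories (one state per time step).
trajectories = [
    ["S", "I", "I", "H", "ICU", "D"],
    ["S", "I", "I", "I", "S"],
    ["S", "I", "H", "H", "S"],
]

counts = np.zeros((len(states), len(states)))
for traj in trajectories:
    for a, b in zip(traj[:-1], traj[1:]):
        counts[idx[a], idx[b]] += 1

row_sums = counts.sum(axis=1, keepdims=True)
P = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
print(np.round(P, 2))   # estimated transition matrix
```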
Population-based optimization algorithms have provided promising results in feature selection (FS) problems. However, a main challenge is their high time complexity. Moreover, the interaction between features is another big challenge in FS problems, one that directly affects classification performance. In this paper, an estimation of distribution algorithm (EDA) is proposed to meet three goals. Firstly, as an extension of EDA, the proposed method generates only two individuals in each iteration, which compete based on a fitness function and evolve during the algorithm through our proposed update procedure. Secondly, we provide a guiding technique for determining the number of features for individuals in each iteration. As a result, the number of selected features in the final solution is optimized during the evolution process. These two advantages increase the convergence speed of the algorithm. Thirdly, as the main contribution of the paper, in addition to considering the importance of each feature alone, the proposed method considers the interaction between features. Thus, it can deal with complementary features and consequently increase classification performance. To do this, we provide a conditional probability scheme that considers the joint probability distribution of selecting two features. The introduced probabilities successfully detect correlated features. Experimental results on a synthetic dataset with correlated features demonstrate the performance of the proposed approach on these types of features. Furthermore, results on 13 real-world datasets from the UCI repository show the superiority of the proposed method in comparison with several state-of-the-art approaches.
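The following is a heavily simplified sketch of sampling a feature subset from pairwise selection probabilities; it illustrates the general idea only and does not reproduce the paper's update procedure or two-individual competition.

```python
# Toy sketch: grow a feature subset by seeding from marginal selection
# probabilities and extending via pairwise (conditional) affinities.
import numpy as np

rng = np.random.default_rng(1)
n_features, subset_size = 10, 4

marginal = np.full(n_features, 1.0 / n_features)   # P(select feature i)
joint = np.full((n_features, n_features), 1.0)     # unnormalized pairwise affinities
np.fill_diagonal(joint, 0.0)

def sample_subset():
    """Seed from the marginals, then extend via conditionals on chosen features."""
    chosen = [rng.choice(n_features, p=marginal)]
    while len(chosen) < subset_size:
        cond = joint[chosen].sum(axis=0)           # affinity to already-chosen features
        cond[chosen] = 0.0
        chosen.append(rng.choice(n_features, p=cond / cond.sum()))
    return sorted(chosen)

print(sample_subset())
```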
Generative Adversarial Networks (GANs) have shown great promise in tasks such as synthetic image generation, image inpainting, style transfer, and anomaly detection. However, generating discrete data remains a challenge. This work presents an adversarial-training-based model for correlated discrete data (CDD) generation. It also details an approach for conditional CDD generation. The results of our approach are presented on two datasets: job-seeking candidates' skill sets (a private dataset) and MNIST (a public dataset). Quantitative and qualitative analyses of these results show that our model, by leveraging the inherent correlation in the data, performs better than an existing model that overlooks correlation.
In recent years, correlation filters have shown dominant and spectacular results for visual object tracking. The types of features employed in this family of trackers significantly affect tracking performance. The ultimate goal is to utilize robust features that are invariant to any kind of appearance change of the object, while predicting the object location as accurately as in the case of no appearance change. As deep learning based methods have emerged, the study of learning features for specific tasks has accelerated. For instance, discriminative visual tracking methods based on deep architectures have been studied with promising performance. Nevertheless, correlation filter based (CFB) trackers confine themselves to pre-trained networks that were trained for the object classification problem. To this end, in this manuscript the problem of learning deep fully convolutional features for CFB visual tracking is formulated. In order to learn the proposed model, a novel and efficient backpropagation algorithm is presented based on the loss function of the network. The proposed learning framework enables the network model to be flexible for a custom design. Moreover, it alleviates the dependency on networks trained for classification. Extensive performance analysis shows the efficacy of the proposed custom design in the CFB tracking framework. By fine-tuning the convolutional parts of a state-of-the-art network and integrating this model into a CFB tracker, which is the top-performing one of VOT2016, an 18% increase is achieved in terms of expected average overlap, and tracking failures are decreased by 25%, while maintaining superiority over state-of-the-art methods on the OTB-2013 and OTB-2015 tracking datasets.
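For context, a minimal single-channel correlation filter (a MOSSE-style closed form) is sketched below; this is the generic CFB core that learned deep features would feed into, not the learning framework of the manuscript.

```python
# Minimal correlation-filter sketch: ridge-regression solution in the Fourier
# domain and a response map whose peak marks the target location.
import numpy as np

def gaussian_label(shape, sigma=2.0):
    h, w = shape
    ys, xs = np.mgrid[:h, :w]
    return np.exp(-((xs - w // 2) ** 2 + (ys - h // 2) ** 2) / (2 * sigma ** 2))

def train_filter(patch, label, lam=1e-2):
    F, G = np.fft.fft2(patch), np.fft.fft2(label)
    return G * np.conj(F) / (F * np.conj(F) + lam)   # conjugate filter, closed form

def respond(filt_conj, patch):
    return np.real(np.fft.ifft2(np.fft.fft2(patch) * filt_conj))

rng = np.random.default_rng(0)
template = rng.normal(size=(64, 64))                 # stand-in feature map
filt = train_filter(template, gaussian_label(template.shape))
response = respond(filt, template)                   # peak at the target location
print("peak at:", np.unravel_index(response.argmax(), response.shape))
```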
In this paper, we study object detection using a large pool of unlabeled images and only a few labeled images per category, a setting we name "few-example object detection". The key challenge is to generate as many trustworthy training samples as possible from the pool. Using the few training examples as seeds, our method iterates between model training and high-confidence sample selection. In training, easy samples are generated first, and then the poorly initialized model is improved. As the model becomes more discriminative, more challenging but reliable samples are selected. After that, another round of model improvement takes place. To further improve the precision and recall of the generated training samples, we embed multiple detection models in our framework, which has proven to outperform both the single-model baseline and the model-ensemble method. Experiments on PASCAL VOC'07, MS COCO'14, and ILSVRC'13 indicate that by using as few as three or four samples selected for each category, our method produces very competitive results compared to state-of-the-art weakly-supervised approaches that use a large number of image-level labels.
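The iterative training-and-selection loop can be sketched generically as follows; a toy classifier on synthetic data stands in for the detector, and the multi-model ensemble is omitted.

```python
# Minimal self-training sketch: train on a few seeds, pseudo-label the most
# confident unlabeled samples, and retrain (not the paper's detection models).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
labeled = np.concatenate([np.where(y == c)[0][:4] for c in (0, 1)])  # a few seeds
unlabeled = np.setdiff1d(np.arange(1000), labeled)

X_train, y_train = X[labeled], y[labeled]
for round_ in range(5):
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    probs = clf.predict_proba(X[unlabeled]).max(axis=1)
    confident = unlabeled[np.argsort(probs)[-50:]]        # most reliable samples
    X_train = np.vstack([X_train, X[confident]])
    y_train = np.concatenate([y_train, clf.predict(X[confident])])
    unlabeled = np.setdiff1d(unlabeled, confident)
    print(f"round {round_}: training set size = {len(y_train)}")
```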