Virtual development and prototyping have become an integral part of the field of automated driving systems (ADS). Numerous software tools are used for the virtual development of ADS. One such tool is CarMaker from IPG Automotive, which is widely used both in the scientific community and in the automotive industry. It offers a broad spectrum of implementation and modelling possibilities for the vehicle, driver behavior, control, sensors, and environmental models. For the virtual development of highly automated driving functions on the vehicle guidance level, it is essential to perceive the environment in a realistic manner. For longitudinal and lateral path guidance, lane detection sensors are necessary to determine the relevant preceding vehicle and to plan trajectories. For this purpose, a lane sensor model was developed to efficiently detect lanes in the simulation environment of CarMaker. The so-called advanced lane detection model (ALDM) is optimized with respect to calculation time and is intended for lateral and longitudinal vehicle guidance in CarMaker.
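The abstract does not describe the ALDM's interface, but lane sensor models in driving simulators commonly expose detected lane markings as low-order polynomials in the vehicle coordinate frame. The sketch below is a minimal, hypothetical Python illustration of how such an output could feed lateral guidance; the names (`LaneMarking`, `guidance_errors`) and the polynomial convention are assumptions for illustration, not CarMaker's API.

```python
from dataclasses import dataclass

@dataclass
class LaneMarking:
    """Hypothetical lane-marking output: y(x) = c0 + c1*x + c2*x**2 + c3*x**3,
    expressed in the vehicle frame (x ahead, y to the left)."""
    c0: float  # lateral offset at x = 0 [m]
    c1: float  # heading of the marking relative to the vehicle [rad, small-angle]
    c2: float  # roughly 0.5 * curvature [1/m]
    c3: float  # roughly (1/6) * curvature rate [1/m^2]

def lateral_position(m: LaneMarking, x: float) -> float:
    """Lateral position of the marking at preview distance x."""
    return m.c0 + m.c1 * x + m.c2 * x**2 + m.c3 * x**3

def guidance_errors(left: LaneMarking, right: LaneMarking, preview: float):
    """Lateral offset and heading error of the vehicle w.r.t. the lane center
    at a given preview distance -- typical inputs to a lateral controller."""
    center_offset = 0.5 * (lateral_position(left, preview) + lateral_position(right, preview))
    center_heading = 0.5 * (left.c1 + right.c1)  # small-angle approximation
    return center_offset, center_heading

# Example: vehicle slightly right of center in a gentle left curve
left = LaneMarking(c0=1.9, c1=0.01, c2=2e-4, c3=0.0)
right = LaneMarking(c0=-1.6, c1=0.01, c2=2e-4, c3=0.0)
print(guidance_errors(left, right, preview=20.0))
```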


Related content

Automator is a piece of software developed by Apple for its Mac OS X operating system. Simply by clicking and dragging with the mouse, a series of actions can be combined into a workflow, helping you carry out complex tasks automatically and repeatably. Automator can also work across many different kinds of applications, including Finder, the Safari web browser, iCal, Address Book, and others. It also works with third-party applications such as Microsoft Office, Adobe Photoshop, or Pixelmator.

We design a multi-purpose environment for autonomous UAVs offering different communication services in a variety of application contexts (e.g., wireless mobile connectivity services, edge computing, data gathering). We develop the environment, based on the OpenAI Gym framework, in order to simulate different characteristics of real operational environments, and we adopt Reinforcement Learning to generate policies that maximize some desired performance. The quality of the resulting policies is compared with a simple baseline to evaluate the system and derive guidelines for adopting this technique in different use cases. The main contribution of this paper is a flexible and extensible OpenAI Gym environment, which makes it possible to generate, evaluate, and compare policies for autonomous multi-drone systems in multi-service applications. This environment allows for comparative evaluation and benchmarking of different approaches in a variety of application contexts.
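The paper's environment itself is not reproduced here; for readers unfamiliar with the OpenAI Gym framework it builds on, the sketch below shows the general structure of a custom Gym environment (classic pre-0.26 API, where step returns a 4-tuple). The class name and the toy dynamics (a single UAV choosing which service area to serve) are hypothetical placeholders, not the authors' implementation.

```python
import gym
import numpy as np
from gym import spaces

class ToyUAVServiceEnv(gym.Env):
    """Toy single-UAV environment: at each step the UAV serves one of
    n_areas service areas; the reward is the demand it satisfies."""

    def __init__(self, n_areas: int = 4, horizon: int = 50):
        super().__init__()
        self.n_areas = n_areas
        self.horizon = horizon
        self.action_space = spaces.Discrete(n_areas)  # which area to serve
        self.observation_space = spaces.Box(0.0, 1.0, (n_areas,), dtype=np.float32)

    def reset(self):
        self.t = 0
        self.demand = np.random.uniform(0.0, 1.0, self.n_areas).astype(np.float32)
        return self.demand.copy()

    def step(self, action):
        reward = float(self.demand[action])                    # satisfied demand
        self.demand[action] = 0.0                               # area is served
        self.demand = np.clip(self.demand + 0.05, 0.0, 1.0)     # demand builds up again
        self.t += 1
        done = self.t >= self.horizon
        return self.demand.copy(), reward, done, {}

env = ToyUAVServiceEnv()
obs = env.reset()
obs, r, done, _ = env.step(env.action_space.sample())
```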


In applications such as participatory sensing and crowd sensing, self-interested agents exert costly effort towards achieving an objective for the system operator. We study such a setup where a principal incentivizes multiple agents of different types who can collude with each other to derive rent. The principal cannot observe the efforts exerted directly, but only the outcome of the task, which is a noisy function of the effort. The type of each agent influences the effort cost and task output. For a duopoly in which agents are coupled in their payments, we show that if the principal and the agents interact finitely many times, the agents can derive rent by colluding even if the principal knows the types of the agents. However, if the principal and the agents interact infinitely often, the principal can disincentivize agent collusion through a suitable data-driven contract.
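The abstract does not reproduce the model, but the setup it describes (unobserved effort, payments based only on a noisy task outcome) matches a standard hidden-action formulation. A generic version, with all notation chosen here purely for illustration rather than taken from the paper, is:

```latex
% Illustrative hidden-action (moral hazard) setup; notation not from the paper.
% Agent i of type \theta_i exerts unobserved effort e_i at cost c(e_i, \theta_i);
% the principal observes only the noisy outcome y and pays t_i(y).
\begin{align*}
  y &= g(e_1, e_2; \theta_1, \theta_2) + \varepsilon,
      \qquad \varepsilon \sim \text{noise} \\
  \text{agent } i:\quad & \max_{e_i}\; \mathbb{E}\big[\, t_i(y) \,\big] - c(e_i, \theta_i) \\
  \text{principal}:\quad & \max_{t_1, t_2}\; \mathbb{E}\big[\, V(y) - t_1(y) - t_2(y) \,\big]
      \quad \text{s.t. participation and incentive constraints.}
\end{align*}
```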


Reliable automation of the labor-intensive manual task of scoring animal sleep can facilitate the analysis of long-term sleep studies. In recent years, deep-learning-based systems, which learn optimal features from the data, have increased scoring accuracies for the classical sleep stages of Wake, REM, and Non-REM. Meanwhile, it has been recognized that the statistics of transitional stages such as pre-REM, found between Non-REM and REM, may hold additional insight into the physiology of sleep, and these stages are now under active investigation. We propose a classification system based on a simple neural network architecture that scores the classical stages as well as pre-REM sleep in mice. When restricted to the classical stages, the optimized network showed state-of-the-art classification performance with an out-of-sample F1 score of 0.95. When unrestricted, the network showed lower F1 scores on pre-REM (0.5) compared to the classical stages. This result is comparable to previous attempts to score transitional stages in other species, such as transition sleep in rats or N1 sleep in humans. Nevertheless, we observed that the sequence of predictions including pre-REM typically transitioned from Non-REM to REM, reflecting the sleep dynamics observed by human scorers. Our findings provide further evidence for the difficulty of scoring transitional sleep stages, likely because such stages of sleep are under-represented in typical data sets or show large inter-scorer variability. We further provide our source code and an online platform to run predictions with our trained network.
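For readers less familiar with the reported metric, per-class F1 scores of the kind quoted above (e.g. roughly 0.5 for pre-REM) can be computed on held-out, epoch-by-epoch predictions as in this generic scikit-learn sketch; the labels and predictions are invented for illustration, not data from the study.

```python
from sklearn.metrics import f1_score

# Hypothetical epoch-by-epoch labels on a held-out recording
stages = ["Wake", "Non-REM", "pre-REM", "REM"]
y_true = ["Wake", "Non-REM", "Non-REM", "pre-REM", "pre-REM", "REM", "REM"]
y_pred = ["Wake", "Non-REM", "pre-REM", "pre-REM", "Non-REM", "REM", "REM"]

# F1 for each stage separately
per_class = f1_score(y_true, y_pred, labels=stages, average=None)
for stage, score in zip(stages, per_class):
    print(f"{stage}: F1 = {score:.2f}")

# Single summary number across stages
print("macro F1:", f1_score(y_true, y_pred, labels=stages, average="macro"))
```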


This paper presents a decentralized and asynchronous systematic solution for multi-robot autonomous navigation in unknown, obstacle-rich scenes using only onboard resources. The planning system is formulated under a gradient-based local planning framework, where collision avoidance is achieved by formulating the collision risk as a penalty of a nonlinear optimization problem. In order to improve robustness and escape local minima, we incorporate a lightweight topological trajectory generation method. Agents then generate safe, smooth, and dynamically feasible trajectories in only several milliseconds using an unreliable trajectory sharing network. Relative localization drift among agents is corrected by using agent detection in depth images. Our method is demonstrated in both simulation and real-world experiments. The source code is released for the reference of the community.
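The abstract does not spell out the cost terms, but "collision risk as a penalty" in gradient-based local planning is commonly realized as a smooth hinge on the clearance between the trajectory and nearby obstacles. The following schematic Python function is one such formulation, with all names and the quadratic hinge chosen for illustration rather than taken from the authors' code.

```python
import numpy as np

def collision_penalty(waypoints, obstacle_dist, d_safe=0.5):
    """Soft collision cost over a discretized trajectory.

    waypoints:     (N, 3) array of trajectory points
    obstacle_dist: callable mapping a point to its distance to the nearest obstacle
                   (e.g. a Euclidean distance field built from the local map)
    d_safe:        clearance below which the penalty becomes active [m]
    """
    cost = 0.0
    for p in waypoints:
        d = obstacle_dist(p)
        if d < d_safe:
            cost += (d_safe - d) ** 2   # smooth, differentiable hinge
    return cost

# Toy usage: a single spherical obstacle at the origin with radius 0.3 m
dist = lambda p: np.linalg.norm(p) - 0.3
traj = np.array([[1.0, 0.0, 1.0], [0.4, 0.0, 1.0], [0.1, 0.0, 1.0]])
print(collision_penalty(traj, dist))
```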


Embedded systems represent a billion-dollar market and are present in most of the processors produced in the world. Embedded software interacts with external peripherals such as sensors and actuators through drivers. The strong interaction between drivers and external peripherals often hampers embedded software development, in particular the testing task, given that the physical environment may be non-deterministic and difficult to recreate during tests, and the hardware may be unavailable, undesirable to have present, or costly. To address these problems, this paper introduces a solution for testing drivers on microcontrollers, based on a method that uses three components: a Device Under Test (DUT), a Double device, and a computer. The computer runs a test orchestration code, while the DUT runs the test target code, and the Double plays the role of the real external peripherals that interact with the DUT. The proposed solution was successfully implemented and validated using different protocols.
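The paper's orchestration code is not shown here; as a rough illustration of the described three-component arrangement, a host-side test could resemble the following sketch using pyserial. The port names, message framing, and the `expect` helper are invented for this example and are not the paper's protocol.

```python
import serial  # pyserial

# Hypothetical serial links from the orchestrating computer to the two boards
dut = serial.Serial("/dev/ttyUSB0", 115200, timeout=2)      # Device Under Test
double = serial.Serial("/dev/ttyUSB1", 115200, timeout=2)   # Double (fakes the peripheral)

def expect(port, token: bytes) -> bool:
    """Read one line from a board and check that it contains the given token."""
    return token in port.readline()

# 1. Tell the Double which peripheral behaviour to emulate for this test case
double.write(b"EMULATE temperature_sensor value=25\n")
assert expect(double, b"READY")

# 2. Trigger the driver under test on the DUT and check what it read back
dut.write(b"RUN read_temperature\n")
assert expect(dut, b"RESULT 25"), "driver did not return the emulated value"

# 3. Ask the Double what bus traffic it observed, to verify the protocol itself
double.write(b"REPORT\n")
print(double.readline().decode(errors="replace"))
```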


Deep learning has been successfully applied to solve various complex problems ranging from big data analytics to computer vision and human-level control. Deep learning advances, however, have also been employed to create software that can cause threats to privacy, democracy, and national security. One of these deep-learning-powered applications that has recently emerged is the "deepfake". Deepfake algorithms can create fake images and videos that humans cannot distinguish from authentic ones. Technologies that can automatically detect and assess the integrity of digital visual media are therefore indispensable. This paper presents a survey of algorithms used to create deepfakes and, more importantly, methods proposed to detect deepfakes in the literature to date. We present extensive discussions on challenges, research trends, and directions related to deepfake technologies. By reviewing the background of deepfakes and state-of-the-art deepfake detection methods, this study provides a comprehensive overview of deepfake techniques and facilitates the development of new and more robust methods to deal with increasingly challenging deepfakes.


Safety and the reduction of road traffic accidents remain important issues for autonomous driving. Statistics show that unintended lane departure is a leading cause of motor vehicle collisions worldwide, making lane detection the most promising and challenging task for self-driving. Today, numerous groups are combining deep learning techniques with computer vision to solve self-driving problems. In this paper, a Global Convolution Networks (GCN) model is used to address both the classification and localization issues of semantic lane segmentation. A color-based segmentation approach is presented and the usability of the model is evaluated. Residual-based boundary refinement and Adam optimization are also used to achieve state-of-the-art performance. Since ordinary cars cannot afford onboard GPUs, and the training session for a particular road can be shared by several cars, we propose a framework to make this work in the real world: we build a real-time video transfer system to obtain video from the car, train the model on an edge server (which is equipped with GPUs), and send the trained model back to the car.
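The GCN architecture referenced above comes from the semantic segmentation literature; one common formulation of its large-kernel block and the residual boundary-refinement block, written here from the original GCN paper's description rather than from this paper's code, looks roughly like the following PyTorch sketch.

```python
import torch
import torch.nn as nn

class GCNBlock(nn.Module):
    """Large-kernel 'global convolution': (k x 1 then 1 x k) plus (1 x k then k x 1)."""
    def __init__(self, in_ch, out_ch, k=7):
        super().__init__()
        p = k // 2
        self.left = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, (k, 1), padding=(p, 0)),
            nn.Conv2d(out_ch, out_ch, (1, k), padding=(0, p)),
        )
        self.right = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, (1, k), padding=(0, p)),
            nn.Conv2d(out_ch, out_ch, (k, 1), padding=(p, 0)),
        )

    def forward(self, x):
        return self.left(x) + self.right(x)

class BoundaryRefinement(nn.Module):
    """Residual block that sharpens predictions near object boundaries."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

# Toy check: a 2-class lane / non-lane score map from ResNet-like features
feat = torch.randn(1, 256, 64, 64)
score = BoundaryRefinement(2)(GCNBlock(256, 2)(feat))
print(score.shape)  # torch.Size([1, 2, 64, 64])
```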


The latest deep learning methods for object detection provide remarkable performance, but have limits when used in robotic applications. One of the most relevant issues is the long training time, which is due to the large size and imbalance of the associated training sets, characterized by few positive examples and a large number of negative examples (i.e. background). Proposed approaches are based on end-to-end learning by back-propagation [22] or kernel methods trained with Hard Negative Mining on top of deep features [8]. These solutions are effective, but prohibitively slow for on-line applications. In this paper we propose a novel pipeline for object detection that overcomes this problem and provides comparable performance, with a 60x training speedup. Our pipeline combines (i) the Region Proposal Network and the deep feature extractor from [22] to efficiently select candidate RoIs and encode them into powerful representations, with (ii) the FALKON [23] algorithm, a novel kernel-based method that allows fast training on large-scale problems (millions of points). We address the size and imbalance of the training data by exploiting the stochastic subsampling intrinsic to the method and a novel, fast bootstrapping approach. We assess the effectiveness of the approach on a standard Computer Vision dataset (PASCAL VOC 2007 [5]) and demonstrate its applicability to a real robotic scenario with the iCubWorld Transformations [18] dataset.
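FALKON itself and the paper's exact bootstrapping procedure are not reproduced here; the general shape of a hard-negative bootstrapping loop over pre-computed region features is sketched below, with a generic linear classifier standing in for FALKON and all array names invented for the example.

```python
import numpy as np
from sklearn.svm import LinearSVC  # generic stand-in for the FALKON classifier

def bootstrap_hard_negatives(pos_feats, neg_feats, n_rounds=3, neg_per_round=2000):
    """Iteratively refit on positives plus the negatives the current model gets most wrong."""
    rng = np.random.default_rng(0)
    # start from a random subsample of the (huge) negative set
    active_neg = neg_feats[rng.choice(len(neg_feats), neg_per_round, replace=False)]
    clf = None
    for _ in range(n_rounds):
        X = np.vstack([pos_feats, active_neg])
        y = np.concatenate([np.ones(len(pos_feats)), np.zeros(len(active_neg))])
        clf = LinearSVC(C=1.0).fit(X, y)
        # score all negatives and keep the highest-scoring (hardest) ones
        scores = clf.decision_function(neg_feats)
        active_neg = neg_feats[np.argsort(scores)[-neg_per_round:]]
    return clf

# Toy usage with random "deep features" of dimension 128
pos = np.random.randn(200, 128) + 1.0
neg = np.random.randn(20000, 128)
model = bootstrap_hard_negatives(pos, neg)
```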


The ever-growing interest witnessed in the past few years in the acquisition and development of unmanned aerial vehicles (UAVs), commonly known as drones, has given rise to a very promising and effective technology. Because of their small size and fast deployment, UAVs have shown their effectiveness in collecting data over unreachable areas and restricted coverage zones. Moreover, their flexibly defined capacity enables them to collect information with a very high level of detail, leading to high-resolution images. UAVs originally served mainly in military scenarios; however, in the last decade, they have been broadly adopted in civilian applications as well. The task of aerial surveillance and situation awareness is usually completed by integrating intelligence, surveillance, observation, and navigation systems, all interacting in the same operational framework. To build this capability, UAVs are well-suited tools that can be equipped with a wide variety of sensors, such as cameras or radars. Deep learning has been widely recognized as a prominent approach in different computer vision applications. Specifically, one-stage and two-stage object detectors are regarded as the two most important groups of Convolutional Neural Network based object detection methods. One-stage object detectors usually outperform two-stage object detectors in speed; however, they normally trail in detection accuracy. In this study, the focal-loss-based RetinaNet, a one-stage object detector, is utilized for UAV-based object detection in order to match the speed of regular one-stage detectors while surpassing two-stage detectors in accuracy. State-of-the-art performance is shown on the UAV-captured image dataset, the Stanford Drone Dataset (SDD).
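For reference, the focal loss that RetinaNet is built around down-weights well-classified (easy) examples so that the many easy background anchors do not dominate training. A straightforward binary version of the standard formula in PyTorch is shown below; this is the published formula, not code from this paper.

```python
import torch

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    logits:  raw scores of any shape
    targets: same shape, 1.0 for foreground anchors, 0.0 for background
    """
    p = torch.sigmoid(logits)
    ce = torch.nn.functional.binary_cross_entropy_with_logits(
        logits, targets, reduction="none")           # = -log(p_t)
    p_t = p * targets + (1 - p) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

# Mostly-easy background anchors contribute little to the loss
logits = torch.tensor([4.0, -5.0, -6.0, 0.2])
targets = torch.tensor([1.0, 0.0, 0.0, 1.0])
print(focal_loss(logits, targets))
```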


Collecting training data from the physical world is usually time-consuming and even dangerous for fragile robots, and thus recent advances in robot learning advocate the use of simulators as the training platform. Unfortunately, the reality gap between synthetic and real visual data prohibits direct migration of models trained in virtual worlds to the real world. This paper proposes a modular architecture for tackling the virtual-to-real problem. The proposed architecture separates the learning model into a perception module and a control policy module, and uses semantic image segmentation as the meta representation for relating these two modules. The perception module translates the perceived RGB image to a semantic image segmentation. The control policy module is implemented as a deep reinforcement learning agent, which performs actions based on the translated image segmentation. Our architecture is evaluated in an obstacle avoidance task and a target following task. Experimental results show that our architecture significantly outperforms all of the baseline methods in both virtual and real environments, and demonstrates a faster learning curve. We also present a detailed analysis of a variety of variant configurations, and validate the transferability of our modular architecture.
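The paper's networks are not reproduced here; the modular idea (a perception network producing a semantic segmentation that a separate policy network consumes) can be sketched schematically in PyTorch as follows, with every layer size, class name, and action set chosen arbitrarily for illustration.

```python
import torch
import torch.nn as nn

class ToySegmenter(nn.Module):
    """Perception module: RGB image -> per-pixel class labels (the meta representation)."""
    def __init__(self, n_classes=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),
        )

    def forward(self, rgb):
        return self.net(rgb).argmax(dim=1)  # (B, H, W) segmentation map

class ToyPolicy(nn.Module):
    """Control module: segmentation map -> action logits (e.g. steer left / straight / right)."""
    def __init__(self, n_classes=6, n_actions=3):
        super().__init__()
        self.n_classes = n_classes
        self.head = nn.Sequential(
            nn.Conv2d(n_classes, 8, 5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, n_actions),
        )

    def forward(self, seg):
        one_hot = nn.functional.one_hot(seg, self.n_classes).permute(0, 3, 1, 2).float()
        return self.head(one_hot)

# The policy only ever sees the segmentation, never raw pixels, which is what
# lets it transfer between simulated and real camera images.
rgb = torch.rand(1, 3, 96, 96)
action_logits = ToyPolicy()(ToySegmenter()(rgb))
print(action_logits.shape)  # torch.Size([1, 3])
```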
