Popular Content

Understanding an opponent agent helps in negotiating with it. Existing work on understanding opponents focuses on preference modeling (i.e., estimating the opponent's utility function). An important but largely unexplored direction is recognizing an opponent's negotiation strategy, which captures the opponent's tactics, e.g., being tough at the beginning but conceding toward the deadline. Recognizing complex, state-of-the-art negotiation strategies is extremely challenging, and simple heuristics may not be adequate for this purpose. We propose a novel data-driven approach for recognizing an opponent's negotiation strategy. Our approach includes a data generation method in which an agent produces domain-independent sequences by negotiating with a variety of opponents across domains, a feature engineering method that represents negotiation data as time series with time-step features and overall features, and a hybrid (recurrent neural network-based) deep learning method for recognizing an opponent's strategy from the time series of bids. We perform extensive experiments, spanning four problem scenarios, to demonstrate the effectiveness of our approach.
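The feature engineering step described above can be illustrated with a minimal sketch. The function and feature choices below are assumptions for illustration (the abstract does not specify the exact features): each bid contributes a time-step feature row (here, an estimated utility of the bid plus the normalized negotiation time), and the whole negotiation is summarized by a few overall features (here, mean, spread, and net concession of the bid utilities).

```python
import numpy as np

def make_features(bid_utils, deadline):
    """Represent one negotiation as (time-step features, overall features).

    bid_utils : per-bid utility estimates for the opponent's bids, in order
                (hypothetical values; the concrete features are illustrative).
    deadline  : total number of rounds, used to normalize time.
    """
    u = np.asarray(bid_utils, dtype=float)
    t = np.arange(1, len(u) + 1) / deadline          # normalized time stamp
    step_feats = np.stack([u, t], axis=1)            # one row per bid
    overall = np.array([u.mean(), u.std(), u[-1] - u[0]])  # summary stats
    return step_feats, overall

# A toy negotiation of four bids with a deadline of 10 rounds:
steps, overall = make_features([0.9, 0.85, 0.7, 0.5], deadline=10)
```

In a pipeline like the one the abstract sketches, `step_feats` would feed the recurrent part of a hybrid network while `overall` would be concatenated with the recurrent output before classification.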


Latest Content

Multi-agent Reinforcement Learning (MARL) problems often require cooperation among agents in order to solve a task. Centralization and decentralization are two approaches used for cooperation in MARL. While fully decentralized methods are prone to converging to suboptimal solutions due to partial observability and nonstationarity, methods involving centralization suffer from scalability limitations and the lazy-agent problem. The centralized training, decentralized execution (CTDE) paradigm brings out the best of these two approaches; however, centralized training still has an upper limit of scalability, not only in the coordination performance acquired but also in model size and training time. In this work, we adopt the CTDE paradigm and investigate the generalization and transfer capacity of the trained models across a variable number of agents. This capacity is assessed by training with a variable number of agents in a specific MARL problem and then performing greedy evaluations with a variable number of agents for each training configuration. Thus, we analyze the evaluation performance for each combination of agent count at training versus evaluation. We perform experimental evaluations in predator-prey and traffic junction environments and demonstrate that it is possible to obtain similar or higher evaluation performance by training with fewer agents. We conclude that the optimal number of agents for training may differ from the target number of agents, and argue that transfer to a large number of agents can be a more efficient route to scaling up than directly increasing the number of agents during training.
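One reason a policy trained with one agent count can be evaluated with another is parameter sharing: in decentralized execution, every agent applies the same trained policy to its own local observation, so the policy is independent of the team size. The linear policy and dimensions below are a minimal sketch under assumed names, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)  # shared policy weights (e.g., trained with 3 agents)

def decentralized_act(observations, w):
    """Each agent independently applies the same shared policy to its local
    observation; the trained weights therefore run with any agent count."""
    return [int(obs @ w > 0) for obs in observations]

# Greedy evaluation with a different agent count than training:
obs5 = rng.normal(size=(5, 3))   # 5 agents, each with a 3-dim local observation
actions = decentralized_act(obs5, w)
```

The train-versus-evaluation grid the abstract describes amounts to sweeping the agent count in both phases and scoring every combination with such a shared policy.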

