wgosrich/task-allocation-testbed

Modular Testbed for Multi-Agent Task Allocation

This project creates a testbed for multi-agent task allocation strategies using a modular structure (inspired by OpenAI Gym). The main components of this structure are laid out below.

Usage

From the task-allocation directory, run python simple_experiment.py

Command line arguments can be viewed with python simple_experiment.py -h

  • Use --seed [value] to set the random number generator seed to [value]. Defaults to a completely random seed.
  • Use --controller [controller_name] and --planner [planner_name] to set the controller or planner. Defaults to simple_xyz.
  • Use --planner from_file --filename [filename] to load a plan from a file [filename].
  • Use --params [param_file_name] to use the parameter file specified by [param_file_name]. Defaults to dependency_test_params.
  • Use --n_tasks [integer] to change the number of tasks in the experiment.
  • Use --n_agents [integer] to change the number of agents in the experiment.
  • Use --n_dependencies [integer] to change the number of task dependencies in the experiment (only with random_dependency_params).
  • Use --save to save the experiment data to a .csv file in the data folder when the experiment finishes.

The simple_experiment script instantiates the testbed, assembling the modules together and running the experiment. At the top, it imports a ...params.py file. These parameter files include experiment-specific data, such as the number of agents and tasks, the locations of the agents and tasks, the dependencies between tasks, etc. The param file is then passed into the simple_environment instantiation, where simulation takes place.

The one_to_one_params file includes an experiment with 3 robots and 3 tasks. It is a one-to-one assignment test. The dependency_test_params file includes an experiment with 3 robots and 4 tasks, and one dependency (task 4 on task 1).
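
The parameter files are plain Python modules. Below is a minimal sketch of what such a file might contain, mirroring the dependency_test_params experiment; all variable names here are assumptions, not the project's actual field names.

```python
# Hypothetical parameter file in the style described above; the actual
# variable names in dependency_test_params.py may differ.

n_agents = 3
n_tasks = 4

# Initial robot positions, one entry per robot (x, y).
agent_locations = [[0.0, 0.0],
                   [1.0, 0.0],
                   [2.0, 0.0]]

# Task locations, one entry per task.
task_locations = [[0.0, 3.0],
                  [1.0, 3.0],
                  [2.0, 3.0],
                  [3.0, 3.0]]

# Dependencies as (task, prerequisite) pairs: task 4 depends on task 1,
# matching the dependency_test_params experiment described above.
dependencies = [(4, 1)]
```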

Usage with Optimal Solution

The optimal solution is computed in Matlab, using Mixed Integer Linear Programming (MILP) to find a plan for the experiment.

  1. Run an experiment with any planner, making sure to set a seed.
    • e.g. python simple_experiment.py --seed 1 --n_tasks 10
    • This creates a file matlab_inputs.mat that the MILP planner will use.
  2. Open Matlab and run milp_from_file.m.
    • This creates a file matlab_out.mat that contains the optimal plan.
  3. Re-run the experiment with the plan loaded from file: python simple_experiment.py --seed 1 --n_tasks 10 --planner from_file
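
The from_file planner presumably reads the optimal plan out of matlab_out.mat (e.g. via scipy.io.loadmat) and replays it. The pure-Python sketch below shows the shape of that handoff, with a plain dict standing in for the loaded .mat contents; the class and field names are illustrative, not the project's actual interface.

```python
# Sketch of a from_file-style planner. A dict stands in for the data
# loaded from matlab_out.mat; all names here are assumptions.

class FromFilePlanner:
    def __init__(self, plan_data):
        # plan_data: mapping with an "assignment" entry listing, for each
        # robot, the ordered task indices it should complete.
        self.assignment = plan_data["assignment"]

    def plan(self, robot_states, task_states):
        # A precomputed plan ignores the live state and simply returns
        # the stored assignment.
        return self.assignment

milp_output = {"assignment": [[0, 3], [1], [2]]}  # robot 0 does tasks 0 then 3, etc.
planner = FromFilePlanner(milp_output)
print(planner.plan(None, None))  # → [[0, 3], [1], [2]]
```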

Environments

The environment files hold simulation environments for the robot systems. They include a step function to calculate dynamics, and hold the state of the robots in the system, the state of tasks/goals and relevant constraints, and other information such as obstacle location.

  • has function init(self, params). This initializes the environment from the data in the params file (initial robot states, task locations).
  • has function step(self, action). This takes in an action: an appropriately sized numpy array of control signals, one row per robot, generated by the controller module. Using this action, the step function advances the simulation one timestep and determines the system state at the next timestep. (Note: this doesn't necessarily need to be structured as a step function; with Gazebo, for example, different functions might be easier, so this can be very flexible.)
  • has functions to keep track of the state of tasks: whether they are ready to be completed (i.e., all dependencies fulfilled) and whether they have been completed.
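
A minimal sketch of that interface, assuming single-integrator dynamics and (task, prerequisite) dependency pairs; the field names and dynamics are illustrative assumptions, not the project's actual implementation.

```python
# Toy environment matching the init/step/task-tracking interface above.
# Dynamics and naming are assumptions for illustration only.
import numpy as np

class SimpleEnvironment:
    def __init__(self, params):
        # Initialize from the params data: robot states, task locations.
        self.robot_states = np.array(params["agent_locations"], dtype=float)
        self.task_locations = np.array(params["task_locations"], dtype=float)
        self.completed = [False] * len(self.task_locations)
        # Dependencies as (task, prerequisite) index pairs.
        self.dependencies = params.get("dependencies", [])
        self.dt = params.get("dt", 0.1)

    def step(self, action):
        # action: numpy array of control signals, one row per robot.
        # Single-integrator dynamics: x_{t+1} = x_t + u * dt.
        self.robot_states += np.asarray(action, dtype=float) * self.dt
        return self.robot_states

    def task_ready(self, task):
        # A task is ready once all of its prerequisites are completed.
        return all(self.completed[pre]
                   for (t, pre) in self.dependencies if t == task)

params = {"agent_locations": [[0.0, 0.0]],
          "task_locations": [[1.0, 1.0], [2.0, 2.0]],
          "dependencies": [(1, 0)]}  # task 1 depends on task 0
env = SimpleEnvironment(params)
env.step(np.array([[1.0, 0.0]]))  # robot 0 moves 0.1 in x
```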

Planners

Planner modules compute the task allocation, through any method. They receive robot and task state information from the environment and use it to assign a set of tasks to each robot, along with an order of task completion or time windows if required by the experiment. This assignment is returned to the environment. The new planner "centralized planner6.22.2200" allocates tasks based on the C-CAPT algorithm. The user can change the number of robots as well as the number of tasks in one_to_one_params.py / dependency_test_params.py to customize the program.
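
CAPT-style assignment picks the one-to-one robot-to-task mapping that minimizes the total squared travel distance (the real planner likely solves this with the Hungarian algorithm; the brute-force stand-in below only illustrates the objective on small instances, and its function name is hypothetical).

```python
# Brute-force sketch of a CAPT-style one-to-one assignment: choose the
# robot-to-task mapping minimizing the sum of squared distances. This is
# only feasible for small n; it is not the project's actual planner.
from itertools import permutations

def capt_assignment(robots, tasks):
    # robots, tasks: equal-length lists of (x, y) positions.
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(len(tasks))):
        cost = sum((rx - tasks[j][0]) ** 2 + (ry - tasks[j][1]) ** 2
                   for (rx, ry), j in zip(robots, perm))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return list(best_perm)  # result[i] = task index assigned to robot i

robots = [(0.0, 0.0), (5.0, 0.0)]
tasks = [(5.0, 1.0), (0.0, 1.0)]
print(capt_assignment(robots, tasks))  # → [1, 0]
```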

Controllers

Controller modules take the state and goal information from the environment and compute control commands to take the robot from its initial position along a path to its goals. Controllers can be centralized, or set up to mimic a decentralized system in which robots only use local information.
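
The simplest centralized controller of this kind is proportional control driving each robot straight toward its assigned goal. A sketch under that assumption (the function name and control law are illustrative, not the project's simple_xyz controller):

```python
# Proportional controller sketch: u = gain * (goal - state) per robot.
# Returns one control row per robot, as the environment's step expects.

def proportional_control(robot_states, goals, gain=1.0):
    return [[gain * (g - x) for x, g in zip(state, goal)]
            for state, goal in zip(robot_states, goals)]

u = proportional_control([[0.0, 0.0]], [[1.0, 2.0]])
print(u)  # → [[1.0, 2.0]]
```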

Supervisors

(TODO:) Supervisor modules will wrap the controller and planner modules and perform real-time supervision of the system. They can order re-planning, change control inputs to the robots (to avoid collisions, for example), and perform task swaps, among any other required capabilities.
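
Since the module is still a TODO, here is one possible shape for such a wrapper: a supervisor that periodically orders re-planning and otherwise delegates to the controller. Everything here, including the re-planning trigger and all names, is a hypothetical sketch.

```python
# Hypothetical supervisor wrapping a planner and a controller, ordering
# a re-plan every `replan_every` timesteps. Not the project's code.

class Supervisor:
    def __init__(self, planner, controller, replan_every=50):
        self.planner = planner          # callable: (robots, tasks) -> plan
        self.controller = controller    # callable: (robots, plan) -> action
        self.replan_every = replan_every
        self.t = 0
        self.plan = None

    def act(self, robot_states, task_states):
        # Re-plan on the first call and then every replan_every steps.
        if self.plan is None or self.t % self.replan_every == 0:
            self.plan = self.planner(robot_states, task_states)
        self.t += 1
        return self.controller(robot_states, self.plan)
```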
