Introduction

This page contains videos to complement Katie Genter's PhD dissertation "Fly with Me: Algorithms and Methods for Influencing a Flock".

Chapter 3 - Leading a Stationary Flock to a Desired Orientation

Stationary Influencing Agents

Section 3.2 of the dissertation is about stationary influencing agents. In the following video, you can see examples of our FlockSim simulator with stationary influencing agents.

In this video, we use an initial flocking angle of 90 degrees, a target flocking angle of 270 degrees, various alpha (visibility cone) sizes, a maximum of 10 steps (high enough to never be reached in these examples), and a velocity of 0 for all agents. Remember that in the stationary influencing agents case, the orientations of influencing agents that are not within the flocking agent's visibility cone are arbitrary. In this video, we consider the following cases (a sketch of the visibility cone check follows the case descriptions):

First Case and Second Case: alpha = 90 degrees with two influencing agents and one flocking agent. The influencing agent within the flocking agent's visibility cone influences the flocking agent such that it becomes a border agent on the first step. The target is then reached on step two, since both influencing agents are able to influence the flocking agent on step two.

Third Case: alpha = 90 degrees with two influencing agents and one flocking agent. The influencing agent within the flocking agent's visibility cone influences the flocking agent such that it becomes a border agent on the first step. The other influencing agent is not within the visibility cone after one step, so the border agent influences the flocking agent maximally on step two. The other influencing agent is within the flocking agent's visibility cone after step two, and is able to influence the flocking agent to reach the target on step three.

Fourth Case: alpha = 120 degrees with two influencing agents and one flocking agent. The influencing agent within the flocking agent's visibility cone influences the flocking agent maximally on the first time step, such that both influencing agents are within the flocking agent's visibility cone on step two. On step two, the two influencing agents are able to influence the flocking agent to reach the target.

Fifth Case: alpha = 180 degrees with one influencing agent and one flocking agent. The influencing agent influences the flocking agent maximally on step one, and then influences the flocking agent such that it becomes a border agent on step two. Then on step three, the influencing agent influences the flocking agent to reach the target orientation.

Sixth Case: alpha = 180 degrees with two influencing agents and one flocking agent. The influencing agent within the flocking agent's visibility cone influences the flocking agent maximally on step one, and then both influencing agents influence the flocking agent to reach the target on step two.

Seventh Case: alpha = 90 degrees with two influencing agents and two flocking agents (the user clicks twice when placing the flocking agents to get two flocking agents). The influencing agent within the flocking agents' visibility cone influences the flocking agents maximally on step one, and then both influencing agents influence the flocking agents maximally on step two, and then one influencing agent influences the flocking agents to reach the target on step three.
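Throughout these cases, everything hinges on whether an influencing agent lies inside a flocking agent's visibility cone. As a rough illustration, such a check might look like the following sketch; the angular conventions and the interpretation of alpha as the full angular width of a cone centered on the flocking agent's heading are assumptions for illustration, not details taken from the dissertation.

    import math

    def in_visibility_cone(flock_pos, flock_heading_deg, other_pos, alpha_deg):
        # Bearing from the flocking agent to the other agent, in degrees.
        dx = other_pos[0] - flock_pos[0]
        dy = other_pos[1] - flock_pos[1]
        bearing = math.degrees(math.atan2(dy, dx))
        # Smallest signed angular difference between bearing and heading.
        diff = (bearing - flock_heading_deg + 180.0) % 360.0 - 180.0
        # Assumption: alpha_deg is the cone's full width, so compare to half.
        return abs(diff) <= alpha_deg / 2.0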

Non-stationary Influencing Agents

Section 3.3 of the dissertation is about non-stationary influencing agents. In the following video, you can see examples of our FlockSim simulator with non-stationary influencing agents.

In this video, we use an initial flocking angle of 90 degrees, a target flocking angle of 270 degrees, an alpha of 90 degrees, a maximum of 30 steps (high enough to never be reached in these examples), a velocity of 50 for the influencing agents, and one flocking agent and one influencing agent for each case.

In this video, we consider the following four cases (a code sketch contrasting the two behaviors follows the case descriptions). For each case, we first consider what happens when the Towards Flocking Agent behavior is used for influencing agents that are outside the flocking agent's visibility cone, and then we consider what happens when the Towards Visibility Cone behavior is used.

First Case: Both influencing agent behaviors result in the same influence on the flocking agent for this case. The influencing agent begins inside the flocking agent's visibility cone, and influences the flocking agent such that it will still be within the visibility cone for the second time step. In the second time step, the influencing agent once again influences the flocking agent such that it will still be within the visibility cone for the third time step. Finally, on the third time step the influencing agent influences the flocking agent to turn to the target orientation.

Second Case: Both influencing agent behaviors result in the same influence on the flocking agent for this case. The influencing agent begins just outside the flocking agent's visibility cone, and hence moves inside the visibility cone in the first time step (both behaviors move directly towards the visibility cone when it is reachable within one time step). Then in the second time step, the influencing agent influences the flocking agent such that the influencing agent is still within the visibility cone in time step three. In time step three, the influencing agent influences the flocking agent such that it will still be within the visibility cone for the fourth time step. Finally, on the fourth time step, the influencing agent influences the flocking agent to turn to the target orientation.

Third Case: When the Towards Flocking Agent behavior is used, the influencing agent moves for four time steps before it enters the flocking agent's visibility cone. On the fifth time step, the influencing agent influences the flocking agent such that the influencing agent is still within the visibility cone in time step six. On time step six, the influencing agent influences the flocking agent towards the target such that the influencing agent remains inside the visibility cone. Finally, on time step seven the flocking agent is influenced to reach the target orientation. When the Towards Visibility Cone behavior is used, the influencing agent moves for three time steps before it enters the flocking agent's visibility cone. On the fourth time step, the influencing agent influences the flocking agent such that the influencing agent is still within the visibility cone in time step five. On time step five, the influencing agent influences the flocking agent maximally, which results in the influencing agent not being in the visibility cone on time step six. It takes the influencing agent three time steps to reach the visibility cone again, but it then influences the flocking agent to turn to the target on the ninth time step.

Fourth Case: When the Towards Flocking Agent behavior is used, the influencing agent moves for five time steps before it enters the flocking agent's visibility cone. On the sixth and seventh time steps, the influencing agent influences the flocking agent such that the influencing agent is still within the visibility cone in time steps seven and eight. Finally, in time step eight the influencing agent influences the flocking agent to turn to the target orientation. When the Towards Visibility Cone behavior is used, the influencing agent also moves for five time steps before it enters the flocking agent's visibility cone. On the sixth time step the influencing agent influences the flocking agent to turn maximally and then moves back into the visibility cone for the seventh step. On the seventh step the influencing agent influences the flocking agent to turn such that it is still within the visibility cone. Then, on the eighth step the influencing agent influences the flocking agent to turn to the target orientation.
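The two behaviors differ only in the point the influencing agent steers toward while it is outside the visibility cone. Here is a minimal sketch of that difference, assuming straight-line motion at a fixed velocity and a precomputed nearest point on the cone boundary (the helper names and the motion model are illustrative assumptions):

    import math

    def step_towards(pos, target, velocity):
        # Move up to `velocity` units straight towards `target`.
        dx, dy = target[0] - pos[0], target[1] - pos[1]
        dist = math.hypot(dx, dy)
        if dist <= velocity:
            return target
        return (pos[0] + velocity * dx / dist, pos[1] + velocity * dy / dist)

    def next_position(behavior, inf_pos, flock_pos, cone_point, velocity):
        # `cone_point` is assumed to be the nearest point on the visibility
        # cone boundary; computing it is omitted here. If the cone is
        # reachable within one step, both behaviors head straight for it.
        if math.dist(inf_pos, cone_point) <= velocity:
            return step_towards(inf_pos, cone_point, velocity)
        if behavior == "TowardsFlockingAgent":
            return step_towards(inf_pos, flock_pos, velocity)
        return step_towards(inf_pos, cone_point, velocity)  # TowardsVisibilityCone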

Chapter 4 - Influencing a Flock to a Desired Orientation

In each of our videos, the influencing agents are pink and the flocking agents are grey. The grey box on the left shows the simulation parameters for the experiment.

Flock Behavior Without Influencing Agents

The following video shows the behavior of the flock when not being influenced by any influencing agents. In this case, each agent orients itself towards the average heading of its neighbors.

This video depicts four separate trials; each trial concludes when the flock has converged to traveling at a particular heading.
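For reference, the alignment update each flocking agent runs (orienting towards the average heading of its neighbors) could be sketched as follows; the neighborhood radius, the vector averaging, and the agent objects with pos and heading attributes are illustrative assumptions rather than FlockSim details.

    import math

    def alignment_heading(agent, agents, radius):
        # Average the headings of all agents within `radius`, as unit
        # vectors so that wraparound is handled correctly (e.g., the
        # average of 350 and 10 degrees is 0, not 180).
        sx = sy = 0.0
        for other in agents:
            if other is not agent and math.dist(agent.pos, other.pos) <= radius:
                sx += math.cos(other.heading)
                sy += math.sin(other.heading)
        if sx == 0.0 and sy == 0.0:
            return agent.heading  # no neighbors: keep current heading
        return math.atan2(sy, sx)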

Influencing Agent Behaviors

In the following videos, we show the three main types of influencing agent behaviors. Each video uses a random seed of 55 and a flock size of 100. The box below the grey simulation parameters box prints the number of steps needed for the flock to converge to the target direction (facing directly south) at the moment convergence occurs.

1-Step Lookahead Behavior (Section 4.1, 58 steps for convergence):
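The intuition behind the 1-Step Lookahead behavior is that an influencing agent adopts whichever orientation leaves its neighbors closest to the target heading after they run one alignment update. A rough sketch of that selection, assuming evenly spaced candidate orientations and simple heading averaging (this conveys the idea, not the dissertation's exact procedure):

    import math

    def one_step_lookahead(neighbor_headings, target, candidates=72):
        # neighbor_headings: one list per neighboring flocking agent,
        # holding the headings (radians) that neighbor perceives besides
        # the influencing agent's own.
        best, best_err = None, float("inf")
        for i in range(candidates):
            cand = 2.0 * math.pi * i / candidates
            err = 0.0
            for hs in neighbor_headings:
                all_hs = hs + [cand]
                avg = math.atan2(sum(math.sin(h) for h in all_hs),
                                 sum(math.cos(h) for h in all_hs))
                # Wrapped angular error between the result and the target.
                err += abs((avg - target + math.pi) % (2.0 * math.pi) - math.pi)
            if err < best_err:
                best, best_err = cand, err
        return best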

2-Step Lookahead Behavior (Section 4.2, 62 steps for convergence):

Coordinated Behavior (Section 4.3, 66 steps for convergence):

Effect of Flock Size and Influencing Agent Percentage in Behavior Experiments

In the videos below, we show one example of each of these variations in flock size and influencing agent percentage (each video uses the same random seed for initial agent placement and orientation). These videos show how these variations affect the dynamics of the agents in the environment. In each video, we use the 1-Step Lookahead behavior. The box below the grey simulation parameters box prints the number of steps needed for the flock to converge to the target direction (facing directly south) at the moment convergence occurs.

Using the experimental setup described in Section 4.4.2 (flock size = 200, influencing agent percent = 10%):

Percentage of influencing agents in the flock decreased to 5% (a smaller percentage of influencing agents in the flock can result in slower convergence):

Percentage of influencing agents in the flock increased to 20% (a larger percentage of influencing agents in the flock can result in faster convergence):

Size of the flock decreased to 100 (a smaller flock is usually more spread out initially, so convergence can occur more slowly since agents often have few neighbors):

Size of the flock increased to 300 (a larger flock is usually more compact initially, so convergence can occur more quickly since agents often have many neighbors):

Variations in Steps to Turn in the Herd Case

In Section 4.5, we present results for influencing the flock along a path.

In the videos below, we show one example for each setting of the number of steps used to turn (each video uses the same random seed for initial agent placement and orientation). These videos show how this setting affects the path along which the flock is influenced to travel. In each video, we use the 1-Step Lookahead behavior.
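One plausible way to picture the "steps to turn" parameter is as a schedule that rotates the flock's target heading gradually rather than all at once; the linear interpolation below is an illustrative guess, not the dissertation's exact schedule. Fewer steps to turn would then produce a sharper corner in the flock's path.

    def target_heading(step, turn_start, steps_to_turn, start_heading, end_heading):
        # Before the turn begins, hold the initial heading; afterwards,
        # interpolate linearly over `steps_to_turn` simulation steps.
        if step < turn_start:
            return start_heading
        t = min(1.0, (step - turn_start) / steps_to_turn)
        return start_heading + t * (end_heading - start_heading)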

Using the experimental setup described in Section 4.5.1 with 200 steps to turn:

Using the experimental setup described in Section 4.5.1 with 100 steps to turn:

Using the experimental setup described in Section 4.5.1 with 50 steps to turn:

Using the experimental setup described in Section 4.5.1 with 30 steps to turn:

Using the experimental setup described in Section 4.5.1 with 10 steps to turn:

Chapter 5 - Placing Influencing Agents into a Flock

In each of our videos, the influencing agents are pink and the flocking agents are grey. If you want to see the videos in a larger format, click on each video and either open it in YouTube or hit the full screen icon.

In Chapter 5, we consider various methods by which influencing agents could be placed into a flock at time 0. We provide video examples of each placement method below.

The videos below show 4 influencing agents and 10 flocking agents. Each video uses the same random seed (1) for initial agent placement and orientation. The goal is to influence all of the flocking agents to travel south while minimizing the number of flocking agents that become lost. In each video, the influencing agents follow the 1-Step Lookahead behavior (Section 4.1).

Constant-time Methods (Section 5.2)

In Section 5.2, we present three constant-time placement methods. For each method, we consider scaled and preset variants. At the end of Section 5.2, we decide to use the scaled variants of the constant-time placement methods throughout the remainder of the chapter and dissertation.
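The preset/scaled distinction is easiest to picture with the Grid method. In the sketch below, "preset" is assumed to lay the grid over a fixed region of the environment while "scaled" lays it over the flock's current bounding box; both interpretations are assumptions for illustration.

    import math

    def grid_placement(k, region):
        # Place k influencing agents on a roughly square grid spanning
        # region = (xmin, ymin, xmax, ymax), one agent per cell center.
        xmin, ymin, xmax, ymax = region
        cols = math.ceil(math.sqrt(k))
        rows = math.ceil(k / cols)
        return [(xmin + (c + 0.5) * (xmax - xmin) / cols,
                 ymin + (r + 0.5) * (ymax - ymin) / rows)
                for r in range(rows) for c in range(cols)][:k]

    def flock_bbox(flock_positions):
        xs = [p[0] for p in flock_positions]
        ys = [p[1] for p in flock_positions]
        return (min(xs), min(ys), max(xs), max(ys))

    # Preset variant: grid over a fixed, environment-sized region.
    # preset = grid_placement(4, (0, 0, 1000, 1000))
    # Scaled variant: grid over the flock's current bounding box.
    # scaled = grid_placement(4, flock_bbox(flock_positions))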

Random Preset Placement Method (Section 5.2.1):

Random Scaled Placement Method (Section 5.2.1):

Grid Preset Placement Method (Section 5.2.2):

Grid Scaled Placement Method (Section 5.2.2):

Border Preset Placement Method (Section 5.2.3):

Border Scaled Placement Method (Section 5.2.3):

Graph Placement Method (Section 5.3)

In Section 5.3, we present the Graph placement method.

Hybrid Placement Method (Section 5.4)

In Section 5.4, we present hybrid placement methods. In the video below, we show a hybrid of the Graph placement method and the Grid placement method in which two influencing agents are placed according to each method.

Two-Step Placement Methods (Section 5.5)

In Section 5.5, we present two-step placement methods. In each video below, Grid Set is used to select S (a sketch of one possible second-step selector follows the videos).

Random used to Select S' (Section 5.5.2):

OneNeighbor used to Select S' (Section 5.5.2):

MaxNeighbors used to Select S' (Section 5.5.2):

MinUninfluenced used to Select S' (Section 5.5.2):
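The second-step selectors above are named for neighbor-counting criteria. As one illustrative guess (based on the method name, not the dissertation's definition), MaxNeighbors might rank the candidates in S by how many flocking agents each can influence:

    import math

    def max_neighbors_selection(S, flock_positions, k, radius):
        # Pick the k candidate positions from S that have the most
        # flocking agents within `radius`.
        def count(p):
            return sum(1 for f in flock_positions if math.dist(p, f) <= radius)
        return sorted(S, key=count, reverse=True)[:k]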

Clustering Placement Methods (Section 5.6)

In Section 5.6, we present three clustering placement methods.

Farthest First (Section 5.6.1):

Expectation Maximization (Section 5.6.2):

K-Means (Section 5.6.3):
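Clustering placement methods place the influencing agents relative to clusters of flocking agents. As a minimal sketch of the K-Means variant, assuming one influencing agent is placed at each cluster centroid (plain Lloyd's algorithm; the centroid placement is an illustrative assumption):

    import random

    def kmeans_placement(flock_positions, k, iters=20, seed=1):
        # Cluster the flocking agents' positions and return the k
        # centroids as influencing agent placements.
        rng = random.Random(seed)
        centroids = rng.sample(flock_positions, k)
        for _ in range(iters):
            clusters = [[] for _ in range(k)]
            for p in flock_positions:
                i = min(range(k), key=lambda j: (p[0] - centroids[j][0]) ** 2
                                                + (p[1] - centroids[j][1]) ** 2)
                clusters[i].append(p)
            for i, cluster in enumerate(clusters):
                if cluster:  # keep the old centroid if the cluster is empty
                    centroids[i] = (sum(p[0] for p in cluster) / len(cluster),
                                    sum(p[1] for p in cluster) / len(cluster))
        return centroids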

Chapter 6 - Joining and Leaving a Flock

In each of our videos, the influencing agents are pink and the flocking agents are grey. If you want to see the videos in a larger format, click on each video and either open it in YouTube or hit the full screen icon.

Hovering Feasible

In the case where we assume hovering is feasible (Sections 6.1.1 and 6.2.1), we consider various methods for choosing the desired positions and the arrival behavior.

Let's first consider the various types of desired positions presented in Section 6.1.1. In each video, the desired positions are shown with the Face Initial arrival behavior.

Desired Position = Grid

Desired Position = Border

Desired Position = Funnel

Desired Position = K-Means

Next, let's consider the various types of arrival behaviors presented in Section 6.1.1. In each video, the arrival behaviors are shown with Grid desired positions.

Arrival Behavior = Face Initial

Arrival Behavior = Face Goal

Arrival Behavior = Influence

Arrival Behavior = Condense

Hovering Infeasible

Influencing agents that would be recognized by the flock as "one of their own" may not be able to hover due to their design. With this in mind, Chapter 6 considers the case in which hovering is infeasible (Sections 6.1.2, 6.2.2, and 6.2.3). Specifically, we consider multiple target formations (Section 6.1.2) as well as multiple approaches for leaving the flock (Sections 6.2.2 and 6.2.3).

Let's first consider the various types of target formations presented in Section 6.1.2. In each video, the Influence while Leaving approach for leaving the flock is utilized.

Target Formation = Push to Goal Line

Target Formation = Forward Line

Target Formation = Push to Goal Funnel

Target Formation = Forward Funnel

Target Formation = L Corral

Next, let's consider the approaches for leaving the flock that are presented in Sections 6.1.2 and 6.2.2. In each video, the Push to Goal Line target formation is used.

Leaving Approach = Nearest 2-edge

Leaving Approach = Nearest 3-edge

Leaving Approach = Influence while Leaving

Chapter 8 - Robot Implementation

When available, two videos are provided for each experiment: a robot video showing real-world robot behavior and a localization video showing the robot's current beliefs. In the robot videos, the influencing agents are wearing orange jerseys and the flocking agents are wearing white jerseys. In the localization videos, there is no significance to the color of each robot unless noted. Due to technical difficulties, localization videos are not available for all experiments. If you want to see the videos in a larger format, click on each video and either open it in YouTube or hit the full screen icon.

Flocking Agents

In Section 8.2.1 we describe the behavior and implementation details of the Alignment aspect of Reynolds' flocking algorithm on SoftBank NAO robots. In Section 8.2.2 we discuss experiments using these flocking agents, while in Section 8.2.3 we discuss experiments in which a manually controlled flocking agent acts as an influencing agent.

Let's first consider experiments using flocking agents (Section 8.2.2). In these experiments, the flocking agents flock downfield through the center circle.

Two Robots Flocking Downfield - Robot Video

Two Robots Flocking Downfield - Localization Video

Three Robots Flocking Downfield - Robot Video

Next, let's consider the experiments in which a manually controlled flocking agent acts as an influencing agent (Section 8.2.3). In these videos, the robot in the orange jersey is running flocking agent code but is periodically reoriented by a human so that it acts as an influencing agent, influencing the flock to travel around the center circle on the soccer field.

Three Flocking Agents (One Manually Controlled) - Robot Video

Three Flocking Agents (One Manually Controlled) - Localization Video

Four Flocking Agents (One Manually Controlled) - Robot Video

Four Flocking Agents (One Manually Controlled) - Localization Video

Five Flocking Agents (One Manually Controlled) - Robot Video

Five Flocking Agents (One Manually Controlled) - Localization Video

The light blue agent is the influencing agent in this video.

Influencing Agent

In Section 8.3.1 we describe the behavior and implementation details of the 1-Step Lookahead behavior on SoftBank NAO robots. In Section 8.3.2 we discuss experiments using an influencing agent to influence flocking agents to avoid the center circle of the robot soccer field.

2 Flocking Agents, 1 Influencing Agent - Robot Video (Episode 1)

2 Flocking Agents, 1 Influencing Agent - Localization Video (Episode 1)

The light blue agent is the influencing agent in this video.

2 Flocking Agents, 1 Influencing Agent - Robot Video (Episode 2)

2 Flocking Agents, 1 Influencing Agent - Localization Video (Episode 2)

The light blue agent is the influencing agent in this video.

4 Flocking Agents, 1 Influencing Agent - Robot Video

4 Flocking Agents, 1 Influencing Agent - Localization Video

The light blue agent is the influencing agent in this video.

Header image credit: Walter Baxter