Daniel S. Brown
I am a second-year computer science PhD student at UT Austin.
I work in the Personal Autonomous Robotics Lab (PeARL) with
Scott Niekum.
I am interested in safe learning from demonstration. In particular, I am developing methods that allow a robot to reason about the performance of a policy learned from a limited number of demonstrations. My recent work shows how a learning agent can use Bayesian Inverse Reinforcement Learning to calculate accurate probabilistic performance bounds without knowing the demonstrator's reward function.
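As a rough sketch of the idea (illustrative only; the toy setup, variable names, and numbers below are my own, not from the papers): draw samples from a Bayesian IRL posterior over reward parameters, compute the evaluation policy's loss relative to the best policy under each sampled reward, and take a high quantile of those losses as a probabilistic upper bound on the true loss.

```python
import numpy as np

# Toy illustration (not the actual method's code): high-confidence performance
# bounds from posterior samples over linear reward weights.
rng = np.random.default_rng(0)

# Hypothetical posterior samples of reward weights (m samples, k features),
# as might be produced by Bayesian IRL.
posterior_weights = rng.normal(loc=[1.0, -0.5], scale=0.2, size=(200, 2))

# Expected feature counts of a few candidate policies; the last row plays the
# role of the policy learned from demonstrations that we want to evaluate.
candidate_features = np.array([[1.0, 0.0],    # policy A
                               [0.5, -1.0],   # policy B
                               [0.8, -0.4]])  # evaluation policy
eval_features = candidate_features[2]

# Under a linear reward, a policy's value is weights @ feature counts.
# Loss under each sampled reward: best achievable value minus eval value.
values = posterior_weights @ candidate_features.T            # shape (m, 3)
losses = values.max(axis=1) - posterior_weights @ eval_features

# The (1 - delta) quantile of the sampled losses: with posterior probability
# roughly 1 - delta, the true loss is at most this bound.
delta = 0.05
bound = np.quantile(losses, 1 - delta)
print(f"95% probabilistic upper bound on policy loss: {bound:.3f}")
```

The key property is that the bound needs only posterior samples, not the demonstrator's true reward function.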
Prior to coming to UT, I worked as a research scientist at the Air Force Research Lab's Information Directorate in Rome, New York. I earned my master's degree in Computer Science from Brigham Young University under the advisement of
Mike Goodrich. I also earned my bachelor's degree in Mathematics from Brigham Young University and completed an honors thesis under the advisement of Sean Warnick.
Google Scholar link
Journal Articles
D. S. Brown, J. Hudack, N. Gemelli, B. Banerjee.
Exact and Heuristic Algorithms for Risk-Aware Stochastic Physical Search.
Computational Intelligence, 2016.
D. S. Brown, M. A. Goodrich, S.-Y. Jung, and S. Kerman.
Two Invariants of Human-Swarm Interaction.
Journal of Human-Robot Interaction, 5(1), 2016, pp. 1-31.
Conference and Workshop Papers
D. S. Brown, S. Niekum.
Efficient Probabilistic Performance Bounds for Inverse Reinforcement Learning. To appear in AAAI, 2018.
D. S. Brown, S. Niekum.
Toward Probabilistic Safety Bounds for Robot Learning from Demonstration. AAAI Fall Symposium: Artificial Intelligence for Human-Robot Interaction, 2017.
D. S. Brown, R. Turner, O. Hennigh, S. Loscalzo.
Discovery and Exploration of Novel Swarm Behaviors given Limited Robot Capabilities. International Symposium on Distributed Autonomous Robotic Systems, 2016. *Nominated for Best Paper Award*
M. Berger, L. Seversky, D. S. Brown.
Classifying Swarm Behaviors via Compressive Subspace Learning. International Conference on Robotics and Automation, 2016.
M. Johnson, D. S. Brown.
Evolving and Controlling Perimeter, Rendezvous, and Foraging Behaviors in a Computation-Free Robot Swarm. International Conference on Bio-inspired Information and Communications Technologies, 2015.
D. S. Brown, S. Loscalzo, N. Gemelli.
k-Agent Sufficiency for Multiagent Stochastic Physical Search Problems. International Conference on Algorithmic Decision Theory, 2015.
J. Hudack, N. Gemelli, D. S. Brown, S. Loscalzo, J. C. Oh.
Multiobjective Optimization for the Stochastic Physical Search Problem. International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, 2015.
D. S. Brown, J. Hudack, B. Banerjee.
Algorithms for Stochastic Physical Search on General Graphs. Planning, Search, and Optimization Workshop at the Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
2014
D. S. Brown, S.-Y. Jung, and M. A. Goodrich,
Balancing human and inter-agent influences for shared control of bio-inspired collectives.
Proceedings of IEEE International Conference on Systems, Man, and Cybernetics.
October 2014, San Diego.
D. Brown, S. Kerman, and M. A. Goodrich.
Limited Bandwidth Recognition of Collective Behaviors in Bio-Inspired Swarms.
Proceedings of AAMAS,
May 2014, Paris, France.
D. Brown, S. Kerman, and M. A. Goodrich.
Human-Swarm Interactions Based on Managing Attractors.
In ACM/IEEE International Conference on Human-Robot Interaction, 2014. *Nominated for Best Paper Award*
2013
S.-Y. Jung, D. S. Brown, and M. A. Goodrich.
Shaping Couzin-like Torus Swarms through Coordinated Mediation.
Proceedings of the 2013 International Conference on Systems, Man, and Cybernetics,
October 2013, Manchester, United Kingdom.
2012
S. Kerman, D. S. Brown, and M. A. Goodrich.
Supporting Human Interaction with Robust Robot Swarms,
Proceedings of the International Symposium on Resilient Control Systems,
August 2012, Salt Lake City, Utah, USA.