Contact

If you would like to contact the creator, send an email to James MacGlashan at:
jmacglashan at cs dot brown dot edu.

Alternatively, it may be worthwhile to direct your question to the BURLAP Google group at:
https://groups.google.com/forum/#!forum/burlap-discussion

Obtaining BURLAP

BURLAP now fully supports Maven and is available on Maven Central! That means that if you'd like to create a project that uses BURLAP, all you need to do is add the following dependency to the <dependencies> section of your project's pom.xml:

<dependency>
  <groupId>edu.brown.cs.burlap</groupId>
  <artifactId>burlap</artifactId>
  <version>3.0.0</version>
</dependency>

and the library will automatically be downloaded and linked to your project! If you do not have Maven installed, you can get it from here.
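
If you are new to Maven, the following is a minimal sketch of a complete pom.xml with the BURLAP dependency in place. The project coordinates (my.group and burlap-example) are placeholders for your own project; only the BURLAP dependency block comes from above.

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>

  <!-- Placeholder coordinates; replace with your own project's -->
  <groupId>my.group</groupId>
  <artifactId>burlap-example</artifactId>
  <version>1.0-SNAPSHOT</version>

  <dependencies>
    <!-- BURLAP from Maven Central, as listed above -->
    <dependency>
      <groupId>edu.brown.cs.burlap</groupId>
      <artifactId>burlap</artifactId>
      <version>3.0.0</version>
    </dependency>
  </dependencies>
</project>

As a quick smoke test that the dependency resolved, the sketch below constructs one of BURLAP's stock grid world domains. The class and package names follow BURLAP 3's tutorials; treat the exact paths as assumptions to verify against the Javadoc.

// A minimal sketch assuming BURLAP 3's grid world API
import burlap.domain.singleagent.gridworld.GridWorldDomain;
import burlap.mdp.singleagent.oo.OOSADomain;

public class HelloBurlap {
    public static void main(String[] args) {
        // An 11x11 grid world using the classic four-rooms layout
        GridWorldDomain gwd = new GridWorldDomain(11, 11);
        gwd.setMapToFourRooms();

        // Generate the OO-MDP domain object that planning and learning algorithms consume
        OOSADomain domain = gwd.generateDomain();
        System.out.println("BURLAP domain created: " + domain);
    }
}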

You can also get the full BURLAP source to manually compile/modify from GitHub at:
https://github.com/jmacglashan/burlap

Alternatively, you can directly download precompiled jars, including those for older versions of BURLAP, from Maven Central here. Use the jar-with-dependencies if you want all dependencies included.

If you are looking for the older version 1 of BURLAP, which predates Maven, you can get its precompiled jars below or use the v1 branch of the GitHub repository.

The Javadoc for version 2 can be found here, and the Javadoc for version 1 can be found here.

History and Future

The Brown-UMBC Reinforcement Learning and Planning (BURLAP) Java library is primarily developed and maintained by James MacGlashan, a postdoc at Brown University, with contributions from a number of students. BURLAP originated from James MacGlashan's dissertation work at the University of Maryland, Baltimore County (UMBC) on transfer learning for reinforcement learning (Multi-source Option-based Policy Transfer). That work motivated the need for a reinforcement learning framework built on top of a highly expressive domain representation that could support flexible task definitions. The answer was the object-oriented Markov decision process (OO-MDP) formalism, which represents states as a set of objects in the world, each defined by its own attributes, and provides a set of high-level propositional functions that operate on the objects in states. At the time, no existing libraries supported OO-MDPs (in fact, OO-MDPs were a fairly new idea in general); most libraries were restricted to RL's more classic fixed feature vector representation. As a consequence, code for rapidly deploying OO-MDP domains was developed.

After James graduated and moved to Brown, the initial OO-MDP code proved valuable for quickly generating many different classes of problems to which already implemented algorithms could be applied with no extra work. Because it supported so many kinds of problems, the code was easily reused across projects that would otherwise have required reimplementing standard algorithms for each new representation. In response, the decision was made to polish, expand, and document the code base and make it publicly available. The result is BURLAP as it exists today.

If everything goes well, BURLAP will never be "finished": algorithms from the reinforcement learning and planning literature will continue to be added, and the library will grow with the field. Ideally, it will also continue to grow to support even more classes of problems. As of this writing, BURLAP supports finite, infinite, continuous, and relational state spaces in single-agent or multi-agent (stochastic game) problems with a finite number of parameterizable actions. Support for (object-oriented) partially observable MDPs (POMDPs) is currently in development, with a number of POMDP algorithms being implemented. BURLAP could also be trivially extended to support continuous action and continuous time domains, but has not been yet, since no algorithms currently implemented in BURLAP would take advantage of them. In the future, we hope to expand into this space as well.

If there is a reinforcement learning or planning problem class that you would like to see BURLAP support but it currently cannot, we would love to hear from you so that we can consider adding support. The ultimate goal of BURLAP is to let you pick and choose among different algorithms for any number of problems you might want to solve, and to save all of us from reinventing the wheel every time.

BURLAP Extensions

Currently there are three BURLAP library extension projects that connect BURLAP with other systems.

  1. BURLAP Rosbridge: This extension allows you to control ROS-powered robots with BURLAP planning and learning algorithms by providing a standard BURLAP Environment class that communicates with ROS over Rosbridge. BURLAP Rosbridge is also on Maven Central, so you can simply add the following dependency alongside your BURLAP dependency (a combined sketch appears after this list):

     <dependency>
       <groupId>edu.brown.cs.burlap</groupId>
       <artifactId>burlap_rosbridge</artifactId>
       <version>3.0.0</version>
     </dependency>

  2. BurlapCraft: This extension allows you to use BURLAP planning and learning algorithms to control a player in the video game Minecraft.
  3. BURLAP Weka: This extension provides tools for hooking up BURLAP with Weka, the machine learning library. Note that Weka is licensed under the GPL, so any project that uses this extension must be licensed under the GPL as a whole.
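
For reference, a project that uses both the core library and the Rosbridge extension simply lists the two dependencies side by side in its pom.xml. This sketch only combines the Maven coordinates already given above:

<dependencies>
  <!-- Core BURLAP library -->
  <dependency>
    <groupId>edu.brown.cs.burlap</groupId>
    <artifactId>burlap</artifactId>
    <version>3.0.0</version>
  </dependency>
  <!-- Rosbridge extension for controlling ROS-powered robots -->
  <dependency>
    <groupId>edu.brown.cs.burlap</groupId>
    <artifactId>burlap_rosbridge</artifactId>
    <version>3.0.0</version>
  </dependency>
</dependencies>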

People

Organizers

Brown University
University of Maryland, Baltimore County

Code Contributors