
M

MacroAction - Class in burlap.behavior.singleagent.options
A macro action is an action that always executes a sequence of actions.
MacroAction(String, List<GroundedAction>) - Constructor for class burlap.behavior.singleagent.options.MacroAction
Instantiates a macro action with a given name and action sequence.
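
Example (a minimal sketch, assuming a Domain named domain that defines primitive actions called "north" and "east"; those action names, and the use of getAssociatedGroundedAction() to obtain parameter-free groundings, are illustrative assumptions):

    // build the primitive action sequence the macro action will execute
    List<GroundedAction> steps = new ArrayList<GroundedAction>();
    steps.add(domain.getAction("north").getAssociatedGroundedAction()); // hypothetical action name
    steps.add(domain.getAction("east").getAssociatedGroundedAction());  // hypothetical action name
    MacroAction northThenEast = new MacroAction("northThenEast", steps);
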
MacroCellGridWorld - Class in burlap.domain.singleagent.gridworld.macro
A domain that extends the grid world domain by adding "Macro Cells" to it, which specify rectangular regions of the space.
MacroCellGridWorld() - Constructor for class burlap.domain.singleagent.gridworld.macro.MacroCellGridWorld
Initializes with a default world size of 32x32 and macro cell size of 16x16.
MacroCellGridWorld(int, int, int, int) - Constructor for class burlap.domain.singleagent.gridworld.macro.MacroCellGridWorld
Initializes with the given world width/height and macro-cell width/height
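
Example (a short sketch of the default configuration; generateDomain() is the usual domain-generator call and is assumed here):

    MacroCellGridWorld mcgw = new MacroCellGridWorld(); // 32x32 world with 16x16 macro cells
    Domain domain = mcgw.generateDomain();              // assumed DomainGenerator method
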
MacroCellGridWorld.InMacroCellPF - Class in burlap.domain.singleagent.gridworld.macro
A propositional function for detecting if the agent is in a specific macro cell.
MacroCellGridWorld.InMacroCellPF(Domain, int, int, int, int) - Constructor for class burlap.domain.singleagent.gridworld.macro.MacroCellGridWorld.InMacroCellPF
 
MacroCellGridWorld.LinearInPFRewardFunction - Class in burlap.domain.singleagent.gridworld.macro
RewardFunction class that returns rewards based on a linear combination of propositional functions
MacroCellGridWorld.LinearInPFRewardFunction(PropositionalFunction[], Map<String, Double>) - Constructor for class burlap.domain.singleagent.gridworld.macro.MacroCellGridWorld.LinearInPFRewardFunction
Initializes
macroCellHorizontalCount - Variable in class burlap.domain.singleagent.gridworld.macro.MacroCellGridWorld
The number of columns of macro cells (cells across the x-axis)
macroCellVerticalCount - Variable in class burlap.domain.singleagent.gridworld.macro.MacroCellGridWorld
The number of rows of macro cells (cells across the y-axis)
MacroCellVisualizer - Class in burlap.domain.singleagent.gridworld.macro
A class for visualizing the reward weights assigned to a Macro-cell in a Macro-cell grid world.
MacroCellVisualizer() - Constructor for class burlap.domain.singleagent.gridworld.macro.MacroCellVisualizer
 
MacroCellVisualizer.MacroCellRewardWeightPainter - Class in burlap.domain.singleagent.gridworld.macro
Class for painting the macro cells a color between white and blue, where strong blue indicates strong reward weights.
MacroCellVisualizer.MacroCellRewardWeightPainter(int[][], MacroCellGridWorld.InMacroCellPF[], Map<String, Double>) - Constructor for class burlap.domain.singleagent.gridworld.macro.MacroCellVisualizer.MacroCellRewardWeightPainter
 
MADPPlanAgentFactory - Class in burlap.behavior.stochasticgames.agents.madp
An agent factory for the MultiAgentDPPlanningAgent agent.
MADPPlanAgentFactory(SGDomain, MADynamicProgramming, PolicyFromJointPolicy) - Constructor for class burlap.behavior.stochasticgames.agents.madp.MADPPlanAgentFactory
Initializes.
MADPPlanAgentFactory(SGDomain, MADPPlannerFactory, PolicyFromJointPolicy) - Constructor for class burlap.behavior.stochasticgames.agents.madp.MADPPlanAgentFactory
Initializes
MADPPlannerFactory - Interface in burlap.behavior.stochasticgames.agents.madp
An interface for generating MADynamicProgramming objects.
MADPPlannerFactory.ConstantMADPPlannerFactory - Class in burlap.behavior.stochasticgames.agents.madp
MADynamicProgramming factory that always returns the same object instance, unless the reference is changed with a mutator.
MADPPlannerFactory.ConstantMADPPlannerFactory(MADynamicProgramming) - Constructor for class burlap.behavior.stochasticgames.agents.madp.MADPPlannerFactory.ConstantMADPPlannerFactory
Initializes with a given valueFunction reference.
MADPPlannerFactory.MAVIPlannerFactory - Class in burlap.behavior.stochasticgames.agents.madp
Factory for generating multi-agent value iteration planners (MAValueIteration).
MADPPlannerFactory.MAVIPlannerFactory(SGDomain, JointActionModel, JointReward, TerminalFunction, double, HashableStateFactory, double, SGBackupOperator, double, int) - Constructor for class burlap.behavior.stochasticgames.agents.madp.MADPPlannerFactory.MAVIPlannerFactory
Initializes.
MADPPlannerFactory.MAVIPlannerFactory(SGDomain, JointActionModel, JointReward, TerminalFunction, double, HashableStateFactory, ValueFunctionInitialization, SGBackupOperator, double, int) - Constructor for class burlap.behavior.stochasticgames.agents.madp.MADPPlannerFactory.MAVIPlannerFactory
Initializes.
MADPPlannerFactory.MAVIPlannerFactory(SGDomain, Map<String, SGAgentType>, JointActionModel, JointReward, TerminalFunction, double, HashableStateFactory, ValueFunctionInitialization, SGBackupOperator, double, int) - Constructor for class burlap.behavior.stochasticgames.agents.madp.MADPPlannerFactory.MAVIPlannerFactory
Initializes.
MADynamicProgramming - Class in burlap.behavior.stochasticgames.madynamicprogramming
An abstract value function based planning algorithm base for sequential stochastic games that require the computation of Q-values for each agent for each joint action.
MADynamicProgramming() - Constructor for class burlap.behavior.stochasticgames.madynamicprogramming.MADynamicProgramming
 
MADynamicProgramming.BackupBasedQSource - Class in burlap.behavior.stochasticgames.madynamicprogramming
A QSourceForSingleAgent implementation which stores a value function for an agent and produces joint action Q-values by marginalizing over the transition dynamics, the reward, and the discounted next state value.
MADynamicProgramming.BackupBasedQSource(String) - Constructor for class burlap.behavior.stochasticgames.madynamicprogramming.MADynamicProgramming.BackupBasedQSource
Initializes a value function for the agent of the given name.
MADynamicProgramming.JointActionTransitions - Class in burlap.behavior.stochasticgames.madynamicprogramming
A class for holding all of the transition dynamic information for a given joint action in a given state.
MADynamicProgramming.JointActionTransitions(State, JointAction) - Constructor for class burlap.behavior.stochasticgames.madynamicprogramming.MADynamicProgramming.JointActionTransitions
Generates the transition information for the given state and joint action.
main(String[]) - Static method in class burlap.behavior.singleagent.EpisodeAnalysis
 
main(String[]) - Static method in class burlap.behavior.singleagent.pomdp.wrappedmdpalgs.BeliefSparseSampling
 
main(String[]) - Static method in class burlap.behavior.stochasticgames.GameAnalysis
 
main(String[]) - Static method in class burlap.behavior.stochasticgames.solvers.CorrelatedEquilibriumSolver
 
main(String[]) - Static method in class burlap.datastructures.AlphanumericSorting
Testing the alphanumeric sorting
main(String[]) - Static method in class burlap.datastructures.StochasticTree
Demos how to use this class
main(String[]) - Static method in class burlap.debugtools.MyTimer
Demo of usage
main(String[]) - Static method in class burlap.debugtools.RandomFactory
Example usage.
main(String[]) - Static method in class burlap.domain.singleagent.blockdude.BlockDude
Runs an interactive visual explorer for level one of Block Dude.
main(String[]) - Static method in class burlap.domain.singleagent.blocksworld.BlocksWorld
Main method for exploring the domain.
main(String[]) - Static method in class burlap.domain.singleagent.cartpole.CartPoleDomain
Launches an interactive visualizer in which key 'a' applies a force in the left direction and key 'd' applies a force in the right direction.
main(String[]) - Static method in class burlap.domain.singleagent.cartpole.InvertedPendulum
 
main(String[]) - Static method in class burlap.domain.singleagent.frostbite.FrostbiteDomain
Main function to test the domain.
main(String[]) - Static method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain
 
main(String[]) - Static method in class burlap.domain.singleagent.gridworld.GridWorldDomain
Creates a visual explorer or terminal explorer.
main(String[]) - Static method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
This method will launch a visual explorer for the lunar lander domain.
main(String[]) - Static method in class burlap.domain.singleagent.mountaincar.MountainCar
Will launch a visual explorer for the mountain car domain that is controlled with the a-s-d keys.
main(String[]) - Static method in class burlap.domain.singleagent.pomdp.tiger.TigerDomain
Main method for interacting with the tiger domain via a TerminalExplorer.
main(String[]) - Static method in class burlap.domain.stochasticgames.gridgame.GridGame
Creates a visual explorer for a simple domain with two agents in it.
main(String[]) - Static method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
A main method showing example code that would be used to create an instance of Prisoner's dilemma and begin playing it with a SGTerminalExplorer.
main(String[]) - Static method in class burlap.testing.TestRunner
 
main(String[]) - Static method in class burlap.tutorials.bd.ExampleGridWorld
 
main(String[]) - Static method in class burlap.tutorials.bpl.BasicBehavior
 
main(String[]) - Static method in class burlap.tutorials.cpl.QLTutorial
 
main(String[]) - Static method in class burlap.tutorials.cpl.VITutorial
 
main(String[]) - Static method in class burlap.tutorials.hgw.HelloGridWorld
 
main(String[]) - Static method in class burlap.tutorials.hgw.PlotTest
 
main(String[]) - Static method in class burlap.tutorials.scd.ContinuousDomainTutorial
 
main(String[]) - Static method in class burlap.tutorials.video.mc.MCVideo
 
maintainClosed - Variable in class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.DFS
Whether to keep track of a closed list to prevent exploring already seen nodes.
makeEmptyMap() - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
Makes the map empty
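
Example (a sketch of the usual generator pattern around makeEmptyMap(); the 11x11 constructor arguments and the generateDomain() call are assumptions for illustration):

    GridWorldDomain gwd = new GridWorldDomain(11, 11); // world width and height (assumed meaning)
    gwd.makeEmptyMap();                                // clear all walls from the map
    Domain domain = gwd.generateDomain();
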
manualAgents - Variable in class burlap.shell.command.world.ManualAgentsCommands
 
ManualAgentsCommands - Class in burlap.shell.command.world
A controller for a set of ShellCommand objects.
ManualAgentsCommands() - Constructor for class burlap.shell.command.world.ManualAgentsCommands
 
ManualAgentsCommands.ListManualAgents - Class in burlap.shell.command.world
 
ManualAgentsCommands.ListManualAgents() - Constructor for class burlap.shell.command.world.ManualAgentsCommands.ListManualAgents
 
ManualAgentsCommands.LSManualAgentActionsCommands - Class in burlap.shell.command.world
 
ManualAgentsCommands.LSManualAgentActionsCommands() - Constructor for class burlap.shell.command.world.ManualAgentsCommands.LSManualAgentActionsCommands
 
ManualAgentsCommands.ManualSGAgent - Class in burlap.shell.command.world
 
ManualAgentsCommands.ManualSGAgent() - Constructor for class burlap.shell.command.world.ManualAgentsCommands.ManualSGAgent
 
ManualAgentsCommands.RegisterAgentCommand - Class in burlap.shell.command.world
 
ManualAgentsCommands.RegisterAgentCommand() - Constructor for class burlap.shell.command.world.ManualAgentsCommands.RegisterAgentCommand
 
ManualAgentsCommands.SetAgentAction - Class in burlap.shell.command.world
 
ManualAgentsCommands.SetAgentAction() - Constructor for class burlap.shell.command.world.ManualAgentsCommands.SetAgentAction
 
manualValueFunctionVis(ValueFunction, Policy) - Method in class burlap.tutorials.bpl.BasicBehavior
 
map(State) - Method in class burlap.behavior.singleagent.options.Option
Returns the state that is mapped from the input state.
map - Variable in class burlap.behavior.stochasticgames.agents.naiveq.history.ParameterNaiveActionIdMap
The map from action names to their corresponding int value
map - Variable in class burlap.domain.singleagent.gridworld.GridWorldDomain
The wall map where the first index is the x position and the second index is the y position.
map - Variable in class burlap.domain.singleagent.gridworld.GridWorldDomain.MovementAction
The map of the world
map - Variable in class burlap.domain.singleagent.gridworld.GridWorldVisualizer.CellPainter
 
map - Variable in class burlap.domain.singleagent.gridworld.GridWorldVisualizer.LocationPainter
 
map - Variable in class burlap.domain.singleagent.gridworld.GridWorldVisualizer.MapPainter
 
map - Variable in class burlap.tutorials.bd.ExampleGridWorld
 
mapAction(AbstractGroundedAction) - Method in class burlap.behavior.policy.DomainMappedPolicy
Maps an input GroundedAction to a GroundedAction using an action reference of the action in this object's DomainMappedPolicy.targetDomain object that has the same name as the action in the input GroundedAction.
mapState(State) - Method in interface burlap.oomdp.auxiliary.StateMapping
 
mapToStateIndex - Variable in class burlap.behavior.singleagent.MDPSolver
A mapping to internally stored hashed states (HashableState).
MAQLFactory - Class in burlap.behavior.stochasticgames.agents.maql
This class provides a factory for MultiAgentQLearning agents.
MAQLFactory() - Constructor for class burlap.behavior.stochasticgames.agents.maql.MAQLFactory
Empty constructor.
MAQLFactory(SGDomain, double, double, HashableStateFactory, double, SGBackupOperator, boolean) - Constructor for class burlap.behavior.stochasticgames.agents.maql.MAQLFactory
Initializes.
MAQLFactory(SGDomain, double, LearningRate, HashableStateFactory, ValueFunctionInitialization, SGBackupOperator, boolean, PolicyFromJointPolicy) - Constructor for class burlap.behavior.stochasticgames.agents.maql.MAQLFactory
Initializes.
MAQLFactory.CoCoQLearningFactory - Class in burlap.behavior.stochasticgames.agents.maql
Factory for generating CoCo-Q agents.
MAQLFactory.CoCoQLearningFactory(SGDomain, double, LearningRate, HashableStateFactory, ValueFunctionInitialization, boolean, double) - Constructor for class burlap.behavior.stochasticgames.agents.maql.MAQLFactory.CoCoQLearningFactory
 
MAQLFactory.MAMaxQLearningFactory - Class in burlap.behavior.stochasticgames.agents.maql
Factory for generating Max multiagent Q-learning agents.
MAQLFactory.MAMaxQLearningFactory(SGDomain, double, LearningRate, HashableStateFactory, ValueFunctionInitialization, boolean, double) - Constructor for class burlap.behavior.stochasticgames.agents.maql.MAQLFactory.MAMaxQLearningFactory
 
MAQSourcePolicy - Class in burlap.behavior.stochasticgames.madynamicprogramming
An abstract extension of the JointPolicy class that adds a required interface for setting a MultiAgentQSourceProvider.
MAQSourcePolicy() - Constructor for class burlap.behavior.stochasticgames.madynamicprogramming.MAQSourcePolicy
 
marginalizeColPlayerStrategy(double[][]) - Static method in class burlap.behavior.stochasticgames.solvers.GeneralBimatrixSolverTools
Returns the column player's strategy by marginalizing it out from a joint action probability distribution represented as a matrix
marginalizeRowPlayerStrategy(double[][]) - Static method in class burlap.behavior.stochasticgames.solvers.GeneralBimatrixSolverTools
Returns the row player's strategy by marginalizing it out from a joint action probability distribution represented as a matrix
markAsTerminalPosition(int, int) - Method in class burlap.domain.singleagent.gridworld.GridWorldTerminalFunction
Marks a position as a terminal position for the agent.
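
Example (a minimal sketch; the no-argument constructor and the goal coordinates are illustrative assumptions):

    GridWorldTerminalFunction tf = new GridWorldTerminalFunction();
    tf.markAsTerminalPosition(10, 10); // episodes end when the agent reaches cell (10, 10)
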
maskedAttributes - Variable in class burlap.oomdp.statehashing.MaskedHashableStateFactory
 
MaskedHashableStateFactory - Class in burlap.oomdp.statehashing
This class produces HashableState instances in which the hash code and equality of the states ignores either ObjectInstance belonging to specific ObjectClass or value assignments for specific Attributes.
MaskedHashableStateFactory() - Constructor for class burlap.oomdp.statehashing.MaskedHashableStateFactory
Default constructor: object identifier independent, no hash code caching, and no object class or attribute masks.
MaskedHashableStateFactory(boolean) - Constructor for class burlap.oomdp.statehashing.MaskedHashableStateFactory
Initializes with no hash code caching and no object class or attribute masks.
MaskedHashableStateFactory(boolean, boolean) - Constructor for class burlap.oomdp.statehashing.MaskedHashableStateFactory
Initializes with no object class or attribute masks.
MaskedHashableStateFactory(boolean, boolean, boolean, String...) - Constructor for class burlap.oomdp.statehashing.MaskedHashableStateFactory
Initializes with a specified attribute or object class mask.
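
Example (a hedged sketch of the four-argument constructor; the commented meanings of the booleans and the attribute name "direction" are assumptions, not confirmed by this listing):

    MaskedHashableStateFactory hashFactory = new MaskedHashableStateFactory(
            true,          // object identifier independent (assumed meaning)
            false,         // no hash code caching (assumed meaning)
            true,          // mask names below refer to attributes rather than object classes (assumed)
            "direction");  // hypothetical attribute to ignore in hashing and equality
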
maskedObjectClasses - Variable in class burlap.oomdp.statehashing.ImmutableStateHashableStateFactory
 
maskedObjectClasses - Variable in class burlap.oomdp.statehashing.MaskedHashableStateFactory
 
maskedObjects - Variable in class burlap.oomdp.statehashing.ImmutableStateHashableStateFactory
 
MatchEntry - Class in burlap.oomdp.stochasticgames.tournament
This class indicates which player in a tournament is to play in a match and what SGAgentType role they will play.
MatchEntry(SGAgentType, int) - Constructor for class burlap.oomdp.stochasticgames.tournament.MatchEntry
Initializes the MatchEntry
matchingActionFeature(List<FVCMACFeatureDatabase.ActionFeatureID>, GroundedAction) - Method in class burlap.behavior.singleagent.vfa.cmac.FVCMACFeatureDatabase
Returns the FVCMACFeatureDatabase.ActionFeatureID with an equivalent GroundedAction in the given list or null if there is none.
matchingStateTP(List<TransitionProbability>, State) - Method in class burlap.oomdp.singleagent.pomdp.BeliefMDPGenerator.BeliefAction
Finds a transition in the input list of transitions that matches the input state and returns it.
matchingTransitions(GroundedAction) - Method in class burlap.behavior.singleagent.planning.stochastic.ActionTransitions
Returns whether these action transitions are for the specified GroundedAction
MatchSelector - Interface in burlap.oomdp.stochasticgames.tournament
An interface for defining how matches in a tournament will be determined
MAValueIteration - Class in burlap.behavior.stochasticgames.madynamicprogramming.dpplanners
A class for performing multi-agent value iteration.
MAValueIteration(SGDomain, JointReward, TerminalFunction, double, HashableStateFactory, double, SGBackupOperator, double, int) - Constructor for class burlap.behavior.stochasticgames.madynamicprogramming.dpplanners.MAValueIteration
Initializes.
MAValueIteration(SGDomain, JointReward, TerminalFunction, double, HashableStateFactory, ValueFunctionInitialization, SGBackupOperator, double, int) - Constructor for class burlap.behavior.stochasticgames.madynamicprogramming.dpplanners.MAValueIteration
Initializes.
MAValueIteration(SGDomain, Map<String, SGAgentType>, JointReward, TerminalFunction, double, HashableStateFactory, double, SGBackupOperator, double, int) - Constructor for class burlap.behavior.stochasticgames.madynamicprogramming.dpplanners.MAValueIteration
Initializes.
MAValueIteration(SGDomain, Map<String, SGAgentType>, JointReward, TerminalFunction, double, HashableStateFactory, ValueFunctionInitialization, SGBackupOperator, double, int) - Constructor for class burlap.behavior.stochasticgames.madynamicprogramming.dpplanners.MAValueIteration
Initializes.
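
Example (a sketch of the first constructor using the MaxQ backup operator listed later in this index; the positions of the numeric arguments (discount, Q-value initialization, maxDelta, maxIterations) are assumptions based on the fields documented below, and sgDomain, jointReward, and terminalFunction are assumed to exist):

    MAValueIteration mavi = new MAValueIteration(
            sgDomain, jointReward, terminalFunction,
            0.95,                              // discount factor (assumed position)
            new SimpleHashableStateFactory(),
            0.0,                               // Q-value initialization (assumed position)
            new MaxQ(),                        // backup operator
            0.001,                             // maxDelta termination threshold (assumed)
            100);                              // maxIterations (assumed)
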
maxActions - Variable in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain
The maximum number of actions available from any given state node.
maxAngleSpeed - Variable in class burlap.domain.singleagent.cartpole.CartPoleDomain.CPPhysicsParams
The maximum speed of the change in angle.
maxAngleSpeed - Variable in class burlap.domain.singleagent.cartpole.InvertedPendulum.IPPhysicsParams
The maximum speed (magnitude) of the change in angle.
maxBackups - Variable in class burlap.behavior.singleagent.planning.stochastic.valueiteration.PrioritizedSweeping
The maximum number of Bellman backups permitted
maxBetaScaled(double[], double) - Static method in class burlap.behavior.singleagent.learnfromdemo.mlirl.support.BoltzmannPolicyGradient
Given an array of Q-values, returns the maximum Q-value multiplied by the parameter beta.
maxCartSpeed - Variable in class burlap.domain.singleagent.cartpole.CartPoleDomain.CPPhysicsParams
The maximum speed of the cart.
maxChange - Variable in class burlap.behavior.singleagent.learning.lspi.LSPI
The maximum change in weights permitted to terminate LSPI.
maxDelta - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableVI
When the maximum change in the value function is smaller than this value, VI will terminate.
maxDelta - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.support.QGradientPlannerFactory.DifferentiableVIFactory
The value function change threshold to stop VI.
maxDelta - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.RTDP
When the maximum change in the value function from a rollout is smaller than this value, VI will terminate.
maxDelta - Variable in class burlap.behavior.singleagent.planning.stochastic.valueiteration.ValueIteration
When the maximum change in the value function is smaller than this value, VI will terminate.
maxDelta - Variable in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
The maximum change in the value function that will cause planning to terminate.
maxDelta - Variable in class burlap.behavior.stochasticgames.agents.madp.MADPPlannerFactory.MAVIPlannerFactory
The threshold that will cause VI to terminate when the maximum change in any Q-value is less than it
maxDelta - Variable in class burlap.behavior.stochasticgames.madynamicprogramming.dpplanners.MAValueIteration
The threshold that will cause VI to terminate when the maximum change in any Q-value is less than it
maxDepth - Variable in class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.DFS
The max depth of the search tree that will be explored.
maxDepth - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
The maximum depth/length of a rollout before it is terminated and Bellman updates are performed.
maxDepth - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.RTDP
The maximum depth/length of a rollout before it is terminated and Bellman updates are performed.
maxDiff - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
The max permitted difference between the lower bound and upper bound for planning termination.
maxDim - Variable in class burlap.domain.stochasticgames.gridgame.GridGame
The width and height of the world.
maxEpisodeSize - Variable in class burlap.behavior.singleagent.learning.actorcritic.ActorCritic
The maximum number of steps of an episode before the agent will manually terminate it. This defaults to Integer.MAX_VALUE.
maxEpisodeSize - Variable in class burlap.behavior.singleagent.learning.actorcritic.critics.TimeIndexedTDLambda
The maximum number of steps possible in an episode.
maxEpisodeSize - Variable in class burlap.behavior.singleagent.learning.tdmethods.QLearning
The maximum number of steps that will be taken in an episode before the agent terminates a learning episode
maxEpisodeSize - Variable in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
The maximum number of steps that will be taken in an episode before the agent terminates a learning episode
maxEvalDelta - Variable in class burlap.behavior.singleagent.planning.stochastic.policyiteration.PolicyEvaluation
When the maximum change in the value function is smaller than this value, policy evaluation will terminate.
maxEvalDelta - Variable in class burlap.behavior.singleagent.planning.stochastic.policyiteration.PolicyIteration
When the maximum change in the value function is smaller than this value, policy evaluation will terminate.
maxEvalIterations - Variable in class burlap.behavior.singleagent.planning.stochastic.policyiteration.PolicyEvaluation
When the number of evaluation iterations exceeds this value, policy evaluation will terminate.
maxGT - Variable in class burlap.domain.stochasticgames.gridgame.GridGame
The number of goal types
maxHeap - Variable in class burlap.datastructures.HashIndexedHeap
If true, this is ordered according to a max heap; if false ordered according to a min heap.
maxHorizon - Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
 
maxIterations - Variable in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest
The maximum number of iterations of apprenticeship learning
maxIterations - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableVI
When the number of VI iterations exceeds this value, VI will terminate.
maxIterations - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.support.QGradientPlannerFactory.DifferentiableVIFactory
The maximum allowed number of VI iterations.
maxIterations - Variable in class burlap.behavior.singleagent.planning.stochastic.policyiteration.PolicyIteration
When the number of policy evaluation iterations exceeds this value, policy evaluation will terminate.
maxIterations - Variable in class burlap.behavior.singleagent.planning.stochastic.valueiteration.ValueIteration
When the number of VI iterations exceeds this value, VI will terminate.
maxIterations - Variable in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
The maximum number of iterations to run.
maxIterations - Variable in class burlap.behavior.stochasticgames.agents.madp.MADPPlannerFactory.MAVIPlannerFactory
The maximum allowable number of iterations until VI termination
maxIterations - Variable in class burlap.behavior.stochasticgames.madynamicprogramming.dpplanners.MAValueIteration
The maximum allowable number of iterations until VI termination
maxLearningSteps - Variable in class burlap.behavior.singleagent.learning.lspi.LSPI
The maximum number of learning steps in an episode when the LSPI.runLearningEpisode(burlap.oomdp.singleagent.environment.Environment) method is called.
maxLikelihoodChange - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRL
The likelihood change threshold to stop gradient ascent.
MaxMax - Class in burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers
A class for finding a strategy that maximizes the player's payoff under the assumption that their "opponent" is friendly and will try to do the same.
MaxMax() - Constructor for class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers.MaxMax
 
maxNonZeroCoefficents - Variable in class burlap.behavior.singleagent.vfa.fourier.FourierBasis
The maximum number of non-zero coefficient entries permitted in a coefficient vector
maxNumPlanningIterations - Variable in class burlap.behavior.singleagent.learning.lspi.LSPI
The maximum number of policy iterations permitted when LSPI is run from the LSPI.planFromState(State) or LSPI.runLearningEpisode(burlap.oomdp.singleagent.environment.Environment) methods.
maxNumSteps - Variable in class burlap.behavior.singleagent.learning.modellearning.artdp.ARTDP
The maximum number of learning steps per episode before the agent gives up
maxNumSteps - Variable in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax
The maximum number of learning steps per episode before the agent gives up
maxPIDelta - Variable in class burlap.behavior.singleagent.planning.stochastic.policyiteration.PolicyIteration
When the maximum change between policy evaluations is smaller than this value, planning will terminate.
maxPlayers - Variable in class burlap.behavior.stochasticgames.agents.naiveq.history.SGQWActionHistoryFactory
The maximum number of players that can be in the game
maxPlyrs - Variable in class burlap.domain.stochasticgames.gridgame.GridGame
The maximum number of players that can be in the game
maxPolicyIterations - Variable in class burlap.behavior.singleagent.planning.stochastic.policyiteration.PolicyIteration
When the number of policy iterations passes this value, planning will terminate.
maxQ(State) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
Returns the maximum Q-value entry for the given state with ties broken randomly.
MaxQ - Class in burlap.behavior.stochasticgames.madynamicprogramming.backupOperators
A classic MDP-style max backup operator in which an agent backs up its maximum Q-value in the state.
MaxQ() - Constructor for class burlap.behavior.stochasticgames.madynamicprogramming.backupOperators.MaxQ
 
maxQChangeForPlanningTermination - Variable in class burlap.behavior.singleagent.learning.tdmethods.QLearning
The maximum allowable change in the Q-function during an episode before the planning method terminates.
maxQChangeInLastEpisode - Variable in class burlap.behavior.singleagent.learning.tdmethods.QLearning
The maximum Q-value change that occurred in the last learning episode.
maxRollouts - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
The maximum number of rollouts to perform when planning is started, unless the value function margin becomes small enough first.
maxRollOutsFromRoot - Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
 
maxSelfTransitionProb - Variable in class burlap.behavior.singleagent.planning.stochastic.valueiteration.PrioritizedSweeping.BPTRNode
 
maxStages - Variable in class burlap.oomdp.stochasticgames.tournament.Tournament
 
maxSteps - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRL
The maximum number of steps of gradient ascent.
maxTimeStep() - Method in class burlap.behavior.singleagent.EpisodeAnalysis
Returns the maximum time step index in this episode, which is EpisodeAnalysis.numTimeSteps()-1.
maxTimeStep() - Method in class burlap.behavior.stochasticgames.GameAnalysis
Returns the max time step index in this game which equals GameAnalysis.numTimeSteps()-1.
maxTNormed() - Method in class burlap.datastructures.BoltzmannDistribution
Returns the maximum temperature normalized preference
maxValue() - Method in interface burlap.behavior.stochasticgames.agents.naiveq.history.ActionIdMap
The maximum number of int values for actions
maxValue() - Method in class burlap.behavior.stochasticgames.agents.naiveq.history.ParameterNaiveActionIdMap
 
maxWeightChangeForPlanningTermination - Variable in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
The maximum allowable change in the VFA weights during an episode before the planning method terminates.
maxWeightChangeInLastEpisode - Variable in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
The maximum VFA weight change that occurred in the last learning episode.
maxWT - Variable in class burlap.domain.stochasticgames.gridgame.GridGame
The number of wall types
maxx - Variable in class burlap.domain.singleagent.blockdude.BlockDude
Domain parameter specifying the maximum x dimension value of the world
maxx - Variable in class burlap.domain.singleagent.blockdude.BlockDude.MoveAction
 
maxx - Variable in class burlap.domain.singleagent.blockdude.BlockDude.MoveUpAction
 
maxx - Variable in class burlap.domain.singleagent.blockdude.BlockDude.PickupAction
 
maxx - Variable in class burlap.domain.singleagent.blockdude.BlockDude.PutdownAction
 
maxx - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.AgentPainter
 
maxx - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.BlockPainter
 
maxx - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.BricksPainter
 
maxx - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.ExitPainter
 
maxy - Variable in class burlap.domain.singleagent.blockdude.BlockDude
Domain parameter specifying the maximum y dimension value of the world
maxy - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.AgentPainter
 
maxy - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.BlockPainter
 
maxy - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.BricksPainter
 
maxy - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.ExitPainter
 
MCLSPIFB() - Static method in class burlap.tutorials.scd.ContinuousDomainTutorial
 
MCLSPIRBF() - Static method in class burlap.tutorials.scd.ContinuousDomainTutorial
 
MCRandomStateGenerator - Class in burlap.domain.singleagent.mountaincar
Generates MountainCar states with the x-position between some specified range and the velocity between some specified range.
MCRandomStateGenerator(Domain) - Constructor for class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator
Initializes for the MountainCar Domain object for which states will be generated.
MCRandomStateGenerator(Domain, double, double, double, double) - Constructor for class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator
Initializes for the MountainCar Domain object for which states will be generated.
MCVideo - Class in burlap.tutorials.video.mc
 
MCVideo() - Constructor for class burlap.tutorials.video.mc.MCVideo
 
mdpPlanner - Variable in class burlap.behavior.singleagent.pomdp.wrappedmdpalgs.BeliefSparseSampling
The SparseSampling planning instance to solve the problem.
mdpQSource - Variable in class burlap.behavior.singleagent.pomdp.qmdp.QMDP
The fully observable MDP QFunction source.
MDPSolver - Class in burlap.behavior.singleagent
The abstract super class to use for various MDP solving algorithms, including both planning and learning algorithms.
MDPSolver() - Constructor for class burlap.behavior.singleagent.MDPSolver
 
MDPSolverInterface - Interface in burlap.behavior.singleagent
The top-level interface for algorithms that solve MDPs.
medianEpisodeReward - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.Trial
Stores the median reward by episode
medianEpisodeReward - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.Trial
Stores the median reward by episode
medianEpisodeRewardSeries - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.AgentDatasets
Most recent trial's median reward per episode series data
medianEpisodeRewardSeries - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.AgentDatasets
Most recent trial's median reward per episode series data
memoryQueue - Variable in class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.LimitedMemoryDFS
A queue for storing the most recently expanded nodes.
memorySize - Variable in class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.LimitedMemoryDFS
The size of the memory; that is, the number of recently expanded search nodes the planner will remember.
memoryStateDepth - Variable in class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.LimitedMemoryDFS
Stores the depth at which each state in the memory was explored.
merAvgSeries - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.AgentDatasets
All trials' average median reward per episode series data
merAvgSeries - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.AgentDatasets
All trials' average median reward per episode series data
metric - Variable in class burlap.behavior.singleagent.vfa.rbf.FVRBF
The distance metric to compare query input states to the centeredState
metric - Variable in class burlap.behavior.singleagent.vfa.rbf.RBF
The distance metric used to compare input states to this unit's center state.
metricsSet - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
A set specifying the performance metrics that will be plotted
metricsSet - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
A set specifying the performance metrics that will be plotted
minEligibityForUpdate - Variable in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
The minimum eligibility value of a trace that will cause it to be updated
minimumLR - Variable in class burlap.behavior.learningrate.ExponentialDecayLR
The minimum learning rate
minimumLR - Variable in class burlap.behavior.learningrate.SoftTimeInverseDecayLR
The minimum learning rate
MinMax - Class in burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers
Finds the MinMax equilibrium using linear programming and returns the appropriate strategy.
MinMax() - Constructor for class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers.MinMax
 
MinMaxQ - Class in burlap.behavior.stochasticgames.madynamicprogramming.backupOperators
A minmax operator.
MinMaxQ() - Constructor for class burlap.behavior.stochasticgames.madynamicprogramming.backupOperators.MinMaxQ
 
MinMaxSolver - Class in burlap.behavior.stochasticgames.solvers
 
MinMaxSolver() - Constructor for class burlap.behavior.stochasticgames.solvers.MinMaxSolver
 
minNewStepsForLearningPI - Variable in class burlap.behavior.singleagent.learning.lspi.LSPI
The minimum number of new observations received from learning episodes before LSPI will be run again.
minNumRolloutsWithSmallValueChange - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.RTDP
RTDP will be declared "converged" if there are this many consecutive policy rollouts in which the value function change is smaller than the maxDelta value.
minStepAndEpisodes(List<PerformancePlotter.Trial>) - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
Returns the minimum steps and episodes across all trials
minStepAndEpisodes(List<MultiAgentPerformancePlotter.Trial>) - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
Returns the minimum steps and episodes across all trials
minx - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.AgentPainter
 
minx - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.BlockPainter
 
minx - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.BricksPainter
 
minx - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.ExitPainter
 
miny - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.AgentPainter
 
miny - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.BlockPainter
 
miny - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.BricksPainter
 
miny - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.ExitPainter
 
MLIRL - Class in burlap.behavior.singleagent.learnfromdemo.mlirl
An implementation of Maximum-likelihood Inverse Reinforcement Learning [1].
MLIRL(MLIRLRequest, double, double, int) - Constructor for class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRL
Initializes.
mlirlInstance - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.MultipleIntentionsMLIRL
The MLIRL instance used to perform the maximization step for each cluster's reward function parameter values.
MLIRLRequest - Class in burlap.behavior.singleagent.learnfromdemo.mlirl
A request object for Maximum-Likelihood Inverse Reinforcement Learning (MLIRL).
MLIRLRequest(Domain, Planner, List<EpisodeAnalysis>, DifferentiableRF) - Constructor for class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRLRequest
Initializes the request without any expert trajectory weights (which will be assumed to have a value 1).
MLIRLRequest(Domain, List<EpisodeAnalysis>, DifferentiableRF, HashableStateFactory) - Constructor for class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRLRequest
Initializes without any expert trajectory weights (which will be assumed to have a value 1) and requests a default QGradientPlanner instance to be created using the HashableStateFactory provided.
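
Example (a sketch wiring a request into the MLIRL constructor documented above; the numeric arguments follow the MLIRL fields listed in this index (gradient-ascent learning rate, likelihood change threshold, max steps), and the performIRL() call, domain, expertEpisodes, rf, and hashingFactory references are assumptions):

    MLIRLRequest request = new MLIRLRequest(domain, expertEpisodes, rf, hashingFactory);
    MLIRL mlirl = new MLIRL(request, 0.1, 0.01, 100); // learning rate, likelihood threshold, max steps (assumed order)
    mlirl.performIRL();                               // assumed entry point that runs the gradient ascent
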
mode(int) - Static method in class burlap.debugtools.DPrint
Returns the print mode for a given debug code
model - Variable in class burlap.behavior.singleagent.learning.modellearning.artdp.ARTDP
The model of the world that is being learned.
Model - Class in burlap.behavior.singleagent.learning.modellearning
This abstract class is used by model learning algorithms to learn the transition dynamics, reward function, and terminal function, of the world through experience.
Model() - Constructor for class burlap.behavior.singleagent.learning.modellearning.Model
 
model - Variable in class burlap.behavior.singleagent.learning.modellearning.ModeledDomainGenerator.ModeledAction
The model of the transition dynamics that specify the outcomes of this action
model - Variable in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax
The model of the world that is being learned.
model - Variable in class burlap.behavior.singleagent.learning.modellearning.rmax.UnmodeledFavoredPolicy
 
modelChanged(State) - Method in interface burlap.behavior.singleagent.learning.modellearning.ModelLearningPlanner
Tells the planner that the model has changed and that it will need to replan accordingly
modelChanged(State) - Method in class burlap.behavior.singleagent.learning.modellearning.modelplanners.VIModelLearningPlanner
 
modelDomain - Variable in class burlap.behavior.singleagent.learning.modellearning.ModeledDomainGenerator
The domain object to be returned
modeledDomain - Variable in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax
The modeled domain object containing the modeled actions that a planner will use.
ModeledDomainGenerator - Class in burlap.behavior.singleagent.learning.modellearning
Use this class when an action model is being learned to generate a domain whose actions are governed by the learned model.
ModeledDomainGenerator(Domain, Model) - Constructor for class burlap.behavior.singleagent.learning.modellearning.ModeledDomainGenerator
Creates a new domain object that is a reflection of the input domain; actions are created using the ModeledDomainGenerator.ModeledAction class, and their execution and transition dynamics are defined by the given model that was learned by some Model class.
ModeledDomainGenerator.ModeledAction - Class in burlap.behavior.singleagent.learning.modellearning
A class for creating an action model for some source action.
ModeledDomainGenerator.ModeledAction(Domain, Action, Model) - Constructor for class burlap.behavior.singleagent.learning.modellearning.ModeledDomainGenerator.ModeledAction
Initializes.
modeledRewardFunction - Variable in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax
The modeled reward function that is being learned.
modeledRF - Variable in class burlap.behavior.singleagent.learning.modellearning.models.TabularModel
The modeled reward function.
modeledTerminalFunction - Variable in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax
The modeled terminal state function.
modeledTF - Variable in class burlap.behavior.singleagent.learning.modellearning.models.TabularModel
The modeled terminal function.
ModelLearningPlanner - Interface in burlap.behavior.singleagent.learning.modellearning
Interface for defining planning algorithms that operate on iteratively learned models.
modelPlannedPolicy() - Method in interface burlap.behavior.singleagent.learning.modellearning.ModelLearningPlanner
Returns a policy encoding the planner's results.
modelPlannedPolicy() - Method in class burlap.behavior.singleagent.learning.modellearning.modelplanners.VIModelLearningPlanner
 
modelPlanner - Variable in class burlap.behavior.singleagent.learning.modellearning.artdp.ARTDP
The planner used on the modeled world to update the value function
modelPlanner - Variable in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax
The model-adaptive planning algorithm to use
modelPolicy - Variable in class burlap.behavior.singleagent.learning.modellearning.modelplanners.VIModelLearningPlanner
The greedy policy that results from VI
mostRecentTrialEnabled() - Method in enum burlap.behavior.singleagent.auxiliary.performance.TrialMode
Returns true if the most recent trial plots will be plotted by this mode.
MountainCar - Class in burlap.domain.singleagent.mountaincar
A domain generator for the classic mountain car domain whose default dynamics follow those implemented by Singh and Sutton [1].
MountainCar() - Constructor for class burlap.domain.singleagent.mountaincar.MountainCar
 
MountainCar.ClassicMCTF - Class in burlap.domain.singleagent.mountaincar
A Terminal Function for the Mountain Car domain that terminates when the agent's position is >= the max position in the world.
MountainCar.ClassicMCTF() - Constructor for class burlap.domain.singleagent.mountaincar.MountainCar.ClassicMCTF
Sets terminal states to be those that are >= the maximum position in the world.
MountainCar.ClassicMCTF(double) - Constructor for class burlap.domain.singleagent.mountaincar.MountainCar.ClassicMCTF
Sets terminal states to be those >= the given threshold.
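
Example (a usage sketch combining the mountain car generator, the classic terminal function, and the MCRandomStateGenerator listed earlier in this index; generateDomain() and generateState() are the usual generator methods and are assumed here):

    MountainCar mcGen = new MountainCar();
    Domain domain = mcGen.generateDomain();
    TerminalFunction tf = new MountainCar.ClassicMCTF();     // terminates at the maximum x-position
    MCRandomStateGenerator rsg = new MCRandomStateGenerator(domain);
    State s = rsg.generateState();                           // random initial position and velocity (assumed method)
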
MountainCar.MCPhysicsParams - Class in burlap.domain.singleagent.mountaincar
 
MountainCar.MCPhysicsParams() - Constructor for class burlap.domain.singleagent.mountaincar.MountainCar.MCPhysicsParams
 
MountainCarVisualizer - Class in burlap.domain.singleagent.mountaincar
A class for creating a Visualizer for a MountainCar Domain.
MountainCarVisualizer() - Constructor for class burlap.domain.singleagent.mountaincar.MountainCarVisualizer
 
MountainCarVisualizer.AgentPainter - Class in burlap.domain.singleagent.mountaincar
Class for painting the agent in the mountain car domain.
MountainCarVisualizer.AgentPainter(MountainCar.MCPhysicsParams) - Constructor for class burlap.domain.singleagent.mountaincar.MountainCarVisualizer.AgentPainter
Initializes with the mountain car physics used
MountainCarVisualizer.HillPainter - Class in burlap.domain.singleagent.mountaincar
Class for drawing a black outline of the hill that the mountain car climbs.
MountainCarVisualizer.HillPainter(MountainCar.MCPhysicsParams) - Constructor for class burlap.domain.singleagent.mountaincar.MountainCarVisualizer.HillPainter
Initializes with the mountain car physics used
move(State, int, int) - Method in class burlap.domain.singleagent.frostbite.FrostbiteDomain
Attempts to move the agent into the given position, taking into account platforms and screen borders
move(State, int, int, int[][]) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
Attempts to move the agent into the given position, taking into account walls and blocks
move(State, int, MountainCar.MCPhysicsParams) - Static method in class burlap.domain.singleagent.mountaincar.MountainCar
Changes the agents position in the provided state using car engine acceleration in the specified direction.
moveCarriedBlockToNewAgentPosition(State, ObjectInstance, int, int, int, int) - Static method in class burlap.domain.singleagent.blockdude.BlockDude
Moves a carried block to a new position of the agent
moveClassicModel(State, double, CartPoleDomain.CPPhysicsParams) - Static method in class burlap.domain.singleagent.cartpole.CartPoleDomain
Simulates the physics for one time step given the input state s and the direction of force applied.
moveCorrectModel(State, double, CartPoleDomain.CPPhysicsParams) - Static method in class burlap.domain.singleagent.cartpole.CartPoleDomain
Simulates the physics for one time step given the input state s and the direction of force applied.
moveHorizontally(State, int, int) - Static method in class burlap.domain.singleagent.blockdude.BlockDude
Modifies state s to be the result of a horizontal movement.
movementDirectionFromIndex(int) - Method in class burlap.domain.singleagent.frostbite.FrostbiteDomain
Returns the change in x and y position for a given direction number.
movementDirectionFromIndex(int) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
Returns the change in x and y position for a given direction number.
movementForceMag - Variable in class burlap.domain.singleagent.cartpole.CartPoleDomain.CPPhysicsParams
The force magnitude that can be exerted in either direction on the cart
moveResult(int, int, int) - Method in class burlap.tutorials.bd.ExampleGridWorld.Movement
 
moveUp(State, int) - Static method in class burlap.domain.singleagent.blockdude.BlockDude
Modifies state s to be the result of a vertical movement that puts the agent onto the platform adjacent to its current location in the direction the agent is facing, provided that there is room for the agent (and any block it's holding) to step onto it.
MultiAgentDPPlanningAgent - Class in burlap.behavior.stochasticgames.agents.madp
An agent that uses a MADynamicProgramming planning algorithm to compute the value of each state and then follows a policy derived from a joint policy over that estimated value function.
MultiAgentDPPlanningAgent(SGDomain, MADynamicProgramming, PolicyFromJointPolicy) - Constructor for class burlap.behavior.stochasticgames.agents.madp.MultiAgentDPPlanningAgent
Initializes.
MultiAgentExperimenter - Class in burlap.behavior.stochasticgames.auxiliary.performance
This class is used to simplify the comparison of agent performance in a stochastic game world.
MultiAgentExperimenter(WorldGenerator, TerminalFunction, int, int, AgentFactoryAndType...) - Constructor for class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentExperimenter
Initializes.
MultiAgentPerformancePlotter - Class in burlap.behavior.stochasticgames.auxiliary.performance
This class is a world observer used for recording and plotting the performance of the agents in the world.
MultiAgentPerformancePlotter(TerminalFunction, int, int, int, int, TrialMode, PerformanceMetric...) - Constructor for class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
Initializes
MultiAgentPerformancePlotter.AgentDatasets - Class in burlap.behavior.stochasticgames.auxiliary.performance
A data structure for maintaining the plot series data for the current agent
MultiAgentPerformancePlotter.AgentDatasets(String) - Constructor for class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.AgentDatasets
Initializes the datastructures for an agent with the given name
MultiAgentPerformancePlotter.DatasetsAndTrials - Class in burlap.behavior.stochasticgames.auxiliary.performance
A class for storing the trial data and series datasets for a given agent.
MultiAgentPerformancePlotter.DatasetsAndTrials(String) - Constructor for class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.DatasetsAndTrials
Initializes for an agent with the given name.
MultiAgentPerformancePlotter.MutableBoolean - Class in burlap.behavior.stochasticgames.auxiliary.performance
A class for a mutable boolean
MultiAgentPerformancePlotter.MutableBoolean(boolean) - Constructor for class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.MutableBoolean
Initializes with the given Boolean value
MultiAgentPerformancePlotter.Trial - Class in burlap.behavior.stochasticgames.auxiliary.performance
A datastructure for maintaining all the metric stats for a single trial.
MultiAgentPerformancePlotter.Trial() - Constructor for class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.Trial
 
MultiAgentQLearning - Class in burlap.behavior.stochasticgames.agents.maql
A class for performing multi-agent Q-learning in which different Q-value backup operators can be provided to enable the learning of different solution concepts.
MultiAgentQLearning(SGDomain, double, double, HashableStateFactory, double, SGBackupOperator, boolean) - Constructor for class burlap.behavior.stochasticgames.agents.maql.MultiAgentQLearning
Initializes this Q-learning agent.
MultiAgentQLearning(SGDomain, double, LearningRate, HashableStateFactory, ValueFunctionInitialization, SGBackupOperator, boolean) - Constructor for class burlap.behavior.stochasticgames.agents.maql.MultiAgentQLearning
Initializes this Q-learning agent.
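
Example (a sketch of the first constructor with the MaxQ backup operator; the commented meanings and order of the numeric arguments and the final boolean are assumptions based on the field names in this index, and sgDomain is assumed to exist):

    MultiAgentQLearning agent = new MultiAgentQLearning(
            sgDomain,
            0.95,                               // discount factor (assumed position)
            0.1,                                // learning rate (assumed position)
            new SimpleHashableStateFactory(),
            0.0,                                // Q-value initialization (assumed position)
            new MaxQ(),                         // backup operator
            true);                              // whether to query other agents' Q-sources (assumed meaning)
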
MultiAgentQSourceProvider - Interface in burlap.behavior.stochasticgames.madynamicprogramming
An interface for an object that can provide the Q-values stored for each agent in a problem.
MultiLayerRenderer - Class in burlap.oomdp.visualizer
A MultiLayerRenderer is a canvas that will sequentially render a set of render layers, one on top of the other, to the same 2D graphics context.
MultiLayerRenderer() - Constructor for class burlap.oomdp.visualizer.MultiLayerRenderer
 
MultipleIntentionsMLIRL - Class in burlap.behavior.singleagent.learnfromdemo.mlirl
An implementation of Multiple Intentions Maximum-likelihood Inverse Reinforcement Learning [1].
MultipleIntentionsMLIRL(MultipleIntentionsMLIRLRequest, int, double, double, int) - Constructor for class burlap.behavior.singleagent.learnfromdemo.mlirl.MultipleIntentionsMLIRL
Initializes.
MultipleIntentionsMLIRLRequest - Class in burlap.behavior.singleagent.learnfromdemo.mlirl
A problem request object for MultipleIntentionsMLIRL.
MultipleIntentionsMLIRLRequest(Domain, QGradientPlannerFactory, List<EpisodeAnalysis>, DifferentiableRF, int) - Constructor for class burlap.behavior.singleagent.learnfromdemo.mlirl.MultipleIntentionsMLIRLRequest
Initializes
MultipleIntentionsMLIRLRequest(Domain, List<EpisodeAnalysis>, DifferentiableRF, int, HashableStateFactory) - Constructor for class burlap.behavior.singleagent.learnfromdemo.mlirl.MultipleIntentionsMLIRLRequest
Initializes using a default QGradientPlannerFactory.DifferentiableVIFactory that is based on the provided HashableStateFactory object.
MultiStatePrePlanner - Class in burlap.behavior.singleagent.planning.deterministic
This is a helper class that is used to run a planner from multiple initial states to ensure that an adequate plan/policy exists for each of them.
MultiStatePrePlanner() - Constructor for class burlap.behavior.singleagent.planning.deterministic.MultiStatePrePlanner
 
MultiTargetRelationalValue - Class in burlap.oomdp.core.values
A multi-target relational value object subclass.
MultiTargetRelationalValue(Attribute) - Constructor for class burlap.oomdp.core.values.MultiTargetRelationalValue
Initializes the value to be associated with the given attribute
MultiTargetRelationalValue(MultiTargetRelationalValue) - Constructor for class burlap.oomdp.core.values.MultiTargetRelationalValue
Initializes this value as a copy from the source Value object v.
MultiTargetRelationalValue(Attribute, Collection<String>) - Constructor for class burlap.oomdp.core.values.MultiTargetRelationalValue
 
MutableObjectInstance - Class in burlap.oomdp.core.objects
Object Instances are the primary element for defining states.
MutableObjectInstance(ObjectClass, String) - Constructor for class burlap.oomdp.core.objects.MutableObjectInstance
Initializes an object instance for a given object class and name.
MutableObjectInstance(MutableObjectInstance) - Constructor for class burlap.oomdp.core.objects.MutableObjectInstance
Creates a new object instance that is a deep copy of the specified object instance's values.
MutableState - Class in burlap.oomdp.core.states
State objects are a collection of Object Instances.
MutableState() - Constructor for class burlap.oomdp.core.states.MutableState
 
MutableState(MutableState) - Constructor for class burlap.oomdp.core.states.MutableState
Initializes this state as a deep copy of the object instances in the provided source state s
myCoop - Variable in class burlap.behavior.stochasticgames.agents.twoplayer.repeatedsinglestage.GrimTrigger.GrimTriggerAgentFactory
The agent's cooperate action
myCoop - Variable in class burlap.behavior.stochasticgames.agents.twoplayer.repeatedsinglestage.GrimTrigger
This agent's cooperate action
myCoop - Variable in class burlap.behavior.stochasticgames.agents.twoplayer.repeatedsinglestage.TitForTat
This agent's cooperate action
myCoop - Variable in class burlap.behavior.stochasticgames.agents.twoplayer.repeatedsinglestage.TitForTat.TitForTatAgentFactory
This agent's cooperate action
myDefect - Variable in class burlap.behavior.stochasticgames.agents.twoplayer.repeatedsinglestage.GrimTrigger.GrimTriggerAgentFactory
The agent's defect action
myDefect - Variable in class burlap.behavior.stochasticgames.agents.twoplayer.repeatedsinglestage.GrimTrigger
This agent's defect action
myDefect - Variable in class burlap.behavior.stochasticgames.agents.twoplayer.repeatedsinglestage.TitForTat
This agent's defect action
myDefect - Variable in class burlap.behavior.stochasticgames.agents.twoplayer.repeatedsinglestage.TitForTat.TitForTatAgentFactory
This agent's defect action
myQSource - Variable in class burlap.behavior.stochasticgames.agents.maql.MultiAgentQLearning
This agent's Q-value source
MyTimer - Class in burlap.debugtools
A data structure for keeping track of elapsed and average time.
MyTimer() - Constructor for class burlap.debugtools.MyTimer
Creates a new timer.
MyTimer(boolean) - Constructor for class burlap.debugtools.MyTimer
Creates a new timer and starts it if start=true.
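
Example (a small timing sketch; start(), stop(), and getTime() are assumed method names for this debug tool):

    MyTimer timer = new MyTimer();
    timer.start();
    // ... code being timed ...
    timer.stop();
    System.out.println("elapsed seconds: " + timer.getTime()); // assumed to return elapsed seconds
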