
M

MacroAction - Class in burlap.behavior.singleagent.options
A macro action is an action that always executes a sequence of actions.
MacroAction(String, List<GroundedAction>) - Constructor for class burlap.behavior.singleagent.options.MacroAction
Instantiates a macro action with a given name and action sequence.
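As a usage illustration (not part of the original documentation): a minimal sketch of building a macro action with the constructor above, assuming north and east are GroundedAction objects already obtained from a grid world domain, and assuming the burlap.oomdp.singleagent import path for GroundedAction.
    import java.util.Arrays;
    import java.util.List;
    import burlap.behavior.singleagent.options.MacroAction;
    import burlap.oomdp.singleagent.GroundedAction;

    // north and east are assumed to be GroundedAction instances from the planning domain
    List<GroundedAction> sequence = Arrays.asList(north, north, east);
    MacroAction twoNorthOneEast = new MacroAction("twoNorthOneEast", sequence);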
MacroCellGridWorld - Class in burlap.domain.singleagent.gridworld.macro
A domain that extends the grid world domain by adding "Macro Cells" to it, which specify rectangular regions of the space.
MacroCellGridWorld() - Constructor for class burlap.domain.singleagent.gridworld.macro.MacroCellGridWorld
Initializes with a default world size of 32x32 and macro cell size of 16x16.
MacroCellGridWorld(int, int, int, int) - Constructor for class burlap.domain.singleagent.gridworld.macro.MacroCellGridWorld
Initializes with the given world width/height and macro-cell width/height
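A brief construction sketch (illustrative), assuming the four int arguments follow the order given above (world width, world height, macro-cell width, macro-cell height) and that the class provides the standard DomainGenerator generateDomain() method.
    import burlap.domain.singleagent.gridworld.macro.MacroCellGridWorld;
    import burlap.oomdp.core.Domain;

    MacroCellGridWorld mcgw = new MacroCellGridWorld(32, 32, 16, 16); // same sizes as the default constructor
    Domain domain = mcgw.generateDomain(); // generateDomain() assumed from the DomainGenerator interface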
MacroCellGridWorld.InMacroCellPF - Class in burlap.domain.singleagent.gridworld.macro
A propositional function for detecting if the agent is in a specific macro cell.
MacroCellGridWorld.InMacroCellPF(Domain, int, int, int, int) - Constructor for class burlap.domain.singleagent.gridworld.macro.MacroCellGridWorld.InMacroCellPF
 
MacroCellGridWorld.LinearInPFRewardFunction - Class in burlap.domain.singleagent.gridworld.macro
RewardFunction class that returns rewards based on a linear combination of propositional functions
MacroCellGridWorld.LinearInPFRewardFunction(PropositionalFunction[], Map<String, Double>) - Constructor for class burlap.domain.singleagent.gridworld.macro.MacroCellGridWorld.LinearInPFRewardFunction
Initializes
macroCellHorizontalCount - Variable in class burlap.domain.singleagent.gridworld.macro.MacroCellGridWorld
The number of columns of macro cells (macro cells counted along the x-axis)
macroCellVerticalCount - Variable in class burlap.domain.singleagent.gridworld.macro.MacroCellGridWorld
The number of rows of macro cells (macro cells counted along the y-axis)
MacroCellVisualizer - Class in burlap.domain.singleagent.gridworld.macro
A class for visualizing the reward weights assigned to a Macro-cell in a Macro-cell grid world.
MacroCellVisualizer() - Constructor for class burlap.domain.singleagent.gridworld.macro.MacroCellVisualizer
 
MacroCellVisualizer.MacroCellRewardWeightPainter - Class in burlap.domain.singleagent.gridworld.macro
Class for painting the macro cells a color between white and blue, where strong blue indicates strong reward weights.
MacroCellVisualizer.MacroCellRewardWeightPainter(int[][], MacroCellGridWorld.InMacroCellPF[], Map<String, Double>) - Constructor for class burlap.domain.singleagent.gridworld.macro.MacroCellVisualizer.MacroCellRewardWeightPainter
 
main(String[]) - Static method in class burlap.behavior.singleagent.interfaces.rlglue.common.RLGlueCMACSarsaLambdaFactory
 
main(String[]) - Static method in class burlap.behavior.singleagent.interfaces.rlglue.common.RLGLueQlearningFactory
 
main(String[]) - Static method in class burlap.behavior.singleagent.interfaces.rlglue.common.RLGlueSARSALambdaFactory
 
main(String[]) - Static method in class burlap.behavior.stochasticgame.solvers.CorrelatedEquilibriumSolver
 
main(String[]) - Static method in class burlap.datastructures.AlphanumericSorting
Testing the alphanumeric sorting
main(String[]) - Static method in class burlap.datastructures.StochasticTree
Demos how to use this class
main(String[]) - Static method in class burlap.debugtools.MyTimer
Demo of usage
main(String[]) - Static method in class burlap.debugtools.RandomFactory
Example usage.
main(String[]) - Static method in class burlap.domain.singleagent.blockdude.BlockDude
Runs an interactive visual explorer for level one of Block Dude.
main(String[]) - Static method in class burlap.domain.singleagent.blocksworld.BlocksWorld
Main method for exploring the domain.
main(String[]) - Static method in class burlap.domain.singleagent.cartpole.CartPoleDomain
Launches an interactive visualizer in which the 'a' key applies a force in the left direction and the 'd' key applies a force in the right direction.
main(String[]) - Static method in class burlap.domain.singleagent.cartpole.InvertedPendulum
 
main(String[]) - Static method in class burlap.domain.singleagent.frostbite.FrostbiteDomain
Main function to test the domain.
main(String[]) - Static method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain
 
main(String[]) - Static method in class burlap.domain.singleagent.gridworld.GridWorldDomain
Creates a visual explorer or terminal explorer.
main(String[]) - Static method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
This method will launch a visual explorer for the lunar lander domain.
main(String[]) - Static method in class burlap.domain.singleagent.mountaincar.MountainCar
Will launch a visual explorer for the mountain car domain that is controlled with the a-s-d keys.
main(String[]) - Static method in class burlap.domain.stochasticgames.gridgame.GridGame
Creates a visual explorer for a simple domain with two agents in it.
main(String[]) - Static method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
A main method showing example code that would be used to create an instance of Prisoner's dilemma and begin playing it with a SGTerminalExplorer.
main(String[]) - Static method in class burlap.testing.TestRunner
 
maintainClosed - Variable in class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.DFS
Whether to keep track of a closed list to prevent exploring already seen nodes.
makeEmptyMap() - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
Makes the map empty
map(State) - Method in class burlap.behavior.singleagent.options.Option
Returns the state that is mapped from the input state.
map - Variable in class burlap.behavior.stochasticgame.agents.naiveq.history.ParameterNaiveActionIdMap
The map from action names to their corresponding int value
map - Variable in class burlap.domain.singleagent.gridworld.GridWorldDomain
The wall map where the first index is the x position and the second index is the y position.
map - Variable in class burlap.domain.singleagent.gridworld.GridWorldDomain.MovementAction
The map of the world
map - Variable in class burlap.domain.singleagent.gridworld.GridWorldVisualizer.CellPainter
 
map - Variable in class burlap.domain.singleagent.gridworld.GridWorldVisualizer.LocationPainter
 
map - Variable in class burlap.domain.singleagent.gridworld.GridWorldVisualizer.MapPainter
 
mapAction(AbstractGroundedAction) - Method in class burlap.behavior.singleagent.learning.modellearning.DomainMappedPolicy
Maps an input GroundedAction to a GroundedAction using an action reference of the action in this object's DomainMappedPolicy.realWorldDomain object that has the same name as the action in the input GroundedAction.
mapState(State) - Method in interface burlap.behavior.singleagent.planning.StateMapping
 
mapToStateIndex - Variable in class burlap.behavior.singleagent.planning.OOMDPPlanner
A mapping to internal states that are stored.
MAQLFactory - Class in burlap.behavior.stochasticgame.agents.maql
This class provides a factory for MultiAgentQLearning agents.
MAQLFactory() - Constructor for class burlap.behavior.stochasticgame.agents.maql.MAQLFactory
Empty constructor.
MAQLFactory(SGDomain, double, double, StateHashFactory, double, SGBackupOperator, boolean) - Constructor for class burlap.behavior.stochasticgame.agents.maql.MAQLFactory
Initializes.
MAQLFactory(SGDomain, double, LearningRate, StateHashFactory, ValueFunctionInitialization, SGBackupOperator, boolean, PolicyFromJointPolicy) - Constructor for class burlap.behavior.stochasticgame.agents.maql.MAQLFactory
Initializes.
MAQLFactory.CoCoQLearningFactory - Class in burlap.behavior.stochasticgame.agents.maql
Factory for generating CoCo-Q agents.
MAQLFactory.CoCoQLearningFactory(SGDomain, double, LearningRate, StateHashFactory, ValueFunctionInitialization, boolean, double) - Constructor for class burlap.behavior.stochasticgame.agents.maql.MAQLFactory.CoCoQLearningFactory
 
MAQLFactory.MAMaxQLearningFactory - Class in burlap.behavior.stochasticgame.agents.maql
Factory for generating Max multiagent Q-learning agents.
MAQLFactory.MAMaxQLearningFactory(SGDomain, double, LearningRate, StateHashFactory, ValueFunctionInitialization, boolean, double) - Constructor for class burlap.behavior.stochasticgame.agents.maql.MAQLFactory.MAMaxQLearningFactory
 
MAQSourcePolicy - Class in burlap.behavior.stochasticgame.mavaluefunction
An abstract extension of the JointPolicy class that adds a required interface for setting a MultiAgentQSourceProvider.
MAQSourcePolicy() - Constructor for class burlap.behavior.stochasticgame.mavaluefunction.MAQSourcePolicy
 
marginalizeColPlayerStrategy(double[][]) - Static method in class burlap.behavior.stochasticgame.solvers.GeneralBimatrixSolverTools
Returns the column player's strategy by marginalizing it out from a joint action probability distribution represented as a matrix
marginalizeRowPlayerStrategy(double[][]) - Static method in class burlap.behavior.stochasticgame.solvers.GeneralBimatrixSolverTools
Returns the row player's strategy by marginalizing it out from a joint action probability distribution represented as a matrix
markAsTerminalPosition(int, int) - Method in class burlap.domain.singleagent.gridworld.GridWorldTerminalFunction
Marks a position as a terminal position for the agent.
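For illustration (a sketch; the no-argument constructor used here is an assumption, since this index does not list GridWorldTerminalFunction's constructors).
    import burlap.domain.singleagent.gridworld.GridWorldTerminalFunction;

    GridWorldTerminalFunction tf = new GridWorldTerminalFunction(); // no-arg constructor is an assumption
    tf.markAsTerminalPosition(10, 10); // the agent reaching cell (10, 10) now ends the episode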
MatchEntry - Class in burlap.oomdp.stochasticgames.tournament
This class indicates which player in a tournament is to play in a match and what AgentType role they will play.
MatchEntry(AgentType, int) - Constructor for class burlap.oomdp.stochasticgames.tournament.MatchEntry
Initializes the MatchEntry
matchingActionFeature(List<FVCMACFeatureDatabase.ActionFeatureID>, GroundedAction) - Method in class burlap.behavior.singleagent.vfa.cmac.FVCMACFeatureDatabase
Returns the FVCMACFeatureDatabase.ActionFeatureID with an equivalent GroundedAction in the given list or null if there is none.
matchingTransitions(GroundedAction) - Method in class burlap.behavior.singleagent.planning.ActionTransitions
Returns whether these action transitions are for the specified GroundedAction
MatchSelector - Interface in burlap.oomdp.stochasticgames.tournament
An interface for defining how matches in a tournament will be determined
MAValueFunctionPlanner - Class in burlap.behavior.stochasticgame.mavaluefunction
An abstract base class for value function based planning algorithms for sequential stochastic games that require the computation of Q-values for each agent for each joint action.
MAValueFunctionPlanner() - Constructor for class burlap.behavior.stochasticgame.mavaluefunction.MAValueFunctionPlanner
 
MAValueFunctionPlanner.BackupBasedQSource - Class in burlap.behavior.stochasticgame.mavaluefunction
A QSourceForSingleAgent implementation which stores a value function for an agent and produces joint action Q-values by marginalizing over the transition dynamics, the reward, and the discounted next state value.
MAValueFunctionPlanner.BackupBasedQSource(String) - Constructor for class burlap.behavior.stochasticgame.mavaluefunction.MAValueFunctionPlanner.BackupBasedQSource
Initializes a value function for the agent of the given name.
MAValueFunctionPlanner.JointActionTransitions - Class in burlap.behavior.stochasticgame.mavaluefunction
A class for holding all of the transition dynamic information for a given joint action in a given state.
MAValueFunctionPlanner.JointActionTransitions(State, JointAction) - Constructor for class burlap.behavior.stochasticgame.mavaluefunction.MAValueFunctionPlanner.JointActionTransitions
Generates the transition information for the given state and joint action
MAValueIteration - Class in burlap.behavior.stochasticgame.mavaluefunction.vfplanners
A class for performing multi-agent value iteration.
MAValueIteration(SGDomain, JointActionModel, JointReward, TerminalFunction, double, StateHashFactory, double, SGBackupOperator, double, int) - Constructor for class burlap.behavior.stochasticgame.mavaluefunction.vfplanners.MAValueIteration
Initializes.
MAValueIteration(SGDomain, JointActionModel, JointReward, TerminalFunction, double, StateHashFactory, ValueFunctionInitialization, SGBackupOperator, double, int) - Constructor for class burlap.behavior.stochasticgame.mavaluefunction.vfplanners.MAValueIteration
Initializes.
MAValueIteration(SGDomain, Map<String, AgentType>, JointActionModel, JointReward, TerminalFunction, double, StateHashFactory, double, SGBackupOperator, double, int) - Constructor for class burlap.behavior.stochasticgame.mavaluefunction.vfplanners.MAValueIteration
Initializes.
MAValueIteration(SGDomain, Map<String, AgentType>, JointActionModel, JointReward, TerminalFunction, double, StateHashFactory, ValueFunctionInitialization, SGBackupOperator, double, int) - Constructor for class burlap.behavior.stochasticgame.mavaluefunction.vfplanners.MAValueIteration
Initializes.
MAVFPlanAgentFactory - Class in burlap.behavior.stochasticgame.agents.mavf
An agent factory for the MultiAgentVFPlanningAgent agent.
MAVFPlanAgentFactory(SGDomain, MAValueFunctionPlanner, PolicyFromJointPolicy) - Constructor for class burlap.behavior.stochasticgame.agents.mavf.MAVFPlanAgentFactory
Initializes.
MAVFPlanAgentFactory(SGDomain, MAVFPlannerFactory, PolicyFromJointPolicy) - Constructor for class burlap.behavior.stochasticgame.agents.mavf.MAVFPlanAgentFactory
Initializes
MAVFPlannerFactory - Interface in burlap.behavior.stochasticgame.agents.mavf
An interface for generating multi-agent value function planners (MAValueFunctionPlanner objects).
MAVFPlannerFactory.ConstantMAVFPlannerFactory - Class in burlap.behavior.stochasticgame.agents.mavf
MAValueFunctionPlanner factory that always returns the same object instance, unless the reference is changed with a mutator.
MAVFPlannerFactory.ConstantMAVFPlannerFactory(MAValueFunctionPlanner) - Constructor for class burlap.behavior.stochasticgame.agents.mavf.MAVFPlannerFactory.ConstantMAVFPlannerFactory
Initializes with a given planner reference.
MAVFPlannerFactory.MAVIPlannerFactory - Class in burlap.behavior.stochasticgame.agents.mavf
Factory for generating multi-agent value iteration planners (MAValueIteration).
MAVFPlannerFactory.MAVIPlannerFactory(SGDomain, JointActionModel, JointReward, TerminalFunction, double, StateHashFactory, double, SGBackupOperator, double, int) - Constructor for class burlap.behavior.stochasticgame.agents.mavf.MAVFPlannerFactory.MAVIPlannerFactory
Initializes.
MAVFPlannerFactory.MAVIPlannerFactory(SGDomain, JointActionModel, JointReward, TerminalFunction, double, StateHashFactory, ValueFunctionInitialization, SGBackupOperator, double, int) - Constructor for class burlap.behavior.stochasticgame.agents.mavf.MAVFPlannerFactory.MAVIPlannerFactory
Initializes.
MAVFPlannerFactory.MAVIPlannerFactory(SGDomain, Map<String, AgentType>, JointActionModel, JointReward, TerminalFunction, double, StateHashFactory, ValueFunctionInitialization, SGBackupOperator, double, int) - Constructor for class burlap.behavior.stochasticgame.agents.mavf.MAVFPlannerFactory.MAVIPlannerFactory
Initializes.
maxActions - Variable in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain
The maximum number of actions available from any given state node.
maxAngleSpeed - Variable in class burlap.domain.singleagent.cartpole.CartPoleDomain.CPPhysicsParams
The maximum speed of the change in angle.
maxAngleSpeed - Variable in class burlap.domain.singleagent.cartpole.InvertedPendulum.IPPhysicsParams
The maximum speed (magnitude) of the change in angle.
maxBackups - Variable in class burlap.behavior.singleagent.planning.stochastic.valueiteration.PrioritizedSweeping
The maximum number of Bellman backups permitted
maxBetaScaled(double[], double) - Static method in class burlap.behavior.singleagent.learnbydemo.mlirl.support.BoltzmannPolicyGradient
Given an array of Q-values, returns the maximum Q-value multiplied by the parameter beta.
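For example, given Q-values {1.0, 2.5, 2.0} and beta = 4.0, the method as described would return 2.5 * 4.0 = 10.0; a usage sketch based only on the signature above.
    import burlap.behavior.singleagent.learnbydemo.mlirl.support.BoltzmannPolicyGradient;

    double[] qs = {1.0, 2.5, 2.0};
    double scaledMax = BoltzmannPolicyGradient.maxBetaScaled(qs, 4.0); // expected to be 10.0 per the description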
maxCartSpeed - Variable in class burlap.domain.singleagent.cartpole.CartPoleDomain.CPPhysicsParams
The maximum speed of the cart.
maxChange - Variable in class burlap.behavior.singleagent.learning.lspi.LSPI
The maximum change in weights permitted to terminate LSPI.
maxDelta - Variable in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.DifferentiableVI
When the maximum change in the value function is smaller than this value, VI will terminate.
maxDelta - Variable in class burlap.behavior.singleagent.learnbydemo.mlirl.support.QGradientPlannerFactory.DifferentiableVIFactory
The value function change threshold to stop VI.
maxDelta - Variable in class burlap.behavior.singleagent.learning.modellearning.modelplanners.VIModelPlanner
The maximum VI delta
maxDelta - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.RTDP
When the maximum change in the value function from a rollout is smaller than this value, VI will terminate.
maxDelta - Variable in class burlap.behavior.singleagent.planning.stochastic.valueiteration.ValueIteration
When the maximum change in the value function is smaller than this value, VI will terminate.
maxDelta - Variable in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
The maximum change in the value function that will cause planning to terminate.
maxDelta - Variable in class burlap.behavior.stochasticgame.agents.mavf.MAVFPlannerFactory.MAVIPlannerFactory
The threshold that will cause VI to terminate when the max change in Q-value is less than it
maxDelta - Variable in class burlap.behavior.stochasticgame.mavaluefunction.vfplanners.MAValueIteration
The threshold that will cause VI to terminate when the max change in Q-value is less than it
maxDepth - Variable in class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.DFS
The max depth of the search tree that will be explored.
maxDepth - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
The maximum depth/length of a rollout before it is terminated and Bellman updates are performed.
maxDepth - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.RTDP
The maximum depth/length of a rollout before it is terminated and Bellman updates are performed.
maxDiff - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
The max permitted difference between the lower bound and upper bound for planning termination.
maxDim - Variable in class burlap.domain.stochasticgames.gridgame.GridGame
The width and height of the world.
maxEpisodeSize - Variable in class burlap.behavior.singleagent.learning.actorcritic.ActorCritic
The maximum number of steps of an episode before the agent will manually terminate it. This defaults to Integer.MAX_VALUE.
maxEpisodeSize - Variable in class burlap.behavior.singleagent.learning.actorcritic.critics.TimeIndexedTDLambda
The maximum number of steps possible in an episode.
maxEpisodeSize - Variable in class burlap.behavior.singleagent.learning.LearningAgent.LearningAgentBookKeeping
 
maxEpisodeSize - Variable in class burlap.behavior.singleagent.learning.tdmethods.QLearning
The maximum number of steps that will be taken in an episode before the agent terminates a learning episode
maxEpisodeSize - Variable in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
The maximum number of steps that will be taken in an episode before the agent terminates a learning episode
maxEvalDelta - Variable in class burlap.behavior.singleagent.planning.stochastic.policyiteration.PolicyIteration
When the maximum change in the value function is smaller than this value, policy evaluation will terminate.
maxGT - Variable in class burlap.domain.stochasticgames.gridgame.GridGame
The number of goal types
maxHeap - Variable in class burlap.datastructures.HashIndexedHeap
If true, this is ordered according to a max heap; if false ordered according to a min heap.
maxHorizon - Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
 
maxIterations - Variable in class burlap.behavior.singleagent.learnbydemo.apprenticeship.ApprenticeshipLearningRequest
The maximum number of iterations of apprenticeship learning
maxIterations - Variable in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.DifferentiableVI
When the number of VI iterations exceeds this value, VI will terminate.
maxIterations - Variable in class burlap.behavior.singleagent.learnbydemo.mlirl.support.QGradientPlannerFactory.DifferentiableVIFactory
The maximum allowed number of VI iterations.
maxIterations - Variable in class burlap.behavior.singleagent.learning.modellearning.modelplanners.VIModelPlanner
The maximum number of VI iterations
maxIterations - Variable in class burlap.behavior.singleagent.planning.stochastic.policyiteration.PolicyIteration
When the number of policy evaluation iterations exceeds this value, policy evaluation will terminate.
maxIterations - Variable in class burlap.behavior.singleagent.planning.stochastic.valueiteration.ValueIteration
When the number of VI iterations exceeds this value, VI will terminate.
maxIterations - Variable in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
The maximum number of iterations to run.
maxIterations - Variable in class burlap.behavior.stochasticgame.agents.mavf.MAVFPlannerFactory.MAVIPlannerFactory
The maximum allowable number of iterations until VI termination
maxIterations - Variable in class burlap.behavior.stochasticgame.mavaluefunction.vfplanners.MAValueIteration
The maximum allowable number of iterations until VI termination
maxLearningSteps - Variable in class burlap.behavior.singleagent.learning.lspi.LSPI
The maximum number of learning steps in an episode when the LSPI.runLearningEpisodeFrom(State) method is called.
maxLikelihoodChange - Variable in class burlap.behavior.singleagent.learnbydemo.mlirl.MLIRL
The likelihood change threshold to stop gradient ascent.
MaxMax - Class in burlap.behavior.stochasticgame.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers
A class for finding a strategy that maximizes the player's payoff under the assumption that their "opponent" is friendly and will try to do the same.
MaxMax() - Constructor for class burlap.behavior.stochasticgame.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers.MaxMax
 
maxNonZeroCoefficents - Variable in class burlap.behavior.singleagent.vfa.fourier.FourierBasis
The maximum number of non-zero coefficient entries permitted in a coefficient vector
maxNumPlanningIterations - Variable in class burlap.behavior.singleagent.learning.lspi.LSPI
The maximum number of policy iterations permitted when LSPI is run from the LSPI.planFromState(State) or LSPI.runLearningEpisodeFrom(State) methods.
maxNumSteps - Variable in class burlap.behavior.singleagent.learning.modellearning.artdp.ARTDP
The maximum number of learning steps per episode before the agent gives up
maxNumSteps - Variable in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax
The maximum number of learning steps per episode before the agent gives up
maxPIDelta - Variable in class burlap.behavior.singleagent.planning.stochastic.policyiteration.PolicyIteration
When the maximum change between policy evaluations is smaller than this value, planning will terminate.
maxPlayers - Variable in class burlap.behavior.stochasticgame.agents.naiveq.history.SGQWActionHistoryFactory
The maximum number of players that can be in the game
maxPlyrs - Variable in class burlap.domain.stochasticgames.gridgame.GridGame
The maximum number of players that can be in the game
maxPolicyIterations - Variable in class burlap.behavior.singleagent.planning.stochastic.policyiteration.PolicyIteration
When the number of policy iterations passes this value, planning will terminate.
maxQ(State) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
Returns the maximum Q-value entry for the given state with ties broken randomly.
MaxQ - Class in burlap.behavior.stochasticgame.mavaluefunction.backupOperators
A classic MDP-style max backup operator in which an agent backs up its max Q-value in the state.
MaxQ() - Constructor for class burlap.behavior.stochasticgame.mavaluefunction.backupOperators.MaxQ
 
maxQChangeForPlanningTermination - Variable in class burlap.behavior.singleagent.learning.tdmethods.QLearning
The maximum allowable change in the Q-function during an episode before the planning method terminates.
maxQChangeInLastEpisode - Variable in class burlap.behavior.singleagent.learning.tdmethods.QLearning
The maximum Q-value change that occurred in the last learning episode.
maxRollouts - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
The max number of rollouts to perform when planning is started, unless the value function margin is small enough.
maxRollOutsFromRoot - Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
 
maxSelfTransitionProb - Variable in class burlap.behavior.singleagent.planning.stochastic.valueiteration.PrioritizedSweeping.BPTRNode
 
maxStages - Variable in class burlap.oomdp.stochasticgames.tournament.Tournament
 
maxSteps - Variable in class burlap.behavior.singleagent.learnbydemo.mlirl.MLIRL
The maximum number of steps of gradient ascent.
maxTimeStep() - Method in class burlap.behavior.singleagent.EpisodeAnalysis
Returns the maximum time step index in this episode, which is EpisodeAnalysis.numTimeSteps()-1.
maxTimeStep() - Method in class burlap.behavior.stochasticgame.GameAnalysis
Returns the max time step index in this game which equals GameAnalysis.numTimeSteps()-1.
maxTNormed() - Method in class burlap.datastructures.BoltzmannDistribution
Returns the maximum temperature normalized preference
maxValue() - Method in interface burlap.behavior.stochasticgame.agents.naiveq.history.ActionIdMap
The maximum number of int values for actions
maxValue() - Method in class burlap.behavior.stochasticgame.agents.naiveq.history.ParameterNaiveActionIdMap
 
maxWeightChangeForPlanningTermination - Variable in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
The maximum allowable change in the VFA weights during an episode before the planning method terminates.
maxWeightChangeInLastEpisode - Variable in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
The maximum VFA weight change that occurred in the last learning episode.
maxWT - Variable in class burlap.domain.stochasticgames.gridgame.GridGame
The number of wall types
maxx - Variable in class burlap.domain.singleagent.blockdude.BlockDude
Domain parameter specifying the maximum x dimension value of the world
maxx - Variable in class burlap.domain.singleagent.blockdude.BlockDude.MoveAction
 
maxx - Variable in class burlap.domain.singleagent.blockdude.BlockDude.MoveUpAction
 
maxx - Variable in class burlap.domain.singleagent.blockdude.BlockDude.PickupAction
 
maxx - Variable in class burlap.domain.singleagent.blockdude.BlockDude.PutdownAction
 
maxx - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.AgentPainter
 
maxx - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.BlockPainter
 
maxx - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.BricksPainter
 
maxx - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.ExitPainter
 
maxy - Variable in class burlap.domain.singleagent.blockdude.BlockDude
Domain parameter specifying the maximum y dimension value of the world
maxy - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.AgentPainter
 
maxy - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.BlockPainter
 
maxy - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.BricksPainter
 
maxy - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.ExitPainter
 
MCRandomStateGenerator - Class in burlap.domain.singleagent.mountaincar
Generates MountainCar states with the x-position between some specified range and the velocity between some specified range.
MCRandomStateGenerator(Domain) - Constructor for class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator
Initializes for the MountainCar Domain object for which states will be generated.
MCRandomStateGenerator(Domain, double, double, double, double) - Constructor for class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator
Initializes for the MountainCar Domain object for which states will be generated, with the given x-position and velocity ranges.
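A usage sketch (illustrative; the ordering of the four double arguments as x-position range followed by velocity range is an assumption, as are the generateDomain() and generateState() calls taken from the standard DomainGenerator and StateGenerator interfaces).
    import burlap.domain.singleagent.mountaincar.MCRandomStateGenerator;
    import burlap.domain.singleagent.mountaincar.MountainCar;
    import burlap.oomdp.core.Domain;
    import burlap.oomdp.core.State;

    MountainCar mcGen = new MountainCar();
    Domain domain = mcGen.generateDomain();                 // assumed DomainGenerator method
    MCRandomStateGenerator rsg =
            new MCRandomStateGenerator(domain, -1.2, 0.5, -0.07, 0.07); // illustrative ranges
    State s = rsg.generateState();                          // assumed StateGenerator method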
medianEpisodeReward - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.Trial
Stores the median reward by episode
medianEpisodeReward - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter.Trial
Stores the median reward by episode
medianEpisodeRewardSeries - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.AgentDatasets
Most recent trial's median reward per episode series data
medianEpisodeRewardSeries - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter.AgentDatasets
Most recent trial's median reward per episode series data
memoryQueue - Variable in class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.LimitedMemoryDFS
A queue for storing the most recently expanded nodes.
memorySize - Variable in class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.LimitedMemoryDFS
The size of the memory; that is, the number of recently expanded search nodes the planner will remember.
memoryStateDepth - Variable in class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.LimitedMemoryDFS
Stores the depth at which each state in the memory was explored.
merAvgSeries - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.AgentDatasets
All trials' average median reward per episode series data
merAvgSeries - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter.AgentDatasets
All trials' average median reward per episode series data
metric - Variable in class burlap.behavior.singleagent.vfa.rbf.FVRBF
The distance metric to compare query input states to the centeredState
metric - Variable in class burlap.behavior.singleagent.vfa.rbf.RBF
The distance metric used to compare input states to this unit's center state.
metricsSet - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
A set specifying the performance metrics that will be plotted
metricsSet - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter
A set specifying the performance metrics that will be plotted
minEligibityForUpdate - Variable in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
The minimum eligibility value of a trace that will cause it to be updated
minimumLR - Variable in class burlap.behavior.learningrate.ExponentialDecayLR
The minimum learning rate
minimumLR - Variable in class burlap.behavior.learningrate.SoftTimeInverseDecayLR
The minimum learning rate
MinMax - Class in burlap.behavior.stochasticgame.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers
Finds the MinMax equilibrium using linear programming and returns the appropriate strategy.
MinMax() - Constructor for class burlap.behavior.stochasticgame.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers.MinMax
 
MinMaxQ - Class in burlap.behavior.stochasticgame.mavaluefunction.backupOperators
A minmax operator.
MinMaxQ() - Constructor for class burlap.behavior.stochasticgame.mavaluefunction.backupOperators.MinMaxQ
 
MinMaxSolver - Class in burlap.behavior.stochasticgame.solvers
 
MinMaxSolver() - Constructor for class burlap.behavior.stochasticgame.solvers.MinMaxSolver
 
minNewStepsForLearningPI - Variable in class burlap.behavior.singleagent.learning.lspi.LSPI
The minimum number of new observations received from learning episodes before LSPI will be run again.
minNumRolloutsWithSmallValueChange - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.RTDP
RTDP will be declared "converged" if there are this many consecutive policy rollouts in which the value function change is smaller than the maxDelta value.
minStepAndEpisodes(List<PerformancePlotter.Trial>) - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
Returns the minimum steps and episodes across all trials
minStepAndEpisodes(List<MultiAgentPerformancePlotter.Trial>) - Method in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter
Returns the minimum steps and episodes across all trials
minx - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.AgentPainter
 
minx - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.BlockPainter
 
minx - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.BricksPainter
 
minx - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.ExitPainter
 
miny - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.AgentPainter
 
miny - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.BlockPainter
 
miny - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.BricksPainter
 
miny - Variable in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.ExitPainter
 
MLIRL - Class in burlap.behavior.singleagent.learnbydemo.mlirl
An implementation of Maximum-likelihood Inverse Reinforcement Learning [1].
MLIRL(MLIRLRequest, double, double, int) - Constructor for class burlap.behavior.singleagent.learnbydemo.mlirl.MLIRL
Initializes.
mlirlInstance - Variable in class burlap.behavior.singleagent.learnbydemo.mlirl.MultipleIntentionsMLIRL
The MLIRL instance used to perform the maximization step for each cluster's reward function parameter values.
MLIRLRequest - Class in burlap.behavior.singleagent.learnbydemo.mlirl
A request object for Maximum-Likelihood Inverse Reinforcement Learning (MLIRL).
MLIRLRequest(Domain, OOMDPPlanner, List<EpisodeAnalysis>, DifferentiableRF) - Constructor for class burlap.behavior.singleagent.learnbydemo.mlirl.MLIRLRequest
Initializes the request without any expert trajectory weights (which will be assumed to have a value 1).
MLIRLRequest(Domain, List<EpisodeAnalysis>, DifferentiableRF, StateHashFactory) - Constructor for class burlap.behavior.singleagent.learnbydemo.mlirl.MLIRLRequest
Initializes without any expert trajectory weights (which will be assumed to have a value 1) and requests a default QGradientPlanner instance to be created using the StateHashFactory provided.
mode(int) - Static method in class burlap.debugtools.DPrint
Returns the print mode for a given debug code
model - Variable in class burlap.behavior.singleagent.learning.modellearning.artdp.ARTDP
The model of the world that is being learned.
Model - Class in burlap.behavior.singleagent.learning.modellearning
This abstract class is used by model learning algorithms to learn the transition dynamics, reward function, and terminal function, of the world through experience.
Model() - Constructor for class burlap.behavior.singleagent.learning.modellearning.Model
 
model - Variable in class burlap.behavior.singleagent.learning.modellearning.ModeledDomainGenerator.ModeledAction
The model of the transition dynamics that specify the outcomes of this action
model - Variable in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax
The model of the world that is being learned.
model - Variable in class burlap.behavior.singleagent.learning.modellearning.rmax.UnmodeledFavoredPolicy
 
modelChanged(State) - Method in interface burlap.behavior.singleagent.learning.modellearning.ModelPlanner
Tells the planner that the model has changed and that it will need to replan accordingly
modelChanged(State) - Method in class burlap.behavior.singleagent.learning.modellearning.modelplanners.VIModelPlanner
 
modelDomain - Variable in class burlap.behavior.singleagent.learning.modellearning.ModeledDomainGenerator
The domain object to be returned
modeledDomain - Variable in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax
The modeled domain object containing the modeled actions that a planner will use.
ModeledDomainGenerator - Class in burlap.behavior.singleagent.learning.modellearning
Use this class to create a domain whose actions are defined by a learned action model (Model).
ModeledDomainGenerator(Domain, Model, boolean) - Constructor for class burlap.behavior.singleagent.learning.modellearning.ModeledDomainGenerator
Creates a new domain object that is a reflection of the input domain; actions are created using the ModeledDomainGenerator.ModeledAction class, and their execution and transition dynamics should be defined by the given model that was learned by some Model class.
ModeledDomainGenerator.ModeledAction - Class in burlap.behavior.singleagent.learning.modellearning
A class for creating an action model for some source action.
ModeledDomainGenerator.ModeledAction(Domain, Action, Model, boolean) - Constructor for class burlap.behavior.singleagent.learning.modellearning.ModeledDomainGenerator.ModeledAction
Initializes.
ModeledDomainGenerator.RMaxStateAction - Class in burlap.behavior.singleagent.learning.modellearning
An action that is only executable in the fictitious RMax state and which always transitions back to the fictitious RMax state.
ModeledDomainGenerator.RMaxStateAction() - Constructor for class burlap.behavior.singleagent.learning.modellearning.ModeledDomainGenerator.RMaxStateAction
Initializes for the owning ModeledDomainGenerator instance's ModeledDomain instance
modeledRewardFunction - Variable in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax
The modeled reward function that is being learned.
modeledRF - Variable in class burlap.behavior.singleagent.learning.modellearning.models.TabularModel
The modeled reward function.
modeledTerminalFunction - Variable in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax
The modeled terminal state function.
modeledTF - Variable in class burlap.behavior.singleagent.learning.modellearning.models.TabularModel
The modeled terminal function.
modelPlannedPolicy() - Method in interface burlap.behavior.singleagent.learning.modellearning.ModelPlanner
Returns a policy encoding the planner's results.
modelPlannedPolicy() - Method in class burlap.behavior.singleagent.learning.modellearning.modelplanners.VIModelPlanner
 
modelPlanner - Variable in class burlap.behavior.singleagent.learning.modellearning.artdp.ARTDP
The planner used on the modeled world to update the value function
ModelPlanner - Interface in burlap.behavior.singleagent.learning.modellearning
Interface for defining planning algorithms that operate on iteratively learned models.
modelPlanner - Variable in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax
The model-adaptive planning algorithm to use
ModelPlanner.ModelPlannerGenerator - Interface in burlap.behavior.singleagent.learning.modellearning
 
modelPolicy - Variable in class burlap.behavior.singleagent.learning.modellearning.DomainMappedPolicy
The policy formed over the model action space.
modelPolicy - Variable in class burlap.behavior.singleagent.learning.modellearning.modelplanners.VIModelPlanner
The greedy policy that results from VI
mostRecentTrialEnabled() - Method in enum burlap.behavior.singleagent.auxiliary.performance.TrialMode
Returns true if the most recent trial plots will be plotted by this mode.
MountainCar - Class in burlap.domain.singleagent.mountaincar
A domain generator for the classic mountain car domain whose default dynamics follow those implemented by Singh and Sutton [1].
MountainCar() - Constructor for class burlap.domain.singleagent.mountaincar.MountainCar
 
MountainCar.ClassicMCTF - Class in burlap.domain.singleagent.mountaincar
A Terminal Function for the Mountain Car domain that terminates when the agent's position is >= the max position in the world.
MountainCar.ClassicMCTF() - Constructor for class burlap.domain.singleagent.mountaincar.MountainCar.ClassicMCTF
Sets terminal states to be those in which the agent's position is >= the maximum position in the world.
MountainCar.ClassicMCTF(double) - Constructor for class burlap.domain.singleagent.mountaincar.MountainCar.ClassicMCTF
Sets terminal states to be those in which the agent's position is >= the given threshold.
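For instance (a sketch; the dotted name above suggests ClassicMCTF is a static nested class of MountainCar, the burlap.oomdp.core import path for TerminalFunction is assumed, and the threshold value is illustrative).
    import burlap.domain.singleagent.mountaincar.MountainCar;
    import burlap.oomdp.core.TerminalFunction;

    TerminalFunction tf = new MountainCar.ClassicMCTF(0.5); // terminal once the car's x-position reaches 0.5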
MountainCar.MCPhysicsParams - Class in burlap.domain.singleagent.mountaincar
 
MountainCar.MCPhysicsParams() - Constructor for class burlap.domain.singleagent.mountaincar.MountainCar.MCPhysicsParams
 
MountainCarStateParser - Class in burlap.domain.singleagent.mountaincar
 
MountainCarStateParser(Domain) - Constructor for class burlap.domain.singleagent.mountaincar.MountainCarStateParser
Constructs a state parser for the given mountain car domain.
MountainCarVisualizer - Class in burlap.domain.singleagent.mountaincar
A class for creating a Visualizer for a MountainCar Domain.
MountainCarVisualizer() - Constructor for class burlap.domain.singleagent.mountaincar.MountainCarVisualizer
 
MountainCarVisualizer.AgentPainter - Class in burlap.domain.singleagent.mountaincar
Class for painting the agent in the mountain car domain.
MountainCarVisualizer.AgentPainter(MountainCar.MCPhysicsParams) - Constructor for class burlap.domain.singleagent.mountaincar.MountainCarVisualizer.AgentPainter
Initializes with the mountain car physics used
MountainCarVisualizer.HillPainter - Class in burlap.domain.singleagent.mountaincar
Class for drawing a black outline of the hill that the mountain car climbs.
MountainCarVisualizer.HillPainter(MountainCar.MCPhysicsParams) - Constructor for class burlap.domain.singleagent.mountaincar.MountainCarVisualizer.HillPainter
Initializes with the mountain car physics used
move(State, int, int) - Method in class burlap.domain.singleagent.frostbite.FrostbiteDomain
Attempts to move the agent into the given position, taking into account platforms and screen borders
move(State, int, int, int[][]) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
Attempts to move the agent into the given position, taking into account walls and blocks
move(State, int, MountainCar.MCPhysicsParams) - Static method in class burlap.domain.singleagent.mountaincar.MountainCar
Changes the agent's position in the provided state using car engine acceleration in the specified direction.
moveCarriedBlockToNewAgentPosition(State, ObjectInstance, int, int, int, int) - Static method in class burlap.domain.singleagent.blockdude.BlockDude
Moves a carried block to a new position of the agent
moveClassicModel(State, double, CartPoleDomain.CPPhysicsParams) - Static method in class burlap.domain.singleagent.cartpole.CartPoleDomain
Simulates the physics for one time step given the input state s and the direction of force applied.
moveCorrectModel(State, double, CartPoleDomain.CPPhysicsParams) - Static method in class burlap.domain.singleagent.cartpole.CartPoleDomain
Simulates the physics for one time step given the input state s and the direction of force applied.
moveHorizontally(State, int, int) - Static method in class burlap.domain.singleagent.blockdude.BlockDude
Modifies state s to be the result of a horizontal movement.
movementDirectionFromIndex(int) - Method in class burlap.domain.singleagent.frostbite.FrostbiteDomain
Returns the change in x and y position for a given direction number.
movementDirectionFromIndex(int) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
Returns the change in x and y position for a given direction number.
movementForceMag - Variable in class burlap.domain.singleagent.cartpole.CartPoleDomain.CPPhysicsParams
The force magnitude that can be exerted in either direction on the cart
moveUp(State, int) - Static method in class burlap.domain.singleagent.blockdude.BlockDude
Modifies state s to be the result of a vertical movement that moves the agent onto the platform adjacent to its current location in the direction the agent is facing, provided there is room for the agent (and any block it is holding) to step onto it.
MultiAgentExperimenter - Class in burlap.behavior.stochasticgame.auxiliary.performance
This class is used to simplify the comparison of agent performance in a stochastic game world.
MultiAgentExperimenter(WorldGenerator, TerminalFunction, int, int, AgentFactoryAndType...) - Constructor for class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentExperimenter
Initializes.
MultiAgentPerformancePlotter - Class in burlap.behavior.stochasticgame.auxiliary.performance
This class is a world observer used for recording and plotting the performance of the agents in the world.
MultiAgentPerformancePlotter(TerminalFunction, int, int, int, int, TrialMode, PerformanceMetric...) - Constructor for class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter
Initializes
MultiAgentPerformancePlotter.AgentDatasets - Class in burlap.behavior.stochasticgame.auxiliary.performance
A data structure for maintaining the plot series data for the current agent
MultiAgentPerformancePlotter.AgentDatasets(String) - Constructor for class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter.AgentDatasets
Initializes the datastructures for an agent with the given name
MultiAgentPerformancePlotter.DatasetsAndTrials - Class in burlap.behavior.stochasticgame.auxiliary.performance
A class for storing the trial data and series datasets for a given agent.
MultiAgentPerformancePlotter.DatasetsAndTrials(String) - Constructor for class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter.DatasetsAndTrials
Initializes for an agent with the given name.
MultiAgentPerformancePlotter.MutableBoolean - Class in burlap.behavior.stochasticgame.auxiliary.performance
A class for a mutable boolean
MultiAgentPerformancePlotter.MutableBoolean(boolean) - Constructor for class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter.MutableBoolean
Initializes with the given Boolean value
MultiAgentPerformancePlotter.Trial - Class in burlap.behavior.stochasticgame.auxiliary.performance
A datastructure for maintaining all the metric stats for a single trial.
MultiAgentPerformancePlotter.Trial() - Constructor for class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter.Trial
 
MultiAgentQLearning - Class in burlap.behavior.stochasticgame.agents.maql
A class for performing multi-agent Q-learning in which different Q-value backup operators can be provided to enable the learning of different solution concepts.
MultiAgentQLearning(SGDomain, double, double, StateHashFactory, double, SGBackupOperator, boolean) - Constructor for class burlap.behavior.stochasticgame.agents.maql.MultiAgentQLearning
Initializes this Q-learning agent.
MultiAgentQLearning(SGDomain, double, LearningRate, StateHashFactory, ValueFunctionInitialization, SGBackupOperator, boolean) - Constructor for class burlap.behavior.stochasticgame.agents.maql.MultiAgentQLearning
Initializes this Q-learning agent.
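A hedged construction sketch using the first constructor listed above; the interpretation of the numeric arguments (discount factor, learning rate, Q-value initialization) and of the boolean flag is an assumption not stated in this index, sgDomain stands for an SGDomain built elsewhere, the DiscreteStateHashFactory import path is assumed, and MaxQ (listed in this index) stands in for any SGBackupOperator.
    import burlap.behavior.statehashing.DiscreteStateHashFactory;
    import burlap.behavior.stochasticgame.agents.maql.MultiAgentQLearning;
    import burlap.behavior.stochasticgame.mavaluefunction.backupOperators.MaxQ;

    // sgDomain is assumed to be an SGDomain constructed elsewhere (e.g., by a GridGame generator)
    MultiAgentQLearning agent = new MultiAgentQLearning(
            sgDomain, 0.95, 0.1, new DiscreteStateHashFactory(), 0.0, new MaxQ(), true);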
MultiAgentQSourceProvider - Interface in burlap.behavior.stochasticgame.mavaluefunction
An interface for an object that can provide the Q-values stored for each agent in a problem.
MultiAgentVFPlanningAgent - Class in burlap.behavior.stochasticgame.agents.mavf
An agent that uses a multi-agent value function planning algorithm (an instance of MAValueFunctionPlanner) to compute the value of each state and then follows a policy derived from a joint policy that is derived from that estimated value function.
MultiAgentVFPlanningAgent(SGDomain, MAValueFunctionPlanner, PolicyFromJointPolicy) - Constructor for class burlap.behavior.stochasticgame.agents.mavf.MultiAgentVFPlanningAgent
Initializes.
MultiLayerRenderer - Class in burlap.oomdp.visualizer
A MultiLayerRenderer is a canvas that will sequentially render a set of render layers, one on top of the other, to the same 2D graphics context.
MultiLayerRenderer() - Constructor for class burlap.oomdp.visualizer.MultiLayerRenderer
 
MultipleIntentionsMLIRL - Class in burlap.behavior.singleagent.learnbydemo.mlirl
An implementation of Multiple Intentions Maximum-likelihood Inverse Reinforcement Learning [1].
MultipleIntentionsMLIRL(MultipleIntentionsMLIRLRequest, int, double, double, int) - Constructor for class burlap.behavior.singleagent.learnbydemo.mlirl.MultipleIntentionsMLIRL
Initializes.
MultipleIntentionsMLIRLRequest - Class in burlap.behavior.singleagent.learnbydemo.mlirl
A problem request object for MultipleIntentionsMLIRL.
MultipleIntentionsMLIRLRequest(Domain, QGradientPlannerFactory, List<EpisodeAnalysis>, DifferentiableRF, int) - Constructor for class burlap.behavior.singleagent.learnbydemo.mlirl.MultipleIntentionsMLIRLRequest
Initializes
MultipleIntentionsMLIRLRequest(Domain, List<EpisodeAnalysis>, DifferentiableRF, int, StateHashFactory) - Constructor for class burlap.behavior.singleagent.learnbydemo.mlirl.MultipleIntentionsMLIRLRequest
Initializes using a default QGradientPlannerFactory.DifferentiableVIFactory that is based on the provided StateHashFactory object.
MultiStatePrePlanner - Class in burlap.behavior.singleagent.planning.deterministic
This is a helper class that is used to run a planner from multiple initial states to ensure that an adequate plan/policy exists for each of them.
MultiStatePrePlanner() - Constructor for class burlap.behavior.singleagent.planning.deterministic.MultiStatePrePlanner
 
MultiTargetRelationalValue - Class in burlap.oomdp.core.values
A multi-target relational value object subclass.
MultiTargetRelationalValue(Attribute) - Constructor for class burlap.oomdp.core.values.MultiTargetRelationalValue
Initializes the value to be associated with the given attribute
MultiTargetRelationalValue(Value) - Constructor for class burlap.oomdp.core.values.MultiTargetRelationalValue
Initializes this value as a copy from the source Value object v.
myCoop - Variable in class burlap.behavior.stochasticgame.agents.twoplayer.repeatedsinglestage.GrimTrigger.GrimTriggerAgentFactory
The agent's cooperate action
myCoop - Variable in class burlap.behavior.stochasticgame.agents.twoplayer.repeatedsinglestage.GrimTrigger
This agent's cooperate action
myCoop - Variable in class burlap.behavior.stochasticgame.agents.twoplayer.repeatedsinglestage.TitForTat
This agent's cooperate action
myCoop - Variable in class burlap.behavior.stochasticgame.agents.twoplayer.repeatedsinglestage.TitForTat.TitForTatAgentFactory
This agent's cooperate action
myDefect - Variable in class burlap.behavior.stochasticgame.agents.twoplayer.repeatedsinglestage.GrimTrigger.GrimTriggerAgentFactory
The agent's defect action
myDefect - Variable in class burlap.behavior.stochasticgame.agents.twoplayer.repeatedsinglestage.GrimTrigger
This agent's defect action
myDefect - Variable in class burlap.behavior.stochasticgame.agents.twoplayer.repeatedsinglestage.TitForTat
This agent's defect action
myDefect - Variable in class burlap.behavior.stochasticgame.agents.twoplayer.repeatedsinglestage.TitForTat.TitForTatAgentFactory
This agent's defect action
myQSource - Variable in class burlap.behavior.stochasticgame.agents.maql.MultiAgentQLearning
This agent's Q-value source
MyTimer - Class in burlap.debugtools
A data structure for keeping track of elapsed and average time.
MyTimer() - Constructor for class burlap.debugtools.MyTimer
Creates a new timer.