- a - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.support.QGradientTuple
-
The action
- a - Variable in class burlap.behavior.singleagent.learning.actorcritic.CritiqueResult
-
The action taken in state s
- a - Variable in class burlap.behavior.singleagent.learning.lspi.SARSData.SARS
-
The action taken in the previous state
- a - Variable in class burlap.behavior.valuefunction.QValue
-
The action with which this Q-value is associated
- a - Variable in class burlap.mdp.singleagent.environment.EnvironmentOutcome
-
The action taken in the environment
- abstractionForAgents - Variable in class burlap.mdp.stochasticgames.tournament.common.ConstantWorldGenerator
-
- abstractionForAgents - Variable in class burlap.mdp.stochasticgames.world.World
-
- acceleration - Variable in class burlap.domain.singleagent.mountaincar.MountainCar.MCPhysicsParams
-
The amount of acceleration the car engine can use
- accumulate(List<Double>, double) - Static method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
Computes the sum of the last entry in list and the value v and adds it to the end of list.
- accumulate(List<Double>, double) - Static method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
-
Computes the sum of the last entry in list and the value v and adds it to the end of list.
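For illustration, a minimal Java sketch of the accumulation behavior described above (the list contents are hypothetical, and it is assumed the static helper is accessible from the calling code):

    import java.util.ArrayList;
    import java.util.List;
    import burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter;

    public class AccumulateSketch {
        public static void main(String[] args) {
            List<Double> runningSums = new ArrayList<Double>();
            runningSums.add(1.0);
            runningSums.add(3.0);
            // appends last entry (3.0) + v (2.5) = 5.5 to the end of the list
            PerformancePlotter.accumulate(runningSums, 2.5);
            System.out.println(runningSums); // [1.0, 3.0, 5.5]
        }
    }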
- actingAgent - Variable in class burlap.behavior.stochasticgames.PolicyFromJointPolicy
-
- action(State) - Method in class burlap.behavior.policy.BoltzmannQPolicy
-
- action(State) - Method in class burlap.behavior.policy.CachedPolicy
-
- action(State) - Method in class burlap.behavior.policy.EpsilonGreedy
-
- action(State) - Method in class burlap.behavior.policy.GreedyDeterministicQPolicy
-
- action(State) - Method in class burlap.behavior.policy.GreedyQPolicy
-
- action(State) - Method in interface burlap.behavior.policy.Policy
-
This method will return an action sampled by the policy for the given state.
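As a usage illustration for the Policy entries above, a minimal sketch that samples an action for a supplied state (the import path for State is an assumption; the policy and state are provided by the caller):

    import burlap.behavior.policy.Policy;
    import burlap.mdp.core.action.Action;
    import burlap.mdp.core.state.State;

    public class PolicySamplingSketch {
        // Samples an action from any Policy implementation for the given state.
        public static Action sampleAndPrint(Policy p, State s) {
            Action a = p.action(s); // action sampled by the policy for s
            System.out.println("Selected action: " + a.actionName());
            return a;
        }
    }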
- action(State) - Method in class burlap.behavior.policy.RandomPolicy
-
- action(int) - Method in class burlap.behavior.singleagent.Episode
-
Returns the action taken in the state at time step t.
- action(State) - Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearning.StationaryRandomDistributionPolicy
-
- action(State) - Method in class burlap.behavior.singleagent.learning.actorcritic.actor.BoltzmannActor
-
- action(State) - Method in class burlap.behavior.singleagent.learning.modellearning.rmax.UnmodeledFavoredPolicy
-
- action(State) - Method in class burlap.behavior.singleagent.planning.deterministic.DDPlannerPolicy
-
- action(State) - Method in class burlap.behavior.singleagent.planning.deterministic.SDPlannerPolicy
-
- action - Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTActionNode
-
The action this action node wraps
- action(State) - Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTTreeWalkPolicy
-
- action(State) - Method in class burlap.behavior.stochasticgames.agents.interfacing.singleagent.LearningAgentToSGAgentInterface
-
- action(State) - Method in class burlap.behavior.stochasticgames.agents.madp.MultiAgentDPPlanningAgent
-
- action(State) - Method in class burlap.behavior.stochasticgames.agents.maql.MultiAgentQLearning
-
- action(State) - Method in class burlap.behavior.stochasticgames.agents.naiveq.history.SGQWActionHistory
-
- action(State) - Method in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent
-
- action(State) - Method in class burlap.behavior.stochasticgames.agents.RandomSGAgent
-
- action(State) - Method in class burlap.behavior.stochasticgames.agents.SetStrategySGAgent
-
- action(State) - Method in class burlap.behavior.stochasticgames.agents.twoplayer.repeatedsinglestage.GrimTrigger
-
- action(State) - Method in class burlap.behavior.stochasticgames.agents.twoplayer.repeatedsinglestage.TitForTat
-
- action(State) - Method in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.EquilibriumPlayingSGAgent
-
- action(State) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.ECorrelatedQJointPolicy
-
- action(State) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EGreedyJointPolicy
-
- action(State) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EGreedyMaxWellfare
-
- action(State) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EMinMaxPolicy
-
- action(State) - Method in class burlap.behavior.stochasticgames.PolicyFromJointPolicy
-
- Action - Interface in burlap.mdp.core.action
-
An interface for action definitions.
- action - Variable in class burlap.mdp.core.action.UniversalActionType
-
The Action object that will be returned.
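A hedged sketch of defining an unparameterized action type with UniversalActionType; the single-String constructor and the assumption that associatedAction ignores its argument for unparameterized actions are not confirmed by this index:

    import burlap.mdp.core.action.Action;
    import burlap.mdp.core.action.ActionType;
    import burlap.mdp.core.action.UniversalActionType;

    public class UniversalActionTypeSketch {
        public static void main(String[] args) {
            // wraps a single unparameterized Action under the given name (assumed constructor)
            ActionType north = new UniversalActionType("north");
            Action a = north.associatedAction(""); // parameter string assumed to be ignored here
            System.out.println(a.actionName());    // prints "north"
        }
    }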
- action(State) - Method in interface burlap.mdp.stochasticgames.agent.SGAgent
-
This method is called by the world when it needs the agent to choose an action
- action(int) - Method in class burlap.mdp.stochasticgames.JointAction
-
Returns the action taken by the agent at the given index
- action(State) - Method in class burlap.shell.command.world.ManualAgentsCommands.ManualSGAgent
-
- ACTION_BACKWARDS - Static variable in class burlap.domain.singleagent.mountaincar.MountainCar
-
A constant for the name of the backwards action
- ACTION_COAST - Static variable in class burlap.domain.singleagent.mountaincar.MountainCar
-
A constant for the name of the coast action
- ACTION_DO_NOTHING - Static variable in class burlap.domain.singleagent.pomdp.tiger.TigerDomain
-
The do nothing action name
- ACTION_EAST - Static variable in class burlap.domain.singleagent.blockdude.BlockDude
-
Name for the east action
- ACTION_EAST - Static variable in class burlap.domain.singleagent.frostbite.FrostbiteDomain
-
Constant for the name of the east action
- ACTION_EAST - Static variable in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Constant for the name of the east action
- ACTION_EAST - Static variable in class burlap.domain.stochasticgames.gridgame.GridGame
-
A constant for the name of the east action.
- ACTION_FORWARD - Static variable in class burlap.domain.singleagent.mountaincar.MountainCar
-
A constant for the name of the forward action
- ACTION_IDLE - Static variable in class burlap.domain.singleagent.frostbite.FrostbiteDomain
-
Constant for the name of the idle action
- ACTION_IDLE - Static variable in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Constant for the name of the idle action, which causes the agent to do nothing but drift for a time step
- ACTION_LEFT - Static variable in class burlap.domain.singleagent.cartpole.CartPoleDomain
-
A constant for the name of the left action
- ACTION_LEFT - Static variable in class burlap.domain.singleagent.pomdp.tiger.TigerDomain
-
The open left door action name
- ACTION_LISTEN - Static variable in class burlap.domain.singleagent.pomdp.tiger.TigerDomain
-
The listen action name
- ACTION_NO_FORCE - Static variable in class burlap.domain.singleagent.cartpole.InvertedPendulum
-
A constant for the name of the no force action (which due to stochasticity may include a small force)
- ACTION_NOOP - Static variable in class burlap.domain.stochasticgames.gridgame.GridGame
-
A constant for the name of the no operation (do nothing) action.
- ACTION_NORTH - Static variable in class burlap.domain.singleagent.frostbite.FrostbiteDomain
-
Constant for the name of the north action
- ACTION_NORTH - Static variable in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Constant for the name of the north action
- ACTION_NORTH - Static variable in class burlap.domain.stochasticgames.gridgame.GridGame
-
A constant for the name of the north action.
- ACTION_PICKUP - Static variable in class burlap.domain.singleagent.blockdude.BlockDude
-
Name for the pickup action
- ACTION_PUT_DOWN - Static variable in class burlap.domain.singleagent.blockdude.BlockDude
-
Name for the put down action
- ACTION_RIGHT - Static variable in class burlap.domain.singleagent.cartpole.CartPoleDomain
-
A constant for the name of the right action
- ACTION_RIGHT - Static variable in class burlap.domain.singleagent.pomdp.tiger.TigerDomain
-
The open right door action name
- ACTION_SOUTH - Static variable in class burlap.domain.singleagent.frostbite.FrostbiteDomain
-
Constant for the name of the south action
- ACTION_SOUTH - Static variable in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Constant for the name of the south action
- ACTION_SOUTH - Static variable in class burlap.domain.stochasticgames.gridgame.GridGame
-
A constant for the name of the south action.
- ACTION_STACK - Static variable in class burlap.domain.singleagent.blocksworld.BlocksWorld
-
Constant for the stack action name
- ACTION_THRUST - Static variable in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Constant for the base name of thrust actions.
- ACTION_TURN_LEFT - Static variable in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Constant for the name of the turn/rotate left/counterclockwise action
- ACTION_TURN_RIGHT - Static variable in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Constant for the name of the turn/rotate right/clockwise action
- ACTION_UNSTACK - Static variable in class burlap.domain.singleagent.blocksworld.BlocksWorld
-
Constant for the unstack action name
- ACTION_UP - Static variable in class burlap.domain.singleagent.blockdude.BlockDude
-
Name for the up action
- ACTION_WEST - Static variable in class burlap.domain.singleagent.blockdude.BlockDude
-
Name for the west action
- ACTION_WEST - Static variable in class burlap.domain.singleagent.frostbite.FrostbiteDomain
-
Constant for the name of the west action
- ACTION_WEST - Static variable in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Constant for the name of the west action
- ACTION_WEST - Static variable in class burlap.domain.stochasticgames.gridgame.GridGame
-
A constant for the name of the west action.
- actionArgs(List<String>) - Method in class burlap.shell.command.env.ExecuteActionCommand
-
- actionArgs(List<String>) - Method in class burlap.shell.command.world.JointActionCommand
-
- actionArgs(List<String>) - Method in class burlap.shell.command.world.ManualAgentsCommands.SetAgentAction
-
- actionButton - Variable in class burlap.shell.visual.VisualExplorer
-
- actionFeature(Action, int) - Method in class burlap.behavior.functionapproximation.sparse.SparseCrossProductFeatures
-
- ActionFeatureID(Action, int) - Constructor for class burlap.behavior.functionapproximation.sparse.tilecoding.TileCodingFeatures.ActionFeatureID
-
- actionFeatureMultiplier - Variable in class burlap.behavior.functionapproximation.dense.fourier.FourierBasis
-
A map for returning a multiplier to the number of state features for each action.
- actionFeatureMultiplier - Variable in class burlap.behavior.functionapproximation.dense.rbf.RBFFeatures
-
A map for returning a multiplier to the number of RBF state features for each action.
- actionFeatures - Variable in class burlap.behavior.functionapproximation.sparse.SparseCrossProductFeatures
-
- actionField - Variable in class burlap.shell.visual.VisualExplorer
-
- actionForce - Variable in class burlap.domain.singleagent.cartpole.InvertedPendulum.IPPhysicsParams
-
The force (magnitude) applied by a left or right action.
- ActionGlyphPainter - Interface in burlap.behavior.singleagent.auxiliary.valuefunctionvis.common
-
An interface for painting glyphs that correspond to actions.
- actionId - Variable in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.MatrixAction
-
- actionInd(String) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain.GridWorldModel
-
- actionIndex(int, String) - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
-
Returns the action index of the action named actionName of player pn
- actionMap - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs.LinearStateActionDifferentiableRF
-
An ordering of grounded actions
- actionMap - Variable in class burlap.domain.singleagent.rlglue.RLGlueEnvironment
-
A mapping from action index identifiers (that RLGlue will use) to BURLAP actions and their parametrization specified as the index of objects in a state.
- actionMap - Variable in class burlap.mdp.singleagent.SADomain
-
- actionMap - Variable in class burlap.mdp.stochasticgames.SGDomain
-
A map from action type names to their corresponding ActionType
- actionName() - Method in class burlap.behavior.policy.support.AnnotatedAction
-
- actionName() - Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueDomain.RLGlueActionType.RLGLueAction
-
- actionName() - Method in class burlap.behavior.singleagent.options.MacroAction
-
- actionName() - Method in class burlap.behavior.singleagent.options.SubgoalOption
-
- actionName() - Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain.GraphActionType.GraphAction
-
- actionName() - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.ThrustType.ThrustAction
-
- actionName(int, int) - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
-
Returns the name of the action with index an for player pn
- actionName() - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.MatrixAction
-
- actionName() - Method in interface burlap.mdp.core.action.Action
-
Returns the action name for this grounded action.
- actionName() - Method in class burlap.mdp.core.action.SimpleAction
-
- actionName() - Method in class burlap.mdp.singleagent.oo.ObjectParameterizedActionType.SAObjectParameterizedAction
-
- actionName() - Method in class burlap.mdp.stochasticgames.JointAction
-
- ActionNameMap() - Constructor for class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.ActionNameMap
-
- actionNameMap - Variable in class burlap.shell.command.env.ExecuteActionCommand
-
- actionNameToGlyphPainter - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.PolicyGlyphPainter2D
-
The map from action names to glyphs that will be used to represent them.
- actionNameToIndex - Variable in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
-
The int index of an action for a given name, for each player
- actionNodeConstructor - Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
-
- actionNodes - Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTStateNode
-
The possible actions (nodes) that can be performed from this state.
- actionNoise - Variable in class burlap.domain.singleagent.cartpole.InvertedPendulum.IPPhysicsParams
-
The force (magnitude) noise in any action, including the no force action.
- actionOffset - Variable in class burlap.behavior.functionapproximation.dense.DenseCrossProductFeatures
-
A feature index offset for each action when using Q-value function approximation.
- actionOffset - Variable in class burlap.behavior.functionapproximation.dense.DenseLinearVFA
-
A feature index offset for each action when using Q-value function approximation.
- actionProb(State, Action) - Method in class burlap.behavior.policy.BoltzmannQPolicy
-
- actionProb(State, Action) - Method in class burlap.behavior.policy.CachedPolicy
-
- actionProb(State, Action) - Method in class burlap.behavior.policy.EpsilonGreedy
-
- actionProb(State, Action) - Method in class burlap.behavior.policy.GreedyDeterministicQPolicy
-
- actionProb(State, Action) - Method in class burlap.behavior.policy.GreedyQPolicy
-
- actionProb(State, Action) - Method in interface burlap.behavior.policy.Policy
-
Returns the probability/probability density that the given action will be taken in the given state.
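For illustration, a minimal sketch of the probability queries above, combining Policy.actionProb with the PolicyUtils helper indexed later in this section (the EnumerablePolicy and State import paths are assumptions; all arguments are supplied by the caller):

    import burlap.behavior.policy.EnumerablePolicy;
    import burlap.behavior.policy.Policy;
    import burlap.behavior.policy.PolicyUtils;
    import burlap.mdp.core.action.Action;
    import burlap.mdp.core.state.State;

    public class ActionProbSketch {
        // Probability (or density) that policy p selects a in s.
        public static double direct(Policy p, State s, Action a) {
            return p.actionProb(s, a);
        }
        // Same query, but by scanning the enumerated distribution of an EnumerablePolicy.
        public static double viaEnumeration(EnumerablePolicy p, State s, Action a) {
            return PolicyUtils.actionProbFromEnum(p, s, a);
        }
    }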
- actionProb(State, Action) - Method in class burlap.behavior.policy.RandomPolicy
-
- ActionProb - Class in burlap.behavior.policy.support
-
Class for storing an action and probability tuple.
- ActionProb() - Constructor for class burlap.behavior.policy.support.ActionProb
-
- ActionProb(Action, double) - Constructor for class burlap.behavior.policy.support.ActionProb
-
Initializes the action, probability tuple.
- actionProb(State, Action) - Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearning.StationaryRandomDistributionPolicy
-
- actionProb(State, Action) - Method in class burlap.behavior.singleagent.learning.actorcritic.actor.BoltzmannActor
-
- actionProb(State, Action) - Method in class burlap.behavior.singleagent.learning.modellearning.rmax.UnmodeledFavoredPolicy
-
- actionProb(State, Action) - Method in class burlap.behavior.singleagent.planning.deterministic.DDPlannerPolicy
-
- actionProb(State, Action) - Method in class burlap.behavior.singleagent.planning.deterministic.SDPlannerPolicy
-
- actionProb(State, Action) - Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTTreeWalkPolicy
-
- actionProb(State, Action) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.ECorrelatedQJointPolicy
-
- actionProb(State, Action) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EGreedyJointPolicy
-
- actionProb(State, Action) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EGreedyMaxWellfare
-
- actionProb(State, Action) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EMinMaxPolicy
-
- actionProb(State, Action) - Method in class burlap.behavior.stochasticgames.PolicyFromJointPolicy
-
- actionProbFromEnum(EnumerablePolicy, State, Action) - Static method in class burlap.behavior.policy.PolicyUtils
-
Returns the probability of the policy taking action a in state s by searching for the action in the returned policy distribution from the provided EnumerablePolicy.
- actionProbGivenDistribution(Action, List<ActionProb>) - Static method in class burlap.behavior.policy.PolicyUtils
-
Searches the input distribution for the occurrence of the input action and returns its probability.
- ActionReference() - Constructor for class burlap.behavior.stochasticgames.agents.interfacing.singleagent.LearningAgentToSGAgentInterface.ActionReference
-
- actionRenderDelay - Variable in class burlap.mdp.singleagent.common.VisualActionObserver
-
How long to wait in ms for a state to be rendered before returning control to the agent.
- actionRenderDelay - Variable in class burlap.mdp.stochasticgames.common.VisualWorldObserver
-
How long to wait in ms for a state to be rendered before returning control to the world.
- actions - Variable in class burlap.mdp.stochasticgames.agent.SGAgentType
-
- actions - Variable in class burlap.mdp.stochasticgames.JointAction
-
- actionSelection - Variable in class burlap.behavior.policy.CachedPolicy
-
The cached action selection probabilities
- actionSequence - Variable in class burlap.behavior.singleagent.Episode
-
The sequence of actions taken
- actionSequence - Variable in class burlap.behavior.singleagent.options.MacroAction
-
The list of actions that will be executed in order when this macro-action is called.
- actionSequenceSize() - Method in class burlap.behavior.singleagent.options.MacroAction
-
- actionSets - Variable in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
-
The ordered action set for each player
- actionString() - Method in class burlap.behavior.singleagent.Episode
-
Returns a string representing the actions taken in this episode.
- actionString(String) - Method in class burlap.behavior.singleagent.Episode
-
Returns a string representing the actions taken in this episode.
- actionsTypes - Variable in class burlap.behavior.singleagent.learning.modellearning.rmax.RMaxModel
-
- ActionType - Interface in burlap.mdp.core.action
-
- actionTypes - Variable in class burlap.behavior.policy.RandomPolicy
-
The actions from which selection is performed
- actionTypes - Variable in class burlap.behavior.singleagent.learning.actorcritic.actor.BoltzmannActor
-
The actions the agent can perform
- actionTypes - Variable in class burlap.behavior.singleagent.learning.lspi.SARSCollector
-
The actions used for collecting data.
- actionTypes - Variable in class burlap.behavior.singleagent.MDPSolver
-
The list of actions this solver can use.
- actionTypes - Variable in class burlap.mdp.singleagent.SADomain
-
- ActionUtils - Class in burlap.mdp.core.action
-
- ActionUtils() - Constructor for class burlap.mdp.core.action.ActionUtils
-
- activated - Variable in class burlap.domain.singleagent.frostbite.state.FrostbitePlatform
-
- activatedPlatformReward - Variable in class burlap.domain.singleagent.frostbite.FrostbiteRF
-
- Actor - Class in burlap.behavior.singleagent.learning.actorcritic
-
This class provides the interface necessary for the actor portion of an Actor-Critic learning algorithm.
- Actor() - Constructor for class burlap.behavior.singleagent.learning.actorcritic.Actor
-
- actor - Variable in class burlap.behavior.singleagent.learning.actorcritic.ActorCritic
-
The actor component to use.
- ActorCritic - Class in burlap.behavior.singleagent.learning.actorcritic
-
This is a general class structure for implementing Actor-critic learning.
- ActorCritic(SADomain, double, Actor, Critic) - Constructor for class burlap.behavior.singleagent.learning.actorcritic.ActorCritic
-
Initializes the learning algorithm.
- ActorCritic(SADomain, double, Actor, Critic, int) - Constructor for class burlap.behavior.singleagent.learning.actorcritic.ActorCritic
-
Initializes the learning algorithm.
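A hedged construction sketch for the four-argument ActorCritic constructor documented above; the double parameter is assumed to be the discount factor, and the domain, actor, and critic instances are supplied by the caller:

    import burlap.behavior.singleagent.learning.actorcritic.Actor;
    import burlap.behavior.singleagent.learning.actorcritic.ActorCritic;
    import burlap.behavior.singleagent.learning.actorcritic.Critic;
    import burlap.mdp.singleagent.SADomain;

    public class ActorCriticSketch {
        public static ActorCritic build(SADomain domain, Actor actor, Critic critic) {
            // 0.99 is an arbitrary discount factor (assumption about the double parameter)
            return new ActorCritic(domain, 0.99, actor, critic);
        }
    }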
- actUntilTerminal() - Method in class burlap.mdp.singleagent.pomdp.BeliefAgent
-
Causes the agent to act until the environment reaches a termination condition.
- actUntilTerminalOrMaxSteps(int) - Method in class burlap.mdp.singleagent.pomdp.BeliefAgent
-
Causes the agent to act until the environment reaches a termination condition or the given maximum number of steps has been taken.
- add(QValue, QGradientTuple) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling.QAndQGradient
-
- add(SARSData.SARS) - Method in class burlap.behavior.singleagent.learning.lspi.SARSData
-
- add(State, Action, double, State) - Method in class burlap.behavior.singleagent.learning.lspi.SARSData
-
- add(K, double) - Method in class burlap.datastructures.HashedAggregator
-
Adds a specified value to a key.
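An illustrative sketch of the per-key accumulation performed by HashedAggregator.add; the no-argument constructor is an assumption, and only the add method documented above is exercised:

    import burlap.datastructures.HashedAggregator;

    public class AggregatorSketch {
        public static void main(String[] args) {
            HashedAggregator<String> rewardTotals = new HashedAggregator<String>();
            rewardTotals.add("agent0", 1.5);
            rewardTotals.add("agent0", 2.0);  // agent0's accumulated value is now 3.5
            rewardTotals.add("agent1", -1.0); // agent1's accumulated value is -1.0
        }
    }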
- addAction(ActionType) - Method in class burlap.behavior.policy.RandomPolicy
-
Adds an action to consider in selection.
- addAction(Action) - Method in class burlap.behavior.singleagent.Episode
-
Adds an Action to the action sequence.
- addAction(Action) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs.LinearStateActionDifferentiableRF
-
Adds a possible grounded action.
- addAction(Action) - Method in class burlap.mdp.stochasticgames.JointAction
-
Adds a single Action object to this joint action.
- addActionType(ActionType) - Method in class burlap.behavior.singleagent.learning.actorcritic.ActorCritic
-
- addActionType(ActionType) - Method in interface burlap.behavior.singleagent.learning.actorcritic.Critic
-
This method allows the critic to critique actions that are not a part of the domain definition.
- addActionType(ActionType) - Method in class burlap.behavior.singleagent.MDPSolver
-
- addActionType(ActionType) - Method in interface burlap.behavior.singleagent.MDPSolverInterface
-
Adds an additional action to the solver that is not included in the domain definition.
- addActionType(ActionType) - Method in class burlap.mdp.singleagent.SADomain
-
- addActionType(ActionType) - Method in class burlap.mdp.stochasticgames.SGDomain
-
- addActionTypes(ActionType...) - Method in class burlap.mdp.singleagent.SADomain
-
- addAgent(AgentFactory) - Method in class burlap.mdp.stochasticgames.tournament.Tournament
-
Adds an agent to the tournament
- addBackTransition(PrioritizedSweeping.BPTRNode) - Method in class burlap.behavior.singleagent.planning.stochastic.valueiteration.PrioritizedSweeping.BPTRNode
-
Adds a backpointer transition
- addCommand(ShellCommand) - Method in class burlap.shell.BurlapShell
-
- addCommandAs(ShellCommand, String) - Method in class burlap.shell.BurlapShell
-
- addCorrelatedEquilibriumMainConstraints(LinearProgram, double[][], double[][], int, int) - Static method in class burlap.behavior.stochasticgames.solvers.CorrelatedEquilibriumSolver
-
Adds the common LP constraints for the correlated equilibrium problem: rationality constraints (no agent has a motivation to diverge from a joint policy selection), the probability of all joint action variables must sum to 1, and all joint action variables are lower bounded at 0.0.
- addFloor(int[][]) - Static method in class burlap.domain.singleagent.blockdude.BlockDudeLevelConstructor
-
- addFloorDiscretizingMultipleFor(Object, double) - Method in class burlap.statehashing.discretized.DiscConfig
-
Sets the multiple to use for discretization for the given key.
- addFloorDiscretizingMultipleFor(Object, double) - Method in class burlap.statehashing.discretized.DiscretizingHashableStateFactory
-
Sets the multiple to use for discretization for the given key.
- addFloorDiscretizingMultipleFor(Object, double) - Method in class burlap.statehashing.maskeddiscretized.DiscMaskedConfig
-
Sets the multiple to use for discretization for the given key.
- addFloorDiscretizingMultipleFor(Object, double) - Method in class burlap.statehashing.maskeddiscretized.DiscretizingMaskedHashableStateFactory
-
Sets the multiple to use for discretization for the given key.
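A hedged configuration sketch for the discretization entries above; the constructor taking a default multiple is an assumption, and the variable key "x" is hypothetical:

    import burlap.statehashing.discretized.DiscretizingHashableStateFactory;

    public class DiscretizationSketch {
        public static DiscretizingHashableStateFactory build() {
            // assumed constructor: a default discretization multiple applied to all keys
            DiscretizingHashableStateFactory hsf = new DiscretizingHashableStateFactory(0.5);
            // finer resolution for one (hypothetical) state variable key
            hsf.addFloorDiscretizingMultipleFor("x", 0.1);
            return hsf;
        }
    }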
- additiveReward(State, Action, State) - Method in class burlap.behavior.singleagent.shaping.potential.PotentialShapedRF
-
- additiveReward(State, Action, State) - Method in class burlap.behavior.singleagent.shaping.ShapedRewardFunction
-
Returns the reward value to add to the base objective reward function.
- addKeyAction(String, int, Action) - Method in class burlap.shell.visual.SGVisualExplorer
-
Specifies the action to set for a given key press.
- addKeyAction(String, int, String, String) - Method in class burlap.shell.visual.SGVisualExplorer
-
Adds a key action mapping.
- addKeyAction(String, Action) - Method in class burlap.shell.visual.VisualExplorer
-
Specifies which action to execute for a given key press
- addKeyAction(String, String, String) - Method in class burlap.shell.visual.VisualExplorer
-
Adds a key action mapping.
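For example, a minimal key-binding sketch using the (String, Action) overload documented above; the explorer instance is supplied by the caller, and SimpleAction's String constructor is an assumption:

    import burlap.mdp.core.action.SimpleAction;
    import burlap.shell.visual.VisualExplorer;

    public class KeyBindingSketch {
        public static void bindKeys(VisualExplorer exp) {
            // pressing "w" or "s" in the visualizer executes the corresponding action
            exp.addKeyAction("w", new SimpleAction("north"));
            exp.addKeyAction("s", new SimpleAction("south"));
        }
    }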
- addKeyShellCommand(String, String) - Method in class burlap.shell.visual.SGVisualExplorer
-
Causes a shell command to be executed when a key is pressed with the visualizer in focus.
- addKeyShellCommand(String, String) - Method in class burlap.shell.visual.VisualExplorer
-
Causes a shell command to be executed when a key is pressed with the visualizer in focus.
- addNextLandMark(double, Color) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.LandmarkColorBlendInterpolation
-
Adds the next landmark between which interpolation should occur.
- addNodeToIndexTree(UCTStateNode) - Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
-
- addNonDomainReferencedAction(ActionType) - Method in class burlap.behavior.singleagent.learning.actorcritic.Actor
-
This method allows the actor to utilize actions that are not a part of the domain definition.
- addNonDomainReferencedAction(ActionType) - Method in class burlap.behavior.singleagent.learning.actorcritic.actor.BoltzmannActor
-
- addObject(ObjectInstance) - Method in class burlap.domain.singleagent.blockdude.state.BlockDudeState
-
- addObject(ObjectInstance) - Method in class burlap.domain.singleagent.blocksworld.BlocksWorldState
-
- addObject(ObjectInstance) - Method in class burlap.domain.singleagent.frostbite.state.FrostbiteState
-
- addObject(ObjectInstance) - Method in class burlap.domain.singleagent.gridworld.state.GridWorldState
-
- addObject(ObjectInstance) - Method in class burlap.domain.singleagent.lunarlander.state.LLState
-
- addObject(ObjectInstance) - Method in class burlap.mdp.core.oo.state.generic.GenericOOState
-
- addObject(ObjectInstance) - Method in interface burlap.mdp.core.oo.state.MutableOOState
-
Adds object instance o to this state.
- addObjectClassMasks(String...) - Method in class burlap.statehashing.masked.MaskedConfig
-
Adds masks for entire OO-MDP objects that belong to the specified OO-MDP object class.
- addObjectClassMasks(String...) - Method in class burlap.statehashing.masked.MaskedHashableStateFactory
-
Adds masks for entire OO-MDP objects that belong to the specified OO-MDP object class.
- addObjectClassPainter(String, ObjectPainter) - Method in class burlap.visualizer.OOStatePainter
-
Adds a class that will paint objects that belong to a given OO-MDP class.
- addObjectVectorizion(String, DenseStateFeatures) - Method in class burlap.behavior.functionapproximation.dense.ConcatenatedObjectFeatures
-
Adds an OO-MDP class next in the list of object classes to vectorize with the given DenseStateFeatures.
- addObservers(EnvironmentObserver...) - Method in class burlap.mdp.singleagent.environment.extensions.EnvironmentServer
-
- addObservers(EnvironmentObserver...) - Method in interface burlap.mdp.singleagent.environment.extensions.EnvironmentServerInterface
-
- addObservers(EnvironmentObserver...) - Method in class burlap.mdp.singleagent.environment.SimulatedEnvironment
-
- addObservers(ShellObserver...) - Method in class burlap.shell.BurlapShell
-
- addPfsToDomain(OODomain, List<PropositionalFunction>) - Static method in class burlap.mdp.core.oo.OODomain.Helper
-
- addPfsToDomain(OODomain, PropositionalFunction...) - Static method in class burlap.mdp.core.oo.OODomain.Helper
-
- addPropFunction(PropositionalFunction) - Method in interface burlap.mdp.core.oo.OODomain
-
Adds a propositional function that can be used to evaluate objects that belong to object classes of this domain.
- addPropFunction(PropositionalFunction) - Method in class burlap.mdp.singleagent.oo.OOSADomain
-
- addPropFunction(PropositionalFunction) - Method in class burlap.mdp.stochasticgames.oo.OOSGDomain
-
- addQValue(Action, double) - Method in class burlap.behavior.singleagent.learning.tdmethods.QLearningStateNode
-
Adds a Q-value for the given action to this state node with the given numeric value.
- addRBF(RBF) - Method in class burlap.behavior.functionapproximation.dense.rbf.RBFFeatures
-
Adds the specified RBF unit to the list of RBF units.
- addRBFs(List<RBF>) - Method in class burlap.behavior.functionapproximation.dense.rbf.RBFFeatures
-
Adds all of the specified RBF units to this object's list of RBF units.
- addRenderLayer(RenderLayer) - Method in class burlap.visualizer.MultiLayerRenderer
-
Adds the specified RenderLayer to the end of the render layer ordered list.
- addReward(double) - Method in class burlap.behavior.singleagent.Episode
-
Adds a reward to the reward sequence.
- addSpecificObjectPainter(String, ObjectPainter) - Method in class burlap.visualizer.OOStatePainter
-
Adds a painter that will be used to paint a specific object in states
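An assembly sketch for the two painter-registration methods above; the ObjectPainter import path, the no-argument OOStatePainter constructor, and the class/object names are assumptions, with painter implementations supplied by the caller:

    import burlap.visualizer.ObjectPainter;
    import burlap.visualizer.OOStatePainter;

    public class OOStatePainterSketch {
        public static OOStatePainter build(ObjectPainter agentPainter, ObjectPainter wallPainter) {
            OOStatePainter painter = new OOStatePainter();
            painter.addObjectClassPainter("agent", agentPainter);   // all objects of class "agent"
            painter.addSpecificObjectPainter("wall0", wallPainter); // only the object named "wall0"
            return painter;
        }
    }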
- addStandardThrustActions() - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Adds two standard thrust actions.
- addState(State) - Method in class burlap.behavior.singleagent.Episode
-
Adds a state to the state sequence.
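A short sketch tying together the Episode append methods indexed in this section (addState, addAction, addReward) with the action(int) accessor; the no-argument Episode constructor and the State import path are assumptions, and the states and action are supplied by the caller:

    import burlap.behavior.singleagent.Episode;
    import burlap.mdp.core.action.Action;
    import burlap.mdp.core.state.State;

    public class EpisodeSketch {
        public static Episode record(State s0, Action a0, double r1, State s1) {
            Episode e = new Episode();
            e.addState(s0);  // state at time step 0
            e.addAction(a0); // action taken at time step 0
            e.addReward(r1); // reward received on the transition to time step 1
            e.addState(s1);  // state at time step 1
            System.out.println(e.action(0).actionName()); // the action taken at t=0
            return e;
        }
    }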
- addStateClass(String, Class<?>) - Method in interface burlap.mdp.core.oo.OODomain
-
Adds the Java class definition for an OO-MDP class with the given name
- addStateClass(String, Class<?>) - Method in class burlap.mdp.singleagent.oo.OOSADomain
-
- addStateClass(String, Class<?>) - Method in class burlap.mdp.stochasticgames.oo.OOSGDomain
-
- AddStateObjectCommand - Class in burlap.shell.command.env
-
- AddStateObjectCommand(Domain) - Constructor for class burlap.shell.command.env.AddStateObjectCommand
-
- AddStateObjectSGCommand - Class in burlap.shell.command.world
-
- AddStateObjectSGCommand(Domain) - Constructor for class burlap.shell.command.world.AddStateObjectSGCommand
-
- addStatePainter(StatePainter) - Method in class burlap.visualizer.StateRenderLayer
-
Adds a state painter for the domain.
- addStatePainter(StatePainter) - Method in class burlap.visualizer.Visualizer
-
Adds a state painter for the domain.
- addStatesToStateSpace(Collection<State>) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableVI
-
Adds a Collection of states over which VI will iterate.
- addStateToStateSpace(State) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableVI
-
Adds the given state to the state space over which VI iterates.
- addSuccessor(UCTStateNode) - Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTActionNode
-
Adds a successor node to the list of possible successors
- addTerminals(int...) - Method in class burlap.domain.singleagent.graphdefined.GraphTF
-
Adds additional terminal states
- addThrustActionWithThrust(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Adds a thrust action with thrust force t
- addTilingsForAllDimensionsWithWidths(double[], int, TilingArrangement) - Method in class burlap.behavior.functionapproximation.sparse.tilecoding.TileCodingFeatures
-
Adds a number of tilings where each tile is dependent on *all* the dimensions of a state feature vector.
- addTilingsForDimensionsAndWidths(boolean[], double[], int, TilingArrangement) - Method in class burlap.behavior.functionapproximation.sparse.tilecoding.TileCodingFeatures
-
Adds a number of tilings where each tile is dependent on the dimensions that are labeled as "true" in the dimensionMask parameter.
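A hedged setup sketch for the tiling methods above; the DenseStateFeatures-based constructor, the TilingArrangement import path, its RANDOM_JITTER value, and the two-dimensional feature vector are all assumptions:

    import burlap.behavior.functionapproximation.dense.DenseStateFeatures;
    import burlap.behavior.functionapproximation.sparse.tilecoding.TileCodingFeatures;
    import burlap.behavior.functionapproximation.sparse.tilecoding.TilingArrangement;

    public class TileCodingSketch {
        public static TileCodingFeatures build(DenseStateFeatures baseFeatures) {
            // assumed constructor: tile coding over the dense feature vector from baseFeatures
            TileCodingFeatures tilecoding = new TileCodingFeatures(baseFeatures);
            // 5 tilings; each tile spans width 0.5 along both (assumed) feature dimensions
            tilecoding.addTilingsForAllDimensionsWithWidths(
                    new double[]{0.5, 0.5}, 5, TilingArrangement.RANDOM_JITTER);
            return tilecoding;
        }
    }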
- addToVector(double[], double[]) - Static method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRL
-
Performs a vector addition and stores the results in sumVector
- addToWhiteList(Object) - Method in class burlap.behavior.functionapproximation.dense.NumericVariableFeatures
-
- addVariableMasks(Object...) - Method in class burlap.statehashing.masked.MaskedConfig
-
Adds masks for specific state variables.
- addVariableMasks(Object...) - Method in class burlap.statehashing.masked.MaskedHashableStateFactory
-
Adds masks for specific state variables.
- addWorldObserver(WorldObserver) - Method in class burlap.mdp.stochasticgames.world.World
-
Adds a world observer to this world
- aerAvgSeries - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.AgentDatasets
-
All trials' average of the average reward per episode series data
- aerAvgSeries - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.AgentDatasets
-
All trials' average of the average reward per episode series data
- agent - Variable in class burlap.domain.singleagent.blockdude.state.BlockDudeState
-
- agent - Variable in class burlap.domain.singleagent.frostbite.state.FrostbiteState
-
- agent - Variable in class burlap.domain.singleagent.gridworld.state.GridWorldState
-
- agent - Variable in class burlap.domain.singleagent.lunarlander.state.LLState
-
- agent_cleanup() - Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgent
-
- agent_end(double) - Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgent
-
- agent_init(String) - Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgent
-
- agent_message(String) - Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgent
-
- agent_start(Observation) - Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgent
-
- agent_step(double, Observation) - Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgent
-
- agentAction(int, int) - Method in class burlap.behavior.stochasticgames.GameEpisode
-
Returns the action taken for the given agent at the given time step where t=0 refers to the joint action taken in the initial state.
- agentCumulativeReward - Variable in class burlap.mdp.stochasticgames.world.World
-
- AgentDatasets(String) - Constructor for class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.AgentDatasets
-
Initializes the data structures for an agent with the given name
- AgentDatasets(String) - Constructor for class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.AgentDatasets
-
Initializes the data structures for an agent with the given name
- agentDefinitions - Variable in class burlap.behavior.stochasticgames.agents.madp.MADPPlannerFactory.MAVIPlannerFactory
-
The agent definitions for which planning is performed.
- agentDefinitions - Variable in class burlap.behavior.stochasticgames.madynamicprogramming.MADynamicProgramming
-
The agent types
- agentFactories - Variable in class burlap.behavior.singleagent.auxiliary.performance.LearningAlgorithmExperimenter
-
The array of agent factories for the agents to be compared.
- agentFactoriesAndTypes - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentExperimenter
-
The agent factories for the agents to be tested
- agentFactory - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.AgentFactoryAndType
-
- AgentFactory - Interface in burlap.mdp.stochasticgames.agent
-
An interface for generating agents
- AgentFactoryAndType - Class in burlap.behavior.stochasticgames.auxiliary.performance
-
A pair storing an agent factory and the agent type that the generated agent will join the world as.
- AgentFactoryAndType(AgentFactory, SGAgentType) - Constructor for class burlap.behavior.stochasticgames.auxiliary.performance.AgentFactoryAndType
-
Initializes
- AgentFactoryWithSubjectiveReward - Class in burlap.mdp.stochasticgames.common
-
An agent-generating factory that will produce an agent that uses an internal subjective reward function.
- AgentFactoryWithSubjectiveReward(AgentFactory, JointRewardFunction) - Constructor for class burlap.mdp.stochasticgames.common.AgentFactoryWithSubjectiveReward
-
Initializes the factory.
- agentId - Variable in class burlap.mdp.stochasticgames.tournament.MatchEntry
-
- agentName - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.DatasetsAndTrials
-
The name of the agent
- agentName(int, OOState) - Method in class burlap.domain.stochasticgames.gridgame.GridGameStandardMechanics
-
- agentName() - Method in interface burlap.mdp.stochasticgames.agent.SGAgent
-
Returns this agent's name
- agentName() - Method in class burlap.mdp.stochasticgames.agent.SGAgentBase
-
Returns this agent's name
- agentNum - Variable in class burlap.behavior.stochasticgames.agents.interfacing.singleagent.LearningAgentToSGAgentInterface
-
- agentNum - Variable in class burlap.behavior.stochasticgames.agents.madp.MultiAgentDPPlanningAgent
-
- agentNum - Variable in class burlap.behavior.stochasticgames.agents.maql.MultiAgentQLearning
-
- agentNum - Variable in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent
-
- agentNum - Variable in class burlap.behavior.stochasticgames.agents.twoplayer.repeatedsinglestage.GrimTrigger
-
- agentNum - Variable in class burlap.behavior.stochasticgames.agents.twoplayer.repeatedsinglestage.TitForTat
-
- agentNum - Variable in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.EquilibriumPlayingSGAgent
-
- agentNum - Variable in class burlap.behavior.stochasticgames.madynamicprogramming.MADynamicProgramming.BackupBasedQSource
-
The agent for which this value function is assigned.
- AgentPainter(int, int) - Constructor for class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.AgentPainter
-
Initializes.
- AgentPainter(int) - Constructor for class burlap.domain.singleagent.frostbite.FrostbiteVisualizer.AgentPainter
-
- AgentPainter(LunarLanderDomain.LLPhysicsParams) - Constructor for class burlap.domain.singleagent.lunarlander.LLVisualizer.AgentPainter
-
- AgentPainter(MountainCar.MCPhysicsParams) - Constructor for class burlap.domain.singleagent.mountaincar.MountainCarVisualizer.AgentPainter
-
Initializes with the mountain car physics used
- AgentPayoutFunction() - Constructor for class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.AgentPayoutFunction
-
- agentQSource(int) - Method in interface burlap.behavior.stochasticgames.madynamicprogramming.AgentQSourceMap
-
Returns a QSource which can be used to query the Q-values of a given agent.
- agentQSource(int) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.AgentQSourceMap.HashMapAgentQSourceMap
-
- agentQSource(int) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.AgentQSourceMap.MAQLControlledQSourceMap
-
- AgentQSourceMap - Interface in burlap.behavior.stochasticgames.madynamicprogramming
-
Multiagent value function planning typically entails storing a separate Q-value for each joint action for each agent.
- AgentQSourceMap.HashMapAgentQSourceMap - Class in burlap.behavior.stochasticgames.madynamicprogramming
-
An implementation of the AgentQSourceMap in which the sources are specified by a hash map.
- AgentQSourceMap.MAQLControlledQSourceMap - Class in burlap.behavior.stochasticgames.madynamicprogramming
-
An implementation of the AgentQSourceMap in which different agent objects each maintain their own personal Q-source.
- agentReward(int, int) - Method in class burlap.behavior.stochasticgames.GameEpisode
-
Returns the reward received by the agent at the given index for the given time step.
- agents - Variable in class burlap.mdp.stochasticgames.tournament.Tournament
-
- agents - Variable in class burlap.mdp.stochasticgames.world.World
-
- agentsEqual(OOState, OOState) - Method in class burlap.domain.stochasticgames.gridgame.GridGameStandardMechanics
-
Returns true if the agent objects between these two states are equal
- agentsInJointPolicy - Variable in class burlap.behavior.stochasticgames.JointPolicy
-
The agent definitions that define the set of possible joint actions in each state.
- agentSize - Variable in class burlap.domain.singleagent.frostbite.FrostbiteModel
-
- agentSize - Variable in class burlap.domain.singleagent.frostbite.FrostbiteVisualizer.AgentPainter
-
- agentsSynchronizedSoFar - Variable in class burlap.behavior.stochasticgames.JointPolicy
-
The agents whose actions have been synchronized so far
- agentTrials - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
Contains all trial data for each agent
- agentType() - Method in interface burlap.mdp.stochasticgames.agent.SGAgent
-
Returns this agent's type
- agentType - Variable in class burlap.mdp.stochasticgames.agent.SGAgentBase
-
- agentType() - Method in class burlap.mdp.stochasticgames.agent.SGAgentBase
-
Returns this agent's type
- agentType - Variable in class burlap.mdp.stochasticgames.tournament.MatchEntry
-
- agentWiseData - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
-
Data structure for maintaining data for each agent playing in the game.
- agentWithName(String) - Method in class burlap.mdp.stochasticgames.world.World
-
Returns the agent with the given name, or null if there is no agent with that name.
- aId - Variable in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain.GraphActionType
-
The action number of this action
- aId - Variable in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain.GraphActionType.GraphAction
-
- AliasCommand - Class in burlap.shell.command.reserved
-
A reserved ShellCommand for creating a command alias for a given command.
- AliasCommand() - Constructor for class burlap.shell.command.reserved.AliasCommand
-
- aliases - Variable in class burlap.shell.BurlapShell
-
- AliasesCommand - Class in burlap.shell.command.reserved
-
A reserved ShellCommand for listing the set of aliases the shell knows.
- AliasesCommand() - Constructor for class burlap.shell.command.reserved.AliasesCommand
-
- aliasPointer(String) - Method in class burlap.shell.BurlapShell
-
- allActions - Variable in class burlap.mdp.core.action.UniversalActionType
-
- allActionTypes - Variable in class burlap.behavior.singleagent.learning.modellearning.rmax.UnmodeledFavoredPolicy
-
- allApplicableActions(State) - Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueDomain.RLGlueActionType
-
- allApplicableActions(State) - Method in class burlap.behavior.singleagent.options.OptionType
-
- allApplicableActions(State) - Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain.GraphActionType
-
- allApplicableActions(State) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.ThrustType
-
- allApplicableActions(State) - Method in interface burlap.mdp.core.action.ActionType
-
Returns all possible actions of this type that can be applied in the provided State.
- allApplicableActions(State) - Method in class burlap.mdp.core.action.UniversalActionType
-
- allApplicableActions(State) - Method in class burlap.mdp.singleagent.oo.ObjectParameterizedActionType
-
- allApplicableActionsForTypes(List<ActionType>, State) - Static method in class burlap.mdp.core.action.ActionUtils
-
Returns all Actions that are applicable in the given State for all ActionType objects in the provided list.
- allGroundings(OOState) - Method in class burlap.mdp.core.oo.propositional.PropositionalFunction
-
- allGroundingsFromList(List<PropositionalFunction>, OOState) - Static method in class burlap.mdp.core.oo.propositional.PropositionalFunction
-
- allJointActionsHelper(List<List<Action>>, int, LinkedList<Action>, List<JointAction>) - Static method in class burlap.mdp.stochasticgames.JointAction
-
- allObservations() - Method in class burlap.domain.singleagent.pomdp.tiger.TigerObservations
-
- allObservations() - Method in interface burlap.mdp.singleagent.pomdp.observations.DiscreteObservationFunction
-
Returns a List containing all possible observations.
- allowActionFromTerminalStates - Variable in class burlap.mdp.singleagent.environment.SimulatedEnvironment
-
A flag indicating whether the environment will respond to actions from a terminal state.
- AllPairWiseSameTypeMS - Class in burlap.mdp.stochasticgames.tournament.common
-
This class defines a MatchSelector that plays all pairwise matches of agents in a round robin.
- AllPairWiseSameTypeMS(SGAgentType, int) - Constructor for class burlap.mdp.stochasticgames.tournament.common.AllPairWiseSameTypeMS
-
Initializes the selector
- AlphanumericSorting - Class in burlap.datastructures
-
- AlphanumericSorting() - Constructor for class burlap.datastructures.AlphanumericSorting
-
- alreadyInitedGUI - Variable in class burlap.behavior.singleagent.auxiliary.EpisodeSequenceVisualizer
-
- alreadyInitedGUI - Variable in class burlap.behavior.stochasticgames.auxiliary.GameSequenceVisualizer
-
- anginc - Variable in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
-
The change in orientation angle the lander makes when a turn/rotate action is taken
- angle - Variable in class burlap.domain.singleagent.cartpole.states.InvertedPendulumState
-
- angle - Variable in class burlap.domain.singleagent.lunarlander.state.LLAgent
-
- angleRange - Variable in class burlap.domain.singleagent.cartpole.CartPoleDomain.CPPhysicsParams
-
The maximum radius the pole can fall.
- angleRange - Variable in class burlap.domain.singleagent.cartpole.InvertedPendulum.IPPhysicsParams
-
The maximum radius the pole can fall.
- angleV - Variable in class burlap.domain.singleagent.cartpole.states.InvertedPendulumState
-
- angmax - Variable in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
-
The maximum angle the lander can be rotated in either the clockwise or counterclockwise direction
- AnnotatedAction - Class in burlap.behavior.policy.support
-
- AnnotatedAction() - Constructor for class burlap.behavior.policy.support.AnnotatedAction
-
- AnnotatedAction(Action, String) - Constructor for class burlap.behavior.policy.support.AnnotatedAction
-
- annotation - Variable in class burlap.behavior.policy.support.AnnotatedAction
-
- appendAndMergeEpisodeAnalysis(Episode) - Method in class burlap.behavior.singleagent.Episode
-
This method will append execution results in e to this object's results.
- appendHashCodeForValue(HashCodeBuilder, Object, Object) - Method in class burlap.statehashing.discretized.IDDiscHashableState
-
- appendHashCodeForValue(HashCodeBuilder, Object, Object) - Method in class burlap.statehashing.discretized.IIDiscHashableState
-
- appendHashCodeForValue(HashCodeBuilder, Object, Object) - Method in class burlap.statehashing.masked.IDMaskedHashableState
-
- appendHashCodeForValue(HashCodeBuilder, Object, Object) - Method in class burlap.statehashing.masked.IIMaskedHashableState
-
- appendHashCodeForValue(HashCodeBuilder, Object, Object) - Method in class burlap.statehashing.maskeddiscretized.IDDiscMaskedHashableState
-
- appendHashCodeForValue(HashCodeBuilder, Object, Object) - Method in class burlap.statehashing.maskeddiscretized.IIDiscMaskedHashableState
-
- appendHashCodeForValue(HashCodeBuilder, Object, Object) - Method in class burlap.statehashing.simple.IDSimpleHashableState
-
- appendHashCodeForValue(HashCodeBuilder, Object, Object) - Method in class burlap.statehashing.simple.IISimpleHashableState
-
- applicableActions(State) - Method in class burlap.behavior.singleagent.MDPSolver
-
Returns all applicable actions in the provided state for all the actions that this MDP Solver can use.
- applicableInState(State, ObjectParameterizedAction) - Method in class burlap.domain.singleagent.blocksworld.BlocksWorld.StackActionType
-
- applicableInState(State, ObjectParameterizedAction) - Method in class burlap.domain.singleagent.blocksworld.BlocksWorld.UnstackActionType
-
- applicableInState(State) - Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain.GraphActionType
-
- applicableInState(State, ObjectParameterizedAction) - Method in class burlap.mdp.singleagent.oo.ObjectParameterizedActionType
-
Indicates whether the input action can be applied in the state
- apply(double[]) - Method in class burlap.behavior.singleagent.planning.stochastic.dpoperator.BellmanOperator
-
- apply(double[]) - Method in interface burlap.behavior.singleagent.planning.stochastic.dpoperator.DPOperator
-
Applies the operator on the input q-values and returns the result.
- apply(double[]) - Method in class burlap.behavior.singleagent.planning.stochastic.dpoperator.SoftmaxOperator
-
- ApprenticeshipLearning - Class in burlap.behavior.singleagent.learnfromdemo.apprenticeship
-
This algorithm will take expert trajectories and return a policy that models them.
- ApprenticeshipLearning.StationaryRandomDistributionPolicy - Class in burlap.behavior.singleagent.learnfromdemo.apprenticeship
-
This class extends Policy.
- ApprenticeshipLearningRequest - Class in burlap.behavior.singleagent.learnfromdemo.apprenticeship
-
A data structure for setting all the parameters of Max Margin Apprenticeship learning.
- ApprenticeshipLearningRequest() - Constructor for class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest
-
- ApprenticeshipLearningRequest(SADomain, Planner, DenseStateFeatures, List<Episode>, StateGenerator) - Constructor for class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest
-
- area - Variable in class burlap.shell.visual.TextAreaStreams
-
- arrayIndexForStepsBack(int) - Method in class burlap.behavior.stochasticgames.agents.naiveq.history.HistoryState
-
- arrayIndexMap - Variable in class burlap.datastructures.HashIndexedHeap
-
Hash map from objects to their index in the heap
- ArrowActionGlyph - Class in burlap.behavior.singleagent.auxiliary.valuefunctionvis.common
-
- ArrowActionGlyph(int) - Constructor for class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.ArrowActionGlyph
-
Creates an arrow action glyph painter in the specified direction
- ARTDP - Class in burlap.behavior.singleagent.learning.modellearning.artdp
-
This class provides an implementation of Adaptive Realtime Dynamic Programming [1].
- ARTDP(SADomain, double, HashableStateFactory, double) - Constructor for class burlap.behavior.singleagent.learning.modellearning.artdp.ARTDP
-
Initializes using a tabular model of the world and a Boltzmann policy with a fixed temperature of 0.1.
- ARTDP(SADomain, double, HashableStateFactory, ValueFunction) - Constructor for class burlap.behavior.singleagent.learning.modellearning.artdp.ARTDP
-
Initializes using a tabular model of the world and a Boltzmann policy with a fixed temperature of 0.1.
- ARTDP(SADomain, double, HashableStateFactory, LearnedModel, ValueFunction) - Constructor for class burlap.behavior.singleagent.learning.modellearning.artdp.ARTDP
-
Initializes using the provided model algorithm and a Boltzmann policy with a fixed temperature of 0.1.
- assertPFs(State, boolean[]) - Method in class burlap.testing.TestGridWorld
-
- associatedAction(String) - Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueDomain.RLGlueActionType
-
- associatedAction(String) - Method in class burlap.behavior.singleagent.options.OptionType
-
- associatedAction(String) - Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain.GraphActionType
-
- associatedAction(String) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.ThrustType
-
- associatedAction(String) - Method in interface burlap.mdp.core.action.ActionType
-
Returns an Action whose parameters are specified by the given String representation (if the ActionType manages multiple parameterizations)
- associatedAction(String) - Method in class burlap.mdp.core.action.UniversalActionType
-
- associatedAction(String) - Method in class burlap.mdp.singleagent.oo.ObjectParameterizedActionType
-
- AStar - Class in burlap.behavior.singleagent.planning.deterministic.informed.astar
-
An implementation of A*.
- AStar(SADomain, StateConditionTest, HashableStateFactory, Heuristic) - Constructor for class burlap.behavior.singleagent.planning.deterministic.informed.astar.AStar
-
Initializes A*.
- at - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.AgentFactoryAndType
-
- at - Variable in class burlap.mdp.stochasticgames.tournament.common.AllPairWiseSameTypeMS
-
- AtExitPF() - Constructor for class burlap.domain.singleagent.blockdude.BlockDude.AtExitPF
-
- AtLocationPF(String, String[]) - Constructor for class burlap.domain.singleagent.gridworld.GridWorldDomain.AtLocationPF
-
Initializes with the given name, domain, and parameter object class types
- ATT_V - Static variable in class burlap.domain.singleagent.mountaincar.MountainCar
-
A constant for the name of the velocity attribute
- ATT_X - Static variable in class burlap.domain.singleagent.mountaincar.MountainCar
-
A constant for the name of the x attribute
- attemptedDelta(String) - Method in class burlap.domain.stochasticgames.gridgame.GridGameStandardMechanics
-
Returns the attempted change in position by the agent for the given action.
- autoRecord - Variable in class burlap.shell.command.env.EpisodeRecordingCommands
-
- averageEpisodeReward - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.Trial
-
Stores the average reward by episode
- averageEpisodeReward - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.Trial
-
Stores the average reward by episode
- averageEpisodeRewardSeries - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.AgentDatasets
-
Most recent trial's average reward per episode series data
- averageEpisodeRewardSeries - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.AgentDatasets
-
Most recent trial's average reward per episode series data
- averageReturn() - Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTActionNode
-
Returns the average return
- averagesEnabled() - Method in enum burlap.behavior.singleagent.auxiliary.performance.TrialMode
-
Returns true if the trial average plots will be plotted by this mode.