- c - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling
-
The number of transition dynamics samples (for the root if depth-variable C is used)
- c - Variable in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
-
The number of transition dynamics samples (for the root if depth-variable C is used)
- c(int, String) - Static method in class burlap.debugtools.DPrint
-
A print command for the given debug code.
- cachedExpectations - Variable in class burlap.behavior.singleagent.options.model.BFSMarkovOptionModel.CachedModel
-
The cached transition probabilities from each initiation state
- CachedModel() - Constructor for class burlap.behavior.singleagent.options.model.BFSMarkovOptionModel.CachedModel
-
- cachedModels - Variable in class burlap.behavior.singleagent.options.model.BFSMarkovOptionModel
-
- CachedPolicy - Class in burlap.behavior.policy
-
This class can be used to lazily cache the policy of a source policy.
- CachedPolicy(HashableStateFactory, EnumerablePolicy) - Constructor for class burlap.behavior.policy.CachedPolicy
-
Initializes
- CachedPolicy(HashableStateFactory, EnumerablePolicy, int) - Constructor for class burlap.behavior.policy.CachedPolicy
-
Initializes
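For illustration, a source policy can be wrapped so that its action selections are computed once per hashed state and reused on later queries. This is only a sketch: it assumes an existing EnumerablePolicy produced elsewhere (for example, a greedy policy over a learned Q-function) and uses SimpleHashableStateFactory as the hashing factory.

    import burlap.behavior.policy.CachedPolicy;
    import burlap.behavior.policy.EnumerablePolicy;
    import burlap.statehashing.HashableStateFactory;
    import burlap.statehashing.simple.SimpleHashableStateFactory;

    public class CachedPolicyExample {

        // Wraps sourcePolicy so selections are computed lazily and cached per hashed state.
        public static CachedPolicy cache(EnumerablePolicy sourcePolicy) {
            HashableStateFactory hsf = new SimpleHashableStateFactory();
            return new CachedPolicy(hsf, sourcePolicy);
        }
    }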
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.env.AddStateObjectCommand
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.env.EpisodeRecordingCommands.EpisodeBrowserCommand
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.env.EpisodeRecordingCommands.RecordCommand
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.env.ExecuteActionCommand
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.env.IsTerminalCommand
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.env.ListActionsCommand
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.env.ListPropFunctions
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.env.ObservationCommand
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.env.RemoveStateObjectCommand
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.env.ResetEnvCommand
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.env.RewardCommand
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.env.SetVarCommand
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.reserved.AliasCommand
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.reserved.AliasesCommand
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.reserved.CommandsCommand
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.reserved.HelpCommand
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.reserved.QuitCommand
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in interface burlap.shell.command.ShellCommand
-
Executes this command.
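As a sketch of the interface, a custom command implements commandName() and call(...). The int return value and the "echo" command name here are assumptions for illustration; consult the ShellCommand Javadoc for the exact return-code contract.

    import burlap.shell.BurlapShell;
    import burlap.shell.command.ShellCommand;

    import java.io.PrintStream;
    import java.util.Scanner;

    // Hypothetical "echo" command that writes its argument string back to the shell's output stream.
    public class EchoCommand implements ShellCommand {

        @Override
        public String commandName() {
            return "echo"; // name used to invoke this command from the shell prompt
        }

        @Override
        public int call(BurlapShell shell, String argString, Scanner is, PrintStream os) {
            os.println(argString);
            return 0; // assumed success status; see the ShellCommand contract for return-code semantics
        }
    }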
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.world.AddStateObjectSGCommand
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.world.GameCommand
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.world.GenerateStateCommand
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.world.IsTerminalSGCommand
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.world.JointActionCommand
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.world.LastJointActionCommand
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.world.ManualAgentsCommands.ListManualAgents
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.world.ManualAgentsCommands.LSManualAgentActionsCommands
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.world.ManualAgentsCommands.RegisterAgentCommand
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.world.ManualAgentsCommands.SetAgentAction
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.world.RemoveStateObjectSGCommand
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.world.RewardsCommand
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.world.SetVarSGCommand
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.command.world.WorldObservationCommand
-
- call(BurlapShell, String, Scanner, PrintStream) - Method in class burlap.shell.visual.VisualExplorer.LivePollCommand
-
- cartFriction - Variable in class burlap.domain.singleagent.cartpole.CartPoleDomain.CPPhysicsParams
-
The friction between the cart and ground
- cartMass - Variable in class burlap.domain.singleagent.cartpole.CartPoleDomain.CPPhysicsParams
-
The mass of the cart.
- cartMass - Variable in class burlap.domain.singleagent.cartpole.InvertedPendulum.IPPhysicsParams
-
The mass of the cart.
- CartPoleDomain - Class in burlap.domain.singleagent.cartpole
-
The classic cart pole balancing problem as described by Barto, Sutton, and Anderson [2] with correct mechanics as described by Florian [1].
- CartPoleDomain() - Constructor for class burlap.domain.singleagent.cartpole.CartPoleDomain
-
- CartPoleDomain.CartPoleRewardFunction - Class in burlap.domain.singleagent.cartpole
-
A default reward function for this task.
- CartPoleDomain.CartPoleTerminalFunction - Class in burlap.domain.singleagent.cartpole
-
A default terminal function for this domain.
- CartPoleDomain.CPPhysicsParams - Class in burlap.domain.singleagent.cartpole
-
- CartPoleFullState - Class in burlap.domain.singleagent.cartpole.states
-
- CartPoleFullState() - Constructor for class burlap.domain.singleagent.cartpole.states.CartPoleFullState
-
- CartPoleFullState(double, double, double, double, double) - Constructor for class burlap.domain.singleagent.cartpole.states.CartPoleFullState
-
- CartPolePainter() - Constructor for class burlap.domain.singleagent.cartpole.CartPoleVisualizer.CartPolePainter
-
- CartPoleRewardFunction() - Constructor for class burlap.domain.singleagent.cartpole.CartPoleDomain.CartPoleRewardFunction
-
Initializes with max pole angle threshold of 12 degrees (about 0.2 radians)
- CartPoleRewardFunction(double) - Constructor for class burlap.domain.singleagent.cartpole.CartPoleDomain.CartPoleRewardFunction
-
Initializes with a max pole angle as specified in radians
- CartPoleState - Class in burlap.domain.singleagent.cartpole.states
-
- CartPoleState() - Constructor for class burlap.domain.singleagent.cartpole.states.CartPoleState
-
- CartPoleState(double, double, double, double) - Constructor for class burlap.domain.singleagent.cartpole.states.CartPoleState
-
- CartPoleTerminalFunction() - Constructor for class burlap.domain.singleagent.cartpole.CartPoleDomain.CartPoleTerminalFunction
-
Initializes with default max angle of 12 degrees (about 0.2 radians)
- CartPoleTerminalFunction(double) - Constructor for class burlap.domain.singleagent.cartpole.CartPoleDomain.CartPoleTerminalFunction
-
Initializes with a max pole angle as specified in radians
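A minimal setup sketch, assuming CartPoleDomain follows BURLAP's usual DomainGenerator pattern; the 12-degree threshold below mirrors the default described for the terminal function.

    import burlap.domain.singleagent.cartpole.CartPoleDomain;
    import burlap.mdp.auxiliary.DomainGenerator;
    import burlap.mdp.core.Domain;
    import burlap.mdp.core.TerminalFunction;

    public class CartPoleSetup {
        public static void main(String[] args) {
            DomainGenerator gen = new CartPoleDomain();
            Domain domain = gen.generateDomain();

            // Default terminal condition: episode ends once the pole passes ~12 degrees (about 0.2 radians).
            TerminalFunction tf = new CartPoleDomain.CartPoleTerminalFunction();
            System.out.println(domain + " " + tf);
        }
    }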
- CartPoleVisualizer - Class in burlap.domain.singleagent.cartpole
-
Class for returning cart pole visualizer objects.
- CartPoleVisualizer.CartPolePainter - Class in burlap.domain.singleagent.cartpole
-
An object painter for the cart pole object.
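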
- CellPainter(Color, int[][]) - Constructor for class burlap.domain.singleagent.gridworld.GridWorldVisualizer.CellPainter
-
Initializes painter for a rectangle shape cell
- CellPainter(int, Color, int[][]) - Constructor for class burlap.domain.singleagent.gridworld.GridWorldVisualizer.CellPainter
-
Initializes painter with filling the cell with the given shape
- cellWidth() - Method in class burlap.behavior.singleagent.auxiliary.gridset.VariableGridSpec
-
Returns the width of a grid cell along this attribute.
- centeredState - Variable in class burlap.behavior.functionapproximation.dense.rbf.RBF
-
The center state of the RBF unit.
- cerAvgSeries - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.AgentDatasets
-
All trials' average cumulative reward per episode series data
- cerAvgSeries - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.AgentDatasets
-
All trials' average cumulative reward per episode series data
- cf(int, String, Object...) - Static method in class burlap.debugtools.DPrint
-
A printf command for the given debug code.
- changeWeight(T, double) - Method in class burlap.datastructures.StochasticTree
-
Changes the weight of the given element.
- cHeight - Variable in class burlap.behavior.singleagent.auxiliary.EpisodeSequenceVisualizer
-
- cHeight - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
-
Visualizer canvas height
- cHeight - Variable in class burlap.behavior.stochasticgames.auxiliary.GameSequenceVisualizer
-
- cHeight - Variable in class burlap.mdp.singleagent.common.VisualActionObserver
-
The height of the painter
- cHeight - Variable in class burlap.mdp.stochasticgames.common.VisualWorldObserver
-
The height of the painter
- cHeight - Variable in class burlap.shell.visual.SGVisualExplorer
-
- cHeight - Variable in class burlap.shell.visual.VisualExplorer
-
- cl(int, String) - Static method in class burlap.debugtools.DPrint
-
A print line command for the given debug code.
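Together, c, cf, and cl provide print, printf, and println style output gated on a debug code; whether a given code actually prints is controlled elsewhere in DPrint. A minimal sketch (the debug code value is arbitrary):

    import burlap.debugtools.DPrint;

    public class DPrintExample {

        private static final int DEBUG_CODE = 837201; // arbitrary code identifying this component's output

        public static void main(String[] args) {
            DPrint.c(DEBUG_CODE, "planning iteration ");            // print without a newline
            DPrint.cl(DEBUG_CODE, "finished");                       // print with a trailing newline
            DPrint.cf(DEBUG_CODE, "bellman error: %.4f%n", 0.0125);  // printf-style formatting
        }
    }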
- CLASS_AGENT - Static variable in class burlap.domain.singleagent.blockdude.BlockDude
-
Name for the agent OO-MDP class
- CLASS_AGENT - Static variable in class burlap.domain.singleagent.frostbite.FrostbiteDomain
-
Constant for the name of the agent OO-MDP class
- CLASS_AGENT - Static variable in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Constant OO-MDP class name for agent
- CLASS_AGENT - Static variable in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Constant for the name of the agent OO-MDP class
- CLASS_AGENT - Static variable in class burlap.domain.stochasticgames.gridgame.GridGame
-
A constant for the name of the agent class.
- CLASS_BLOCK - Static variable in class burlap.domain.singleagent.blockdude.BlockDude
-
Name for the block OO-MDP class
- CLASS_BLOCK - Static variable in class burlap.domain.singleagent.blocksworld.BlocksWorld
-
Constant for the name of the block class.
- CLASS_DIM_H_WALL - Static variable in class burlap.domain.stochasticgames.gridgame.GridGame
-
A constant for the name of the horizontal wall class.
- CLASS_DIM_V_WALL - Static variable in class burlap.domain.stochasticgames.gridgame.GridGame
-
A constant for the name of the vertical wall class.
- CLASS_EXIT - Static variable in class burlap.domain.singleagent.blockdude.BlockDude
-
Name for the exit OO-MDP class
- CLASS_GOAL - Static variable in class burlap.domain.stochasticgames.gridgame.GridGame
-
A constant for the name of the goal class.
- CLASS_IGLOO - Static variable in class burlap.domain.singleagent.frostbite.FrostbiteDomain
-
Constant for the name of the igloo OO-MDP class
- CLASS_LOCATION - Static variable in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Constant OO-MDP class name for a location
- CLASS_MAP - Static variable in class burlap.domain.singleagent.blockdude.BlockDude
-
Name for the bricks OO-MDP class
- CLASS_OBSTACLE - Static variable in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Constant for the name of the obstacle OO-MDP class
- CLASS_PAD - Static variable in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Constant for the name of the goal landing pad OO-MDP class
- CLASS_PLATFORM - Static variable in class burlap.domain.singleagent.frostbite.FrostbiteDomain
-
Constant for the name of the platform OO-MDP class
- classesToGrid - Variable in class burlap.behavior.singleagent.auxiliary.gridset.OOStateGridder
-
- ClassicMCTF() - Constructor for class burlap.domain.singleagent.mountaincar.MountainCar.ClassicMCTF
-
Sets terminal states to be those that are >= the maximum position in the world.
- ClassicMCTF(double) - Constructor for class burlap.domain.singleagent.mountaincar.MountainCar.ClassicMCTF
-
Sets terminal states to be those >= the given threshold.
- className() - Method in class burlap.domain.singleagent.blockdude.state.BlockDudeAgent
-
- className - Variable in class burlap.domain.singleagent.blockdude.state.BlockDudeCell
-
- className() - Method in class burlap.domain.singleagent.blockdude.state.BlockDudeCell
-
- className() - Method in class burlap.domain.singleagent.blockdude.state.BlockDudeMap
-
- className() - Method in class burlap.domain.singleagent.blocksworld.BlocksWorldBlock
-
- className() - Method in class burlap.domain.singleagent.frostbite.state.FrostbiteAgent
-
- className() - Method in class burlap.domain.singleagent.frostbite.state.FrostbiteIgloo
-
- className() - Method in class burlap.domain.singleagent.frostbite.state.FrostbitePlatform
-
- className() - Method in class burlap.domain.singleagent.gridworld.state.GridAgent
-
- className() - Method in class burlap.domain.singleagent.gridworld.state.GridLocation
-
- className() - Method in class burlap.domain.singleagent.lunarlander.state.LLAgent
-
- className() - Method in class burlap.domain.singleagent.lunarlander.state.LLBlock.LLObstacle
-
- className() - Method in class burlap.domain.singleagent.lunarlander.state.LLBlock.LLPad
-
- className() - Method in class burlap.domain.stochasticgames.gridgame.state.GGAgent
-
- className() - Method in class burlap.domain.stochasticgames.gridgame.state.GGGoal
-
- className() - Method in class burlap.domain.stochasticgames.gridgame.state.GGWall.GGHorizontalWall
-
- className() - Method in class burlap.domain.stochasticgames.gridgame.state.GGWall.GGVerticalWall
-
- className() - Method in interface burlap.mdp.core.oo.state.ObjectInstance
-
Returns the name of this OO-MDP object class
- clear() - Method in class burlap.behavior.singleagent.learning.lspi.SARSData
-
Clears this dataset of all elements.
- clear - Variable in class burlap.domain.singleagent.blocksworld.BlocksWorldBlock
-
- clearActions() - Method in class burlap.behavior.policy.RandomPolicy
-
Clears the action list used in action selection.
- clearActionTypes() - Method in class burlap.mdp.singleagent.SADomain
-
- clearAllAttributeMasks() - Method in class burlap.statehashing.masked.MaskedConfig
-
Clears all state variable masks.
- clearAllAttributeMasks() - Method in class burlap.statehashing.masked.MaskedHashableStateFactory
-
Clears all state variable masks.
- clearAllObjectClassMasks() - Method in class burlap.statehashing.masked.MaskedConfig
-
Clears all object class masks.
- clearAllObjectClassMasks() - Method in class burlap.statehashing.masked.MaskedHashableStateFactory
-
Clears all object class masks.
- clearAllObservers() - Method in class burlap.mdp.singleagent.environment.extensions.EnvironmentServer
-
- clearAllObservers() - Method in interface burlap.mdp.singleagent.environment.extensions.EnvironmentServerInterface
-
- clearAllObservers() - Method in class burlap.mdp.singleagent.environment.SimulatedEnvironment
-
- clearAllWorldObserver() - Method in class burlap.mdp.stochasticgames.world.World
-
Clears all world observers from this world.
- clearLocationOfWalls(int, int) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Removes any obstacles or walls at the specified location.
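For instance, after laying out a map one can reopen individual cells. A sketch, assuming an 11x11 grid and the four-rooms layout helper commonly used with GridWorldDomain:

    import burlap.domain.singleagent.gridworld.GridWorldDomain;

    public class OpenWallExample {
        public static GridWorldDomain build() {
            GridWorldDomain gwd = new GridWorldDomain(11, 11); // 11x11 grid (assumed width/height constructor)
            gwd.setMapToFourRooms();        // standard four-rooms layout (assumed helper)
            gwd.clearLocationOfWalls(3, 5); // remove any wall or obstacle at cell (3, 5)
            return gwd;
        }
    }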
- clearNonAverages() - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.AgentDatasets
-
Clears all the series data for the most recent trial.
- clearNonAverages() - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.AgentDatasets
-
Clears all the series data for the most recent trial.
- ClearPF(String) - Constructor for class burlap.domain.singleagent.blocksworld.BlocksWorld.ClearPF
-
- clearRenderedStateAction() - Method in class burlap.visualizer.StateActionRenderLayer
-
- clearStateActionTransitions(int, int) - Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain
-
Clears all (stochastic) edges for a given state-action pair.
- clearStateTransitionsFrom(int) - Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain
-
Clears all transitions from a given state node
- clone() - Method in class burlap.mdp.core.oo.propositional.GroundedProp
-
- clusterPriors - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.MultipleIntentionsMLIRL
-
The prior probabilities on each cluster.
- clusterRequests - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.MultipleIntentionsMLIRL
-
The individual MLIRLRequest objects for each behavior cluster.
- CoCoQ - Class in burlap.behavior.stochasticgames.madynamicprogramming.backupOperators
-
The CoCoQ backup operator for sequential stochastic games [1].
- CoCoQ() - Constructor for class burlap.behavior.stochasticgames.madynamicprogramming.backupOperators.CoCoQ
-
- CoCoQLearningFactory(SGDomain, double, LearningRate, HashableStateFactory, QFunction, boolean, double) - Constructor for class burlap.behavior.stochasticgames.agents.maql.MAQLFactory.CoCoQLearningFactory
-
- coefficientNorm(int) - Method in class burlap.behavior.functionapproximation.dense.fourier.FourierBasis
-
Returns the norm of the coefficient vector for the given basis function index.
- coefficientVectors - Variable in class burlap.behavior.functionapproximation.dense.fourier.FourierBasis
-
The coefficient vectors used
- col - Variable in class burlap.domain.singleagent.blocksworld.BlocksWorld.NamedColor
-
- col - Variable in class burlap.domain.singleagent.gridworld.GridWorldVisualizer.CellPainter
-
- colAER - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
All agent plot series for the most recent trial's average reward per episode
- colAER - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
-
All agent plot series for the most recent trial's average reward per episode
- colAERAvg - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
All agent plot series for the average of all trial's average reward per episode
- colAERAvg - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
-
All agent plot series for the average of all trial's average reward per episode
- colCER - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
All agent plot series for the most recent trial's cumulative reward per episode
- colCER - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
-
All agent plot series for the most recent trial's cumulative reward per episode
- colCERAvg - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
All agent plot series for the average of all trial's cumulative reward per episode
- colCERAvg - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
-
All agent plot series for the average of all trial's cumulative reward per episode
- colCSE - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
All agent plot series for the most recent trial's cumulative steps per episode
- colCSE - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
-
All agent plot series for the most recent trial's cumulative steps per episode
- colCSEAvg - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
All agent plot series for the average of all trial's cumulative steps per episode
- colCSEAvg - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
-
All agent plot series for the average of all trial's cumulative steps per episode
- colCSR - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
All agent plot series for the most recent trial's cumulative reward per step
- colCSR - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
-
All agent plot series for the most recent trial's cumulative reward per step
- colCSRAvg - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
All agent plot series for the average of all trial's cumulative reward per step
- colCSRAvg - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
-
All agent plot series for the average of all trial's cumulative reward per step
- collectData - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
Whether the data from received action observations should be recorded or not.
- collectData - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
-
Whether the data from received action observations should be recorded or not.
- collectDataFrom(State, SampleModel, int, SARSData) - Method in class burlap.behavior.singleagent.learning.lspi.SARSCollector
-
Collects data from an initial state until either a terminal state is reached or until the maximum number of steps is taken.
- collectDataFrom(Environment, int, SARSData) - Method in class burlap.behavior.singleagent.learning.lspi.SARSCollector
-
Collects data from an Environment's current state until either the maximum number of steps is taken or a terminal state is reached.
- collectDataFrom(State, SampleModel, int, SARSData) - Method in class burlap.behavior.singleagent.learning.lspi.SARSCollector.UniformRandomSARSCollector
-
- collectDataFrom(Environment, int, SARSData) - Method in class burlap.behavior.singleagent.learning.lspi.SARSCollector.UniformRandomSARSCollector
-
- collectNInstances(StateGenerator, SampleModel, int, int, SARSData) - Method in class burlap.behavior.singleagent.learning.lspi.SARSCollector
-
Collects nSamples SARS tuples and returns them in a SARSData object.
- collectNInstances(Environment, int, int, SARSData) - Method in class burlap.behavior.singleagent.learning.lspi.SARSCollector
-
- collisionReward - Variable in class burlap.domain.singleagent.lunarlander.LunarLanderRF
-
The reward for hitting the ground or an obstacle
- colMER - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
All agent plot series for the most recent trial's median reward per episode
- colMER - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
-
All agent plot series for the most recent trial's median reward per episode
- colMERAvg - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
All agent plot series for the average of all trial's median reward per episode
- colMERAvg - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
-
All agent plot series for the average of all trial's median reward per episode
- color(double) - Method in interface burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.ColorBlend
-
Returns a Color for a given double value.
- color(double) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.LandmarkColorBlendInterpolation
-
- color - Variable in class burlap.domain.singleagent.blocksworld.BlocksWorld.ColorPF
-
- color - Variable in class burlap.domain.singleagent.blocksworld.BlocksWorldBlock
-
- ColorBlend - Interface in burlap.behavior.singleagent.auxiliary.valuefunctionvis.common
-
An interface for defining methods that return a color for a given double value.
- colorBlend - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.StateValuePainter2D
-
The object to use for returning the color with which to fill the state cell given its value.
- ColorPF(BlocksWorld.NamedColor) - Constructor for class burlap.domain.singleagent.blocksworld.BlocksWorld.ColorPF
-
- colors - Variable in class burlap.domain.singleagent.blocksworld.BlocksWorld
-
- colPayoffs - Variable in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.EquilibriumPlayingSGAgent.BimatrixTuple
-
The column player's payoffs.
- colSE - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
All agent plot series for the most recent trial's steps per episode
- colSE - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
-
All agent plot series for the most recent trial's steps per episode
- colSEAvg - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
All agent plot series for the average of all trial's steps per episode
- colSEAvg - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
-
All agent plot series for the average of all trial's steps per episode
- combinedNonZeroPDParameters(FunctionGradient...) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableDP
-
- combinedNonZeroPDParameters(FunctionGradient...) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling
-
- combinedNonZeroPDParameters(FunctionGradient...) - Static method in class burlap.behavior.singleagent.learnfromdemo.mlirl.support.BoltzmannPolicyGradient
-
- combineDuplicateTransitionProbabilities(List<StateTransitionProb>) - Method in class burlap.domain.stochasticgames.gridgame.GridGameStandardMechanics
-
Iterates through a list of transition probability objects and combines any that refer to the same state
- command - Variable in class burlap.shell.ShellObserver.ShellCommandEvent
-
The resolved ShellCommand responsible for carrying out the command string.
- commandName() - Method in class burlap.shell.command.env.AddStateObjectCommand
-
- commandName() - Method in class burlap.shell.command.env.EpisodeRecordingCommands.EpisodeBrowserCommand
-
- commandName() - Method in class burlap.shell.command.env.EpisodeRecordingCommands.RecordCommand
-
- commandName() - Method in class burlap.shell.command.env.ExecuteActionCommand
-
- commandName() - Method in class burlap.shell.command.env.IsTerminalCommand
-
- commandName() - Method in class burlap.shell.command.env.ListActionsCommand
-
- commandName() - Method in class burlap.shell.command.env.ListPropFunctions
-
- commandName() - Method in class burlap.shell.command.env.ObservationCommand
-
- commandName() - Method in class burlap.shell.command.env.RemoveStateObjectCommand
-
- commandName() - Method in class burlap.shell.command.env.ResetEnvCommand
-
- commandName() - Method in class burlap.shell.command.env.RewardCommand
-
- commandName() - Method in class burlap.shell.command.env.SetVarCommand
-
- commandName() - Method in class burlap.shell.command.reserved.AliasCommand
-
- commandName() - Method in class burlap.shell.command.reserved.AliasesCommand
-
- commandName() - Method in class burlap.shell.command.reserved.CommandsCommand
-
- commandName() - Method in class burlap.shell.command.reserved.HelpCommand
-
- commandName() - Method in class burlap.shell.command.reserved.QuitCommand
-
- commandName() - Method in interface burlap.shell.command.ShellCommand
-
Returns the default name of this command.
- commandName() - Method in class burlap.shell.command.world.AddStateObjectSGCommand
-
- commandName() - Method in class burlap.shell.command.world.GameCommand
-
- commandName() - Method in class burlap.shell.command.world.GenerateStateCommand
-
- commandName() - Method in class burlap.shell.command.world.IsTerminalSGCommand
-
- commandName() - Method in class burlap.shell.command.world.JointActionCommand
-
- commandName() - Method in class burlap.shell.command.world.LastJointActionCommand
-
- commandName() - Method in class burlap.shell.command.world.ManualAgentsCommands.ListManualAgents
-
- commandName() - Method in class burlap.shell.command.world.ManualAgentsCommands.LSManualAgentActionsCommands
-
- commandName() - Method in class burlap.shell.command.world.ManualAgentsCommands.RegisterAgentCommand
-
- commandName() - Method in class burlap.shell.command.world.ManualAgentsCommands.SetAgentAction
-
- commandName() - Method in class burlap.shell.command.world.RemoveStateObjectSGCommand
-
- commandName() - Method in class burlap.shell.command.world.RewardsCommand
-
- commandName() - Method in class burlap.shell.command.world.SetVarSGCommand
-
- commandName() - Method in class burlap.shell.command.world.WorldObservationCommand
-
- commandName() - Method in class burlap.shell.visual.VisualExplorer.LivePollCommand
-
- commands - Variable in class burlap.shell.BurlapShell
-
- CommandsCommand - Class in burlap.shell.command.reserved
-
- CommandsCommand() - Constructor for class burlap.shell.command.reserved.CommandsCommand
-
- commandString - Variable in class burlap.shell.ShellObserver.ShellCommandEvent
-
The shell command string that was executed.
- compare(PrioritizedSearchNode, PrioritizedSearchNode) - Method in class burlap.behavior.singleagent.planning.deterministic.informed.PrioritizedSearchNode.PSNComparator
-
- compare(PrioritizedSweeping.BPTRNode, PrioritizedSweeping.BPTRNode) - Method in class burlap.behavior.singleagent.planning.stochastic.valueiteration.PrioritizedSweeping.BPTRNodeComparator
-
- compare(Object, Object) - Method in class burlap.datastructures.AlphanumericSorting
-
The compare method that compares the alphanumeric strings
- completedExperiment - Variable in class burlap.behavior.singleagent.auxiliary.performance.LearningAlgorithmExperimenter
-
Whether the experimenter has completed.
- completedExperiment - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentExperimenter
-
Whether the experimenter has completed.
- computeBoltzmannPolicyGradient(State, Action, DifferentiableQFunction, double) - Static method in class burlap.behavior.singleagent.learnfromdemo.mlirl.support.BoltzmannPolicyGradient
-
Computes the gradient of a Boltzmann policy using the given differentiable valueFunction.
- computeClusterTrajectoryLoggedNormalization(int, double[][]) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MultipleIntentionsMLIRL
-
Given a matrix holding the log[Pr(c)] + log[Pr(t | c)] values in its entries, where Pr(c) is the probability of the cluster and Pr(t | c) is the probability of the trajectory given the cluster, this method returns the log of the standard probability normalization factor for trajectory t in the matrix.
- computeColStrategy(double[][], double[][]) - Method in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.BimatrixEquilibriumSolver
-
Computes and returns the column player strategy for the given bimatrix game.
- computeColStrategy(double[][], double[][]) - Method in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers.CorrelatedEquilibrium
-
- computeColStrategy(double[][], double[][]) - Method in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers.MaxMax
-
- computeColStrategy(double[][], double[][]) - Method in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers.MinMax
-
- computeColStrategy(double[][], double[][]) - Method in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers.Utilitarian
-
- computeExactValueFunction - Variable in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
-
This parameter indicates whether the exact finite horizon value function is computed or whether sparse sampling should be used to estimate it.
- computeF(PrioritizedSearchNode, Action, HashableState, double) - Method in class burlap.behavior.singleagent.planning.deterministic.informed.astar.AStar
-
- computeF(PrioritizedSearchNode, Action, HashableState, EnvironmentOutcome) - Method in class burlap.behavior.singleagent.planning.deterministic.informed.astar.DynamicWeightedAStar
-
- computeF(PrioritizedSearchNode, Action, HashableState, double) - Method in class burlap.behavior.singleagent.planning.deterministic.informed.astar.StaticWeightedAStar
-
- computeF(PrioritizedSearchNode, Action, HashableState, double) - Method in class burlap.behavior.singleagent.planning.deterministic.informed.astar.WeightedGreedy
-
- computeF(PrioritizedSearchNode, Action, HashableState, double) - Method in class burlap.behavior.singleagent.planning.deterministic.informed.BestFirst
-
This method returns the f-score for a state given the parent search node, the generating action, and the state that was produced.
- computeFlatHashCode(State) - Method in class burlap.statehashing.simple.IDSimpleHashableState
-
- computeFlatHashCode(State) - Method in class burlap.statehashing.simple.IISimpleHashableState
-
- computeHashCode(State) - Method in class burlap.statehashing.simple.IDSimpleHashableState
-
Computes the hash code for the input state.
- computeHashCode(State) - Method in class burlap.statehashing.simple.IISimpleHashableState
-
Computes the hash code for the input state.
- computeOOHashCode(OOState) - Method in class burlap.statehashing.masked.IDMaskedHashableState
-
- computeOOHashCode(OOState) - Method in class burlap.statehashing.masked.IIMaskedHashableState
-
- computeOOHashCode(OOState) - Method in class burlap.statehashing.maskeddiscretized.IDDiscMaskedHashableState
-
- computeOOHashCode(OOState) - Method in class burlap.statehashing.maskeddiscretized.IIDiscMaskedHashableState
-
- computeOOHashCode(OOState) - Method in class burlap.statehashing.simple.IDSimpleHashableState
-
- computeOOHashCode(OOState) - Method in class burlap.statehashing.simple.IISimpleHashableState
-
- computePerClusterMLIRLWeights() - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MultipleIntentionsMLIRL
-
Computes the probability of each trajectory being generated by each cluster and returns it in a matrix.
- computePolicyFromTree() - Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTTreeWalkPolicy
-
Computes a hash-backed policy for every state visited along the greedy path of the UCT tree.
- computePolicyGradient(double, double[], double, double, FunctionGradient[], int) - Static method in class burlap.behavior.singleagent.learnfromdemo.mlirl.support.BoltzmannPolicyGradient
-
Computes the gradient of a Boltzmann policy using values derived from a differentiable Boltzmann backup valueFunction.
- computeProbabilityOfClustersGivenTrajectory(Episode) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MultipleIntentionsMLIRL
-
Returns the probability of each behavior cluster given the trajectory.
- computeProbs() - Method in class burlap.datastructures.BoltzmannDistribution
-
Computes the probability distribution.
- computeQ(State, Action) - Method in class burlap.behavior.singleagent.planning.stochastic.DynamicProgramming
-
Computes the Q-value. This computation *is* compatible with Option objects.
- computeQGradient(State, Action) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableDP
-
Computes the Q-value gradient for the given State and Action.
- computeRowStrategy(double[][], double[][]) - Method in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.BimatrixEquilibriumSolver
-
Computes and returns the row player strategy for the given bimatrix game.
- computeRowStrategy(double[][], double[][]) - Method in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers.CorrelatedEquilibrium
-
- computeRowStrategy(double[][], double[][]) - Method in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers.MaxMax
-
- computeRowStrategy(double[][], double[][]) - Method in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers.MinMax
-
- computeRowStrategy(double[][], double[][]) - Method in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers.Utilitarian
-
- computesExactValueFunction() - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
-
Returns whether this valueFunction computes the exact finite horizon value function (by using the full transition dynamics) or whether
it estimates the value function with sampling.
- computeTempNormalized() - Method in class burlap.datastructures.BoltzmannDistribution
-
Computes the temperature normalized preference values
- computeTransitions(State, Option, HashedAggregator<HashableState>, double[]) - Method in class burlap.behavior.singleagent.options.model.BFSMarkovOptionModel
-
- computeTransitions(State, Option, HashedAggregator<HashableState>, double[]) - Method in class burlap.behavior.singleagent.options.model.BFSNonMarkovOptionModel
-
- computeUCTQ(UCTStateNode, UCTActionNode) - Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
-
Returns the upper confidence Q-value for a given state node and action node.
- ConcatenatedObjectFeatures - Class in burlap.behavior.functionapproximation.dense
-
This class is used to produce a state feature vector from an OOState by iterating over the objects, generating a double array for each object, and concatenating the resulting vectors into one vector.
- ConcatenatedObjectFeatures() - Constructor for class burlap.behavior.functionapproximation.dense.ConcatenatedObjectFeatures
-
- ConcatenatedObjectFeatures(List<String>, Map<String, DenseStateFeatures>) - Constructor for class burlap.behavior.functionapproximation.dense.ConcatenatedObjectFeatures
-
- config - Variable in class burlap.statehashing.discretized.DiscretizingHashableStateFactory
-
The discretization config
- config - Variable in class burlap.statehashing.discretized.IDDiscHashableState
-
- config - Variable in class burlap.statehashing.discretized.IIDiscHashableState
-
- config - Variable in class burlap.statehashing.masked.IDMaskedHashableState
-
- config - Variable in class burlap.statehashing.masked.IIMaskedHashableState
-
- config - Variable in class burlap.statehashing.masked.MaskedHashableStateFactory
-
- config - Variable in class burlap.statehashing.maskeddiscretized.IDDiscMaskedHashableState
-
- config - Variable in class burlap.statehashing.maskeddiscretized.IIDiscMaskedHashableState
-
- consoleFrame - Variable in class burlap.shell.visual.SGVisualExplorer
-
- consoleFrame - Variable in class burlap.shell.visual.VisualExplorer
-
- constantDoubleArray(double, int) - Static method in class burlap.behavior.stochasticgames.solvers.GeneralBimatrixSolverTools
-
Returns a double array of a given dimension filled with the same value.
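For example, a uniform mixed strategy over four actions can be built directly with this helper:

    import burlap.behavior.stochasticgames.solvers.GeneralBimatrixSolverTools;

    public class UniformStrategyExample {
        public static void main(String[] args) {
            // four entries, each 0.25: a uniform strategy over four actions
            double[] uniform = GeneralBimatrixSolverTools.constantDoubleArray(0.25, 4);
            System.out.println(java.util.Arrays.toString(uniform));
        }
    }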
- ConstantLR - Class in burlap.behavior.learningrate
-
A class for specifying a constant learning rate that never changes.
- ConstantLR() - Constructor for class burlap.behavior.learningrate.ConstantLR
-
Constructs a constant learning rate of 0.1.
- ConstantLR(Double) - Constructor for class burlap.behavior.learningrate.ConstantLR
-
Constructs a constant learning rate for the given value
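A sketch of creating a fixed rate to hand to a learning algorithm that accepts a LearningRate; the 0.05 value is only illustrative.

    import burlap.behavior.learningrate.ConstantLR;
    import burlap.behavior.learningrate.LearningRate;

    public class FixedLearningRate {
        public static LearningRate make() {
            return new ConstantLR(0.05); // the same 0.05 step size is returned for every state-action query
        }
    }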
- ConstantMADPPlannerFactory(MADynamicProgramming) - Constructor for class burlap.behavior.stochasticgames.agents.madp.MADPPlannerFactory.ConstantMADPPlannerFactory
-
Initializes with a given valueFunction reference.
- ConstantStateGenerator - Class in burlap.mdp.auxiliary.common
-
This class takes a source state as input and returns copies of it for every call of generateState().
- ConstantStateGenerator(State) - Constructor for class burlap.mdp.auxiliary.common.ConstantStateGenerator
-
This class takes a source state as input and returns copies of it for every call of generateState().
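A sketch, assuming some already-constructed initial State (passed in as a parameter here) and the StateGenerator interface from burlap.mdp.auxiliary:

    import burlap.mdp.auxiliary.StateGenerator;
    import burlap.mdp.auxiliary.common.ConstantStateGenerator;
    import burlap.mdp.core.state.State;

    public class FixedInitialState {
        public static StateGenerator fixedStart(State initialState) {
            // every generateState() call returns a copy of initialState
            return new ConstantStateGenerator(initialState);
        }
    }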
- ConstantValueFunction - Class in burlap.behavior.valuefunction
-
A QFunction implementation that always returns a constant value.
- ConstantValueFunction() - Constructor for class burlap.behavior.valuefunction.ConstantValueFunction
-
Will cause this object to return 0 for all initialization values.
- ConstantValueFunction(double) - Constructor for class burlap.behavior.valuefunction.ConstantValueFunction
-
Will cause this object to return the given value for all initialization values.
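For example, optimistic value initialization can be expressed with a constant function; the value 1.0 is illustrative.

    import burlap.behavior.valuefunction.ConstantValueFunction;

    public class OptimisticInit {
        // every state (and state-action pair) queried against this function is initialized to 1.0
        public static final ConstantValueFunction OPTIMISTIC = new ConstantValueFunction(1.0);
    }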
- ConstantWorldGenerator - Class in burlap.mdp.stochasticgames.tournament.common
-
A WorldGenerator that always generates the same world configuration.
- ConstantWorldGenerator(SGDomain, JointModel, JointRewardFunction, TerminalFunction, StateGenerator) - Constructor for class burlap.mdp.stochasticgames.tournament.common.ConstantWorldGenerator
-
Deprecated.
- ConstantWorldGenerator(SGDomain, JointRewardFunction, TerminalFunction, StateGenerator) - Constructor for class burlap.mdp.stochasticgames.tournament.common.ConstantWorldGenerator
-
Initializes the WorldGenerator.
- ConstantWorldGenerator(SGDomain, JointModel, JointRewardFunction, TerminalFunction, StateGenerator, StateMapping) - Constructor for class burlap.mdp.stochasticgames.tournament.common.ConstantWorldGenerator
-
Deprecated.
- ConstantWorldGenerator(SGDomain, JointRewardFunction, TerminalFunction, StateGenerator, StateMapping) - Constructor for class burlap.mdp.stochasticgames.tournament.common.ConstantWorldGenerator
-
Initializes the WorldGenerator.
- constructBimatrix(State, List<Action>) - Method in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.EquilibriumPlayingSGAgent
-
Constructs a bimatrix game from the possible joint rewards of the given state.
- constructor(Environment, EnvironmentObserver...) - Static method in class burlap.mdp.singleagent.environment.extensions.EnvironmentServer
-
- constructServerOrAddObservers(Environment, EnvironmentObserver...) - Static method in class burlap.mdp.singleagent.environment.extensions.EnvironmentServer
-
- containsActionPreference(UCTStateNode) - Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
-
Returns true if the sample returns for any actions are different
- containsInstance(T) - Method in class burlap.datastructures.HashIndexedHeap
-
Checks if the heap contains this object and returns the pointer to the stored object if it does; otherwise null is returned.
- containsKey(K) - Method in class burlap.datastructures.HashedAggregator
-
Returns true if this object has a value associated with the specified key, false otherwise.
- containsKey(String) - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.ActionNameMap
-
- containsParameterizedActions - Variable in class burlap.behavior.singleagent.learning.actorcritic.actor.BoltzmannActor
-
Indicates whether the actions that this agent can perform are parameterized
- control(Environment, double) - Method in class burlap.behavior.singleagent.options.MacroAction
-
- control(Environment, double) - Method in interface burlap.behavior.singleagent.options.Option
-
- control(Option, Environment, double) - Static method in class burlap.behavior.singleagent.options.Option.Helper
-
- control(Environment, double) - Method in class burlap.behavior.singleagent.options.SubgoalOption
-
- controlContainer - Variable in class burlap.behavior.singleagent.auxiliary.EpisodeSequenceVisualizer
-
- controlContainer - Variable in class burlap.behavior.stochasticgames.auxiliary.GameSequenceVisualizer
-
- controlDepth - Variable in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
-
- convertIntoObservation(State) - Method in class burlap.domain.singleagent.rlglue.RLGlueEnvironment
-
Takes an OO-MDP state and converts it into an RLGlue observation.
- copy() - Method in class burlap.behavior.functionapproximation.dense.ConcatenatedObjectFeatures
-
- copy() - Method in class burlap.behavior.functionapproximation.dense.DenseCrossProductFeatures
-
- copy() - Method in class burlap.behavior.functionapproximation.dense.DenseLinearVFA
-
- copy() - Method in interface burlap.behavior.functionapproximation.dense.DenseStateActionFeatures
-
- copy() - Method in class burlap.behavior.functionapproximation.dense.DenseStateActionLinearVFA
-
- copy() - Method in interface burlap.behavior.functionapproximation.dense.DenseStateFeatures
-
- copy() - Method in class burlap.behavior.functionapproximation.dense.fourier.FourierBasis
-
- copy() - Method in class burlap.behavior.functionapproximation.dense.NormalizedVariableFeatures
-
- copy() - Method in class burlap.behavior.functionapproximation.dense.NumericVariableFeatures
-
- copy() - Method in class burlap.behavior.functionapproximation.dense.PFFeatures
-
- copy() - Method in class burlap.behavior.functionapproximation.dense.rbf.RBFFeatures
-
- copy() - Method in class burlap.behavior.functionapproximation.dense.SparseToDenseFeatures
-
- copy() - Method in interface burlap.behavior.functionapproximation.ParametricFunction
-
- copy() - Method in class burlap.behavior.functionapproximation.sparse.LinearVFA
-
- copy() - Method in class burlap.behavior.functionapproximation.sparse.SparseCrossProductFeatures
-
- copy() - Method in class burlap.behavior.functionapproximation.sparse.SparseCrossProductFeatures.FeaturesMap
-
- copy() - Method in interface burlap.behavior.functionapproximation.sparse.SparseStateActionFeatures
-
Returns a deep copy of this features function.
- copy() - Method in interface burlap.behavior.functionapproximation.sparse.SparseStateFeatures
-
Returns a deep copy of this feature database.
- copy() - Method in class burlap.behavior.functionapproximation.sparse.tilecoding.TileCodingFeatures
-
- copy() - Method in class burlap.behavior.policy.support.AnnotatedAction
-
- copy() - Method in class burlap.behavior.singleagent.Episode
-
- copy() - Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueDomain.RLGlueActionType.RLGLueAction
-
- copy() - Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueState
-
- copy() - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs.LinearStateActionDifferentiableRF
-
- copy() - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs.LinearStateDifferentiableRF
-
- copy() - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.DiffVFRF
-
- copy() - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
-
- copy() - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearStateDiffVF
-
- copy() - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.VanillaDiffVinit
-
- copy() - Method in class burlap.behavior.singleagent.options.MacroAction
-
- copy() - Method in class burlap.behavior.singleagent.options.SubgoalOption
-
- copy() - Method in class burlap.behavior.stochasticgames.agents.naiveq.history.HistoryState
-
- copy() - Method in class burlap.behavior.stochasticgames.JointPolicy
-
Creates a copy of this joint policy and returns it.
- copy() - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.ECorrelatedQJointPolicy
-
- copy() - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EGreedyJointPolicy
-
- copy() - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EGreedyMaxWellfare
-
- copy() - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EMinMaxPolicy
-
- copy() - Method in class burlap.behavior.stochasticgames.PolicyFromJointPolicy
-
Returns a copy of this policy, which entails first making a copy of the joint policy.
- copy() - Method in class burlap.domain.singleagent.blockdude.state.BlockDudeAgent
-
- copy() - Method in class burlap.domain.singleagent.blockdude.state.BlockDudeCell
-
- copy() - Method in class burlap.domain.singleagent.blockdude.state.BlockDudeMap
-
- copy() - Method in class burlap.domain.singleagent.blockdude.state.BlockDudeState
-
- copy() - Method in class burlap.domain.singleagent.blocksworld.BlocksWorldBlock
-
- copy() - Method in class burlap.domain.singleagent.blocksworld.BlocksWorldState
-
- copy() - Method in class burlap.domain.singleagent.cartpole.CartPoleDomain.CPPhysicsParams
-
- copy() - Method in class burlap.domain.singleagent.cartpole.InvertedPendulum.IPPhysicsParams
-
- copy() - Method in class burlap.domain.singleagent.cartpole.states.CartPoleFullState
-
- copy() - Method in class burlap.domain.singleagent.cartpole.states.CartPoleState
-
- copy() - Method in class burlap.domain.singleagent.cartpole.states.InvertedPendulumState
-
- copy() - Method in class burlap.domain.singleagent.frostbite.state.FrostbiteAgent
-
- copy() - Method in class burlap.domain.singleagent.frostbite.state.FrostbiteIgloo
-
- copy() - Method in class burlap.domain.singleagent.frostbite.state.FrostbitePlatform
-
- copy() - Method in class burlap.domain.singleagent.frostbite.state.FrostbiteState
-
- copy() - Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain.GraphActionType.GraphAction
-
- copy() - Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain.NodeTransitionProbability
-
- copy() - Method in class burlap.domain.singleagent.graphdefined.GraphStateNode
-
- copy() - Method in class burlap.domain.singleagent.gridworld.state.GridAgent
-
- copy() - Method in class burlap.domain.singleagent.gridworld.state.GridLocation
-
- copy() - Method in class burlap.domain.singleagent.gridworld.state.GridWorldState
-
- copy() - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
-
- copy() - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.ThrustType.ThrustAction
-
- copy() - Method in class burlap.domain.singleagent.lunarlander.state.LLAgent
-
- copy() - Method in class burlap.domain.singleagent.lunarlander.state.LLBlock
-
- copy() - Method in class burlap.domain.singleagent.lunarlander.state.LLBlock.LLObstacle
-
- copy() - Method in class burlap.domain.singleagent.lunarlander.state.LLBlock.LLPad
-
- copy() - Method in class burlap.domain.singleagent.lunarlander.state.LLState
-
- copy() - Method in class burlap.domain.singleagent.mountaincar.MCState
-
- copy() - Method in class burlap.domain.singleagent.mountaincar.MountainCar.MCPhysicsParams
-
- copy() - Method in class burlap.domain.singleagent.pomdp.tiger.TigerObservation
-
- copy() - Method in class burlap.domain.singleagent.pomdp.tiger.TigerState
-
- copy() - Method in class burlap.domain.stochasticgames.gridgame.state.GGAgent
-
- copy() - Method in class burlap.domain.stochasticgames.gridgame.state.GGGoal
-
- copy() - Method in class burlap.domain.stochasticgames.gridgame.state.GGWall.GGHorizontalWall
-
- copy() - Method in class burlap.domain.stochasticgames.gridgame.state.GGWall.GGVerticalWall
-
- copy() - Method in class burlap.domain.stochasticgames.normalform.NFGameState
-
- copy() - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.ActionNameMap
-
- copy() - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.AgentPayoutFunction
-
- copy() - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.MatrixAction
-
- copy() - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.StrategyProfile
-
- copy() - Method in interface burlap.mdp.core.action.Action
-
Returns a copy of this grounded action.
- copy() - Method in class burlap.mdp.core.action.SimpleAction
-
- copy() - Method in class burlap.mdp.core.oo.state.generic.DeepOOState
-
- copy() - Method in class burlap.mdp.core.oo.state.generic.GenericOOState
-
- copy() - Method in class burlap.mdp.core.state.NullState
-
- copy() - Method in interface burlap.mdp.core.state.State
-
Returns a copy of this state suitable for creating state transitions.
- copy() - Method in class burlap.mdp.singleagent.oo.ObjectParameterizedActionType.SAObjectParameterizedAction
-
- copy() - Method in class burlap.mdp.singleagent.pomdp.beliefstate.TabularBeliefState
-
- copy() - Method in class burlap.mdp.stochasticgames.JointAction
-
- copy() - Method in class burlap.statehashing.discretized.DiscConfig
-
- copy() - Method in class burlap.statehashing.masked.MaskedConfig
-
- copy() - Method in class burlap.visualizer.Visualizer
-
- copyBlocks() - Method in class burlap.domain.singleagent.blockdude.state.BlockDudeState
-
- copyTransitionDynamics() - Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain
-
Returns a deep copy of the transition dynamics
- copyWithName(String) - Method in class burlap.domain.singleagent.blockdude.state.BlockDudeAgent
-
- copyWithName(String) - Method in class burlap.domain.singleagent.blockdude.state.BlockDudeCell
-
- copyWithName(String) - Method in class burlap.domain.singleagent.blockdude.state.BlockDudeMap
-
- copyWithName(String) - Method in class burlap.domain.singleagent.blocksworld.BlocksWorldBlock
-
- copyWithName(String) - Method in class burlap.domain.singleagent.frostbite.state.FrostbiteAgent
-
- copyWithName(String) - Method in class burlap.domain.singleagent.frostbite.state.FrostbiteIgloo
-
- copyWithName(String) - Method in class burlap.domain.singleagent.frostbite.state.FrostbitePlatform
-
- copyWithName(String) - Method in class burlap.domain.singleagent.gridworld.state.GridAgent
-
- copyWithName(String) - Method in class burlap.domain.singleagent.gridworld.state.GridLocation
-
- copyWithName(String) - Method in class burlap.domain.singleagent.lunarlander.state.LLAgent
-
- copyWithName(String) - Method in class burlap.domain.singleagent.lunarlander.state.LLBlock
-
- copyWithName(String) - Method in class burlap.domain.stochasticgames.gridgame.state.GGAgent
-
- copyWithName(String) - Method in class burlap.domain.stochasticgames.gridgame.state.GGGoal
-
- copyWithName(String) - Method in class burlap.domain.stochasticgames.gridgame.state.GGWall.GGHorizontalWall
-
- copyWithName(String) - Method in class burlap.domain.stochasticgames.gridgame.state.GGWall.GGVerticalWall
-
- copyWithName(String) - Method in interface burlap.mdp.core.oo.state.ObjectInstance
-
- correctDoor - Variable in class burlap.domain.singleagent.pomdp.tiger.TigerModel
-
The reward for opening the correct door.
- correctDoorReward - Variable in class burlap.domain.singleagent.pomdp.tiger.TigerDomain
-
The reward for opening the correct door.
- CorrelatedEquilibrium - Class in burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers
-
Computes the correlated equilibrium strategy.
- CorrelatedEquilibrium() - Constructor for class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers.CorrelatedEquilibrium
-
- CorrelatedEquilibrium(CorrelatedEquilibriumSolver.CorrelatedEquilibriumObjective) - Constructor for class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers.CorrelatedEquilibrium
-
- CorrelatedEquilibriumSolver - Class in burlap.behavior.stochasticgames.solvers
-
This class provides static methods for solving correlated equilibrium problems for Bimatrix games or values represented in a Bimatrix.
- CorrelatedEquilibriumSolver.CorrelatedEquilibriumObjective - Enum in burlap.behavior.stochasticgames.solvers
-
The four different equilibrium objectives that can be used:
UTILITARIAN, EGALITARIAN, REPUBLICAN, and LIBERTARIAN.
- CorrelatedQ - Class in burlap.behavior.stochasticgames.madynamicprogramming.backupOperators
-
A correlated Q backup operator [1] for using in stochastic game multi-agent Q-learning or dynamic programming.
- CorrelatedQ(CorrelatedEquilibriumSolver.CorrelatedEquilibriumObjective) - Constructor for class burlap.behavior.stochasticgames.madynamicprogramming.backupOperators.CorrelatedQ
-
Initializes an operator for the given correlated equilibrium objective.
- cosScale - Variable in class burlap.domain.singleagent.mountaincar.MountainCar.MCPhysicsParams
-
Constant factor multiplied by the agent position inside the cosine that defines the shape of the curve.
- costWeight - Variable in class burlap.behavior.singleagent.planning.deterministic.informed.astar.WeightedGreedy
-
The cost function weight.
- CPClassicModel - Class in burlap.domain.singleagent.cartpole.model
-
- CPClassicModel(CartPoleDomain.CPPhysicsParams) - Constructor for class burlap.domain.singleagent.cartpole.model.CPClassicModel
-
- CPCorrectModel - Class in burlap.domain.singleagent.cartpole.model
-
- CPCorrectModel(CartPoleDomain.CPPhysicsParams) - Constructor for class burlap.domain.singleagent.cartpole.model.CPCorrectModel
-
- CPPhysicsParams() - Constructor for class burlap.domain.singleagent.cartpole.CartPoleDomain.CPPhysicsParams
-
- CPPhysicsParams(double, double, double, double, double, double, double, double, double, double, double, double, boolean, boolean) - Constructor for class burlap.domain.singleagent.cartpole.CartPoleDomain.CPPhysicsParams
-
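A minimal sketch of wiring the cart pole physics parameters into the two transition models, using only the constructors listed above (the no-argument CPPhysicsParams constructor is assumed to supply default values):

    import burlap.domain.singleagent.cartpole.CartPoleDomain;
    import burlap.domain.singleagent.cartpole.model.CPClassicModel;
    import burlap.domain.singleagent.cartpole.model.CPCorrectModel;

    public class CartPoleModelExample {
        public static void main(String[] args) {
            // Physics parameters with their default values.
            CartPoleDomain.CPPhysicsParams params = new CartPoleDomain.CPPhysicsParams();

            // The classic (approximate) dynamics model and the corrected dynamics model,
            // both built from the same parameter object.
            CPClassicModel classic = new CPClassicModel(params);
            CPCorrectModel correct = new CPCorrectModel(params);
        }
    }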
- createDeviationRenderer() - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
Creates a DeviationRenderer to use for the trial average plots
- createDeviationRenderer() - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
-
Creates a DeviationRenderer to use for the trial average plots
- createGridWorldBasedValueFunctionVisualizerGUI(List<State>, ValueFunction, Policy, Object, Object, VariableDomain, VariableDomain, double, double, String, String, String, String) - Static method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
-
A method for creating common 2D arrow-glyphed value function and policy visualization.
- createRepeatedGameWorld(SGAgent...) - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
-
Creates a world instance for this game in which the provided agents join in the order they are passed.
- createRepeatedGameWorld(SGDomain, SGAgent...) - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
-
Creates a world instance for this game in which the provided agents join in the order they are passed.
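A minimal sketch of the agent-only variant above, assuming the method returns a burlap.mdp.stochasticgames.world.World (the return type is not shown in this index) and that SGAgent resides in burlap.mdp.stochasticgames.agent:

    import burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame;
    import burlap.mdp.stochasticgames.agent.SGAgent;
    import burlap.mdp.stochasticgames.world.World;

    public class RepeatedGameWorldExample {
        // The provided agents join the created world in the order they are passed.
        public static World buildWorld(SingleStageNormalFormGame game, SGAgent a1, SGAgent a2) {
            return game.createRepeatedGameWorld(a1, a2);
        }
    }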
- createUnmodeledFavoredPolicy() - Method in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax
-
- critic - Variable in class burlap.behavior.singleagent.learning.actorcritic.ActorCritic
-
The critic component to use
- Critic - Interface in burlap.behavior.singleagent.learning.actorcritic
-
This interface provides the methods necessary for implementing the critic part of an actor-critic learning algorithm.
- critique - Variable in class burlap.behavior.singleagent.learning.actorcritic.CritiqueResult
-
The critique of this behavior.
- critiqueAndUpdate(EnvironmentOutcome) - Method in interface burlap.behavior.singleagent.learning.actorcritic.Critic
-
This method's implementation provides the critique for some specific instance of the behavior.
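A minimal sketch of feeding one observed transition to a critic, assuming EnvironmentOutcome lives in burlap.mdp.singleagent.environment; any value returned by critiqueAndUpdate is ignored here:

    import burlap.behavior.singleagent.learning.actorcritic.Critic;
    import burlap.mdp.singleagent.environment.EnvironmentOutcome;

    public class CriticStepExample {
        // Hand the critic a single s, a, s', r outcome so it can critique the
        // behavior and update its own value estimates.
        public static void step(Critic critic, EnvironmentOutcome outcome) {
            critic.critiqueAndUpdate(outcome);
        }
    }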
- critiqueAndUpdate(EnvironmentOutcome) - Method in class burlap.behavior.singleagent.learning.actorcritic.critics.TDLambda
-
- critiqueAndUpdate(EnvironmentOutcome) - Method in class burlap.behavior.singleagent.learning.actorcritic.critics.TimeIndexedTDLambda
-
- CritiqueResult - Class in burlap.behavior.singleagent.learning.actorcritic
-
The CritiqueResult class stores the relevant information regarding a critique of behavior.
- CritiqueResult(State, Action, State, double) - Constructor for class burlap.behavior.singleagent.learning.actorcritic.CritiqueResult
-
Initializes with a state-action-state behavior tuple and the value of the critique for this behavior.
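A minimal sketch of building the result object from the constructor above; the State and Action import paths are assumptions for this BURLAP release, and the final argument is the scalar critique value (for example, a TD error):

    import burlap.behavior.singleagent.learning.actorcritic.CritiqueResult;
    import burlap.mdp.core.action.Action;
    import burlap.mdp.core.state.State;

    public class CritiqueResultExample {
        public static CritiqueResult record(State s, Action a, State sPrime, double critique) {
            // State-action-state behavior tuple plus the value of its critique.
            return new CritiqueResult(s, a, sPrime, critique);
        }
    }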
- crossesWall(GridGameStandardMechanics.Location2, GridGameStandardMechanics.Location2, ObjectInstance, boolean) - Method in class burlap.domain.stochasticgames.gridgame.GridGameStandardMechanics
-
Returns true if the agent would cross a given wall instance given a movement attempt.
- cseAvgSeries - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.AgentDatasets
-
All trials' average cumulative steps per episode series data
- cseAvgSeries - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.AgentDatasets
-
All trials' average cumulative steps per episode series data
- csrAvgSeries - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.AgentDatasets
-
All trials' average cumulative reward per step series data
- csrAvgSeries - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.AgentDatasets
-
All trials' average cumulative reward per step series data
- cumulatedRewardMap - Variable in class burlap.behavior.singleagent.planning.deterministic.informed.astar.AStar
-
Data structure for maintaining g(n): the cost so far to node n
- cumulativeDiscountedReward - Variable in class burlap.behavior.singleagent.options.model.BFSMarkovOptionModel.OptionScanNode
-
The cumulative discounted reward received in reaching this node.
- cumulativeEpisodeReward - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.Trial
-
Stores the cumulative reward by episode
- cumulativeEpisodeReward - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.Trial
-
Stores the cumulative reward by episode
- cumulativeEpisodeRewardSeries - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.AgentDatasets
-
Most recent trial's cumulative reward per episode series data
- cumulativeEpisodeRewardSeries - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.AgentDatasets
-
Most recent trial's cumulative reward per episode series data
- cumulativeStepEpisode - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.Trial
-
Stores the cumulative steps by episode
- cumulativeStepEpisode - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.Trial
-
Stores the cumulative steps by episode
- cumulativeStepEpisodeSeries - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.AgentDatasets
-
Most recent trial's cumulative steps per episode series data
- cumulativeStepEpisodeSeries - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.AgentDatasets
-
Most recent trial's cumulative steps per episode series data
- cumulativeStepReward - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.Trial
-
Stores the cumulative reward by step
- cumulativeStepReward - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.Trial
-
Stores the cumulative reward by step
- cumulativeStepRewardSeries - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.AgentDatasets
-
Most recent trial's cumulative reward per step series data
- cumulativeStepRewardSeries - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.AgentDatasets
-
Most recent trial's cumulative reward per step series data
- curAgentDatasets - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
contains the plot series data that will be displayed for the current agent
- curAgentName - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
The name of the current agent being tested
- curBelief - Variable in class burlap.mdp.singleagent.pomdp.BeliefAgent
-
- curEA - Variable in class burlap.behavior.singleagent.auxiliary.EpisodeSequenceVisualizer
-
- curEpisode - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
the current episode that was recorded
- curEpisode - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
-
the current episode that was recorded
- curEpisode - Variable in class burlap.shell.command.env.EpisodeRecordingCommands
-
- curEpisodeReward - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.Trial
-
The cumulative reward of the episode so far
- curEpisodeReward - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.Trial
-
The cumulative reward of the episode so far
- curEpisodeRewards - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.Trial
-
A list of the reward sequence in the current episode
- curEpisodeRewards - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.Trial
-
A list of the reward sequence in the current episode
- curEpisodeSteps - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.Trial
-
The number of steps in the episode so far
- curEpisodeSteps - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.Trial
-
The number of steps in the episode so far
- curGA - Variable in class burlap.behavior.stochasticgames.auxiliary.GameSequenceVisualizer
-
- curHState - Variable in class burlap.behavior.stochasticgames.agents.naiveq.history.SGQWActionHistory
-
- curObservation - Variable in class burlap.mdp.singleagent.pomdp.SimulatedPOEnvironment
-
The current observation from the POMDP environment
- currentActionOffset - Variable in class burlap.behavior.functionapproximation.dense.DenseLinearVFA
-
- currentFeatures - Variable in class burlap.behavior.functionapproximation.dense.DenseStateActionLinearVFA
-
- currentFeatures - Variable in class burlap.behavior.functionapproximation.sparse.LinearVFA
-
- currentGameEpisodeRecord - Variable in class burlap.mdp.stochasticgames.world.World
-
- currentGradient - Variable in class burlap.behavior.functionapproximation.dense.DenseLinearVFA
-
- currentGradient - Variable in class burlap.behavior.functionapproximation.dense.DenseStateActionLinearVFA
-
- currentGradient - Variable in class burlap.behavior.functionapproximation.sparse.LinearVFA
-
- currentObservation() - Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgent
-
- currentObservation() - Method in class burlap.behavior.stochasticgames.agents.interfacing.singleagent.LearningAgentToSGAgentInterface
-
- currentObservation() - Method in interface burlap.mdp.singleagent.environment.Environment
-
Returns the current observation of the environment as a State.
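A minimal sketch of polling the environment's current observation, assuming State resides in burlap.mdp.core.state:

    import burlap.mdp.core.state.State;
    import burlap.mdp.singleagent.environment.Environment;

    public class CurrentObservationExample {
        // Read, but do not advance, the environment: currentObservation only reports
        // the state the environment is presently in.
        public static State peek(Environment env) {
            return env.currentObservation();
        }
    }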
- currentObservation() - Method in class burlap.mdp.singleagent.environment.extensions.EnvironmentServer
-
- currentObservation() - Method in class burlap.mdp.singleagent.environment.SimulatedEnvironment
-
- currentObservation() - Method in class burlap.mdp.singleagent.pomdp.SimulatedPOEnvironment
-
- currentState - Variable in class burlap.behavior.stochasticgames.agents.interfacing.singleagent.LearningAgentToSGAgentInterface
-
The current state of the world
- currentState - Variable in class burlap.mdp.stochasticgames.world.World
-
- currentStateFeatures - Variable in class burlap.behavior.functionapproximation.dense.DenseLinearVFA
-
- currentValue - Variable in class burlap.behavior.functionapproximation.dense.DenseLinearVFA
-
- currentValue - Variable in class burlap.behavior.functionapproximation.dense.DenseStateActionLinearVFA
-
- currentValue - Variable in class burlap.behavior.functionapproximation.sparse.LinearVFA
-
- currentValueFunctionIsLower - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
-
Whether the current DynamicProgramming valueFunction reference points to the lower bound value function or the upper bound value function.
- curState - Variable in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgent
-
The current state of the environment
- curState - Variable in class burlap.behavior.stochasticgames.agents.naiveq.history.HistoryState
-
- curState - Variable in class burlap.domain.singleagent.rlglue.RLGlueEnvironment
-
The current state of the environment
- curState - Variable in class burlap.mdp.singleagent.environment.SimulatedEnvironment
-
The current state of the environment
- curState - Variable in class burlap.visualizer.StateRenderLayer
-
the current state to be painted next
- curStateIsTerminal - Variable in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgent
-
Whether the current state is a terminal state
- curStateIsTerminal - Variable in class burlap.behavior.stochasticgames.agents.interfacing.singleagent.LearningAgentToSGAgentInterface
-
Whether the last state was a terminal state
- curTime - Variable in class burlap.behavior.singleagent.learning.actorcritic.critics.TimeIndexedTDLambda
-
The current time index / depth of the current episode
- curTimeStep - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
the current time step that was recorded
- curTimeStep - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
-
the current time step that was recorded
- curTrial - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
Contains all the current trial performance data
- CustomRewardModel - Class in burlap.behavior.singleagent.learnfromdemo
-
- CustomRewardModel(SampleModel, RewardFunction) - Constructor for class burlap.behavior.singleagent.learnfromdemo.CustomRewardModel
-
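A minimal sketch of the constructor above: judging from the class name and signature, it wraps an existing model so that its transitions are scored by a caller-supplied reward function (as used in the learn-from-demonstration code). The SampleModel and RewardFunction import paths are assumptions:

    import burlap.behavior.singleagent.learnfromdemo.CustomRewardModel;
    import burlap.mdp.singleagent.model.RewardFunction;
    import burlap.mdp.singleagent.model.SampleModel;

    public class CustomRewardModelExample {
        // Wrap a base model so that its transitions report rewards from the supplied
        // reward function rather than the model's own.
        public static CustomRewardModel wrap(SampleModel base, RewardFunction rf) {
            return new CustomRewardModel(base, rf);
        }
    }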
- CustomRewardNoTermModel(SampleModel, RewardFunction) - Constructor for class burlap.behavior.singleagent.learnfromdemo.RewardValueProjection.CustomRewardNoTermModel
-
- CWGInit(SGDomain, JointRewardFunction, TerminalFunction, StateGenerator, StateMapping) - Method in class burlap.mdp.stochasticgames.tournament.common.ConstantWorldGenerator
-
- cWidth - Variable in class burlap.behavior.singleagent.auxiliary.EpisodeSequenceVisualizer
-
- cWidth - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
-
Visualizer canvas width
- cWidth - Variable in class burlap.behavior.stochasticgames.auxiliary.GameSequenceVisualizer
-
- cWidth - Variable in class burlap.mdp.singleagent.common.VisualActionObserver
-
The width of the painter
- cWidth - Variable in class burlap.mdp.stochasticgames.common.VisualWorldObserver
-
The width of the painter
- cWidth - Variable in class burlap.shell.visual.SGVisualExplorer
-
- cWidth - Variable in class burlap.shell.visual.VisualExplorer
-