
C

c - Variable in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.DifferentiableSparseSampling
The number of transition dynamics samples (for the root if depth-variable C is used)
c - Variable in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
The number of transition dynamics samples (for the root if depth-variable C is used)
c(int, String) - Static method in class burlap.debugtools.DPrint
A print command for the given debug code.
cachedExpectations - Variable in class burlap.behavior.singleagent.options.Option
The cached transition probabilities from each initiation state
cachedExpectedRewards - Variable in class burlap.behavior.singleagent.options.Option
The cached expected reward from each initiation state
CachedPolicy - Class in burlap.behavior.singleagent.planning.commonpolicies
This class can be used to lazily cache the policy of a source policy.
CachedPolicy(StateHashFactory, Policy) - Constructor for class burlap.behavior.singleagent.planning.commonpolicies.CachedPolicy
Initializes
CachedPolicy(StateHashFactory, Policy, int) - Constructor for class burlap.behavior.singleagent.planning.commonpolicies.CachedPolicy
Initializes
cartFriction - Variable in class burlap.domain.singleagent.cartpole.CartPoleDomain.CPPhysicsParams
The friction between the cart and ground
cartMass - Variable in class burlap.domain.singleagent.cartpole.CartPoleDomain.CPPhysicsParams
The mass of the cart.
cartMass - Variable in class burlap.domain.singleagent.cartpole.InvertedPendulum.IPPhysicsParams
The mass of the cart.
CartPoleDomain - Class in burlap.domain.singleagent.cartpole
The classic cart pole balancing problem as described by Barto, Sutton, and Anderson [2] with correct mechanics as described by Florian [1].
CartPoleDomain() - Constructor for class burlap.domain.singleagent.cartpole.CartPoleDomain
 
CartPoleDomain.CartPoleRewardFunction - Class in burlap.domain.singleagent.cartpole
A default reward function for this task.
CartPoleDomain.CartPoleRewardFunction() - Constructor for class burlap.domain.singleagent.cartpole.CartPoleDomain.CartPoleRewardFunction
Initializes with max pole angle threshold of 12 degrees (about 0.2 radians)
CartPoleDomain.CartPoleRewardFunction(double) - Constructor for class burlap.domain.singleagent.cartpole.CartPoleDomain.CartPoleRewardFunction
Initializes with a max pole angle as specified in radians
CartPoleDomain.CartPoleTerminalFunction - Class in burlap.domain.singleagent.cartpole
A default terminal function for this domain.
CartPoleDomain.CartPoleTerminalFunction() - Constructor for class burlap.domain.singleagent.cartpole.CartPoleDomain.CartPoleTerminalFunction
Initializes with default max angle of 12 degrees (about 0.2 radians)
CartPoleDomain.CartPoleTerminalFunction(double) - Constructor for class burlap.domain.singleagent.cartpole.CartPoleDomain.CartPoleTerminalFunction
Initializes with a max pole angle as specified in radians
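The "12 degrees (about 0.2 radians)" default quoted above is the standard degree-to-radian conversion, deg × π / 180. A quick self-contained check (the class name here is illustrative, not part of BURLAP):

```java
// Verifies the 12-degree default threshold quoted above: 12 * pi / 180 ~ 0.209 rad.
public class AngleCheck {
    public static double degToRad(double deg) {
        return deg * Math.PI / 180.0;
    }

    public static void main(String[] args) {
        System.out.println(degToRad(12.0)); // prints roughly 0.2094
    }
}
```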
CartPoleDomain.CPPhysicsParams - Class in burlap.domain.singleagent.cartpole
 
CartPoleDomain.CPPhysicsParams() - Constructor for class burlap.domain.singleagent.cartpole.CartPoleDomain.CPPhysicsParams
 
CartPoleDomain.CPPhysicsParams(double, double, double, double, double, double, double, double, double, double, double, double, boolean, boolean) - Constructor for class burlap.domain.singleagent.cartpole.CartPoleDomain.CPPhysicsParams
 
CartPoleDomain.MovementAction - Class in burlap.domain.singleagent.cartpole
A movement action which applies force in the specified direction.
CartPoleDomain.MovementAction(String, Domain, double, CartPoleDomain.CPPhysicsParams) - Constructor for class burlap.domain.singleagent.cartpole.CartPoleDomain.MovementAction
Initializes.
CartPoleStateParser - Class in burlap.domain.singleagent.cartpole
A custom state parser for the cart pole domain.
CartPoleStateParser(Domain) - Constructor for class burlap.domain.singleagent.cartpole.CartPoleStateParser
Initializes.
CartPoleVisualizer - Class in burlap.domain.singleagent.cartpole
Class for returning cart pole visualizer objects.
CartPoleVisualizer() - Constructor for class burlap.domain.singleagent.cartpole.CartPoleVisualizer
 
CartPoleVisualizer.CartPoleObjectPainter - Class in burlap.domain.singleagent.cartpole
An object painter for the cart pole object.
CartPoleVisualizer.CartPoleObjectPainter() - Constructor for class burlap.domain.singleagent.cartpole.CartPoleVisualizer.CartPoleObjectPainter
 
cellWidth() - Method in class burlap.behavior.singleagent.auxiliary.StateGridder.AttributeSpecification
Returns the width of a grid cell along this attribute.
centeredState - Variable in class burlap.behavior.singleagent.vfa.rbf.FVRBF
The center state of the RBF unit.
centeredState - Variable in class burlap.behavior.singleagent.vfa.rbf.RBF
The center state of this unit
cerAvgSeries - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.AgentDatasets
All trials' average cumulative reward per episode series data
cerAvgSeries - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter.AgentDatasets
All trials' average cumulative reward per episode series data
cf(int, String, Object...) - Static method in class burlap.debugtools.DPrint
A printf command for the given debug code.
changeWeight(T, double) - Method in class burlap.datastructures.StochasticTree
Changes the weight of the given element.
cHeight - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
Visualizer canvas height
cHeight - Variable in class burlap.behavior.singleagent.EpisodeSequenceVisualizer
 
cHeight - Variable in class burlap.behavior.stochasticgame.GameSequenceVisualizer
 
cHeight - Variable in class burlap.oomdp.singleagent.common.VisualActionObserver
The height of the painter
cHeight - Variable in class burlap.oomdp.singleagent.explorer.VisualExplorer
 
cHeight - Variable in class burlap.oomdp.stochasticgames.common.VisualWorldObserver
The height of the painter
cl(int, String) - Static method in class burlap.debugtools.DPrint
A print line command for the given debug code.
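The DPrint entries above (c, cf, and cl) all follow the same pattern: output is gated by an integer debug code that can be toggled on or off globally. A minimal sketch of that pattern (class and method bodies here are illustrative, not BURLAP's actual implementation):

```java
import java.util.HashSet;
import java.util.Set;

// Minimal analogue of a debug-code gated printer (illustrative; not BURLAP's DPrint).
public class DebugPrinter {
    private static final Set<Integer> enabled = new HashSet<>();

    // Turn printing for a given debug code on or off.
    public static void toggleCode(int code, boolean on) {
        if (on) enabled.add(code); else enabled.remove(code);
    }

    public static boolean isEnabled(int code) {
        return enabled.contains(code);
    }

    // Analogous to DPrint.c(int, String): print only if the code is enabled.
    public static void c(int code, String msg) {
        if (enabled.contains(code)) System.out.print(msg);
    }

    // Analogous to DPrint.cl(int, String): print a line only if the code is enabled.
    public static void cl(int code, String msg) {
        if (enabled.contains(code)) System.out.println(msg);
    }

    public static void main(String[] args) {
        toggleCode(1, true);
        cl(1, "planner iteration 1"); // printed: code 1 is enabled
        cl(2, "hidden message");      // suppressed: code 2 is not enabled
    }
}
```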
CLASSAGENT - Static variable in class burlap.domain.singleagent.blockdude.BlockDude
Name for the agent OO-MDP class
CLASSAGENT - Static variable in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain
Constant for the name of the agent class
CLASSAGENT - Static variable in class burlap.domain.singleagent.gridworld.GridWorldDomain
Constant for the name of the agent class
CLASSAGENT - Static variable in class burlap.domain.singleagent.mountaincar.MountainCar
A constant for the name of the agent class
CLASSAGENT - Static variable in class burlap.domain.stochasticgames.gridgame.GridGame
A constant for the name of the agent class.
CLASSBLOCK - Static variable in class burlap.domain.singleagent.blockdude.BlockDude
Name for the block OO-MDP class
CLASSBLOCK - Static variable in class burlap.domain.singleagent.blocksworld.BlocksWorld
Constant for the name of the block class.
CLASSBRICKS - Static variable in class burlap.domain.singleagent.blockdude.BlockDude
Name for the bricks OO-MDP class
CLASSCARTPOLE - Static variable in class burlap.domain.singleagent.cartpole.CartPoleDomain
A constant for the name of the cart and pole object to be moved
CLASSDIMHWALL - Static variable in class burlap.domain.stochasticgames.gridgame.GridGame
A constant for the name of the horizontal wall class.
CLASSDIMVWALL - Static variable in class burlap.domain.stochasticgames.gridgame.GridGame
A constant for the name of the vertical wall class.
CLASSEXIT - Static variable in class burlap.domain.singleagent.blockdude.BlockDude
Name for the exit OO-MDP class
CLASSGOAL - Static variable in class burlap.domain.stochasticgames.gridgame.GridGame
A constant for the name of the goal class.
classHistory - Variable in class burlap.behavior.stochasticgame.agents.naiveq.history.SGQWActionHistory
The object class that will be used to represent a history component.
CLASSHISTORY - Static variable in class burlap.behavior.stochasticgame.agents.naiveq.history.SGQWActionHistory
A constant for the name of the history object class.
classifier - Variable in class burlap.behavior.singleagent.planning.vfa.fittedvi.WekaVFATrainer.WekaVFA
The Weka Classifier used to predict state values.
CLASSLOCATION - Static variable in class burlap.domain.singleagent.gridworld.GridWorldDomain
Constant for the name of the location class
className - Variable in class burlap.behavior.singleagent.vfa.cmac.AttributeTileSpecification
The object class name this tiling specification concerns
className - Variable in class burlap.behavior.singleagent.vfa.cmac.Tiling.ObjectTile
The OO-MDP class name for this object's OO-MDP object instance
classOrder - Variable in class burlap.behavior.singleagent.vfa.cmac.Tiling
A list specifying the order that the attributes from different classes will be combined into a single multi-dimensional tile
CLASSPENDULUM - Static variable in class burlap.domain.singleagent.cartpole.InvertedPendulum
The object class for the pendulum.
CLASSPLAYER - Static variable in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
Class name for a player class
CLASSSTATE - Static variable in class burlap.domain.singleagent.tabularized.TabulatedDomainWrapper
The single class name that holds the state attribute
clear() - Method in class burlap.behavior.singleagent.learning.lspi.SARSData
Clears this dataset of all elements.
clearActions() - Method in class burlap.behavior.singleagent.Policy.RandomPolicy
Clears the action list used in action selection.
clearAllActionObserversForAllActions() - Method in class burlap.oomdp.singleagent.SADomain
Clears all action observers for all actions in this domain.
clearAllActionsObservers() - Method in class burlap.oomdp.singleagent.Action
Clears all action observers associated with this action
clearAllWorldObserver() - Method in class burlap.oomdp.stochasticgames.World
Clears all world observers from this world.
clearLocationOfWalls(int, int) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
Removes any obstacles or walls at the specified location.
clearNonAverages() - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.AgentDatasets
Clears all the series data for the most recent trial.
clearNonAverages() - Method in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter.AgentDatasets
Clears all the series data for the most recent trial.
clearRelationalTargets(String) - Method in class burlap.oomdp.core.ObjectInstance
Clears all the relational value targets of the attribute named attName for this object instance.
clearRelationTargets() - Method in class burlap.oomdp.core.Value
Removes any relational targets for this attribute
clearRelationTargets() - Method in class burlap.oomdp.core.values.DiscreteValue
 
clearRelationTargets() - Method in class burlap.oomdp.core.values.DoubleArrayValue
 
clearRelationTargets() - Method in class burlap.oomdp.core.values.IntArrayValue
 
clearRelationTargets() - Method in class burlap.oomdp.core.values.IntValue
 
clearRelationTargets() - Method in class burlap.oomdp.core.values.MultiTargetRelationalValue
 
clearRelationTargets() - Method in class burlap.oomdp.core.values.RealValue
 
clearRelationTargets() - Method in class burlap.oomdp.core.values.RelationalValue
 
clearRelationTargets() - Method in class burlap.oomdp.core.values.StringValue
 
clearStateActionTransitions(int, int) - Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain
Clears all (stochastic) edges for a given state-action pair.
clearStateTransitionsFrom(int) - Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain
Clears all transitions from a given state node
clone() - Method in class burlap.oomdp.core.GroundedProp
 
clusterPriors - Variable in class burlap.behavior.singleagent.learnbydemo.mlirl.MultipleIntentionsMLIRL
The prior probabilities on each cluster.
clusterRequests - Variable in class burlap.behavior.singleagent.learnbydemo.mlirl.MultipleIntentionsMLIRL
The individual MLIRLRequest objects for each behavior cluster.
CMACFeatureDatabase - Class in burlap.behavior.singleagent.vfa.cmac
A feature database using CMACs [1], also known as tile coding, over OO-MDP states.
CMACFeatureDatabase(int, CMACFeatureDatabase.TilingArrangement) - Constructor for class burlap.behavior.singleagent.vfa.cmac.CMACFeatureDatabase
Initializes with a set of nTilings and sets the offset arrangement for subsequent tilings to be determined according to arrangement.
CMACFeatureDatabase.TilingArrangement - Enum in burlap.behavior.singleagent.vfa.cmac
Enum for specifying whether each tiling's tile alignment should be randomly jittered from the others, or whether each subsequent tiling should be offset by a uniform amount.
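The two arrangements the enum above distinguishes can be sketched as offset computations: with n tilings of tile width w, a uniform arrangement offsets tiling i by i·w/n, while a random arrangement jitters each tiling by an independent draw from [0, w). The class below is an illustrative sketch of that idea, not BURLAP's internals:

```java
import java.util.Random;

// Illustrative sketch of the two tiling-offset arrangements distinguished by
// CMACFeatureDatabase.TilingArrangement (not BURLAP's actual implementation).
public class TilingOffsets {
    // Uniform arrangement: tilings are evenly staggered across one tile width.
    public static double[] uniform(int nTilings, double tileWidth) {
        double[] offsets = new double[nTilings];
        for (int i = 0; i < nTilings; i++) {
            offsets[i] = i * tileWidth / nTilings;
        }
        return offsets;
    }

    // Random arrangement: each tiling is jittered by a draw in [0, tileWidth).
    public static double[] random(int nTilings, double tileWidth, Random rng) {
        double[] offsets = new double[nTilings];
        for (int i = 0; i < nTilings; i++) {
            offsets[i] = rng.nextDouble() * tileWidth;
        }
        return offsets;
    }
}
```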
CoCoQ - Class in burlap.behavior.stochasticgame.mavaluefunction.backupOperators
The CoCoQ backup operator for sequential stochastic games [1].
CoCoQ() - Constructor for class burlap.behavior.stochasticgame.mavaluefunction.backupOperators.CoCoQ
 
coefficientNorm(int) - Method in class burlap.behavior.singleagent.vfa.fourier.FourierBasis
Returns the norm of the coefficient vector for the given basis function index.
coefficientVectors - Variable in class burlap.behavior.singleagent.vfa.fourier.FourierBasis
The coefficient vectors used
col - Variable in class burlap.domain.singleagent.gridworld.GridWorldVisualizer.CellPainter
 
colAER - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
All agent plot series for the most recent trial's average reward per episode
colAER - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter
All agent plot series for the most recent trial's average reward per episode
colAERAvg - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
All agent plot series for the average of all trials' average reward per episode
colAERAvg - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter
All agent plot series for the average of all trials' average reward per episode
colCER - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
All agent plot series for the most recent trial's cumulative reward per episode
colCER - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter
All agent plot series for the most recent trial's cumulative reward per episode
colCERAvg - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
All agent plot series for the average of all trials' cumulative reward per episode
colCERAvg - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter
All agent plot series for the average of all trials' cumulative reward per episode
colCSE - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
All agent plot series for the most recent trial's cumulative steps per episode
colCSE - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter
All agent plot series for the most recent trial's cumulative steps per episode
colCSEAvg - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
All agent plot series for the average of all trials' cumulative steps per episode
colCSEAvg - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter
All agent plot series for the average of all trials' cumulative steps per episode
colCSR - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
All agent plot series for the most recent trial's cumulative reward per step
colCSR - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter
All agent plot series for the most recent trial's cumulative reward per step
colCSRAvg - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
All agent plot series for the average of all trials' cumulative reward per step
colCSRAvg - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter
All agent plot series for the average of all trials' cumulative reward per step
collectData - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
Whether the data from received action observations should be recorded.
collectData - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter
Whether the data from received action observations should be recorded.
collectDataFrom(State, RewardFunction, int, TerminalFunction, SARSData) - Method in class burlap.behavior.singleagent.learning.lspi.SARSCollector
Collects data from an initial state until either a terminal state is reached or until the maximum number of steps is taken.
collectDataFrom(State, RewardFunction, int, TerminalFunction, SARSData) - Method in class burlap.behavior.singleagent.learning.lspi.SARSCollector.UniformRandomSARSCollector
 
collectNInstances(StateGenerator, RewardFunction, int, int, TerminalFunction, SARSData) - Method in class burlap.behavior.singleagent.learning.lspi.SARSCollector
Collects nSamples of SARS tuples and returns it in a SARSData object.
collisionReward - Variable in class burlap.domain.singleagent.lunarlander.LunarLanderRF
The reward for hitting the ground or an obstacle
colMER - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
All agent plot series for the most recent trial's median reward per episode
colMER - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter
All agent plot series for the most recent trial's median reward per episode
colMERAvg - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
All agent plot series for the average of all trials' median reward per episode
colMERAvg - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter
All agent plot series for the average of all trials' median reward per episode
color(double) - Method in interface burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.ColorBlend
Returns a Color for a given double value
color(double) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.LandmarkColorBlendInterpolation
 
ColorBlend - Interface in burlap.behavior.singleagent.auxiliary.valuefunctionvis.common
An interface for defining methods that return a color for a given double value.
colorBlend - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.StateValuePainter2D
The object to use for returning the color with which to fill the state cell given its value.
COLORBLUE - Static variable in class burlap.domain.singleagent.blocksworld.BlocksWorld
Constant for the color value "blue"
COLORGREEN - Static variable in class burlap.domain.singleagent.blocksworld.BlocksWorld
Constant for the color value "green"
COLORRED - Static variable in class burlap.domain.singleagent.blocksworld.BlocksWorld
Constant for the color value "red"
colPayoffs - Variable in class burlap.behavior.stochasticgame.agents.twoplayer.singlestage.equilibriumplayer.EquilibriumPlayingAgent.BimatrixTuple
The column player's payoffs.
colSE - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
All agent plot series for the most recent trial's steps per episode
colSE - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter
All agent plot series for the most recent trial's steps per episode
colSEAvg - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
All agent plot series for the average of all trials' steps per episode
colSEAvg - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter
All agent plot series for the average of all trials' steps per episode
combineDuplicateTransitionProbabilities(List<TransitionProbability>) - Method in class burlap.domain.stochasticgames.gridgame.GridGameStandardMechanics
Iterates through a list of transition probability objects and combines any that refer to the same state
CommandLineOptions - Class in burlap.datastructures
This class is used to extract command line options that are specified in the form "--option" or "--option=value".
CommandLineOptions(String[]) - Constructor for class burlap.datastructures.CommandLineOptions
Parses an array of command line arguments for the options and their values
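The "--option" / "--option=value" convention described above can be sketched in a few lines. The class below is an illustrative analogue, not the actual burlap.datastructures.CommandLineOptions implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of "--option" / "--option=value" parsing (illustrative; not the
// actual burlap.datastructures.CommandLineOptions implementation).
public class SimpleOptions {
    private final Map<String, String> opts = new HashMap<>();

    public SimpleOptions(String[] args) {
        for (String a : args) {
            if (!a.startsWith("--")) continue;          // ignore non-option arguments
            int eq = a.indexOf('=');
            if (eq == -1) {
                opts.put(a.substring(2), "");           // bare flag: --option
            } else {
                opts.put(a.substring(2, eq), a.substring(eq + 1)); // --option=value
            }
        }
    }

    public boolean containsOption(String name) { return opts.containsKey(name); }
    public String optionValue(String name) { return opts.get(name); }

    public static void main(String[] args) {
        SimpleOptions o = new SimpleOptions(
                new String[]{"--verbose", "--depth=3", "episodes.txt"});
        System.out.println(o.containsOption("verbose")); // true
        System.out.println(o.optionValue("depth"));      // 3
    }
}
```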
compare(PrioritizedSearchNode, PrioritizedSearchNode) - Method in class burlap.behavior.singleagent.planning.deterministic.informed.PrioritizedSearchNode.PSNComparator
 
compare(PrioritizedSweeping.BPTRNode, PrioritizedSweeping.BPTRNode) - Method in class burlap.behavior.singleagent.planning.stochastic.valueiteration.PrioritizedSweeping.BPTRNodeComparator
 
compare(Object, Object) - Method in class burlap.datastructures.AlphanumericSorting
The compare method that compares the alphanumeric strings
completedExperiment - Variable in class burlap.behavior.singleagent.auxiliary.performance.LearningAlgorithmExperimenter
Whether the experimenter has completed.
completedExperiment - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentExperimenter
Whether the experimenter has completed.
computeBoltzmannPolicyGradient(State, GroundedAction, QGradientPlanner, double) - Static method in class burlap.behavior.singleagent.learnbydemo.mlirl.support.BoltzmannPolicyGradient
Computes the gradient of a Boltzmann policy using the given differentiable planner.
computeClusterTrajectoryLoggedNormalization(int, double[][]) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.MultipleIntentionsMLIRL
Given a matrix holding the log(Pr(c)) + log(Pr(t | c)) values in its entries, where Pr(c) is the probability of the cluster and Pr(t | c) is the probability of the trajectory given the cluster, this method returns the log of the standard probability normalization factor for trajectory t in the matrix.
computeColStrategy(double[][], double[][]) - Method in class burlap.behavior.stochasticgame.agents.twoplayer.singlestage.equilibriumplayer.BimatrixEquilibriumSolver
Computes and returns the column player strategy for the given bimatrix game.
computeColStrategy(double[][], double[][]) - Method in class burlap.behavior.stochasticgame.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers.CorrelatedEquilibrium
 
computeColStrategy(double[][], double[][]) - Method in class burlap.behavior.stochasticgame.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers.MaxMax
 
computeColStrategy(double[][], double[][]) - Method in class burlap.behavior.stochasticgame.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers.MinMax
 
computeColStrategy(double[][], double[][]) - Method in class burlap.behavior.stochasticgame.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers.Utilitarian
 
computeExactValueFunction - Variable in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
This parameter indicates whether the exact finite horizon value function is computed or whether sparse sampling should be used to estimate it.
computeF(PrioritizedSearchNode, GroundedAction, StateHashTuple) - Method in class burlap.behavior.singleagent.planning.deterministic.informed.astar.AStar
 
computeF(PrioritizedSearchNode, GroundedAction, StateHashTuple) - Method in class burlap.behavior.singleagent.planning.deterministic.informed.astar.DynamicWeightedAStar
 
computeF(PrioritizedSearchNode, GroundedAction, StateHashTuple) - Method in class burlap.behavior.singleagent.planning.deterministic.informed.astar.StaticWeightedAStar
 
computeF(PrioritizedSearchNode, GroundedAction, StateHashTuple) - Method in class burlap.behavior.singleagent.planning.deterministic.informed.astar.WeightedGreedy
 
computeF(PrioritizedSearchNode, GroundedAction, StateHashTuple) - Method in class burlap.behavior.singleagent.planning.deterministic.informed.BestFirst
This method returns the f-score for a state given the parent search node, the generating action, and the state that was produced.
computeHashCode() - Method in class burlap.behavior.statehashing.DiscreteStateHashFactory.DiscreteStateHashTuple
 
computeHashCode() - Method in class burlap.behavior.statehashing.DiscretizingStateHashFactory.DiscretizedStateHashTuple
 
computeHashCode() - Method in class burlap.behavior.statehashing.NameDependentStateHashFactory.NameDependentStateHashTuple
 
computeHashCode() - Method in class burlap.behavior.statehashing.StateHashTuple
This method computes the hashCode for this object and saves it to the hashCode field belonging to the abstract class.
computePerClusterMLIRLWeights() - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.MultipleIntentionsMLIRL
Computes the probability of each trajectory being generated by each cluster and returns it in a matrix.
computePolicyFromTree() - Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTTreeWalkPolicy
Computes a hash-backed policy for every state visited along the greedy path of the UCT tree.
computePolicyGradient(DifferentiableRF, double, double[], double, double, double[][], int) - Static method in class burlap.behavior.singleagent.learnbydemo.mlirl.support.BoltzmannPolicyGradient
Computes the gradient of a Boltzmann policy using values derived from a differentiable Boltzmann backup planner.
computeProbabilityOfClustersGivenTrajectory(EpisodeAnalysis) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.MultipleIntentionsMLIRL
Returns the probability of each behavior cluster given the trajectory.
computeProbs() - Method in class burlap.datastructures.BoltzmannDistribution
Computes the probability distribution.
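The computation behind computeProbs() above is a temperature-weighted softmax over preference values. The class below sketches that computation independently of BURLAP (names and internals here are illustrative; BoltzmannDistribution's actual implementation may differ):

```java
// Minimal sketch of the softmax computation a Boltzmann distribution performs
// over preference values (illustrative; not BURLAP's BoltzmannDistribution).
public class Boltzmann {
    // probs[i] = exp(prefs[i]/temp) / sum_j exp(prefs[j]/temp)
    public static double[] probs(double[] prefs, double temperature) {
        double max = Double.NEGATIVE_INFINITY;
        for (double p : prefs) max = Math.max(max, p);
        double[] out = new double[prefs.length];
        double sum = 0.0;
        for (int i = 0; i < prefs.length; i++) {
            // subtract the max preference for numerical stability
            out[i] = Math.exp((prefs[i] - max) / temperature);
            sum += out[i];
        }
        for (int i = 0; i < out.length; i++) out[i] /= sum;
        return out;
    }

    public static void main(String[] args) {
        double[] p = probs(new double[]{1.0, 1.0}, 0.5);
        System.out.println(p[0]); // equal preferences -> uniform: 0.5
    }
}
```

Lower temperatures sharpen the distribution toward the highest-preference entry; higher temperatures flatten it toward uniform.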
computeQ(State, ActionTransitions) - Method in class burlap.behavior.singleagent.planning.ValueFunctionPlanner
Returns the Q-value for a given state and the possible transitions from it for a given action.
computeQ(StateHashTuple, GroundedAction) - Method in class burlap.behavior.singleagent.planning.ValueFunctionPlanner
Computes the Q-value using the uncached transition dynamics produced by the Action object methods.
computeQGradient(State, GroundedAction) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.DifferentiableVFPlanner
Computes the Q-value gradient for the given State and GroundedAction.
computeRowStrategy(double[][], double[][]) - Method in class burlap.behavior.stochasticgame.agents.twoplayer.singlestage.equilibriumplayer.BimatrixEquilibriumSolver
Computes and returns the row player strategy for the given bimatrix game.
computeRowStrategy(double[][], double[][]) - Method in class burlap.behavior.stochasticgame.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers.CorrelatedEquilibrium
 
computeRowStrategy(double[][], double[][]) - Method in class burlap.behavior.stochasticgame.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers.MaxMax
 
computeRowStrategy(double[][], double[][]) - Method in class burlap.behavior.stochasticgame.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers.MinMax
 
computeRowStrategy(double[][], double[][]) - Method in class burlap.behavior.stochasticgame.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers.Utilitarian
 
computesExactValueFunction() - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
Returns whether this planner computes the exact finite horizon value function (by using the full transition dynamics) or whether it estimates the value function with sampling.
computeTempNormalized() - Method in class burlap.datastructures.BoltzmannDistribution
Computes the temperature normalized preference values
computeUCTQ(UCTStateNode, UCTActionNode) - Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
Returns the upper confidence Q-value for a given state node and action node.
ConcatenatedObjectFeatureVectorGenerator - Class in burlap.behavior.singleagent.vfa.common
This class is used to produce a state feature vector that is the concatenation of the observable attributes of objects belonging to a specified sequence of object classes.
ConcatenatedObjectFeatureVectorGenerator(String...) - Constructor for class burlap.behavior.singleagent.vfa.common.ConcatenatedObjectFeatureVectorGenerator
Initializes with an array of object class names.
ConcatenatedObjectFeatureVectorGenerator(boolean, String...) - Constructor for class burlap.behavior.singleagent.vfa.common.ConcatenatedObjectFeatureVectorGenerator
Initializes with an array of object class names.
consoleFrame - Variable in class burlap.oomdp.singleagent.explorer.VisualExplorer
 
consoleFrame - Variable in class burlap.oomdp.stochasticgames.explorers.SGVisualExplorer
 
constantDoubleArray(double, int) - Static method in class burlap.behavior.stochasticgame.solvers.GeneralBimatrixSolverTools
Returns a double array of a given dimension filled with the same value.
ConstantLR - Class in burlap.behavior.learningrate
A class for specifying a constant learning rate that never changes.
ConstantLR() - Constructor for class burlap.behavior.learningrate.ConstantLR
Constructs constant learning rate of 0.1
ConstantLR(Double) - Constructor for class burlap.behavior.learningrate.ConstantLR
Constructs a constant learning rate for the given value
ConstantSGStateGenerator - Class in burlap.oomdp.stochasticgames.common
A stochastic games state generator that always returns the same base state, which is specified via the constructor.
ConstantSGStateGenerator(State) - Constructor for class burlap.oomdp.stochasticgames.common.ConstantSGStateGenerator
Initializes.
ConstantStateGenerator - Class in burlap.oomdp.auxiliary.common
This class takes a source state as input and returns copies of it for every call of generateState().
ConstantStateGenerator(State) - Constructor for class burlap.oomdp.auxiliary.common.ConstantStateGenerator
This class takes a source state as input and returns copies of it for every call of generateState().
ConstantWorldGenerator - Class in burlap.oomdp.stochasticgames.tournament.common
A WorldGenerator that always generates the same WorldConfiguration.
ConstantWorldGenerator(SGDomain, JointActionModel, JointReward, TerminalFunction, SGStateGenerator) - Constructor for class burlap.oomdp.stochasticgames.tournament.common.ConstantWorldGenerator
Deprecated.
ConstantWorldGenerator(SGDomain, JointReward, TerminalFunction, SGStateGenerator) - Constructor for class burlap.oomdp.stochasticgames.tournament.common.ConstantWorldGenerator
Initializes the WorldGenerator.
ConstantWorldGenerator(SGDomain, JointActionModel, JointReward, TerminalFunction, SGStateGenerator, StateAbstraction) - Constructor for class burlap.oomdp.stochasticgames.tournament.common.ConstantWorldGenerator
Deprecated.
ConstantWorldGenerator(SGDomain, JointReward, TerminalFunction, SGStateGenerator, StateAbstraction) - Constructor for class burlap.oomdp.stochasticgames.tournament.common.ConstantWorldGenerator
Initializes the WorldGenerator.
constructBimatrix(State, List<GroundedSingleAction>) - Method in class burlap.behavior.stochasticgame.agents.twoplayer.singlestage.equilibriumplayer.EquilibriumPlayingAgent
Constructs a bimatrix game from the possible joint rewards of the given state.
containsActionPreference(UCTStateNode) - Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
Returns true if the sampled returns for any of the actions differ.
containsInstance(T) - Method in class burlap.datastructures.HashIndexedHeap
Checks if the heap contains this object and returns the pointer to the stored object if it does; otherwise null is returned.
containsKey(String) - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.ActionNameMap
 
containsOption(String) - Method in class burlap.datastructures.CommandLineOptions
Returns whether the queried option is set.
containsParameterizedActions - Variable in class burlap.behavior.singleagent.learning.actorcritic.actor.BoltzmannActor
Indicates whether the actions that this agent can perform are parameterized
containsParameterizedActions - Variable in class burlap.behavior.singleagent.planning.OOMDPPlanner
Indicates whether the action set for this planner includes parameterized actions
continueFromState(State, String[]) - Method in class burlap.behavior.singleagent.options.Option
This method will use this option's termination probability, roll the dice, and return whether the option should continue or terminate.
controlContainer - Variable in class burlap.behavior.singleagent.EpisodeSequenceVisualizer
 
controlContainer - Variable in class burlap.behavior.stochasticgame.GameSequenceVisualizer
 
controlDepth - Variable in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
convertIntoObservation(State) - Method in class burlap.oomdp.singleagent.interfaces.rlglue.RLGlueEnvironment
Takes an OO-MDP state and converts it into an RLGlue observation.
copy() - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.support.DifferentiableRF
Creates a copy of this reward function.
copy() - Method in class burlap.behavior.stochasticgame.JointPolicy
Creates a copy of this joint policy and returns it.
copy() - Method in class burlap.behavior.stochasticgame.mavaluefunction.policies.ECorrelatedQJointPolicy
 
copy() - Method in class burlap.behavior.stochasticgame.mavaluefunction.policies.EGreedyJointPolicy
 
copy() - Method in class burlap.behavior.stochasticgame.mavaluefunction.policies.EGreedyMaxWellfare
 
copy() - Method in class burlap.behavior.stochasticgame.mavaluefunction.policies.EMinMaxPolicy
 
copy() - Method in class burlap.behavior.stochasticgame.PolicyFromJointPolicy
Returns a copy of this policy, which entails first making a copy of the joint policy.
copy() - Method in class burlap.domain.singleagent.cartpole.CartPoleDomain.CPPhysicsParams
 
copy() - Method in class burlap.domain.singleagent.cartpole.InvertedPendulum.IPPhysicsParams
 
copy() - Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain.NodeTransitionProbibility
 
copy() - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
 
copy() - Method in class burlap.domain.singleagent.mountaincar.MountainCar.MCPhysicsParams
 
copy() - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.ActionNameMap
 
copy() - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.AgentPayoutFunction
 
copy() - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.StrategyProfile
 
copy() - Method in class burlap.oomdp.core.AbstractGroundedAction
Returns a copy of this grounded action.
copy(Domain) - Method in class burlap.oomdp.core.Attribute
Will create a new Attribute object with the same configuration and name as this one.
copy(Domain) - Method in class burlap.oomdp.core.ObjectClass
Will create and return a new ObjectClass object with copies of this object class' attributes
copy() - Method in class burlap.oomdp.core.ObjectInstance
Creates and returns a new object instance that is a deep copy of this object instance's values.
copy() - Method in class burlap.oomdp.core.State
Returns a deep copy of this state.
copy() - Method in class burlap.oomdp.core.Value
Creates a deep copy of this value object.
copy() - Method in class burlap.oomdp.core.values.DiscreteValue
 
copy() - Method in class burlap.oomdp.core.values.DoubleArrayValue
 
copy() - Method in class burlap.oomdp.core.values.IntArrayValue
 
copy() - Method in class burlap.oomdp.core.values.IntValue
 
copy() - Method in class burlap.oomdp.core.values.MultiTargetRelationalValue
 
copy() - Method in class burlap.oomdp.core.values.RealValue
 
copy() - Method in class burlap.oomdp.core.values.RelationalValue
 
copy() - Method in class burlap.oomdp.core.values.StringValue
 
copy() - Method in class burlap.oomdp.singleagent.GroundedAction
 
copy() - Method in class burlap.oomdp.stochasticgames.GroundedSingleAction
 
copy() - Method in class burlap.oomdp.stochasticgames.JointAction
 
copyHelper() - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.commonrfs.LinearStateActionDifferentiableRF
 
copyHelper() - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.commonrfs.LinearStateDifferentiableRF
 
copyHelper() - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.diffvinit.DiffVFRF
 
copyHelper() - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
 
copyHelper() - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.support.DifferentiableRF
A helper method for making a copy of this reward function.
copyInto(double[], double[], int) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.commonrfs.LinearStateActionDifferentiableRF
Copies the values of source into target, starting at target index position index.
copyTransitionDynamics() - Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain
Returns a deep copy of the transition dynamics
CorrelatedEquilibrium - Class in burlap.behavior.stochasticgame.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers
Computes the correlated equilibrium strategy.
CorrelatedEquilibrium() - Constructor for class burlap.behavior.stochasticgame.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers.CorrelatedEquilibrium
 
CorrelatedEquilibrium(CorrelatedEquilibriumSolver.CorrelatedEquilibriumObjective) - Constructor for class burlap.behavior.stochasticgame.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers.CorrelatedEquilibrium
 
CorrelatedEquilibriumSolver - Class in burlap.behavior.stochasticgame.solvers
This class provides static methods for solving correlated equilibrium problems for Bimatrix games or values represented in a Bimatrix.
CorrelatedEquilibriumSolver() - Constructor for class burlap.behavior.stochasticgame.solvers.CorrelatedEquilibriumSolver
 
CorrelatedEquilibriumSolver.CorrelatedEquilibriumObjective - Enum in burlap.behavior.stochasticgame.solvers
The four different equilibrium objectives that can be used: UTILITARIAN, EGALITARIAN, REPUBLICAN, and LIBERTARIAN.
CorrelatedQ - Class in burlap.behavior.stochasticgame.mavaluefunction.backupOperators
A correlated Q backup operator [1] for use in stochastic game multi-agent Q-learning or dynamic programming.
CorrelatedQ(CorrelatedEquilibriumSolver.CorrelatedEquilibriumObjective) - Constructor for class burlap.behavior.stochasticgame.mavaluefunction.backupOperators.CorrelatedQ
Initializes an operator for the given correlated equilibrium objective.
cosScale - Variable in class burlap.domain.singleagent.mountaincar.MountainCar.MCPhysicsParams
Constant factor multiplied by the agent position inside the cosine that defines the shape of the curve.
costWeight - Variable in class burlap.behavior.singleagent.planning.deterministic.informed.astar.WeightedGreedy
The cost function weight.
createDeviationRenderer() - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
Creates a DeviationRenderer to use for the trial average plots
createDeviationRenderer() - Method in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter
Creates a DeviationRenderer to use for the trial average plots
createDomainMappedPolicy() - Method in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax
 
createGridWorldBasedValueFunctionVisualizerGUI(List<State>, QComputablePlanner, Policy, String, String, String, String, String, String, String) - Static method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
A method for creating common gridworld-based value function and policy visualization.
createRepeatedGameWorld(Agent...) - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
Creates a world instance for this game in which the provided agents join in the order they are passed.
createRepeatedGameWorld(SGDomain, Agent...) - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
Creates a world instance for this game in which the provided agents join in the order they are passed.
critic - Variable in class burlap.behavior.singleagent.learning.actorcritic.ActorCritic
The critic component to use
Critic - Interface in burlap.behavior.singleagent.learning.actorcritic
This interface provides the methods necessary for implementing the critic part of an actor-critic learning algorithm.
critique - Variable in class burlap.behavior.singleagent.learning.actorcritic.CritiqueResult
The critique of this behavior.
critiqueAndUpdate(State, GroundedAction, State) - Method in interface burlap.behavior.singleagent.learning.actorcritic.Critic
This method's implementation provides the critique for some specific instance of the behavior.
critiqueAndUpdate(State, GroundedAction, State) - Method in class burlap.behavior.singleagent.learning.actorcritic.critics.TDLambda
 
critiqueAndUpdate(State, GroundedAction, State) - Method in class burlap.behavior.singleagent.learning.actorcritic.critics.TimeIndexedTDLambda
 
CritiqueResult - Class in burlap.behavior.singleagent.learning.actorcritic
The CritiqueResult class stores the relevant information regarding a critique of behavior.
CritiqueResult(State, GroundedAction, State, double) - Constructor for class burlap.behavior.singleagent.learning.actorcritic.CritiqueResult
Initializes with a state-action-state behavior tuple and the value of the critique for this behavior.
crossesWall(GridGameStandardMechanics.Location2, GridGameStandardMechanics.Location2, ObjectInstance, boolean) - Method in class burlap.domain.stochasticgames.gridgame.GridGameStandardMechanics
Returns true if the agent would cross a given wall instance given a movement attempt.
cseAvgSeries - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.AgentDatasets
All trials' average cumulative steps per episode series data
cseAvgSeries - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter.AgentDatasets
All trials' average cumulative steps per episode series data
csrAvgSeries - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.AgentDatasets
All trials' average cumulative reward per step series data
csrAvgSeries - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter.AgentDatasets
All trials' average cumulative reward per step series data
cumulatedRewardMap - Variable in class burlap.behavior.singleagent.planning.deterministic.informed.astar.AStar
Data structure for maintaining g(n): the cost so far to node n
cumulativeDiscount - Variable in class burlap.behavior.singleagent.options.Option
How much to discount the reward in the next option step
cumulativeEpisodeReward - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.Trial
Stores the cumulative reward by episode
cumulativeEpisodeReward - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter.Trial
Stores the cumulative reward by episode
cumulativeEpisodeRewardSeries - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.AgentDatasets
Most recent trial's cumulative reward per episode series data
cumulativeEpisodeRewardSeries - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter.AgentDatasets
Most recent trial's cumulative reward per episode series data
cumulativeStepEpisode - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.Trial
Stores the cumulative steps by episode
cumulativeStepEpisode - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter.Trial
Stores the cumulative steps by episode
cumulativeStepEpisodeSeries - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.AgentDatasets
Most recent trial's cumulative steps per episode series data
cumulativeStepEpisodeSeries - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter.AgentDatasets
Most recent trial's cumulative steps per episode series data
cumulativeStepReward - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.Trial
Stores the cumulative reward by step
cumulativeStepReward - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter.Trial
Stores the cumulative reward by step
cumulativeStepRewardSeries - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.AgentDatasets
Most recent trial's cumulative reward per step series data
cumulativeStepRewardSeries - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter.AgentDatasets
Most recent trial's cumulative reward per step series data
curAgentDatasets - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
contains the plot series data that will be displayed for the current agent
curAgentName - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
The name of the current agent being tested
curEA - Variable in class burlap.behavior.singleagent.EpisodeSequenceVisualizer
 
curEpisode - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
the current episode that was recorded
curEpisode - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter
the current episode that was recorded
curEpisodeReward - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.Trial
The cumulative reward of the episode so far
curEpisodeReward - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter.Trial
The cumulative reward of the episode so far
curEpisodeRewards - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.Trial
A list of the reward sequence in the current episode
curEpisodeRewards - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter.Trial
A list of the reward sequence in the current episode
curEpisodeSteps - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.Trial
The number of steps in the episode so far
curEpisodeSteps - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter.Trial
The number of steps in the episode so far
curGA - Variable in class burlap.behavior.stochasticgame.GameSequenceVisualizer
 
curIndex - Variable in class burlap.behavior.singleagent.options.MacroAction
the current execution index of the macro-action sequence.
curJointAction - Variable in class burlap.oomdp.stochasticgames.explorers.SGTerminalExplorer
 
currentEpisode - Variable in class burlap.oomdp.singleagent.explorer.VisualExplorer
 
currentGameRecord - Variable in class burlap.oomdp.stochasticgames.World
 
currentState - Variable in class burlap.oomdp.stochasticgames.World
 
currentValueFunctionIsLower - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
Whether the current ValueFunctionPlanner valueFunction reference points to the lower bound value function or the upper bound value function.
curState - Variable in class burlap.oomdp.singleagent.environment.Environment
 
curState - Variable in class burlap.oomdp.singleagent.explorer.VisualExplorer
 
curState - Variable in class burlap.oomdp.singleagent.interfaces.rlglue.RLGlueEnvironment
The current state of the environment
curStateIsTerminal() - Method in class burlap.oomdp.singleagent.environment.Environment
Returns whether the current environment state is a terminal state.
curTime - Variable in class burlap.behavior.singleagent.learning.actorcritic.critics.TimeIndexedTDLambda
The current time index / depth of the current episode
curTimeStep - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
the current time step that was recorded
curTimeStep - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter
the current time step that was recorded
curTrial - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
Contains all the current trial performance data
CWGInit(SGDomain, JointReward, TerminalFunction, SGStateGenerator, StateAbstraction) - Method in class burlap.oomdp.stochasticgames.tournament.common.ConstantWorldGenerator
 
cWidth - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
Visualizer canvas width
cWidth - Variable in class burlap.behavior.singleagent.EpisodeSequenceVisualizer
 
cWidth - Variable in class burlap.behavior.stochasticgame.GameSequenceVisualizer
 
cWidth - Variable in class burlap.oomdp.singleagent.common.VisualActionObserver
The width of the painter
cWidth - Variable in class burlap.oomdp.singleagent.explorer.VisualExplorer
 
cWidth - Variable in class burlap.oomdp.stochasticgames.common.VisualWorldObserver
The width of the painter