
S

s - Variable in class burlap.behavior.singleagent.learnbydemo.mlirl.support.QGradientTuple
The state
s - Variable in class burlap.behavior.singleagent.learning.actorcritic.CritiqueResult
The source state
s - Variable in class burlap.behavior.singleagent.learning.lspi.SARSData.SARS
The previous state
s - Variable in class burlap.behavior.singleagent.learning.tdmethods.QLearningStateNode
A hashed state entry for which Q-values will be stored.
s - Variable in class burlap.behavior.singleagent.planning.deterministic.SearchNode
The (hashed) state of this node
s - Variable in class burlap.behavior.singleagent.planning.vfa.fittedvi.SupervisedVFA.SupervisedVFAInstance
The state
s - Variable in class burlap.behavior.singleagent.QValue
The state with which this Q-value is associated.
s - Variable in class burlap.behavior.singleagent.vfa.cmac.Tiling.StateTile
The state the tile is for
s - Variable in class burlap.behavior.statehashing.StateHashTuple
 
s - Variable in class burlap.behavior.stochasticgame.mavaluefunction.JAQValue
 
s - Variable in class burlap.oomdp.core.TransitionProbability
The state to which the agent may transition.
saAgent - Variable in class burlap.behavior.stochasticgame.agents.interfacing.singleagent.SingleAgentInterface
The BURLAP single agent learning agent that is being used.
saAgentFactory - Variable in class burlap.behavior.stochasticgame.agents.interfacing.singleagent.SingleAgentInterface
The single agent learning factory
sActionFeatures - Variable in class burlap.behavior.singleagent.learning.lspi.LSPI.SSFeatures
State-action features
saDomain - Variable in class burlap.behavior.stochasticgame.agents.interfacing.singleagent.SingleAgentInterface
The single agent version of the domain
SADomain - Class in burlap.oomdp.singleagent
A domain subclass for single agent domains.
SADomain() - Constructor for class burlap.oomdp.singleagent.SADomain
 
saInterface - Variable in class burlap.behavior.stochasticgame.agents.interfacing.singleagent.SGToSADomain
The agent interface which single agent action objects will call
SALearningAgentFactoryForSG - Interface in burlap.behavior.stochasticgame.agents.interfacing.singleagent
 
sample() - Method in class burlap.datastructures.BoltzmannDistribution
Samples the output probability distribution.
sample() - Method in class burlap.datastructures.StochasticTree
Samples an element according to a probability defined by the relative weight of objects from the tree and returns it
sampleBasicMovement(State, GridGameStandardMechanics.Location2, GridGameStandardMechanics.Location2, List<GridGameStandardMechanics.Location2>) - Method in class burlap.domain.stochasticgames.gridgame.GridGameStandardMechanics
Returns a movement result of the agent.
sampledBellmanQEstimate(GroundedAction, DifferentiableSparseSampling.QAndQGradient) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.DifferentiableSparseSampling.DiffStateNode
 
sampledBellmanQEstimate(GroundedAction) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling.StateNode
Estimates the Q-value using sampling from the transition dynamics.
sampleFromActionDistribution(State) - Method in class burlap.behavior.singleagent.Policy
This is a helper method for stochastic policies.
sampleHelper(StochasticTree<T>.STNode, double) - Method in class burlap.datastructures.StochasticTree
A recursive method for performing sampling
sampleModel(State, GroundedAction) - Method in class burlap.behavior.singleagent.learning.modellearning.Model
A method to sample this model's transition dynamics for the given state and action.
sampleModelHelper(State, GroundedAction) - Method in class burlap.behavior.singleagent.learning.modellearning.Model
A helper method to sample this model's transition dynamics for the given state and action.
sampleModelHelper(State, GroundedAction) - Method in class burlap.behavior.singleagent.learning.modellearning.models.TabularModel
 
samples - Variable in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
The set of samples on which to perform value iteration.
sampleStrategy(double[]) - Method in class burlap.behavior.stochasticgame.agents.twoplayer.singlestage.equilibriumplayer.EquilibriumPlayingAgent
Samples an action from a strategy, where a strategy is defined as probability distribution over actions.
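The contract of sampleStrategy, drawing an action index from a probability distribution over actions, can be sketched in plain Java. The class and method below are illustrative, not part of BURLAP:

```java
import java.util.Random;

public class StrategySampler {
    /** Samples an action index from a probability distribution over actions. */
    public static int sample(double[] strategy, Random rng) {
        double roll = rng.nextDouble();
        double cumulative = 0.0;
        for (int i = 0; i < strategy.length; i++) {
            cumulative += strategy[i];
            if (roll < cumulative) {
                return i;
            }
        }
        return strategy.length - 1; // guard against floating-point rounding
    }
}
```

The cumulative-sum walk is the standard way to draw from a discrete distribution; the final return line protects against probabilities that sum to slightly less than 1 due to rounding.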
sampleTransitionFromTransitionProbabilities(State, GroundedAction) - Method in class burlap.behavior.singleagent.learning.modellearning.Model
Will return a sampled outcome state by calling the Model.getTransitionProbabilities(State, GroundedAction) method and randomly drawing a state according to its distribution
sampleWallCollision(GridGameStandardMechanics.Location2, GridGameStandardMechanics.Location2, List<ObjectInstance>, boolean) - Method in class burlap.domain.stochasticgames.gridgame.GridGameStandardMechanics
Returns true if the agent is able to move into the desired location; false if the agent moves into a solid wall or randomly fails to move through a semi-wall that is in the way.
SarsaLam - Class in burlap.behavior.singleagent.learning.tdmethods
Tabular SARSA(\lambda) implementation [1].
SarsaLam(Domain, RewardFunction, TerminalFunction, double, StateHashFactory, double, double, double) - Constructor for class burlap.behavior.singleagent.learning.tdmethods.SarsaLam
Initializes SARSA(\lambda) with a 0.1 epsilon greedy policy, the same Q-value initialization everywhere, and no limit on the number of steps the agent can take in an episode.
SarsaLam(Domain, RewardFunction, TerminalFunction, double, StateHashFactory, double, double, int, double) - Constructor for class burlap.behavior.singleagent.learning.tdmethods.SarsaLam
Initializes SARSA(\lambda) with a 0.1 epsilon greedy policy and the same Q-value initialization everywhere.
SarsaLam(Domain, RewardFunction, TerminalFunction, double, StateHashFactory, double, double, Policy, int, double) - Constructor for class burlap.behavior.singleagent.learning.tdmethods.SarsaLam
Initializes SARSA(\lambda) with the same Q-value initialization everywhere.
SarsaLam(Domain, RewardFunction, TerminalFunction, double, StateHashFactory, ValueFunctionInitialization, double, Policy, int, double) - Constructor for class burlap.behavior.singleagent.learning.tdmethods.SarsaLam
Initializes SARSA(\lambda).
SarsaLam.EligibilityTrace - Class in burlap.behavior.singleagent.learning.tdmethods
A data structure for maintaining eligibility trace values
SarsaLam.EligibilityTrace(StateHashTuple, QValue, double) - Constructor for class burlap.behavior.singleagent.learning.tdmethods.SarsaLam.EligibilityTrace
Creates a new eligibility trace to track for an episode.
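The eligibility-trace bookkeeping that SarsaLam and SarsaLam.EligibilityTrace maintain can be illustrated with a minimal tabular sketch. This is not BURLAP's implementation: the table is indexed by a single integer standing in for a state-action pair, and all names are hypothetical:

```java
// Illustrative SARSA(lambda) backup with accumulating traces over a flat
// value table; index i stands in for a state-action pair.
public class SarsaLamSketch {
    public static void update(double[] q, double[] e, int s, int sPrime,
                              double r, double alpha, double gamma, double lambda) {
        double delta = r + gamma * q[sPrime] - q[s]; // TD error for the transition
        e[s] += 1.0;                                 // accumulate trace at the visited entry
        for (int i = 0; i < q.length; i++) {
            q[i] += alpha * delta * e[i];            // credit every traced entry
            e[i] *= gamma * lambda;                  // decay all traces
        }
    }
}
```

With lambda = 0 this reduces to one-step SARSA, since every trace decays to zero immediately after its update.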
sarsalamInit(double) - Method in class burlap.behavior.singleagent.learning.tdmethods.SarsaLam
 
SARSCollector - Class in burlap.behavior.singleagent.learning.lspi
This object is used to collect SARSData (state-action-reward-state tuples) that can then be used by algorithms like LSPI for learning.
SARSCollector(Domain) - Constructor for class burlap.behavior.singleagent.learning.lspi.SARSCollector
Initializes the collector's action set using the actions that are part of the domain.
SARSCollector(List<Action>) - Constructor for class burlap.behavior.singleagent.learning.lspi.SARSCollector
Initializes this collector's action set to use for collecting data.
SARSCollector.UniformRandomSARSCollector - Class in burlap.behavior.singleagent.learning.lspi
Collects SARS data from source states generated by a StateGenerator by choosing actions uniformly at random.
SARSCollector.UniformRandomSARSCollector(Domain) - Constructor for class burlap.behavior.singleagent.learning.lspi.SARSCollector.UniformRandomSARSCollector
Initializes the collector's action set using the actions that are part of the domain.
SARSCollector.UniformRandomSARSCollector(List<Action>) - Constructor for class burlap.behavior.singleagent.learning.lspi.SARSCollector.UniformRandomSARSCollector
Initializes this collector's action set to use for collecting data.
SARSData - Class in burlap.behavior.singleagent.learning.lspi
Class that provides a wrapper for a List holding a bunch of state-action-reward-state (SARSData.SARS) tuples.
SARSData() - Constructor for class burlap.behavior.singleagent.learning.lspi.SARSData
Initializes with an empty dataset
SARSData(int) - Constructor for class burlap.behavior.singleagent.learning.lspi.SARSData
Initializes with an empty dataset with initial capacity for the given parameter available.
SARSData.SARS - Class in burlap.behavior.singleagent.learning.lspi
State-action-reward-state tuple.
SARSData.SARS(State, GroundedAction, double, State) - Constructor for class burlap.behavior.singleagent.learning.lspi.SARSData.SARS
Initializes.
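The shape of a SARSData.SARS tuple can be sketched as a simple immutable value class. String stands in here for BURLAP's State and GroundedAction types, and the class name is illustrative:

```java
// Minimal sketch of a state-action-reward-state tuple.
public class Sars {
    public final String s;      // previous state
    public final String a;      // action taken
    public final double r;      // reward received
    public final String sPrime; // resulting state

    public Sars(String s, String a, double r, String sPrime) {
        this.s = s;
        this.a = a;
        this.r = r;
        this.sPrime = sPrime;
    }
}
```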
saThread - Variable in class burlap.behavior.stochasticgame.agents.interfacing.singleagent.SingleAgentInterface
The thread that runs the single agent learning algorithm
satisfies(State) - Method in class burlap.behavior.singleagent.planning.deterministic.TFGoalCondition
 
satisfies(State) - Method in class burlap.behavior.singleagent.planning.SinglePFSCT
 
satisfies(State) - Method in interface burlap.behavior.singleagent.planning.StateConditionTest
 
satisfiesGoal(State) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BFSRTDP
Returns whether a state is a goal state.
satisifiesHeap() - Method in class burlap.datastructures.HashIndexedHeap
This method returns whether the data structure stored is in fact a heap (costs linear time).
sdp - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
Painter used to visualize general state-independent domain information
SDPlannerPolicy - Class in burlap.behavior.singleagent.planning.deterministic
This is a static deterministic planner policy: if the source deterministic planner has not already computed and cached the plan for a query state, then this policy is undefined for that state and will cause the policy to throw a Policy.PolicyUndefinedException.
SDPlannerPolicy() - Constructor for class burlap.behavior.singleagent.planning.deterministic.SDPlannerPolicy
 
SDPlannerPolicy(DeterministicPlanner) - Constructor for class burlap.behavior.singleagent.planning.deterministic.SDPlannerPolicy
 
SearchNode - Class in burlap.behavior.singleagent.planning.deterministic
The SearchNode class is used for classic deterministic forward search planners.
SearchNode(StateHashTuple) - Constructor for class burlap.behavior.singleagent.planning.deterministic.SearchNode
Constructs a SearchNode for the input state.
SearchNode(StateHashTuple, GroundedAction, SearchNode) - Constructor for class burlap.behavior.singleagent.planning.deterministic.SearchNode
Constructs a SearchNode for the input state and sets the generating action and back pointer to the provided elements.
seAvgSeries - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.AgentDatasets
All trials' average steps per episode series data
seAvgSeries - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter.AgentDatasets
All trials' average steps per episode series data
seedDefault(long) - Static method in class burlap.debugtools.RandomFactory
Sets the seed of the default random number generator
seedMapped(int, long) - Static method in class burlap.debugtools.RandomFactory
Seeds and returns the random generator with the associated id or creates it if it does not yet exist
seedMapped(String, long) - Static method in class burlap.debugtools.RandomFactory
Seeds and returns the random generator with the associated String id or creates it if it does not yet exist
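The mapped-seeding pattern behind RandomFactory can be sketched as follows: one shared Random per id, created on first request and reseeded on demand. The class below is an illustration of the pattern, not BURLAP's code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

public class MappedRandoms {
    private static final Map<String, Random> generators = new HashMap<>();

    /** Seeds (creating if necessary) and returns the generator for an id. */
    public static Random seedMapped(String id, long seed) {
        Random r = generators.computeIfAbsent(id, k -> new Random());
        r.setSeed(seed);
        return r;
    }
}
```

Because every caller asking for the same id receives the same Random instance, independent components can share one reproducible stream of random numbers.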
selectActionNode(UCTStateNode) - Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
Selects which action to take.
selectionMode - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
Which state selection mode is used.
selector - Variable in class burlap.oomdp.stochasticgames.tournament.Tournament
 
semiDeepCopy(String...) - Method in class burlap.oomdp.core.State
Performs a semi-deep copy of the state in which only the objects with the names in deepCopyObjectNames are deep copied and the rest of the objects are shallow copied.
semiDeepCopy(ObjectInstance...) - Method in class burlap.oomdp.core.State
Performs a semi-deep copy of the state in which only the objects in deepCopyObjects are deep copied and the rest of the objects are shallow copied.
semiDeepCopy(Set<ObjectInstance>) - Method in class burlap.oomdp.core.State
Performs a semi-deep copy of the state in which only the objects in deepCopyObjects are deep copied and the rest of the objects are shallow copied.
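The semi-deep copy idea, deep copying only the named objects while sharing the rest by reference, can be sketched with plain collections. Types and names here are illustrative stand-ins for BURLAP's State and ObjectInstance:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class SemiDeepCopy {
    /** Copies only the entries named in deepCopyNames; shares the rest. */
    public static Map<String, ArrayList<Integer>> copy(
            Map<String, ArrayList<Integer>> state, Set<String> deepCopyNames) {
        Map<String, ArrayList<Integer>> copy = new HashMap<>();
        for (Map.Entry<String, ArrayList<Integer>> e : state.entrySet()) {
            copy.put(e.getKey(), deepCopyNames.contains(e.getKey())
                    ? new ArrayList<>(e.getValue()) // deep copy selected objects
                    : e.getValue());                // share the rest by reference
        }
        return copy;
    }
}
```

This is the usual trade-off such a copy makes: objects an action will mutate get fresh copies, while unchanged objects are shared to save time and memory.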
semiWallProb - Variable in class burlap.domain.stochasticgames.gridgame.GridGame
The probability that an agent will pass through a semi-wall.
set(SingleStageNormalFormGame.StrategyProfile, double) - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.AgentPayoutFunction
Sets the payout for a given strategy profile
set1DEastWall(int, int) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
Sets a specified location to have a 1D east wall.
set1DNorthWall(int, int) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
Sets a specified location to have a 1D north wall.
setActingAgentName(String) - Method in class burlap.behavior.stochasticgame.PolicyFromJointPolicy
Sets the acting agent's name
setActionNameGlyphPainter(String, ActionGlyphPainter) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.PolicyGlyphPainter2D
Sets which glyph painter to use for an action with the given name
setActionObserverForAllAction(ActionObserver) - Method in class burlap.oomdp.singleagent.SADomain
Clears all action observers for all actions in this domain and then sets them to have the single action observer provided
setActions(List<Action>) - Method in class burlap.behavior.singleagent.planning.OOMDPPlanner
Sets the action set the planner should use.
setActionShortHand(Map<String, String>) - Method in class burlap.oomdp.singleagent.explorer.TerminalExplorer
Sets the shorthand names to use for actions.
setActionShortHand(Map<String, String>) - Method in class burlap.oomdp.stochasticgames.explorers.SGTerminalExplorer
Sets the action shorthands to use
setAgent(State, int, int, int, boolean) - Static method in class burlap.domain.singleagent.blockdude.BlockDude
Sets the agent object's x, y, direction, and holding attribute to the specified values.
setAgent(State, int, int) - Static method in class burlap.domain.singleagent.frostbite.FrostbiteDomain
Sets the agent's position, with a height of 0 (on the ground)
setAgent(State, int, int, int) - Static method in class burlap.domain.singleagent.frostbite.FrostbiteDomain
Sets the agent's position and height
setAgent(State, int, int) - Static method in class burlap.domain.singleagent.gridworld.GridWorldDomain
Sets the first agent object in s to the specified x and y position.
setAgent(State, double, double, double) - Static method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
Sets the agent/lander position/orientation and zeros out the lander velocity.
setAgent(State, double, double, double, double, double) - Static method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
Sets the agent/lander position/orientation and the velocity.
setAgent(State, double, double) - Static method in class burlap.domain.singleagent.mountaincar.MountainCar
Sets the agent position in the provided state to the given position and with the given velocity.
setAgent(State, int, int, int, int) - Static method in class burlap.domain.stochasticgames.gridgame.GridGame
Sets an agent's attribute values
setAgentDefinitions - Variable in class burlap.behavior.stochasticgame.agents.mavf.MultiAgentVFPlanningAgent
Whether the agent definitions for this planner have been set yet.
setAgentDefinitions(Map<String, AgentType>) - Method in class burlap.behavior.stochasticgame.mavaluefunction.MAValueFunctionPlanner
Sets/changes the agent definitions to use in planning.
setAgents(List<MultiAgentQLearning>) - Method in class burlap.behavior.stochasticgame.mavaluefunction.AgentQSourceMap.MAQLControlledQSourceMap
Initializes with a list of agents that each keep their own Q_source.
setAgentsInJointPolicy(Map<String, AgentType>) - Method in class burlap.behavior.stochasticgame.JointPolicy
Sets the agent definitions that define the set of possible joint actions in each state.
setAgentsInJointPolicy(List<Agent>) - Method in class burlap.behavior.stochasticgame.JointPolicy
Sets the agent definitions by querying the agent names and AgentType objects from a list of agents.
setAgentsInJointPolicyFromWorld(World) - Method in class burlap.behavior.stochasticgame.JointPolicy
Sets the agent definitions by querying the agents that exist in a World object.
setAnginc(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
 
setAnginc(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
Sets how many radians the agent will rotate from its current orientation when a turn/rotate action is applied
setAngmax(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
 
setAngmax(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
Sets the maximum rotate angle (in radians) that the lander can be rotated from the vertical orientation in either clockwise or counterclockwise direction.
setAttributes(List<Attribute>) - Method in class burlap.oomdp.core.ObjectClass
Sets the attributes used to define this object class
setAttributesForClass(String, List<Attribute>) - Method in class burlap.behavior.statehashing.DiscreteStateHashFactory
Sets which attributes to use in the hash calculation for the given class.
setAttributesForHashCode(Map<String, List<Attribute>>) - Method in class burlap.behavior.statehashing.DiscreteStateHashFactory
Sets this hashing factory to hash on only the attributes for the specified classes in the provided map
setAuxInfoTo(PrioritizedSearchNode) - Method in class burlap.behavior.singleagent.planning.deterministic.informed.PrioritizedSearchNode
This method rewires the generating node information and priority to that specified in a different PrioritizedSearchNode.
setBase(State) - Method in class burlap.oomdp.singleagent.explorer.StateResetSpecialAction
Sets the base state to reset to
setBaseStateGenerator(StateGenerator) - Method in class burlap.oomdp.singleagent.explorer.StateResetSpecialAction
Sets the state generator to draw from on reset
setBgColor(Color) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
Sets the canvas background color
setBGColor(Color) - Method in class burlap.oomdp.visualizer.MultiLayerRenderer
Sets the color that will fill the canvas before rendering begins
setBGColor(Color) - Method in class burlap.oomdp.visualizer.Visualizer
Sets the background color of the canvas
setBlock(State, int, int, int) - Static method in class burlap.domain.singleagent.blockdude.BlockDude
Sets the ith block's x and y position in a state.
setBlock(State, int, String, int, int, String) - Static method in class burlap.domain.singleagent.blocksworld.BlocksWorld
Use this method to quickly set the various values of a block
setBlock(State, String, String, int, int, String) - Static method in class burlap.domain.singleagent.blocksworld.BlocksWorld
Use this method to quickly set the various values of a block
setBlock(ObjectInstance, String, int, int, String) - Static method in class burlap.domain.singleagent.blocksworld.BlocksWorld
Use this method to quickly set the various values of a block
setBlockColor(State, int, String) - Static method in class burlap.domain.singleagent.blocksworld.BlocksWorld
Use this method to quickly set the color of a block
setBoltzmannBeta(double) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.MLIRLRequest
 
setBoltzmannBetaParameter(double) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.DifferentiableSparseSampling
 
setBoltzmannBetaParameter(double) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.DifferentiableVFPlanner
 
setBoltzmannBetaParameter(double) - Method in interface burlap.behavior.singleagent.learnbydemo.mlirl.support.QGradientPlanner
Sets this planner's Boltzmann beta parameter used to compute gradients.
setBoundaryWalls(State, int, int) - Static method in class burlap.domain.stochasticgames.gridgame.GridGame
Sets boundary walls of a domain.
setBreakTiesRandomly(boolean) - Method in class burlap.behavior.stochasticgame.mavaluefunction.policies.EGreedyMaxWellfare
Sets whether to break ties randomly or deterministically.
setBrickMap(State, int[][]) - Static method in class burlap.domain.singleagent.blockdude.BlockDude
Sets the state to use the provided brick map.
setBrickValue(State, int, int, int) - Static method in class burlap.domain.singleagent.blockdude.BlockDude
Sets the brick value in grid location x, y.
setC(int) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.DifferentiableSparseSampling
Sets the number of state transition samples used.
setC(int) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
Sets the number of state transition samples used.
setCellWallState(int, int, int) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
Sets the map at the specified location to have the specified wall configuration.
setClassName(String) - Method in class burlap.oomdp.core.PropositionalFunction
Sets the class name for this propositional function.
setCoefficientVectors(List<short[]>) - Method in class burlap.behavior.singleagent.vfa.fourier.FourierBasis
Forces the set of coefficient vectors (and thereby Fourier basis functions) used.
setCollisionReward(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderRF
 
setColorBlend(ColorBlend) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.StateValuePainter2D
Sets the color blending used for the value function.
setComputeExactValueFunction(boolean) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
Sets whether this planner will compute the exact finite horizon value function (using the full transition dynamics) or estimate the value function by sampling.
setControlDepth(int) - Method in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
setCorrelatedQObjective(CorrelatedEquilibriumSolver.CorrelatedEquilibriumObjective) - Method in class burlap.behavior.stochasticgame.mavaluefunction.policies.ECorrelatedQJointPolicy
Sets the correlated equilibrium objective to be solved.
setCurStateTo(State) - Method in class burlap.oomdp.singleagent.environment.Environment
Sets the current state of the environment
setCurTime(int) - Method in class burlap.behavior.singleagent.learning.actorcritic.critics.TimeIndexedTDLambda
Sets the time/depth of the current episode.
setDataset(SARSData) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
Sets the SARS dataset this object will use for LSPI
setDebugCode(int) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.DifferentiableSparseSampling
Sets the debug code used for logging plan results with DPrint.
setDebugCode(int) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.MLIRL
Sets the debug code used for printing to the terminal
setDebugCode(int) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.MultipleIntentionsMLIRL
Sets the debug code used for printing to the terminal
setDebugCode(int) - Method in class burlap.behavior.singleagent.planning.OOMDPPlanner
Sets the debug code to be used by calls to DPrint
setDebugCode(int) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
Sets the debug code used for logging plan results with DPrint.
setDebugCode(int) - Method in class burlap.oomdp.core.Domain
Sets the debug code used for printing debug messages.
setDebugId(int) - Method in class burlap.oomdp.stochasticgames.World
Sets the debug code that is used for printing with DPrint.
setDefaultFloorDiscretizingMultiple(double) - Method in class burlap.behavior.statehashing.DiscretizingStateHashFactory
Sets the default multiple to use for continuous attributes that do not have specific multiples set for them.
setDefaultReward(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderRF
 
setDefaultTileWidth(double) - Method in class burlap.behavior.singleagent.interfaces.rlglue.common.RLGlueCMACSarsaLambdaFactory
 
setDefaultValueFunctionAfterARollout(boolean) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
Use this method to set which value function (the lower bound or the upper bound) to use after a planning rollout is complete.
setDeterministicTransitionDynamics() - Method in class burlap.domain.singleagent.frostbite.FrostbiteDomain
Will set the domain to use deterministic action transitions.
setDeterministicTransitionDynamics() - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
Will set the domain to use deterministic action transitions.
setDiscValues(List<String>) - Method in class burlap.oomdp.core.Attribute
Sets a discrete attribute's categorical values
setDiscValues(String[]) - Method in class burlap.oomdp.core.Attribute
Sets a discrete attribute's categorical values.
setDiscValuesForRange(int, int, int) - Method in class burlap.oomdp.core.Attribute
Sets the possible range of discrete (Attribute.AttributeType.DISC) values for the attribute.
setDomain(Domain) - Method in class burlap.behavior.singleagent.learnbydemo.apprenticeship.ApprenticeshipLearningRequest
 
setDomain(Domain) - Method in class burlap.behavior.singleagent.learnbydemo.IRLRequest
 
setDomain(Domain) - Method in class burlap.behavior.singleagent.planning.OOMDPPlanner
Sets the domain of this planner.
setEpisodeWeights(double[]) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.MLIRLRequest
 
setEpsilon(double) - Method in class burlap.behavior.singleagent.learnbydemo.apprenticeship.ApprenticeshipLearningRequest
 
setEpsilon(double) - Method in class burlap.behavior.singleagent.planning.commonpolicies.EpsilonGreedy
Sets the epsilon value, where epsilon is the probability of taking a random action.
setEpsilon(double) - Method in class burlap.behavior.stochasticgame.agents.naiveq.history.SGQWActionHistoryFactory
Sets the epsilon parameter (for the epsilon greedy policy).
setExernalTermination(TerminalFunction) - Method in class burlap.behavior.singleagent.options.Option
Sets the external MDP's terminal function, which will cause this option to terminate if it enters one of those terminal states.
setExit(State, int, int) - Static method in class burlap.domain.singleagent.blockdude.BlockDude
Sets the x and y position of the first exit object in the state.
setExpectationCalculationProbabilityCutoff(double) - Method in class burlap.behavior.singleagent.options.Option
Sets the minimum probability of reaching a terminal state for it to be included in the option's computed transition dynamics distribution.
setExpectationHashingFactory(StateHashFactory) - Method in class burlap.behavior.singleagent.options.Option
Sets the option to use the provided hashing factory for caching transition probability results.
setExpertEpisodes(List<EpisodeAnalysis>) - Method in class burlap.behavior.singleagent.learnbydemo.apprenticeship.ApprenticeshipLearningRequest
 
setExpertEpisodes(List<EpisodeAnalysis>) - Method in class burlap.behavior.singleagent.learnbydemo.IRLRequest
 
setFd(FeatureDatabase) - Method in class burlap.behavior.singleagent.vfa.common.FDFeatureVectorGenerator
 
setFeatureDatabase(FeatureDatabase) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
Sets the feature database defining state features
setFeatureGenerator(StateToFeatureVectorGenerator) - Method in class burlap.behavior.singleagent.learnbydemo.apprenticeship.ApprenticeshipLearningRequest
 
setFeaturesAreForNextState(boolean) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.commonrfs.LinearStateDifferentiableRF
Sets whether features for the reward function are generated from the next state or previous state.
setFlag(int, int) - Static method in class burlap.debugtools.DebugFlags
Creates/sets a debug flag
setForgetPreviousPlanResults(boolean) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.DifferentiableSparseSampling
Sets whether previous planning results should be forgotten or reused in subsequent planning.
setForgetPreviousPlanResults(boolean) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
Sets whether previous planning results should be forgotten or reused in subsequent planning.
setFrameDelay(long) - Method in class burlap.oomdp.singleagent.common.VisualActionObserver
Sets how long to wait in ms for a state to be rendered before returning control to the agent.
setFrameDelay(long) - Method in class burlap.oomdp.stochasticgames.common.VisualWorldObserver
Sets how long to wait in ms for a state to be rendered before returning control to the world.
setGamma(double) - Method in class burlap.behavior.singleagent.learnbydemo.apprenticeship.ApprenticeshipLearningRequest
 
setGamma(double) - Method in class burlap.behavior.singleagent.learnbydemo.IRLRequest
 
setGamma(double) - Method in class burlap.behavior.singleagent.planning.OOMDPPlanner
Sets gamma, the discount factor used by this planner
setGoal(State, int, int, int, int) - Static method in class burlap.domain.stochasticgames.gridgame.GridGame
Sets a goal object's attribute values
setGoalCondition(StateConditionTest) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BFSRTDP
Sets the goal state that causes the BFS-like pass to stop expanding when found.
setGoalReward(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderRF
 
setGravity(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
 
setGravity(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
Sets the gravity of the domain
setH(int) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.DifferentiableSparseSampling
Sets the height of the tree.
setH(int) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
Sets the height of the tree.
setHAndCByMDPError(double, double, int) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
Sets the height and number of transition dynamics samples in a way that ensures epsilon optimality.
setHorizontalWall(State, int, int, int, int, int) - Static method in class burlap.domain.stochasticgames.gridgame.GridGame
Sets the attribute values for a horizontal wall
setIdentityScalar(double) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
Sets the initial LSPI identity matrix scalar used.
setIgloo(State, int) - Static method in class burlap.domain.singleagent.frostbite.FrostbiteDomain
Sets the igloo building status
setInitialFunctionWeight(double) - Method in class burlap.behavior.singleagent.interfaces.rlglue.common.RLGlueCMACSarsaLambdaFactory
 
setInternalRewardFunction(JointReward) - Method in class burlap.oomdp.stochasticgames.Agent
Internal reward functions are optional, but can be useful for purposes like reward shaping.
setIterationListData() - Method in class burlap.behavior.singleagent.EpisodeSequenceVisualizer
 
setJAC(String) - Method in class burlap.oomdp.stochasticgames.explorers.SGVisualExplorer
Sets the joint action model to use
setJointActionModel(JointActionModel) - Method in class burlap.oomdp.stochasticgames.SGDomain
Sets the joint action model associated with this domain.
setJointPolicy(JointPolicy) - Method in class burlap.behavior.stochasticgame.PolicyFromJointPolicy
Sets the underlying joint policy
setK(int) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.MultipleIntentionsMLIRLRequest
Sets the number of clusters
setLambda(double) - Method in class burlap.behavior.singleagent.interfaces.rlglue.common.RLGlueCMACSarsaLambdaFactory
 
setLambda(double) - Method in class burlap.behavior.singleagent.interfaces.rlglue.common.RLGlueSARSALambdaFactory
 
setLearningPolicy(Policy) - Method in class burlap.behavior.singleagent.interfaces.rlglue.common.RLGlueCMACSarsaLambdaFactory
 
setLearningPolicy(Policy) - Method in class burlap.behavior.singleagent.interfaces.rlglue.common.RLGLueQlearningFactory
 
setLearningPolicy(Policy) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
Sets the learning policy followed by the LSPI.runLearningEpisodeFrom(State) and LSPI.runLearningEpisodeFrom(State, int) methods.
setLearningPolicy(Policy) - Method in class burlap.behavior.singleagent.learning.tdmethods.QLearning
Sets which policy this agent should use for learning.
setLearningPolicy(Policy) - Method in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
Sets which policy this agent should use for learning.
setLearningPolicy(PolicyFromJointPolicy) - Method in class burlap.behavior.stochasticgame.agents.maql.MultiAgentQLearning
Sets the learning policy to be followed by the agent.
setLearningRate(LearningRate) - Method in class burlap.behavior.singleagent.interfaces.rlglue.common.RLGlueCMACSarsaLambdaFactory
 
setLearningRate(LearningRate) - Method in class burlap.behavior.singleagent.interfaces.rlglue.common.RLGLueQlearningFactory
 
setLearningRate(LearningRate) - Method in class burlap.behavior.singleagent.learning.actorcritic.actor.BoltzmannActor
Sets the learning rate function to use.
setLearningRate(LearningRate) - Method in class burlap.behavior.singleagent.learning.actorcritic.critics.TDLambda
Sets the learning rate function to use.
setLearningRate(LearningRate) - Method in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
Sets the learning rate function to use.
setLearningRate(LearningRate) - Method in class burlap.behavior.stochasticgame.agents.naiveq.SGNaiveQLAgent
 
setLearningRateFunction(LearningRate) - Method in class burlap.behavior.singleagent.learning.tdmethods.QLearning
Sets the learning rate function to use
setLims(double, double) - Method in class burlap.oomdp.core.Attribute
Sets the upper and lower bound limits for a bounded real attribute.
setLocation(State, int, int, int) - Static method in class burlap.domain.singleagent.gridworld.GridWorldDomain
Sets the i'th location object to the specified x and y position.
setLocation(State, int, int, int, int) - Static method in class burlap.domain.singleagent.gridworld.GridWorldDomain
Sets the i'th location object to the specified x and y position and location type.
setMacroCellHorizontalCount(int) - Method in class burlap.domain.singleagent.gridworld.macro.MacroCellGridWorld
Sets the number of columns of macro-cells (cells across the x-axis)
setMacroCellVerticalCount(int) - Method in class burlap.domain.singleagent.gridworld.macro.MacroCellGridWorld
Sets the number of rows of macro-cells (cells across the y-axis)
setMap(int[][]) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
Set the map of the world.
setMapToFourRooms() - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
Will set the map of the world to the classic Four Rooms map used in the original options work (Sutton, R.S.
setMaxCartSpeedToMaxWithMovementFromOneSideToOther() - Method in class burlap.domain.singleagent.cartpole.CartPoleDomain
Given the current action force, track length and masses, sets the max cart speed to an upper bound of what is possible when moving from one side of the track to the other.
setMaxChange(double) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
Sets the maximum change in weights required to terminate policy iteration when called from the LSPI.planFromState(State), LSPI.runLearningEpisodeFrom(State) or LSPI.runLearningEpisodeFrom(State, int) methods.
setMaxDelta(double) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.RTDP
Sets the maximum delta state value update in a rollout that will cause planning to terminate
setMaxDifference(double) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
Sets the max permitted difference in value function margin to permit planning termination.
setMaxDim(int) - Method in class burlap.domain.stochasticgames.gridgame.GridGame
Sets the maximum dimension of the world; its width and height.
setMaxDynamicDepth(int) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.RTDP
Sets the maximum depth of a rollout before it is prematurely terminated to update the value function.
setMaxGT(int) - Method in class burlap.domain.stochasticgames.gridgame.GridGame
Sets the maximum number of goal types
setMaximumEpisodesForPlanning(int) - Method in class burlap.behavior.singleagent.learning.tdmethods.QLearning
Sets the maximum number of episodes that will be performed when the QLearning.planFromState(State) method is called.
setMaximumEpisodesForPlanning(int) - Method in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
Sets the maximum number of episodes that will be performed when the GradientDescentSarsaLam.planFromState(State) method is called.
setMaxIterations(int) - Method in class burlap.behavior.singleagent.learnbydemo.apprenticeship.ApprenticeshipLearningRequest
 
setMaxLearningSteps(int) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
Sets the maximum number of learning steps permitted by the LSPI.runLearningEpisodeFrom(State) method.
setMaxNumberOfRollouts(int) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
Sets the maximum number of rollouts permitted before planning is forced to terminate.
setMaxNumPlanningIterations(int) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
Sets the maximum number of policy iterations that will be used by the LSPI.planFromState(State) method.
setMaxPlyrs(int) - Method in class burlap.domain.stochasticgames.gridgame.GridGame
Sets the max number of players
setMaxQChangeForPlanningTerminaiton(double) - Method in class burlap.behavior.singleagent.learning.tdmethods.QLearning
Sets a max change in the Q-function threshold that will cause the QLearning.planFromState(State) to stop planning when it is achieved.
setMaxRolloutDepth(int) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
Sets the maximum rollout depth of any rollout.
setMaxVFAWeightChangeForPlanningTerminaiton(double) - Method in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
Sets a max change in the VFA weight threshold that will cause the GradientDescentSarsaLam.planFromState(State) to stop planning when it is achieved.
setMaxWT(int) - Method in class burlap.domain.stochasticgames.gridgame.GridGame
Sets the maximum number of wall types
setMaxx(int) - Method in class burlap.domain.singleagent.blockdude.BlockDude
 
setMaxy(int) - Method in class burlap.domain.singleagent.blockdude.BlockDude
 
setMinNewStepsForLearningPI(int) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
Sets the minimum number of new learning observations before policy iteration is run again.
setMinNumRolloutsWithSmallValueChange(int) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.RTDP
Sets the minimum number of consecutive rollouts with a value function change less than the maxDelta value that will cause RTDP to stop.
setName(String) - Method in class burlap.oomdp.core.ObjectInstance
Sets the name of this object instance.
setNormalizeValues(boolean) - Method in class burlap.behavior.singleagent.vfa.common.ConcatenatedObjectFeatureVectorGenerator
Sets whether the object values are normalized in the returned feature vector
setnTiles(int) - Method in class burlap.behavior.singleagent.interfaces.rlglue.common.RLGlueCMACSarsaLambdaFactory
 
setNumberOfLocationTypes(int) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
Sets the number of possible location types to which a location object can belong.
setNumEpisodesToStore(int) - Method in class burlap.behavior.singleagent.learning.actorcritic.ActorCritic
 
setNumEpisodesToStore(int) - Method in class burlap.behavior.singleagent.learning.LearningAgent.LearningAgentBookKeeping
Tells the agent how many EpisodeAnalysis objects representing learning episodes to internally store.
setNumEpisodesToStore(int) - Method in interface burlap.behavior.singleagent.learning.LearningAgent
Tells the agent how many EpisodeAnalysis objects representing learning episodes to internally store.
setNumEpisodesToStore(int) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
 
setNumEpisodesToStore(int) - Method in class burlap.behavior.singleagent.learning.modellearning.artdp.ARTDP
 
setNumEpisodesToStore(int) - Method in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax
 
setNumEpisodesToStore(int) - Method in class burlap.behavior.singleagent.learning.tdmethods.QLearning
 
setNumEpisodesToStore(int) - Method in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
 
setNumPasses(int) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.RTDP
Sets the number of rollouts to perform when planning is started (unless the value function delta is small enough).
setNumSamplesForPlanning(int) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
Sets the number of SARS samples that will be gathered by the LSPI.planFromState(State) method.
setNumXCells(int) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.PolicyGlyphPainter2D
Sets the number of states that will be rendered along a row
setNumXCells(int) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.StateValuePainter2D
Sets the number of states that will be rendered along a row
setNumYCells(int) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.PolicyGlyphPainter2D
Sets the number of states that will be rendered along a column
setNumYCells(int) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.StateValuePainter2D
Sets the number of states that will be rendered along a column
setObjectClassAttributesToTile(String, StateGridder.AttributeSpecification...) - Method in class burlap.behavior.singleagent.auxiliary.StateGridder
Sets the attribute specifications to use for a single ObjectClass
setObjectIdentiferDependence(boolean) - Method in class burlap.oomdp.core.Domain
Sets whether this domain's states are object identifier (name) dependent.
setObservability(boolean) - Method in class burlap.oomdp.core.Value
Sets whether this value is observable to the agent or not.
setObstacle(State, int, double, double, double, double) - Static method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
Sets an obstacle's boundaries/position
setObstacleInCell(int, int) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
Sets a complete cell obstacle in the designated location.
setOptionsFirst() - Method in class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.DFS
Sets the planner to explore nodes generated by options first.
setPad(State, double, double, double, double) - Static method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
Sets the first landing pad's boundaries/position
setPad(State, int, double, double, double, double) - Static method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
Sets a landing pad's boundaries/position
setParameter(int, double) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.diffvinit.DifferentiableVInit.ParamedDiffVInit
Sets the value of a given parameter.
setParameter(int, double) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.support.DifferentiableRF
Sets the value of a given parameter.
setParameters(double[]) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.diffvinit.DifferentiableVInit.ParamedDiffVInit
Sets the parameters of this differentiable value function initialization
setParameters(double[]) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.diffvinit.DiffVFRF
 
setParameters(double[]) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.support.DifferentiableRF
 
setPayoff(int, int, double, double) - Method in class burlap.behavior.stochasticgame.agents.twoplayer.singlestage.equilibriumplayer.EquilibriumPlayingAgent.BimatrixTuple
Sets the payoffs for a given row and column.
setPayout(int, double, String...) - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
Sets the payout that player number playerNumber receives for a given strategy profile
setPayout(int, double, int...) - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
Sets the payout that player number playerNumber receives for a given strategy profile
setPhysParams(LunarLanderDomain.LLPhysicsParams) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.ActionIdle
 
setPhysParams(LunarLanderDomain.LLPhysicsParams) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.ActionThrust
 
setPhysParams(LunarLanderDomain.LLPhysicsParams) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.ActionTurn
 
setPhysParams(LunarLanderDomain.LLPhysicsParams) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
 
setPlanner(OOMDPPlanner) - Method in class burlap.behavior.singleagent.learnbydemo.apprenticeship.ApprenticeshipLearningRequest
 
setPlanner(OOMDPPlanner) - Method in class burlap.behavior.singleagent.learnbydemo.IRLRequest
 
setPlanner(OOMDPPlanner) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.MLIRLRequest
 
setPlanner(OOMDPPlanner) - Method in class burlap.behavior.singleagent.planning.commonpolicies.BoltzmannQPolicy
 
setPlanner(OOMDPPlanner) - Method in class burlap.behavior.singleagent.planning.commonpolicies.EpsilonGreedy
 
setPlanner(OOMDPPlanner) - Method in class burlap.behavior.singleagent.planning.commonpolicies.GreedyDeterministicQPolicy
 
setPlanner(OOMDPPlanner) - Method in class burlap.behavior.singleagent.planning.commonpolicies.GreedyQPolicy
 
setPlanner(OOMDPPlanner) - Method in class burlap.behavior.singleagent.planning.deterministic.DDPlannerPolicy
 
setPlanner(OOMDPPlanner) - Method in class burlap.behavior.singleagent.planning.deterministic.SDPlannerPolicy
 
setPlanner(OOMDPPlanner) - Method in interface burlap.behavior.singleagent.planning.PlannerDerivedPolicy
Sets the planner whose results affect this policy.
setPlanner(OOMDPPlanner) - Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTTreeWalkPolicy
 
setPlannerFactory(QGradientPlannerFactory) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.MultipleIntentionsMLIRLRequest
Sets the QGradientPlannerFactory to use and also sets this request object's planner instance to a planner generated from it, if it has not already been set.
setPlannerReference(MAValueFunctionPlanner) - Method in class burlap.behavior.stochasticgame.agents.mavf.MAVFPlannerFactory.ConstantMAVFPlannerFactory
Changes the planner reference
setPlanningAndControlDepth(int) - Method in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
setPlanningCollector(SARSCollector) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
Sets the SARSCollector used by the LSPI.planFromState(State) method for collecting data.
setPlanningDepth(int) - Method in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
Sets the Bellman operator depth used during planning.
setPlatform(State, int, int, int, int, boolean) - Static method in class burlap.domain.singleagent.frostbite.FrostbiteDomain
Sets a platform position, size and status
setPlotCISignificance(double) - Method in class burlap.behavior.singleagent.auxiliary.performance.LearningAlgorithmExperimenter
Sets the significance used for confidence intervals.
setPlotCISignificance(double) - Method in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentExperimenter
Sets the significance used for confidence intervals.
setPlotRefreshDelay(int) - Method in class burlap.behavior.singleagent.auxiliary.performance.LearningAlgorithmExperimenter
Sets the delay in milliseconds between automatic plot refreshes
setPlotRefreshDelay(int) - Method in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentExperimenter
Sets the delay in milliseconds between automatic plot refreshes
setPolicy(Policy) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.PolicyRenderLayer
Sets the policy to render
setPolicy(Policy) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
Sets the policy to render
setPolicy(PlannerDerivedPolicy) - Method in class burlap.behavior.singleagent.learning.modellearning.artdp.ARTDP
Sets the policy to the provided one.
setPolicy(PolicyFromJointPolicy) - Method in class burlap.behavior.stochasticgame.agents.mavf.MultiAgentVFPlanningAgent
Sets the policy derived from this agents planner to follow.
setPolicyClassToEvaluate(PlannerDerivedPolicy) - Method in class burlap.behavior.singleagent.planning.stochastic.policyiteration.PolicyIteration
Sets which kind of policy to use whenever the policy is updated.
setPolicyCount(int) - Method in class burlap.behavior.singleagent.learnbydemo.apprenticeship.ApprenticeshipLearningRequest
 
setPolynomialDegree(double) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.LandmarkColorBlendInterpolation
Sets the color blend to raise the normalized distance of values to the given degree.
setPreference(int, double) - Method in class burlap.datastructures.BoltzmannDistribution
Sets the preference for the ith element
setPreferences(double[]) - Method in class burlap.datastructures.BoltzmannDistribution
Sets the input preferences
setProbSucceedTransitionDynamics(double) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
Sets the domain to use probabilistic transitions.
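With probabilistic transitions, the selected movement direction typically succeeds with the given probability and the remaining probability mass is split among the other directions. The minimal sketch below illustrates that scheme; it is an assumption for illustration, not GridWorldDomain's actual code.

```java
public class SlipperyMove {
    // Illustrative sketch: a 4-direction distribution in which the chosen
    // direction receives probSucceed and the other three directions share
    // the remaining probability mass equally.
    static double[] transitionProbs(int chosenDirection, double probSucceed) {
        double[] probs = new double[4];           // e.g., north, south, east, west
        double slip = (1.0 - probSucceed) / 3.0;  // mass given to each other direction
        for (int d = 0; d < 4; d++) {
            probs[d] = (d == chosenDirection) ? probSucceed : slip;
        }
        return probs;
    }
}
```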
setqInitFunction(ValueFunctionInitialization) - Method in class burlap.behavior.singleagent.interfaces.rlglue.common.RLGLueQlearningFactory
 
setQInitFunction(ValueFunctionInitialization) - Method in class burlap.behavior.singleagent.learning.tdmethods.QLearning
Sets how to initialize Q-values for previously unexperienced state-action pairs.
setQSourceMap(Map<String, QSourceForSingleAgent>) - Method in class burlap.behavior.stochasticgame.mavaluefunction.AgentQSourceMap.HashMapAgentQSourceMap
Sets the Q-source hash map to be used.
setQSourceProvider(MultiAgentQSourceProvider) - Method in class burlap.behavior.stochasticgame.mavaluefunction.MAQSourcePolicy
Sets the MultiAgentQSourceProvider that will be used to define this object's joint policy.
setQSourceProvider(MultiAgentQSourceProvider) - Method in class burlap.behavior.stochasticgame.mavaluefunction.policies.ECorrelatedQJointPolicy
 
setQSourceProvider(MultiAgentQSourceProvider) - Method in class burlap.behavior.stochasticgame.mavaluefunction.policies.EGreedyJointPolicy
 
setQSourceProvider(MultiAgentQSourceProvider) - Method in class burlap.behavior.stochasticgame.mavaluefunction.policies.EGreedyMaxWellfare
 
setQSourceProvider(MultiAgentQSourceProvider) - Method in class burlap.behavior.stochasticgame.mavaluefunction.policies.EMinMaxPolicy
 
setQValueInitializer(ValueFunctionInitialization) - Method in class burlap.behavior.stochasticgame.agents.naiveq.history.SGQWActionHistoryFactory
Sets the Q-value initialization function that will be used by the agent.
setQValueInitializer(ValueFunctionInitialization) - Method in class burlap.behavior.stochasticgame.agents.naiveq.SGNaiveQLAgent
 
setRandom(Random) - Method in class burlap.datastructures.StochasticTree
Sets the tree to use a specific random object when performing sampling
setRandomGenerator(Random) - Method in class burlap.behavior.singleagent.Policy.RandomPolicy
Sets the random generator used for action selection.
setRandomObject(Random) - Method in class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator
Sets the random object used for generating states
setRefreshDelay(int) - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
Sets the delay in milliseconds between automatic refreshes of the plots
setRefreshDelay(int) - Method in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter
Sets the delay in milliseconds between automatic refreshes of the plots
setRenderStyle(PolicyGlyphPainter2D.PolicyGlyphRenderStyle) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.PolicyGlyphPainter2D
Sets the rendering style
setRequest(MLIRLRequest) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.MLIRL
Sets the MLIRLRequest object defining the IRL problem.
setReward(int, int, double) - Method in class burlap.domain.singleagent.gridworld.GridWorldRewardFunction
Sets the reward the agent will receive for transitioning to position x, y
setRewardFunction(RewardFunction) - Method in class burlap.behavior.singleagent.learning.actorcritic.critics.TDLambda
Sets the reward function to use.
setRewardFunction(RewardFunction) - Method in class burlap.oomdp.singleagent.explorer.TerminalExplorer
 
setRewardFunction(JointReward) - Method in class burlap.oomdp.stochasticgames.explorers.SGTerminalExplorer
Allows the explorer to keep track of the reward received that will be printed to the output.
setRewardFunction(JointReward) - Method in class burlap.oomdp.stochasticgames.explorers.SGVisualExplorer
 
setRf(DifferentiableRF) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.MLIRLRequest
 
setRf(RewardFunction) - Method in class burlap.behavior.singleagent.planning.OOMDPPlanner
Sets the reward function used by this planner
setRfDim(int) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
 
setRfFeaturesAreForNextState(boolean) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
 
setRfFvGen(StateToFeatureVectorGenerator) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
 
setRollOutPolicy(Policy) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.RTDP
Sets the rollout policy to use.
setRunRolloutsInRevere(boolean) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
Sets whether each rollout should be run in reverse after completion.
setSamples(List<State>) - Method in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
Sets the state samples to which the value function will be fit.
setSemiWallPassableProbability(double) - Method in class burlap.domain.stochasticgames.gridgame.GridGame
Sets the probability that an agent can pass through a semi-wall.
setSetRenderLayer(StateRenderLayer) - Method in class burlap.oomdp.visualizer.Visualizer
 
setSignificanceForCI(double) - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
Sets the significance used for confidence intervals.
setSignificanceForCI(double) - Method in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter
Sets the significance used for confidence intervals.
setSoftTieRenderStyleDelta(double) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.PolicyGlyphPainter2D
Sets the soft difference between max actions to determine ties when the MAXACTIONSOFSOFTTIE render style is used.
setSpp(StatePolicyPainter) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.PolicyRenderLayer
Sets the state-wise policy painter
setSpp(StatePolicyPainter) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
Sets the state-wise policy painter
setStartStateGenerator(StateGenerator) - Method in class burlap.behavior.singleagent.learnbydemo.apprenticeship.ApprenticeshipLearningRequest
 
setStateContext(State) - Method in interface burlap.behavior.singleagent.planning.StateConditionTestIterable
 
setStateMapping(StateMapping) - Method in class burlap.behavior.singleagent.options.Option
Sets this option to use a state mapping that maps from the source MDP states to another state representation that will be used by this option for making action selections.
setStateSelectionMode(BoundedRTDP.StateSelectionMode) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
Sets the state selection mode used when choosing next states to expand.
setStatesToVisualize(Collection<State>) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionRenderLayer
Sets the states to visualize
setStateValuesToVisualize(Collection<State>) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.PolicyRenderLayer
Sets the states to visualize
setStoredAbstraction(StateAbstraction) - Method in class burlap.behavior.stochasticgame.agents.naiveq.SGNaiveQFactory
Sets the factory to provide Q-learning algorithms with the given state abstraction.
setStoredMapAbstraction(StateAbstraction) - Method in class burlap.behavior.stochasticgame.agents.naiveq.SGNaiveQLAgent
Sets the state abstraction that this agent will use
setStrategy(Policy) - Method in class burlap.behavior.stochasticgame.agents.naiveq.SGNaiveQLAgent
Sets the Q-learning policy that this agent will use (e.g., epsilon greedy)
SetStrategyAgent - Class in burlap.behavior.stochasticgame.agents
A class for an agent who makes decisions by following a specified strategy and does not respond to the other player's actions.
SetStrategyAgent(SGDomain, Policy) - Constructor for class burlap.behavior.stochasticgame.agents.SetStrategyAgent
Initializes for the given domain in which the agent will play and the strategy that they will follow.
SetStrategyAgent.SetStrategyAgentFactory - Class in burlap.behavior.stochasticgame.agents
 
SetStrategyAgent.SetStrategyAgentFactory(SGDomain, Policy) - Constructor for class burlap.behavior.stochasticgame.agents.SetStrategyAgent.SetStrategyAgentFactory
 
setSvp(StateValuePainter) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionRenderLayer
Sets the state-wise value function painter
setSvp(StateValuePainter) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
Sets the state-wise value function painter
setSynchronizeJointActionSelectionAmongAgents(boolean) - Method in class burlap.behavior.stochasticgame.PolicyFromJointPolicy
Sets whether action selection of this agent's policy should be synchronized with the action selection of other agents following the same underlying joint policy.
setTargetAgent(String) - Method in class burlap.behavior.stochasticgame.JointPolicy
Sets the target privileged agent from which this joint policy is defined.
setTargetAgent(String) - Method in class burlap.behavior.stochasticgame.mavaluefunction.policies.ECorrelatedQJointPolicy
 
setTargetAgent(String) - Method in class burlap.behavior.stochasticgame.mavaluefunction.policies.EGreedyJointPolicy
 
setTargetAgent(String) - Method in class burlap.behavior.stochasticgame.mavaluefunction.policies.EGreedyMaxWellfare
 
setTargetAgent(String) - Method in class burlap.behavior.stochasticgame.mavaluefunction.policies.EMinMaxPolicy
 
setTemperature(double) - Method in class burlap.datastructures.BoltzmannDistribution
Sets the temperature value to use.
setTerminalFunction(TerminalFunction) - Method in class burlap.oomdp.singleagent.explorer.VisualExplorer
 
setTerminalFunction(TerminalFunction) - Method in class burlap.oomdp.stochasticgames.explorers.SGTerminalExplorer
 
setTerminalFunction(TerminalFunction) - Method in class burlap.oomdp.stochasticgames.explorers.SGVisualExplorer
 
setTerminalFunctionf(TerminalFunction) - Method in class burlap.oomdp.singleagent.explorer.TerminalExplorer
 
setTerminalStates(Set<Integer>) - Method in class burlap.domain.singleagent.graphdefined.GraphTF
 
setTerminateMapper(DirectOptionTerminateMapper) - Method in class burlap.behavior.singleagent.options.Option
Sets this option to determine its execution results using a direct terminal state mapping rather than actually executing each action selected by the option step by step.
setTerminateOnTrue(boolean) - Method in class burlap.oomdp.singleagent.common.SinglePFTF
Sets whether being a terminal state requires a true grounded version of this class's propositional function, or a false grounded version.
setTf(TerminalFunction) - Method in class burlap.behavior.singleagent.planning.deterministic.TFGoalCondition
Sets the TerminalFunction used to specify the goal condition.
setTf(TerminalFunction) - Method in class burlap.behavior.singleagent.planning.OOMDPPlanner
Sets the terminal state function used by this planner
setTHistory(double[]) - Method in class burlap.behavior.singleagent.learnbydemo.apprenticeship.ApprenticeshipLearningRequest
 
setThrustValue(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.ActionThrust
 
setToCorrectModel() - Method in class burlap.domain.singleagent.cartpole.CartPoleDomain
Sets to use the correct physics model by Florian.
setToIncorrectClassicModel() - Method in class burlap.domain.singleagent.cartpole.CartPoleDomain
Sets to use the classic model by Barto, Sutton, and Anderson, which has incorrect friction forces and gravity in the wrong direction
setToIncorrectClassicModelWithCorrectGravity() - Method in class burlap.domain.singleagent.cartpole.CartPoleDomain
Sets to use the classic model by Barto, Sutton, and Anderson which has incorrect friction forces, but will use correct gravity.
setToStandardLunarLander() - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
Sets the domain to use a standard set of physics and with a standard set of two thrust actions.
gravity = -0.2
xmin = 0
xmax = 100
ymin = 0
ymax = 50
max velocity component speed = 4
maximum angle of rotation = pi/4
change in angle from turning = pi/20
thrust1 force = 0.32
thrust2 force = 0.2 (opposite gravity)
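Under these standard parameters, thrust2 exactly opposes gravity (zero net vertical acceleration when fired), while thrust1 accelerates the lander upward. The small sketch below works through that arithmetic; the constants are copied from the list above, and the class and method names are illustrative.

```java
public class StandardLanderPhysics {
    // Constants from the standard lunar lander configuration above.
    static final double GRAVITY = -0.2;
    static final double THRUST1 = 0.32;
    static final double THRUST2 = 0.2;  // chosen to be opposite gravity

    // Net vertical acceleration when a given thrust force is applied:
    // the thrust force plus the (negative) gravitational acceleration.
    static double netVerticalAccel(double thrust) {
        return thrust + GRAVITY;
    }
}
```

Because thrust2 only cancels gravity, an agent must use thrust1 to gain altitude; thrust2 lets it hover.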
setTrackingRewardFunction(RewardFunction) - Method in class burlap.oomdp.singleagent.explorer.VisualExplorer
 
setTransition(int, int, int, double) - Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain
Sets the probability p for transitioning to state node tNode after taking action number action in state node srcNode.
setTransitionDynamics(Map<Integer, Map<Integer, Set<GraphDefinedDomain.NodeTransitionProbibility>>>) - Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain.GraphAction
 
setTransitionDynamics(double[][]) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
Will set the movement direction probabilities based on the action chosen.
setType(Attribute.AttributeType) - Method in class burlap.oomdp.core.Attribute
Sets the type for this attribute.
setup() - Method in class burlap.testing.TestGridWorld
 
setup() - Method in class burlap.testing.TestPlanning
 
setupForNewEpisode() - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.Trial
Completes the last episode and sets up the data structures for the next episode
setupForNewEpisode() - Method in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter.Trial
Completes the last episode and sets up the data structures for the next episode
setUpPlottingConfiguration(int, int, int, int, TrialMode, PerformanceMetric...) - Method in class burlap.behavior.singleagent.auxiliary.performance.LearningAlgorithmExperimenter
Sets up the plotting configuration.
setUpPlottingConfiguration(int, int, int, int, TrialMode, PerformanceMetric...) - Method in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentExperimenter
Sets up the plotting configuration.
setUseFeatureWiseLearningRate(boolean) - Method in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
Sets whether learning rate polls should be based on the VFA state feature ids, or the OO-MDP state.
setUseMaxHeap(boolean) - Method in class burlap.datastructures.HashIndexedHeap
Sets whether this heap is a max heap or a min heap
setUseReplaceTraces(boolean) - Method in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
Sets whether to use replacing eligibility traces rather than accumulating traces.
setUseSemiDeep(boolean) - Method in class burlap.domain.singleagent.blockdude.BlockDude
Sets whether generated domain's actions use semi-deep state copies or full deep copies.
setUseVariableCSize(boolean) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.DifferentiableSparseSampling
Sets whether the number of state transition samples (C) should be variable with respect to the depth of the node.
setUseVariableCSize(boolean) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
Sets whether the number of state transition samples (C) should be variable with respect to the depth of the node.
setUsingMaxMargin(boolean) - Method in class burlap.behavior.singleagent.learnbydemo.apprenticeship.ApprenticeshipLearningRequest
 
setV(DifferentiableSparseSampling.QAndQGradient) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.DifferentiableSparseSampling.DiffStateNode
 
setValue(StateHashTuple, double) - Method in class burlap.behavior.stochasticgame.mavaluefunction.MAValueFunctionPlanner.BackupBasedQSource
Sets the value of the state in this objects value function map.
setValue(String, String) - Method in class burlap.oomdp.core.ObjectInstance
Sets the value of the attribute named attName for this object instance.
setValue(String, double) - Method in class burlap.oomdp.core.ObjectInstance
Sets the value of the attribute named attName for this object instance.
setValue(String, int) - Method in class burlap.oomdp.core.ObjectInstance
Sets the value of the attribute named attName for this object instance.
setValue(String, boolean) - Method in class burlap.oomdp.core.ObjectInstance
Sets the value of the attribute named attName for this object instance.
setValue(String, int[]) - Method in class burlap.oomdp.core.ObjectInstance
Sets the value of the attribute named attName for this object instance.
setValue(String, double[]) - Method in class burlap.oomdp.core.ObjectInstance
Sets the value of the attribute named attName for this object instance.
setValue(int) - Method in class burlap.oomdp.core.Value
Sets the internal value representation using an int value
setValue(double) - Method in class burlap.oomdp.core.Value
Sets the internal value representation using a double value
setValue(String) - Method in class burlap.oomdp.core.Value
Sets the internal value representation using a string value
setValue(boolean) - Method in class burlap.oomdp.core.Value
Sets the internal value representation using a boolean value
setValue(int[]) - Method in class burlap.oomdp.core.Value
Sets the int array value.
setValue(double[]) - Method in class burlap.oomdp.core.Value
Sets the double array value.
setValue(int) - Method in class burlap.oomdp.core.values.DiscreteValue
 
setValue(double) - Method in class burlap.oomdp.core.values.DiscreteValue
 
setValue(boolean) - Method in class burlap.oomdp.core.values.DiscreteValue
 
setValue(String) - Method in class burlap.oomdp.core.values.DiscreteValue
 
setValue(int[]) - Method in class burlap.oomdp.core.values.DiscreteValue
 
setValue(double[]) - Method in class burlap.oomdp.core.values.DiscreteValue
 
setValue(int) - Method in class burlap.oomdp.core.values.DoubleArrayValue
 
setValue(double) - Method in class burlap.oomdp.core.values.DoubleArrayValue
 
setValue(String) - Method in class burlap.oomdp.core.values.DoubleArrayValue
 
setValue(int[]) - Method in class burlap.oomdp.core.values.DoubleArrayValue
 
setValue(double[]) - Method in class burlap.oomdp.core.values.DoubleArrayValue
 
setValue(boolean) - Method in class burlap.oomdp.core.values.DoubleArrayValue
 
setValue(int) - Method in class burlap.oomdp.core.values.IntArrayValue
 
setValue(double) - Method in class burlap.oomdp.core.values.IntArrayValue
 
setValue(String) - Method in class burlap.oomdp.core.values.IntArrayValue
 
setValue(int[]) - Method in class burlap.oomdp.core.values.IntArrayValue
 
setValue(double[]) - Method in class burlap.oomdp.core.values.IntArrayValue
 
setValue(boolean) - Method in class burlap.oomdp.core.values.IntArrayValue
 
setValue(int) - Method in class burlap.oomdp.core.values.IntValue
 
setValue(double) - Method in class burlap.oomdp.core.values.IntValue
 
setValue(String) - Method in class burlap.oomdp.core.values.IntValue
 
setValue(boolean) - Method in class burlap.oomdp.core.values.IntValue
 
setValue(int[]) - Method in class burlap.oomdp.core.values.IntValue
 
setValue(double[]) - Method in class burlap.oomdp.core.values.IntValue
 
setValue(int) - Method in class burlap.oomdp.core.values.MultiTargetRelationalValue
 
setValue(double) - Method in class burlap.oomdp.core.values.MultiTargetRelationalValue
 
setValue(String) - Method in class burlap.oomdp.core.values.MultiTargetRelationalValue
 
setValue(boolean) - Method in class burlap.oomdp.core.values.MultiTargetRelationalValue
 
setValue(int[]) - Method in class burlap.oomdp.core.values.MultiTargetRelationalValue
 
setValue(double[]) - Method in class burlap.oomdp.core.values.MultiTargetRelationalValue
 
setValue(int) - Method in class burlap.oomdp.core.values.RealValue
 
setValue(double) - Method in class burlap.oomdp.core.values.RealValue
 
setValue(String) - Method in class burlap.oomdp.core.values.RealValue
 
setValue(boolean) - Method in class burlap.oomdp.core.values.RealValue
 
setValue(int[]) - Method in class burlap.oomdp.core.values.RealValue
 
setValue(double[]) - Method in class burlap.oomdp.core.values.RealValue
 
setValue(int) - Method in class burlap.oomdp.core.values.RelationalValue
 
setValue(double) - Method in class burlap.oomdp.core.values.RelationalValue
 
setValue(boolean) - Method in class burlap.oomdp.core.values.RelationalValue
 
setValue(String) - Method in class burlap.oomdp.core.values.RelationalValue
 
setValue(int[]) - Method in class burlap.oomdp.core.values.RelationalValue
 
setValue(double[]) - Method in class burlap.oomdp.core.values.RelationalValue
 
setValue(int) - Method in class burlap.oomdp.core.values.StringValue
 
setValue(double) - Method in class burlap.oomdp.core.values.StringValue
 
setValue(String) - Method in class burlap.oomdp.core.values.StringValue
 
setValue(boolean) - Method in class burlap.oomdp.core.values.StringValue
 
setValue(int[]) - Method in class burlap.oomdp.core.values.StringValue
 
setValue(double[]) - Method in class burlap.oomdp.core.values.StringValue
 
setValueForLeafNodes(ValueFunctionInitialization) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.DifferentiableSparseSampling
Sets the ValueFunctionInitialization object to use for setting the value of leaf nodes.
setValueForLeafNodes(ValueFunctionInitialization) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
Sets the ValueFunctionInitialization object to use for setting the value of leaf nodes.
setValueFunctionInitialization(ValueFunctionInitialization) - Method in class burlap.behavior.singleagent.planning.ValueFunctionPlanner
Sets the value function initialization to use.
setValueFunctionToLowerBound() - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
Sets the value function to use to be the lower bound.
setValueFunctionToUpperBound() - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
Sets the value function to use to be the upper bound.
setValueStringRenderingFormat(int, Color, int, float, float) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.StateValuePainter2D
Sets the rendering format of the string displaying the value of each state.
setVerticalWall(State, int, int, int, int, int) - Static method in class burlap.domain.stochasticgames.gridgame.GridGame
Sets the attribute values for a vertical wall
setVGrad(DifferentiableSparseSampling.QAndQGradient) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.DifferentiableSparseSampling.DiffStateNode
 
setVInit(ValueFunctionInitialization) - Method in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
Sets the value function initialization used at the start of planning.
setVinitDim(int) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
 
setVinitFvGen(StateToFeatureVectorGenerator) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
 
setVmax(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
 
setVmax(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
Sets the maximum velocity of the agent (the agent cannot move faster than this value).
setVmax(double) - Method in class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator
Sets the maximum velocity that a generated state can have.
setVmin(double) - Method in class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator
Sets the minimum velocity that a generated state can have.
setVRange(double, double) - Method in class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator
Sets the random velocity range that a generated state can have.
setWallInstance(ObjectInstance, int, int, int, int) - Static method in class burlap.domain.stochasticgames.gridgame.GridGame
Sets the attribute values for a wall instance
setWeight(int, double) - Method in class burlap.behavior.singleagent.vfa.common.LinearFVVFA
 
setWeight(int, double) - Method in class burlap.behavior.singleagent.vfa.common.LinearVFA
 
setWeight(double) - Method in class burlap.behavior.singleagent.vfa.FunctionWeight
Sets the weight
setWeight(int, double) - Method in interface burlap.behavior.singleagent.vfa.ValueFunctionApproximation
Sets the weight for a feature
setXmax(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
 
setXmax(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
Sets the maximum x position of the lander (the agent cannot cross this boundary)
setXmax(double) - Method in class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator
Sets the maximum x-value that a generated state can have.
setXmin(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
 
setXmin(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
Sets the minimum x position of the lander (the agent cannot cross this boundary)
setXmin(double) - Method in class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator
Sets the minimum x-value that a generated state can have.
setXRange(double, double) - Method in class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator
Sets the random x-value range that a generated state can have.
setXYAttByObjectClass(String, String, String, String) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.PolicyGlyphPainter2D
Will set the x-y attributes to use for cell rendering to the x y attributes of the first object in the state of the designated classes.
setXYAttByObjectClass(String, String, String, String) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.StateValuePainter2D
Will set the x-y attributes to use for cell rendering to the x y attributes of the first object in the state of the designated classes.
setXYAttByObjectReference(String, String, String, String) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.PolicyGlyphPainter2D
Will set the x-y attributes to use for cell rendering to the x y attributes of the designated object references.
setXYAttByObjectReference(String, String, String, String) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.StateValuePainter2D
Will set the x-y attributes to use for cell rendering to the x y attributes of the designated object references.
setYmax(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
 
setYmax(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
Sets the maximum y position of the lander (the agent cannot cross this boundary)
setYmin(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
 
setYmin(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
Sets the minimum y position of the lander (the agent cannot cross this boundary)
sg - Variable in class burlap.behavior.singleagent.auxiliary.performance.LearningAlgorithmExperimenter
The state generator used to generate states at the beginning of each episode
SGBackupOperator - Interface in burlap.behavior.stochasticgame.mavaluefunction
A stochastic games backup operator to be used in multi-agent Q-learning or value function planning.
SGDomain - Class in burlap.oomdp.stochasticgames
This class is used to define Stochastic Games Domains.
SGDomain() - Constructor for class burlap.oomdp.stochasticgames.SGDomain
 
SGNaiveQFactory - Class in burlap.behavior.stochasticgame.agents.naiveq
An agent factory that produces SGNaiveQLAgents.
SGNaiveQFactory(SGDomain, double, double, double, StateHashFactory) - Constructor for class burlap.behavior.stochasticgame.agents.naiveq.SGNaiveQFactory
Initializes the factory.
SGNaiveQFactory(SGDomain, double, double, double, StateHashFactory, StateAbstraction) - Constructor for class burlap.behavior.stochasticgame.agents.naiveq.SGNaiveQFactory
Initializes the factory.
SGNaiveQLAgent - Class in burlap.behavior.stochasticgame.agents.naiveq
A Tabular Q-learning [1] algorithm for stochastic games formalisms.
SGNaiveQLAgent(SGDomain, double, double, StateHashFactory) - Constructor for class burlap.behavior.stochasticgame.agents.naiveq.SGNaiveQLAgent
Initializes with a default Q-value of 0 and a 0.1 epsilon greedy policy/strategy
SGNaiveQLAgent(SGDomain, double, double, double, StateHashFactory) - Constructor for class burlap.behavior.stochasticgame.agents.naiveq.SGNaiveQLAgent
Initializes with a default 0.1 epsilon greedy policy/strategy
SGNaiveQLAgent(SGDomain, double, double, ValueFunctionInitialization, StateHashFactory) - Constructor for class burlap.behavior.stochasticgame.agents.naiveq.SGNaiveQLAgent
Initializes with a default 0.1 epsilon greedy policy/strategy
SGQWActionHistory - Class in burlap.behavior.stochasticgame.agents.naiveq.history
A Tabular Q-learning [1] algorithm for stochastic games formalisms that augments states with the actions each agent took in n previous time steps.
SGQWActionHistory(SGDomain, double, double, StateHashFactory, int, int, ActionIdMap) - Constructor for class burlap.behavior.stochasticgame.agents.naiveq.history.SGQWActionHistory
Initializes the learning algorithm using a 0.1 epsilon greedy learning strategy/policy
SGQWActionHistory(SGDomain, double, double, StateHashFactory, int) - Constructor for class burlap.behavior.stochasticgame.agents.naiveq.history.SGQWActionHistory
Initializes the learning algorithm using a 0.1 epsilon greedy learning strategy/policy
SGQWActionHistoryFactory - Class in burlap.behavior.stochasticgame.agents.naiveq.history
An agent factory for Q-learning with history agents.
SGQWActionHistoryFactory(SGDomain, double, double, StateHashFactory, int, int, ActionIdMap) - Constructor for class burlap.behavior.stochasticgame.agents.naiveq.history.SGQWActionHistoryFactory
Initializes the factory
SGQWActionHistoryFactory(SGDomain, double, double, StateHashFactory, int) - Constructor for class burlap.behavior.stochasticgame.agents.naiveq.history.SGQWActionHistoryFactory
Initializes the factory
SGStateGenerator - Class in burlap.oomdp.stochasticgames
An abstract class defining the interface and common mechanism for generating State objects specifically for stochastic games domains.
SGStateGenerator() - Constructor for class burlap.oomdp.stochasticgames.SGStateGenerator
 
SGTerminalExplorer - Class in burlap.oomdp.stochasticgames.explorers
This class allows you act as all of the agents in a domain by choosing actions for each of them to take in specific states.
SGTerminalExplorer(SGDomain, JointActionModel) - Constructor for class burlap.oomdp.stochasticgames.explorers.SGTerminalExplorer
Deprecated.
SGTerminalExplorer(SGDomain) - Constructor for class burlap.oomdp.stochasticgames.explorers.SGTerminalExplorer
Initializes the explorer with a domain and action model
SGTerminalExplorer(SGDomain, JointActionModel, Map<String, String>) - Constructor for class burlap.oomdp.stochasticgames.explorers.SGTerminalExplorer
Deprecated.
SGTerminalExplorer(SGDomain, Map<String, String>) - Constructor for class burlap.oomdp.stochasticgames.explorers.SGTerminalExplorer
Initializes the explorer with a domain and action model and shorthand names for actions
SGToSADomain - Class in burlap.behavior.stochasticgame.agents.interfacing.singleagent
This domain generator is used to produce a single agent domain version of a stochastic games domain, either for an agent of a given type (specified by an AgentType object) or for a given list of stochastic games single actions (SingleAction).
SGToSADomain(SGDomain, AgentType, SingleAgentInterface) - Constructor for class burlap.behavior.stochasticgame.agents.interfacing.singleagent.SGToSADomain
Initializes.
SGToSADomain(SGDomain, List<SingleAction>, SingleAgentInterface) - Constructor for class burlap.behavior.stochasticgame.agents.interfacing.singleagent.SGToSADomain
Initializes.
SGToSADomain.SAActionWrapper - Class in burlap.behavior.stochasticgame.agents.interfacing.singleagent
A single agent action wrapper for a stochastic game action.
SGToSADomain.SAActionWrapper(SingleAction) - Constructor for class burlap.behavior.stochasticgame.agents.interfacing.singleagent.SGToSADomain.SAActionWrapper
Initializes for a given stochastic games action.
SGVisualExplorer - Class in burlap.oomdp.stochasticgames.explorers
This class allows you act as all of the agents in a stochastic game by choosing actions for each of them to take in specific states.
SGVisualExplorer(SGDomain, Visualizer, State, JointActionModel) - Constructor for class burlap.oomdp.stochasticgames.explorers.SGVisualExplorer
Deprecated.
SGVisualExplorer(SGDomain, Visualizer, State) - Constructor for class burlap.oomdp.stochasticgames.explorers.SGVisualExplorer
Initializes the data members for the visual explorer.
SGVisualExplorer(SGDomain, Visualizer, State, JointActionModel, int, int) - Constructor for class burlap.oomdp.stochasticgames.explorers.SGVisualExplorer
Deprecated.
SGVisualExplorer(SGDomain, Visualizer, State, int, int) - Constructor for class burlap.oomdp.stochasticgames.explorers.SGVisualExplorer
Initializes the data members for the visual explorer.
sh - Variable in class burlap.behavior.singleagent.learning.actorcritic.critics.TDLambda.StateEligibilityTrace
The hashed state with which the eligibility value is associated.
sh - Variable in class burlap.behavior.singleagent.learning.tdmethods.SarsaLam.EligibilityTrace
The state for this trace
sh - Variable in class burlap.behavior.singleagent.planning.HashedTransitionProbability
 
sh - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP.StateSelectionAndExpectedGap
The selected state
sh - Variable in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling.HashedHeightState
The hashed state
sh - Variable in class burlap.behavior.singleagent.planning.stochastic.valueiteration.PrioritizedSweeping.BPTRNode
 
shape - Variable in class burlap.domain.singleagent.gridworld.GridWorldVisualizer.CellPainter
 
ShapedRewardFunction - Class in burlap.behavior.singleagent.shaping
This abstract class is used to define shaped reward functions.
ShapedRewardFunction(RewardFunction) - Constructor for class burlap.behavior.singleagent.shaping.ShapedRewardFunction
Initializes with the base objective task reward function.
shouldAnnotateExecution - Variable in class burlap.behavior.singleagent.options.Option
Boolean indicating whether the last option execution recording annotates the selected actions with this option's name
shouldAnnotateOptions - Variable in class burlap.behavior.singleagent.learning.tdmethods.QLearning
Whether decomposed options should have their primitive actions annotated with the option's name in the returned EpisodeAnalysis objects.
shouldAnnotateOptions - Variable in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
Whether decomposed options should have their primitive actions annotated with the option's name in the returned EpisodeAnalysis objects.
shouldDecomposeOptions - Variable in class burlap.behavior.singleagent.learning.tdmethods.QLearning
Whether options should be decomposed into actions in the returned EpisodeAnalysis objects.
shouldDecomposeOptions - Variable in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
Whether options should be decomposed into actions in the returned EpisodeAnalysis objects.
shouldRecordResults - Variable in class burlap.behavior.singleagent.options.Option
Boolean indicating whether the last option execution result should be saved
shouldRereunPolicyIteration(EpisodeAnalysis) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
Returns whether LSPI should be rerun given the latest learning episode results.
shouldRescaleValues - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.StateValuePainter
Indicates whether this painter should scale its rendering of values to whatever it is told the minimum and maximum values are.
showPolicy - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
The button to enable the visualization of the policy
shuffleGroundedActions(List<GroundedAction>, int, int) - Method in class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.DFS
Shuffles the order of actions on the index range [s, e)
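Shuffling only the index range [s, e) of a list, as the method above does for grounded actions, is a Fisher-Yates shuffle restricted to a subrange. A plain-Java sketch (the class and method names are hypothetical, not BURLAP's):

```java
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class SubrangeShuffle {
    /** Shuffles list elements on the index range [s, e) in place (Fisher-Yates). */
    public static <T> void shuffleRange(List<T> list, int s, int e, Random rng) {
        for (int i = e - 1; i > s; i--) {
            int j = s + rng.nextInt(i - s + 1); // uniform index in [s, i]
            Collections.swap(list, i, j);
        }
    }
}
```

Elements outside [s, e) are never touched, and the elements inside remain a permutation of the originals.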
significance - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
the significance level used for confidence intervals.
significance - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter
the significance level used for confidence intervals.
SingleAction - Class in burlap.oomdp.stochasticgames
A single action is an action specification for individual agents in a stochastic game.
SingleAction(SGDomain, String) - Constructor for class burlap.oomdp.stochasticgames.SingleAction
Initializes this single action to be for the given domain and with the given name.
SingleAction(SGDomain, String, String[]) - Constructor for class burlap.oomdp.stochasticgames.SingleAction
Initializes this single action to be for the given domain, with the given name, and with the given parameter class types.
SingleAction(SGDomain, String, String[], String[]) - Constructor for class burlap.oomdp.stochasticgames.SingleAction
Initializes this single action to be for the given domain, with the given name, with the given parameter class types, and with the given parameter order groups.
singleActionMap - Variable in class burlap.oomdp.stochasticgames.SGDomain
A map from action names to their corresponding SingleAction
singleActions - Variable in class burlap.oomdp.stochasticgames.SGDomain
The full set of actions that could be taken by any agent.
SingleAgentInterface - Class in burlap.behavior.stochasticgame.agents.interfacing.singleagent
For a number of reasons outside the scope of this class description, BURLAP single agent learning algorithms use a different interface for interacting with the world than stochastic games agents do.
SingleAgentInterface(SGDomain, SALearningAgentFactoryForSG) - Constructor for class burlap.behavior.stochasticgame.agents.interfacing.singleagent.SingleAgentInterface
Initializes for a given stochastic games domain and a factory to produce the single agent learning object
SingleAgentInterface.MutableGroundedSingleAction - Class in burlap.behavior.stochasticgame.agents.interfacing.singleagent
A mutable grounded single action
SingleAgentInterface.MutableGroundedSingleAction() - Constructor for class burlap.behavior.stochasticgame.agents.interfacing.singleagent.SingleAgentInterface.MutableGroundedSingleAction
 
SingleAgentInterface.MutableState - Class in burlap.behavior.stochasticgame.agents.interfacing.singleagent
A mutable OO-MDP state wrapper
SingleAgentInterface.MutableState() - Constructor for class burlap.behavior.stochasticgame.agents.interfacing.singleagent.SingleAgentInterface.MutableState
 
SingleAgentInterface.SARFWrapper - Class in burlap.behavior.stochasticgame.agents.interfacing.singleagent
A reward function for returning the last RLGlue reward.
SingleAgentInterface.SARFWrapper() - Constructor for class burlap.behavior.stochasticgame.agents.interfacing.singleagent.SingleAgentInterface.SARFWrapper
 
SingleAgentInterface.SATFWrapper - Class in burlap.behavior.stochasticgame.agents.interfacing.singleagent
A terminal function that returns true when the last RLGlue state was terminal.
SingleAgentInterface.SATFWrapper() - Constructor for class burlap.behavior.stochasticgame.agents.interfacing.singleagent.SingleAgentInterface.SATFWrapper
 
SingleGoalPFRF - Class in burlap.oomdp.singleagent.common
This class defines a reward function that returns a goal reward when any grounded form of a propositional function is true in the resulting state and a default non-goal reward otherwise.
SingleGoalPFRF(PropositionalFunction) - Constructor for class burlap.oomdp.singleagent.common.SingleGoalPFRF
Initializes the reward function to return 1 when any grounded form of pf is true in the resulting state.
SingleGoalPFRF(PropositionalFunction, double, double) - Constructor for class burlap.oomdp.singleagent.common.SingleGoalPFRF
Initializes the reward function to return the specified goal reward when any grounded form of pf is true in the resulting state and the specified non-goal reward otherwise.
SinglePFSCT - Class in burlap.behavior.singleagent.planning
A state condition class that returns true whenever any grounded version of a specified propositional function is true in a state.
SinglePFSCT(PropositionalFunction) - Constructor for class burlap.behavior.singleagent.planning.SinglePFSCT
Initializes with the propositional function that is checked for state satisfaction
SinglePFTF - Class in burlap.oomdp.singleagent.common
This class defines a terminal function that terminates in states where there exists a grounded version of a specified propositional function that is true in the state or alternatively, when there is a grounded version that is false in the state.
SinglePFTF(PropositionalFunction) - Constructor for class burlap.oomdp.singleagent.common.SinglePFTF
Initializes the propositional function that will cause the state to be terminal when any grounded version of pf is true.
SinglePFTF(PropositionalFunction, boolean) - Constructor for class burlap.oomdp.singleagent.common.SinglePFTF
Initializes the propositional function that will cause the state to be terminal when any grounded version of pf is true or alternatively false.
SingleStageNormalFormGame - Class in burlap.domain.stochasticgames.normalform
This stochastic game domain generator provides methods to create N-player single stage games.
SingleStageNormalFormGame(String[][], double[][][]) - Constructor for class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
A constructor for bimatrix games with specified action names.
SingleStageNormalFormGame(double[][], double[][]) - Constructor for class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
A constructor for a bimatrix game where the row player payoffs and column player payoffs are provided in two different 2D double matrices.
SingleStageNormalFormGame(double[][]) - Constructor for class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
A constructor for a bimatrix zero sum game.
SingleStageNormalFormGame(String[][]) - Constructor for class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
A constructor for games with a symmetric number of actions for each player.
SingleStageNormalFormGame(List<List<String>>) - Constructor for class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
A constructor for games with an asymmetric number of actions for each player.
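The zero-sum constructor above takes only the row player's payoff matrix, since in a zero-sum bimatrix game the column player's payoffs are its negation. A minimal plain-Java sketch of that payoff rule (the class and method names are hypothetical, not BURLAP's):

```java
// Hypothetical zero-sum bimatrix game: the column player's payoff
// is the negation of the row player's payoff for every joint action.
public class ZeroSumBimatrix {
    private final double[][] rowPayoffs;

    public ZeroSumBimatrix(double[][] rowPayoffs) {
        this.rowPayoffs = rowPayoffs;
    }

    /** Row player's payoff for the joint action (rowAction, colAction). */
    public double rowPayoff(int rowAction, int colAction) {
        return rowPayoffs[rowAction][colAction];
    }

    /** Column player's payoff: the negation, so payoffs always sum to zero. */
    public double colPayoff(int rowAction, int colAction) {
        return -rowPayoffs[rowAction][colAction];
    }
}
```

For example, matching pennies is specified by the single row-player matrix {{1, -1}, {-1, 1}}.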
SingleStageNormalFormGame.ActionNameMap - Class in burlap.domain.stochasticgames.normalform
A wrapper for a HashMap from strings to ints used to map action names to their action index.
SingleStageNormalFormGame.ActionNameMap() - Constructor for class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.ActionNameMap
 
SingleStageNormalFormGame.AgentPayoutFunction - Class in burlap.domain.stochasticgames.normalform
A class for defining a payout function for a single agent for each possible strategy profile.
SingleStageNormalFormGame.AgentPayoutFunction() - Constructor for class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.AgentPayoutFunction
 
SingleStageNormalFormGame.NFGSingleAction - Class in burlap.domain.stochasticgames.normalform
A SingleAction class that uses the parent domain generator to determine which agent can take which actions and enforces that in the preconditions.
SingleStageNormalFormGame.NFGSingleAction(SGDomain, String, SingleStageNormalFormGame.ActionNameMap[]) - Constructor for class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.NFGSingleAction
 
SingleStageNormalFormGame.SingleStageNormalFormJointReward - Class in burlap.domain.stochasticgames.normalform
A Joint Reward Function class that uses the parent domain generator's payout matrix to determine payouts for any given strategy profile.
SingleStageNormalFormGame.SingleStageNormalFormJointReward(int, SingleStageNormalFormGame.ActionNameMap[], SingleStageNormalFormGame.AgentPayoutFunction[]) - Constructor for class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.SingleStageNormalFormJointReward
 
SingleStageNormalFormGame.StrategyProfile - Class in burlap.domain.stochasticgames.normalform
A strategy profile represented as an array of action indices that is hashable.
SingleStageNormalFormGame.StrategyProfile(int...) - Constructor for class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.StrategyProfile
 
size() - Method in class burlap.behavior.singleagent.learning.lspi.SARSData
The number of SARS tuples stored.
size() - Method in class burlap.datastructures.HashedAggregator
Returns the number of keys stored.
size - Variable in class burlap.datastructures.HashIndexedHeap
Number of objects in the heap
size() - Method in class burlap.datastructures.HashIndexedHeap
Returns the size of the heap
size() - Method in class burlap.datastructures.StochasticTree
Returns the number of objects in this tree
size() - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.ActionNameMap
 
size() - Method in class burlap.oomdp.stochasticgames.JointAction
Returns the number of actions in this joint action.
SIZEATTNAME - Static variable in class burlap.domain.singleagent.frostbite.FrostbiteDomain
Constant for the name of the size of a frozen platform
softTieDelta - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.PolicyGlyphPainter2D
The maximum probability difference from the most likely action within which a less likely action will still be rendered under the MAXACTIONSOFTTIE rendering style.
SoftTimeInverseDecayLR - Class in burlap.behavior.learningrate
Implements a learning rate decay schedule where the learning rate at time t is alpha_0 * (n_0 + 1) / (n_0 + t), where alpha_0 is the initial learning rate and n_0 is a parameter.
SoftTimeInverseDecayLR(double, double) - Constructor for class burlap.behavior.learningrate.SoftTimeInverseDecayLR
Initializes with an initial learning rate and decay constant shift for a state independent learning rate.
SoftTimeInverseDecayLR(double, double, double) - Constructor for class burlap.behavior.learningrate.SoftTimeInverseDecayLR
Initializes with an initial learning rate and decay constant shift (n_0) for a state independent learning rate that will decay to a value no smaller than minimumLearningRate.
SoftTimeInverseDecayLR(double, double, StateHashFactory, boolean) - Constructor for class burlap.behavior.learningrate.SoftTimeInverseDecayLR
Initializes with an initial learning rate and decay constant shift (n_0) for a state or state-action (or state feature-action) dependent learning rate.
SoftTimeInverseDecayLR(double, double, double, StateHashFactory, boolean) - Constructor for class burlap.behavior.learningrate.SoftTimeInverseDecayLR
Initializes with an initial learning rate and decay constant shift (n_0) for a state or state-action (or state feature-action) dependent learning rate that will decay to a value no smaller than minimumLearningRate. If this learning rate function is to be used for state features, rather than states, then the hashing factory can be null.
SoftTimeInverseDecayLR.MutableInt - Class in burlap.behavior.learningrate
A class for storing a mutable int value object
SoftTimeInverseDecayLR.MutableInt(int) - Constructor for class burlap.behavior.learningrate.SoftTimeInverseDecayLR.MutableInt
 
SoftTimeInverseDecayLR.StateWiseTimeIndex - Class in burlap.behavior.learningrate
A class for storing a time index for a state, or a time index for each action for a given state
SoftTimeInverseDecayLR.StateWiseTimeIndex() - Constructor for class burlap.behavior.learningrate.SoftTimeInverseDecayLR.StateWiseTimeIndex
 
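As a rough illustration (not BURLAP code), the schedule computed by SoftTimeInverseDecayLR can be sketched as a single function of the initial rate alpha_0, the shift n_0, and the time index t, with an optional minimum clamp; the class and method names below are hypothetical:

```java
// Minimal sketch, assuming only the formula documented above:
// alpha_t = alpha_0 * (n_0 + 1) / (n_0 + t), clamped below by min.
public class SoftDecaySketch {
    public static double learningRate(double alpha0, double n0, int t, double min) {
        // With n0 = 0 this reduces to alpha_0 / t; larger n0 slows the decay.
        double alpha = alpha0 * (n0 + 1.0) / (n0 + t);
        return Math.max(alpha, min);
    }
}
```

The state-dependent constructors keep a separate time index t per state (or per state-action), so each state's rate decays with its own visit count rather than global time.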
solve(double[][], double[][]) - Method in class burlap.behavior.stochasticgame.agents.twoplayer.singlestage.equilibriumplayer.BimatrixEquilibriumSolver
Solves and caches the solution for the given bimatrix.
solver - Variable in class burlap.behavior.stochasticgame.agents.twoplayer.singlestage.equilibriumplayer.EquilibriumPlayingAgent
The solution concept to be solved for the immediate rewards.
somePFGroundingIsTrue(State) - Method in class burlap.oomdp.core.PropositionalFunction
Returns true if there exists a GroundedProp for the provided State that is true in the State.
somePFGroundingIsTrue(PropositionalFunction) - Method in class burlap.oomdp.core.State
Deprecated.
sortActionsWithOptionsFirst() - Method in class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.DFS
Reorders the planner's action list so that options are in the front of the list.
sourceAction - Variable in class burlap.behavior.singleagent.learning.modellearning.ModeledDomainGenerator.ModeledAction
The source action this action models
sourceDomain - Variable in class burlap.behavior.singleagent.learning.modellearning.models.TabularModel
The source actual domain object for which actions will be modeled.
sourceLearningRateFunction - Variable in class burlap.behavior.singleagent.vfa.fourier.FourierBasisLearningRateWrapper
The source LearningRate function that is queried.
sourcePolicy - Variable in class burlap.behavior.singleagent.learning.modellearning.rmax.UnmodeledFavoredPolicy
 
sourcePolicy - Variable in class burlap.behavior.singleagent.planning.commonpolicies.CachedPolicy
The source policy that gets cached
sourceRF - Variable in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax.PotentialShapedRMaxRF
The source reward function
sp - Variable in class burlap.behavior.singleagent.EpisodeSequenceVisualizer
 
sp - Variable in class burlap.behavior.singleagent.learning.lspi.SARSData.SARS
The next state
sp - Variable in class burlap.behavior.stochasticgame.GameSequenceVisualizer
 
sp - Variable in class burlap.oomdp.singleagent.explorer.VisualExplorer.SaveEpisodeAction
The State parser used to save episodes
SparseSampling - Class in burlap.behavior.singleagent.planning.stochastic.sparsesampling
An implementation of the Sparse Sampling (SS) [1] planning algorithm.
SparseSampling(Domain, RewardFunction, TerminalFunction, double, StateHashFactory, int, int) - Constructor for class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
Initializes.
SparseSampling.HashedHeightState - Class in burlap.behavior.singleagent.planning.stochastic.sparsesampling
Tuple for a state and its height in a tree that can be hashed for quick retrieval.
SparseSampling.HashedHeightState(StateHashTuple, int) - Constructor for class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling.HashedHeightState
Initializes.
SparseSampling.StateNode - Class in burlap.behavior.singleagent.planning.stochastic.sparsesampling
A class for state nodes.
SparseSampling.StateNode(StateHashTuple, int) - Constructor for class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling.StateNode
Creates a node for the given hashed state at the given height
SpecialExplorerAction - Interface in burlap.oomdp.singleagent.explorer
An interface for defining special non-domain actions to take in a visual explorer.
specification - Variable in class burlap.behavior.singleagent.vfa.cmac.Tiling
A map from object class names to attribute tile specifications for attributes of that class
spp - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.PolicyRenderLayer
Painter used to visualize the policy
spp - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
Painter used to visualize the policy
sprime - Variable in class burlap.behavior.singleagent.learning.actorcritic.CritiqueResult
The state to which the agent transitioned for when it took action a in state s.
sPrimeActionFeatures - Variable in class burlap.behavior.singleagent.learning.lspi.LSPI.SSFeatures
Next state-action features.
src - Variable in class burlap.oomdp.auxiliary.common.ConstantStateGenerator
 
srcAction - Variable in class burlap.behavior.singleagent.options.PrimitiveOption
The primitive action this option wraps.
srcAction - Variable in class burlap.domain.singleagent.tabularized.TabulatedDomainWrapper.ActionWrapper
 
srcDomain - Variable in class burlap.oomdp.singleagent.environment.DomainEnvironmentWrapper
 
srcState - Variable in class burlap.oomdp.stochasticgames.common.ConstantSGStateGenerator
The source state that will be copied and returned by the ConstantSGStateGenerator.generateState(List) method.
srender - Variable in class burlap.oomdp.visualizer.Visualizer
The StateRenderLayer instance for visualizing OO-MDP states.
start() - Method in class burlap.debugtools.MyTimer
Starts the timer.
startExperiment() - Method in class burlap.behavior.singleagent.auxiliary.performance.LearningAlgorithmExperimenter
Starts the experiment and runs all trials for all agents.
startExperiment() - Method in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentExperimenter
Starts the experiment and runs all trials for all agents.
startGUI() - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
Launches the GUI and automatic refresh thread.
startGUI() - Method in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter
Launches the GUI and automatic refresh thread.
startNewAgent(String) - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
Informs the plotter that data collection for a new agent should begin.
startNewTrial() - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
Informs the plotter that a new trial of the current agent is beginning.
startNewTrial() - Method in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter.DatasetsAndTrials
Creates a new trial object and adds it to the end of the list of trials.
startNewTrial() - Method in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter
Initializes the datastructures for a new trial.
startStateGenerator - Variable in class burlap.behavior.singleagent.learnbydemo.apprenticeship.ApprenticeshipLearningRequest
The initial state generator that models the initial states from which the expert trajectories were drawn
state - Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTStateNode
The (hashed) state this node wraps
State - Class in burlap.oomdp.core
State objects are a collection of Object Instances.
State() - Constructor for class burlap.oomdp.core.State
 
State(State) - Constructor for class burlap.oomdp.core.State
Initializes this state as a deep copy of the object instances in the provided source state s
StateAbstraction - Interface in burlap.oomdp.auxiliary
An interface for taking an input state and returning a simpler abstracted state representation.
stateActionWeights - Variable in class burlap.behavior.singleagent.vfa.common.LinearFVVFA
The function weights when performing Q-value function approximation.
StateConditionTest - Interface in burlap.behavior.singleagent.planning
An interface for defining classes that check for certain conditions in states.
StateConditionTestIterable - Interface in burlap.behavior.singleagent.planning
An extension of the StateConditionTest that is iterable.
stateConsole - Variable in class burlap.oomdp.singleagent.explorer.VisualExplorer
 
stateConsole - Variable in class burlap.oomdp.stochasticgames.explorers.SGVisualExplorer
 
stateDepthIndex - Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
 
StateEnumerator - Class in burlap.behavior.singleagent.auxiliary
For some algorithms, it is useful to have an explicit unique state identifier for each possible state, since the hash code of a state cannot reliably give a unique number.
StateEnumerator(Domain, StateHashFactory) - Constructor for class burlap.behavior.singleagent.auxiliary.StateEnumerator
Constructs
StateFeature - Class in burlap.behavior.singleagent.vfa
A class for associating a state feature identifier with a value of that state feature
StateFeature(int, double) - Constructor for class burlap.behavior.singleagent.vfa.StateFeature
Initializes.
stateFeatures - Variable in class burlap.behavior.singleagent.vfa.ApproximationResult
The state features used to produce the predicted value.
stateFromObservation(Observation) - Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueWrappedDomainGenerator
Returns a state object for the domain of this generator that is a result of the RLGlue observation.
StateGenerator - Interface in burlap.oomdp.auxiliary
An interface for generating State objects.
stateGenerator - Variable in class burlap.oomdp.singleagent.interfaces.rlglue.RLGlueEnvironment
The state generator for generating states for each episode
StateGridder - Class in burlap.behavior.singleagent.auxiliary
This class is used for creating a grid of states over a state space domain.
StateGridder() - Constructor for class burlap.behavior.singleagent.auxiliary.StateGridder
 
StateGridder.AttributeSpecification - Class in burlap.behavior.singleagent.auxiliary
Class for specifying the grid along a single attribute.
StateGridder.AttributeSpecification(String, double, double, int) - Constructor for class burlap.behavior.singleagent.auxiliary.StateGridder.AttributeSpecification
Initializes.
StateGridder.AttributeSpecification(Attribute, int) - Constructor for class burlap.behavior.singleagent.auxiliary.StateGridder.AttributeSpecification
Initializes with the lower and upper values of the grid being set to the Attribute object's full domain (i.e., its Attribute.lowerLim and Attribute.upperLim data members).
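As a rough illustration (not BURLAP code) of what an AttributeSpecification describes, gridding a single numeric attribute amounts to choosing n evenly spaced sample points spanning [lower, upper] inclusive; the helper below is hypothetical:

```java
// Minimal sketch: n evenly spaced grid points over one attribute's range.
public class AttributeGridSketch {
    public static double[] gridPoints(double lower, double upper, int n) {
        double[] points = new double[n];
        if (n == 1) { points[0] = lower; return points; } // degenerate single-point grid
        double spacing = (upper - lower) / (n - 1);       // gap between adjacent points
        for (int i = 0; i < n; i++) {
            points[i] = lower + i * spacing;
        }
        return points;
    }
}
```

StateGridder then takes the cross product of such per-attribute grids to produce its grid of states.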
stateHash(State) - Method in class burlap.behavior.singleagent.planning.OOMDPPlanner
A shorthand method for hashing a state.
stateHash(State) - Method in class burlap.behavior.stochasticgame.agents.naiveq.history.SGQWActionHistory
 
stateHash - Variable in class burlap.behavior.stochasticgame.agents.naiveq.history.SGQWActionHistoryFactory
The state hashing factory the Q-learning algorithm will use
stateHash - Variable in class burlap.behavior.stochasticgame.agents.naiveq.SGNaiveQFactory
The state hashing factory the Q-learning algorithm will use
stateHash(State) - Method in class burlap.behavior.stochasticgame.agents.naiveq.SGNaiveQLAgent
First abstracts state s, and then returns the StateHashTuple object for the abstracted state.
StateHashFactory - Interface in burlap.behavior.statehashing
This interface is to be used by classes that can produce StateHashTuple objects, objects that provide hash values for State objects.
StateHashTuple - Class in burlap.behavior.statehashing
This class provides a hash value for State objects.
StateHashTuple(State) - Constructor for class burlap.behavior.statehashing.StateHashTuple
Initializes the StateHashTuple with the given State object.
StateJSONParser - Class in burlap.oomdp.auxiliary.common
A StateParser class that uses the JSON file format and can convert states to JSON strings (and back from them) for any possible input domain.
StateJSONParser(Domain) - Constructor for class burlap.oomdp.auxiliary.common.StateJSONParser
Initializes with a given domain object.
stateMapping - Variable in class burlap.behavior.singleagent.options.Option
An option state mapping to use to map from a source MDP state representation to a representation that this option will use for action selection.
StateMapping - Interface in burlap.behavior.singleagent.planning
A state mapping interface that maps one state into another state.
stateNodeConstructor - Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
 
stateNodes - Variable in class burlap.behavior.singleagent.learning.modellearning.models.TabularModel
A mapping from (hashed) states to state nodes that store transition statistics
StateParser - Interface in burlap.oomdp.auxiliary
This interface is used for converting states to parsable string representations and parsing those string representations back into states.
StatePolicyPainter - Interface in burlap.behavior.singleagent.auxiliary.valuefunctionvis
An interface for painting a representation of the policy for a specific state onto a 2D Graphics context.
StateReachability - Class in burlap.behavior.singleagent.auxiliary
This class provides methods for finding the set of reachable states from a source state.
StateReachability() - Constructor for class burlap.behavior.singleagent.auxiliary.StateReachability
 
StateRenderLayer - Class in burlap.oomdp.visualizer
This class provides 2D visualization of states by being provided a set of classes that can paint ObjectInstances to the canvas as well as classes that can paint general domain information.
StateRenderLayer() - Constructor for class burlap.oomdp.visualizer.StateRenderLayer
 
stateRepresentations - Variable in class burlap.behavior.stochasticgame.agents.naiveq.SGNaiveQLAgent
A map from hashed states to the internal state representation for the states stored in the q-table.
StateResetSpecialAction - Class in burlap.oomdp.singleagent.explorer
A special non-domain action that causes a visual explorer to reset the state to a specified base state or to a state drawn from a state generator.
StateResetSpecialAction(State) - Constructor for class burlap.oomdp.singleagent.explorer.StateResetSpecialAction
Initializes which base state to reset to
StateResetSpecialAction(StateGenerator) - Constructor for class burlap.oomdp.singleagent.explorer.StateResetSpecialAction
Initializes with a state generator to draw from on reset
states - Variable in class burlap.behavior.stochasticgame.GameAnalysis
The sequence of states
states - Variable in class burlap.behavior.stochasticgame.mavaluefunction.vfplanners.MAValueIteration
The set of states that have been found
stateSequence - Variable in class burlap.behavior.singleagent.EpisodeAnalysis
The sequence of states observed
statesToStateNodes - Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
 
statesToVisualize - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.PolicyRenderLayer
The states to visualize
statesToVisualize - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionRenderLayer
The states to visualize
statesToVisualize - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
 
stateTilings - Variable in class burlap.behavior.singleagent.vfa.cmac.CMACFeatureDatabase
For each tiling, a map from state tiles to an integer representing their feature identifier
StateToFeatureVectorGenerator - Interface in burlap.behavior.singleagent.vfa
Many function approximation techniques require a fixed feature vector to work, and in many cases, using abstract features from the state attributes is useful.
stateToString(State) - Method in class burlap.domain.singleagent.cartpole.CartPoleStateParser
 
stateToString(State) - Method in class burlap.domain.singleagent.cartpole.InvertedPendulumStateParser
 
stateToString(State) - Method in class burlap.domain.singleagent.frostbite.FrostbiteStateParser
 
stateToString(State) - Method in class burlap.domain.singleagent.gridworld.GridWorldStateParser
 
stateToString(State) - Method in class burlap.domain.singleagent.lunarlander.LLStateParser
 
stateToString(State) - Method in class burlap.domain.singleagent.mountaincar.MountainCarStateParser
 
stateToString(State) - Method in class burlap.oomdp.auxiliary.common.StateJSONParser
 
stateToString(State) - Method in class burlap.oomdp.auxiliary.common.StateYAMLParser
 
stateToString(State) - Method in class burlap.oomdp.auxiliary.common.UniversalStateParser
 
stateToString(State) - Method in interface burlap.oomdp.auxiliary.StateParser
Converts state s into a parsable string representation.
stateTransitionsAreModeled(State) - Method in class burlap.behavior.singleagent.learning.modellearning.Model
Indicates whether this model "knows" the transition dynamics from the given input state for all applicable actions.
stateTransitionsAreModeled(State) - Method in class burlap.behavior.singleagent.learning.modellearning.models.TabularModel
 
StateValuePainter - Class in burlap.behavior.singleagent.auxiliary.valuefunctionvis
An abstract class for defining the interface and common methods to paint the representation of the value function for a specific state onto a 2D graphics context.
StateValuePainter() - Constructor for class burlap.behavior.singleagent.auxiliary.valuefunctionvis.StateValuePainter
 
StateValuePainter2D - Class in burlap.behavior.singleagent.auxiliary.valuefunctionvis.common
A class for rendering the value of states as colored 2D cells on the canvas.
StateValuePainter2D() - Constructor for class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.StateValuePainter2D
Initializes using a LandmarkColorBlendInterpolation object that mixes from red (lowest value) to blue (highest value).
StateValuePainter2D(ColorBlend) - Constructor for class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.StateValuePainter2D
Initializes the value painter.
stateWeights - Variable in class burlap.behavior.singleagent.vfa.common.LinearFVVFA
The function weights when performing state value function approximation.
stateWiseMap - Variable in class burlap.behavior.learningrate.ExponentialDecayLR
The state dependent or state-action dependent learning rates
stateWiseMap - Variable in class burlap.behavior.learningrate.SoftTimeInverseDecayLR
The state dependent or state-action dependent learning rate time indices
StateYAMLParser - Class in burlap.oomdp.auxiliary.common
A StateParser class that uses the YAML file format and can convert states to YAML strings (and back from them) for any possible input domain.
StateYAMLParser(Domain) - Constructor for class burlap.oomdp.auxiliary.common.StateYAMLParser
Initializes with a given domain object.
StaticDomainPainter - Interface in burlap.behavior.singleagent.auxiliary.valuefunctionvis
An interface for painting general domain information to a 2D graphics context.
StaticPainter - Interface in burlap.oomdp.visualizer
This class paints general properties of a state/domain that may not be represented by any specific object instance data.
StaticRepeatedGameActionModel - Class in burlap.oomdp.stochasticgames.common
This action model can be used to take a single stage game, and cause it to repeat itself.
StaticRepeatedGameActionModel() - Constructor for class burlap.oomdp.stochasticgames.common.StaticRepeatedGameActionModel
 
StaticWeightedAStar - Class in burlap.behavior.singleagent.planning.deterministic.informed.astar
Statically weighted A* [1] implementation.
StaticWeightedAStar(Domain, RewardFunction, StateConditionTest, StateHashFactory, Heuristic, double) - Constructor for class burlap.behavior.singleagent.planning.deterministic.informed.astar.StaticWeightedAStar
Initializes the planner.
stepEpisode - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.Trial
Stores the steps by episode
stepEpisode - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter.Trial
Stores the steps by episode
stepEpisodeSeries - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.AgentDatasets
Most recent trial's steps-per-episode data
stepEpisodeSeries - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter.AgentDatasets
Most recent trial's steps-per-episode data
stepIncrement(double) - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.Trial
Updates all datastructures with the reward received from the last step
stepIncrement(double) - Method in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter.Trial
Updates all datastructures with the reward received from the last step
StochasticTree<T> - Class in burlap.datastructures
A class for performing sampling of a set of weighted objects in O(lg(n)) time.
StochasticTree() - Constructor for class burlap.datastructures.StochasticTree
Initializes with an empty tree.
StochasticTree(List<Double>, List<T>) - Constructor for class burlap.datastructures.StochasticTree
Initializes a tree for objects with the given weights
StochasticTree.STNode - Class in burlap.datastructures
A class for storing a stochastic tree node.
StochasticTree.STNode(T, double, StochasticTree<T>.STNode) - Constructor for class burlap.datastructures.StochasticTree.STNode
Initializes a leaf node with the given weight and parent
StochasticTree.STNode(double) - Constructor for class burlap.datastructures.StochasticTree.STNode
Initializes a node with a weight only
StochasticTree.STNode(double, StochasticTree<T>.STNode) - Constructor for class burlap.datastructures.StochasticTree.STNode
Initializes a node with a given weight and parent node
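As a rough illustration (not BURLAP's StochasticTree implementation, which uses linked STNode objects), O(lg(n)) weighted sampling can be sketched with an array-backed sum tree: leaves hold object weights, internal nodes hold subtree sums, and sampling walks from the root to a leaf. The class below is hypothetical:

```java
import java.util.Random;

// Minimal sketch of O(lg n) proportional sampling with a sum tree.
public class SumTreeSketch {
    private final double[] tree; // size 2n; leaves at indices n..2n-1
    private final int n;

    public SumTreeSketch(double[] weights) {
        this.n = weights.length;
        this.tree = new double[2 * n];
        System.arraycopy(weights, 0, tree, n, n);
        for (int i = n - 1; i >= 1; i--) {
            tree[i] = tree[2 * i] + tree[2 * i + 1]; // internal node = sum of children
        }
    }

    public double totalWeight() {
        return tree[1]; // root stores the sum of all leaf weights
    }

    // Returns the index of an object sampled proportionally to its weight.
    public int sample(Random rng) {
        double r = rng.nextDouble() * tree[1]; // uniform in [0, total weight)
        int i = 1;
        while (i < n) {
            if (r < tree[2 * i]) {
                i = 2 * i;        // descend into the left subtree
            } else {
                r -= tree[2 * i]; // skip the left subtree's mass
                i = 2 * i + 1;    // descend into the right subtree
            }
        }
        return i - n;
    }
}
```

Each sample is one root-to-leaf descent, hence logarithmic in the number of stored objects.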
stop() - Method in class burlap.debugtools.MyTimer
Stops the timer.
stopPlanning() - Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
Returns true if rollouts and planning should cease.
stopReachabilityFromTerminalStates - Variable in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.DifferentiableVI
When the reachability analysis to find the state space is performed, a breadth first search-like pass (spreading over all stochastic transitions) is performed.
stopReachabilityFromTerminalStates - Variable in class burlap.behavior.singleagent.planning.stochastic.valueiteration.ValueIteration
When the reachability analysis to find the state space is performed, a breadth first search-like pass (spreading over all stochastic transitions) is performed.
storage - Variable in class burlap.datastructures.HashedAggregator
The backing hash map
storedAbstraction - Variable in class burlap.behavior.stochasticgame.agents.naiveq.SGNaiveQFactory
The state abstraction the Q-learning algorithm will use
storedMapAbstraction - Variable in class burlap.behavior.stochasticgame.agents.naiveq.SGNaiveQLAgent
A state abstraction to use.
stringToState(String) - Method in class burlap.domain.singleagent.cartpole.CartPoleStateParser
 
stringToState(String) - Method in class burlap.domain.singleagent.cartpole.InvertedPendulumStateParser
 
stringToState(String) - Method in class burlap.domain.singleagent.frostbite.FrostbiteStateParser
 
stringToState(String) - Method in class burlap.domain.singleagent.gridworld.GridWorldStateParser
 
stringToState(String) - Method in class burlap.domain.singleagent.lunarlander.LLStateParser
 
stringToState(String) - Method in class burlap.domain.singleagent.mountaincar.MountainCarStateParser
 
stringToState(String) - Method in class burlap.oomdp.auxiliary.common.StateJSONParser
 
stringToState(String) - Method in class burlap.oomdp.auxiliary.common.StateYAMLParser
 
stringToState(String) - Method in class burlap.oomdp.auxiliary.common.UniversalStateParser
 
stringToState(String) - Method in interface burlap.oomdp.auxiliary.StateParser
Converts a string into a State object assuming the string representation was produced using this state parser.
stringVal - Variable in class burlap.oomdp.core.values.StringValue
The string value
StringValue - Class in burlap.oomdp.core.values
This class provides a value for a string.
StringValue(Attribute) - Constructor for class burlap.oomdp.core.values.StringValue
Initializes for a given attribute.
StringValue(Value) - Constructor for class burlap.oomdp.core.values.StringValue
Initializes from an existing value.
subgoalReward - Variable in class burlap.behavior.singleagent.options.LocalSubgoalRF
Defines the reward returned for transitions to subgoal states; default 0.
subgoalStateTest - Variable in class burlap.behavior.singleagent.options.LocalSubgoalRF
Defines the set of subgoal states for the option
subgoalStateTest - Variable in class burlap.behavior.singleagent.options.LocalSubgoalTF
Defines the set of subgoal states for the option
successorStates - Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTActionNode
The possible successor states.
sumReturn - Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTActionNode
The sum return observed for this action node
SupervisedVFA - Interface in burlap.behavior.singleagent.planning.vfa.fittedvi
An interface for learning value function approximation via a supervised learning algorithm.
SupervisedVFA.SupervisedVFAInstance - Class in burlap.behavior.singleagent.planning.vfa.fittedvi
A pair for a state and its target value function value.
SupervisedVFA.SupervisedVFAInstance(State, double) - Constructor for class burlap.behavior.singleagent.planning.vfa.fittedvi.SupervisedVFA.SupervisedVFAInstance
Initializes
svp - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionRenderLayer
Painter used to visualize the value function
svp - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
Painter used to visualize the value function
synchronizeJointActionSelectionAmongAgents - Variable in class burlap.behavior.stochasticgame.PolicyFromJointPolicy
 
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z