- s - Variable in class burlap.behavior.functionapproximation.supervised.SupervisedVFA.SupervisedVFAInstance
-
The state
- s - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.support.QGradientTuple
-
The state
- s - Variable in class burlap.behavior.singleagent.learning.actorcritic.CritiqueResult
-
The source state
- s - Variable in class burlap.behavior.singleagent.learning.lspi.SARSData.SARS
-
The previous state
- s - Variable in class burlap.behavior.singleagent.learning.tdmethods.QLearningStateNode
-
A hashed state entry for which Q-values will be stored.
- s - Variable in class burlap.behavior.singleagent.options.model.BFSMarkovOptionModel.OptionScanNode
-
the state this search node wraps
- s - Variable in class burlap.behavior.singleagent.planning.deterministic.SearchNode
-
The (hashed) state of this node
- s - Variable in class burlap.behavior.stochasticgames.madynamicprogramming.JAQValue
-
- s - Variable in class burlap.behavior.valuefunction.QValue
-
The state with which this Q-value is associated.
- s - Variable in class burlap.mdp.core.StateTransitionProb
-
- s - Variable in class burlap.mdp.singleagent.pomdp.beliefstate.EnumerableBeliefState.StateBelief
-
The MDP state defined by a
State
instance.
- s() - Method in class burlap.mdp.singleagent.pomdp.beliefstate.TabularBeliefState
-
- s() - Method in interface burlap.statehashing.HashableState
-
Returns the underlying source state that is hashed.
- s - Variable in class burlap.statehashing.WrappedHashableState
-
- s() - Method in class burlap.statehashing.WrappedHashableState
-
- saAfterStateRL - Variable in class burlap.visualizer.Visualizer
-
- sActionFeatures - Variable in class burlap.behavior.singleagent.learning.lspi.LSPI.SSFeatures
-
State-action features
- SADomain - Class in burlap.mdp.singleagent
-
A domain subclass for single agent domains.
- SADomain() - Constructor for class burlap.mdp.singleagent.SADomain
-
- saFeatures - Variable in class burlap.behavior.singleagent.learning.lspi.LSPI
-
The state feature database on which the linear VFA is performed
- sample(State, Action) - Method in class burlap.behavior.singleagent.learnfromdemo.CustomRewardModel
-
- sample(State, Action) - Method in class burlap.behavior.singleagent.learning.modellearning.models.TabularModel
-
- sample(State, Action) - Method in class burlap.behavior.singleagent.learning.modellearning.rmax.RMaxModel
-
- sample(State, Action) - Method in class burlap.behavior.singleagent.options.model.BFSMarkovOptionModel
-
- sample() - Method in class burlap.datastructures.BoltzmannDistribution
-
Samples the output probability distribution.
- sample() - Method in class burlap.datastructures.StochasticTree
-
Samples an element from the tree with probability proportional to its relative weight and returns it
- sample(State, Action) - Method in class burlap.domain.singleagent.blockdude.BlockDudeModel
-
- sample(State, Action) - Method in class burlap.domain.singleagent.blocksworld.BWModel
-
- sample(State, Action) - Method in class burlap.domain.singleagent.cartpole.model.CPClassicModel
-
- sample(State, Action) - Method in class burlap.domain.singleagent.cartpole.model.CPCorrectModel
-
- sample(State, Action) - Method in class burlap.domain.singleagent.cartpole.model.IPModel
-
- sample(State, Action) - Method in class burlap.domain.singleagent.frostbite.FrostbiteModel
-
- sample(State, Action) - Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain.GraphStateModel
-
- sample(State, Action) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain.GridWorldModel
-
- sample(State, Action) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderModel
-
- sample(State, Action) - Method in class burlap.domain.singleagent.mountaincar.MountainCar.MCModel
-
- sample(State, Action) - Method in class burlap.domain.singleagent.pomdp.tiger.TigerModel
-
- sample(State, Action) - Method in class burlap.domain.singleagent.pomdp.tiger.TigerObservations
-
- sample(State, JointAction) - Method in class burlap.domain.stochasticgames.gridgame.GridGameStandardMechanics
-
- sample(State, Action) - Method in class burlap.mdp.singleagent.model.DelegatedModel
-
- sample(State, Action) - Method in class burlap.mdp.singleagent.model.FactoredModel
-
- sample(State, Action) - Method in interface burlap.mdp.singleagent.model.SampleModel
-
Samples a transition from the transition distribution and returns it.
- sample(State, Action) - Method in interface burlap.mdp.singleagent.model.statemodel.SampleStateModel
-
Samples and returns a
State
from a state transition function.
- sample(State, Action) - Method in class burlap.mdp.singleagent.pomdp.BeliefMDPGenerator.BeliefModel
-
- sample() - Method in interface burlap.mdp.singleagent.pomdp.beliefstate.BeliefState
-
Samples an MDP state from this belief distribution.
- sample() - Method in class burlap.mdp.singleagent.pomdp.beliefstate.TabularBeliefState
-
- sample(State, Action) - Method in interface burlap.mdp.singleagent.pomdp.observations.ObservationFunction
-
Samples an observation given the true MDP state and action taken in the previous step that led to the MDP state.
- sample(State, JointAction) - Method in class burlap.mdp.stochasticgames.common.StaticRepeatedGameModel
-
- sample(State, JointAction) - Method in interface burlap.mdp.stochasticgames.model.JointModel
-
- sampleBasicMovement(OOState, GridGameStandardMechanics.Location2, GridGameStandardMechanics.Location2, List<GridGameStandardMechanics.Location2>) - Method in class burlap.domain.stochasticgames.gridgame.GridGameStandardMechanics
-
Returns a movement result of the agent.
- sampleByEnumeration(FullModel, State, Action) - Static method in class burlap.mdp.singleagent.model.FullModel.Helper
-
- sampleByEnumeration(FullStateModel, State, Action) - Static method in class burlap.mdp.singleagent.model.statemodel.FullStateModel.Helper
-
- sampleByEnumeration(DiscreteObservationFunction, State, Action) - Static method in class burlap.mdp.singleagent.pomdp.observations.ObservationUtilities
-
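The sampleByEnumeration helpers draw a single outcome from a fully enumerated transition distribution. A minimal self-contained sketch of that inverse-CDF draw (the `Outcome` type and class names here are illustrative stand-ins, not BURLAP API):

```java
import java.util.List;
import java.util.Random;

// Illustrative analogue of FullStateModel.Helper.sampleByEnumeration:
// draw one outcome from an enumerated distribution by inverse-CDF sampling.
public class EnumerationSampler {

    // A (state, probability) pair; stands in for BURLAP's StateTransitionProb.
    public record Outcome(String state, double p) {}

    public static String sample(List<Outcome> dist, Random rng) {
        double roll = rng.nextDouble();
        double cumulative = 0.;
        for (Outcome o : dist) {
            cumulative += o.p();
            if (roll < cumulative) {
                return o.state();
            }
        }
        // Guard against floating-point round-off leaving roll >= the summed mass.
        return dist.get(dist.size() - 1).state();
    }
}
```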
- sampledQEstimate(Action, DifferentiableSparseSampling.QAndQGradient) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling.DiffStateNode
-
- sampledQEstimate(Action) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling.StateNode
-
Estimates the Q-value using sampling from the transition dynamics.
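The sampled Q estimate above averages the immediate reward plus the discounted value of the sampled next state over C transition samples. A self-contained sketch of that computation (the `Transition` type and names are hypothetical, not BURLAP's classes):

```java
import java.util.function.Supplier;

// Sketch of the sparse-sampling Q estimate:
// Q(s,a) ~= (1/C) * sum over C samples of [ r + gamma * V(s') ].
public class SparseSamplingSketch {

    // A sampled transition: the reward received and the value of the next state.
    public record Transition(double reward, double nextStateValue) {}

    public static double sampledQEstimate(Supplier<Transition> sampler, int c, double gamma) {
        double sum = 0.;
        for (int i = 0; i < c; i++) {
            Transition t = sampler.get();
            sum += t.reward() + gamma * t.nextStateValue();
        }
        return sum / c;
    }
}
```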
- sampleFromActionDistribution(EnumerablePolicy, State) - Static method in class burlap.behavior.policy.PolicyUtils
-
This is a helper method for stochastic policies.
- sampleHelper(StochasticTree<T>.STNode, double) - Method in class burlap.datastructures.StochasticTree
-
A recursive method for performing sampling
- SampleModel - Interface in burlap.mdp.singleagent.model
-
Interface for a model that can be used to sample a transition from an input state for a given action, and that can indicate
whether a state is terminal.
- samples - Variable in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
-
The set of samples on which to perform value iteration.
- SampleStateModel - Interface in burlap.mdp.singleagent.model.statemodel
-
An interface for a model that can sample a state transition from the state transition function for a given input
state and action.
- sampleStrategy(double[]) - Method in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.EquilibriumPlayingSGAgent
-
Samples an action from a strategy, where a strategy is defined as a probability distribution over actions.
- sampleWallCollision(GridGameStandardMechanics.Location2, GridGameStandardMechanics.Location2, List<ObjectInstance>, boolean) - Method in class burlap.domain.stochasticgames.gridgame.GridGameStandardMechanics
-
Returns true if the agent is able to move to the desired location; false if the agent moves into a solid wall
or if the agent randomly fails to move through a semi-wall that is in the way.
- SAObjectParameterizedAction() - Constructor for class burlap.mdp.singleagent.oo.ObjectParameterizedActionType.SAObjectParameterizedAction
-
- SAObjectParameterizedAction(String, String[]) - Constructor for class burlap.mdp.singleagent.oo.ObjectParameterizedActionType.SAObjectParameterizedAction
-
- sarender - Variable in class burlap.visualizer.Visualizer
-
- SARS(State, Action, double, State) - Constructor for class burlap.behavior.singleagent.learning.lspi.SARSData.SARS
-
Initializes.
- SarsaLam - Class in burlap.behavior.singleagent.learning.tdmethods
-
Tabular SARSA(\lambda) implementation [1].
- SarsaLam(SADomain, double, HashableStateFactory, double, double, double) - Constructor for class burlap.behavior.singleagent.learning.tdmethods.SarsaLam
-
Initializes SARSA(\lambda) with a 0.1 epsilon-greedy policy and the same Q-value initialization everywhere, and places no limit on the number of steps the
agent can take in an episode.
- SarsaLam(SADomain, double, HashableStateFactory, double, double, int, double) - Constructor for class burlap.behavior.singleagent.learning.tdmethods.SarsaLam
-
Initializes SARSA(\lambda) with a 0.1 epsilon-greedy policy and the same Q-value initialization everywhere.
- SarsaLam(SADomain, double, HashableStateFactory, double, double, Policy, int, double) - Constructor for class burlap.behavior.singleagent.learning.tdmethods.SarsaLam
-
Initializes SARSA(\lambda) with the same Q-value initialization everywhere.
- SarsaLam(SADomain, double, HashableStateFactory, QFunction, double, Policy, int, double) - Constructor for class burlap.behavior.singleagent.learning.tdmethods.SarsaLam
-
Initializes SARSA(\lambda).
- SarsaLam.EligibilityTrace - Class in burlap.behavior.singleagent.learning.tdmethods
-
A data structure for maintaining eligibility trace values
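As a rough illustration of the backup SarsaLam performs with eligibility traces, here is a self-contained tabular sketch (state-action pairs keyed by string; the names and the replacing-trace choice are assumptions for illustration, not BURLAP's exact implementation):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a tabular SARSA(lambda) backup with replacing traces:
//   delta = r + gamma * Q(s',a') - Q(s,a)
//   e(s,a) = 1; then for all traced (x,u):
//     Q(x,u) += alpha * delta * e(x,u);  e(x,u) *= gamma * lambda
public class SarsaLambdaSketch {
    public final Map<String, Double> q = new HashMap<>();
    public final Map<String, Double> e = new HashMap<>();
    final double alpha, gamma, lambda;

    public SarsaLambdaSketch(double alpha, double gamma, double lambda) {
        this.alpha = alpha; this.gamma = gamma; this.lambda = lambda;
    }

    public void update(String sa, double r, String nextSa) {
        double delta = r + gamma * q.getOrDefault(nextSa, 0.) - q.getOrDefault(sa, 0.);
        e.put(sa, 1.); // replacing trace for the visited state-action pair
        for (Map.Entry<String, Double> trace : e.entrySet()) {
            q.merge(trace.getKey(), alpha * delta * trace.getValue(), Double::sum);
            trace.setValue(trace.getValue() * gamma * lambda); // decay the trace
        }
    }
}
```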
- sarsalamInit(double) - Method in class burlap.behavior.singleagent.learning.tdmethods.SarsaLam
-
- SARSCollector - Class in burlap.behavior.singleagent.learning.lspi
-
This object is used to collect
SARSData
(state-action-reward-state tuples) that can then be used by algorithms like LSPI for learning.
- SARSCollector(SADomain) - Constructor for class burlap.behavior.singleagent.learning.lspi.SARSCollector
-
Initializes the collector's action set using the actions that are part of the domain.
- SARSCollector(List<ActionType>) - Constructor for class burlap.behavior.singleagent.learning.lspi.SARSCollector
-
Initializes this collector's action set to use for collecting data.
- SARSCollector.UniformRandomSARSCollector - Class in burlap.behavior.singleagent.learning.lspi
-
Collects SARS data from source states generated by a
StateGenerator
by choosing actions uniformly at random.
- SARSData - Class in burlap.behavior.singleagent.learning.lspi
-
Class that provides a wrapper for a List holding a bunch of state-action-reward-state (
SARSData.SARS
) tuples.
- SARSData() - Constructor for class burlap.behavior.singleagent.learning.lspi.SARSData
-
Initializes with an empty dataset
- SARSData(int) - Constructor for class burlap.behavior.singleagent.learning.lspi.SARSData
-
Initializes with an empty dataset with the given initial capacity.
- SARSData.SARS - Class in burlap.behavior.singleagent.learning.lspi
-
State-action-reward-state tuple.
- saThread - Variable in class burlap.behavior.stochasticgames.agents.interfacing.singleagent.LearningAgentToSGAgentInterface
-
The thread that runs the single agent learning algorithm
- satisfies(State) - Method in class burlap.mdp.auxiliary.stateconditiontest.SinglePFSCT
-
- satisfies(State) - Method in interface burlap.mdp.auxiliary.stateconditiontest.StateConditionTest
-
- satisfies(State) - Method in class burlap.mdp.auxiliary.stateconditiontest.TFGoalCondition
-
- satisifiesHeap() - Method in class burlap.datastructures.HashIndexedHeap
-
This method returns whether the data structure stored is in fact a heap (costs linear time).
- scale - Variable in class burlap.domain.singleagent.frostbite.FrostbiteDomain
-
Constant to adjust the scale of the game
- scale - Variable in class burlap.domain.singleagent.frostbite.FrostbiteModel
-
Constant to adjust the scale of the game
- scanner - Variable in class burlap.shell.BurlapShell
-
- sdp - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
-
Painter used to visualize general state-independent domain information
- SDPlannerPolicy - Class in burlap.behavior.singleagent.planning.deterministic
-
This is a static deterministic valueFunction policy: if the source deterministic valueFunction
has not already computed and cached the plan for a query state, then this policy
is undefined for that state and will throw a
PolicyUndefinedException
.
- SDPlannerPolicy() - Constructor for class burlap.behavior.singleagent.planning.deterministic.SDPlannerPolicy
-
- SDPlannerPolicy(DeterministicPlanner) - Constructor for class burlap.behavior.singleagent.planning.deterministic.SDPlannerPolicy
-
- SearchNode - Class in burlap.behavior.singleagent.planning.deterministic
-
The SearchNode class is used for classic deterministic forward search planners.
- SearchNode(HashableState) - Constructor for class burlap.behavior.singleagent.planning.deterministic.SearchNode
-
Constructs a SearchNode for the input state.
- SearchNode(HashableState, Action, SearchNode) - Constructor for class burlap.behavior.singleagent.planning.deterministic.SearchNode
-
Constructs a SearchNode for the input state and sets the generating action and back pointer to the provided elements.
- seAvgSeries - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.AgentDatasets
-
All trials' average steps-per-episode series data
- seAvgSeries - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.AgentDatasets
-
All trials' average steps-per-episode series data
- seedDefault(long) - Static method in class burlap.debugtools.RandomFactory
-
Sets the seed of the default random number generator
- seedMapped(int, long) - Static method in class burlap.debugtools.RandomFactory
-
Seeds and returns the random generator with the associated id or creates it if it does not yet exist
- seedMapped(String, long) - Static method in class burlap.debugtools.RandomFactory
-
Seeds and returns the random generator with the associated String id or creates it if it does not yet exist
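The seedDefault/seedMapped methods manage a pool of shared, reseedable generators keyed by id. A minimal sketch of that idea (the class and method names here are illustrative, not the actual RandomFactory API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// Sketch of the RandomFactory idea: one shared Random per id,
// created lazily and reseedable, so separate components can share a generator.
public class RandomFactorySketch {
    private static final Map<Integer, Random> mapped = new HashMap<>();

    // Returns the generator for the id, creating it if it does not yet exist.
    public static Random getMapped(int id) {
        return mapped.computeIfAbsent(id, k -> new Random());
    }

    // Seeds and returns the generator with the associated id.
    public static Random seedMapped(int id, long seed) {
        Random r = getMapped(id);
        r.setSeed(seed);
        return r;
    }
}
```

Repeated calls with the same id return the same instance, so reseeding it makes every component that shares the id reproducible.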
- selectActionNode(UCTStateNode) - Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
-
Selects which action to take.
- selectionMode - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
-
Which state selection mode is used.
- selector - Variable in class burlap.mdp.stochasticgames.tournament.Tournament
-
- semiWallProb - Variable in class burlap.domain.stochasticgames.gridgame.GridGame
-
The probability that an agent will pass through a semi-wall.
- serialize() - Method in class burlap.behavior.singleagent.Episode
-
- serialize() - Method in class burlap.behavior.stochasticgames.GameEpisode
-
- set(Object, Object) - Method in class burlap.domain.singleagent.blockdude.state.BlockDudeState
-
- set(Object, Object) - Method in class burlap.domain.singleagent.blocksworld.BlocksWorldState
-
- set(Object, Object) - Method in class burlap.domain.singleagent.cartpole.states.CartPoleFullState
-
- set(Object, Object) - Method in class burlap.domain.singleagent.cartpole.states.CartPoleState
-
- set(Object, Object) - Method in class burlap.domain.singleagent.cartpole.states.InvertedPendulumState
-
- set(Object, Object) - Method in class burlap.domain.singleagent.frostbite.state.FrostbiteState
-
- set(Object, Object) - Method in class burlap.domain.singleagent.graphdefined.GraphStateNode
-
- set(Object, Object) - Method in class burlap.domain.singleagent.gridworld.state.GridWorldState
-
- set(Object, Object) - Method in class burlap.domain.singleagent.lunarlander.state.LLState
-
- set(Object, Object) - Method in class burlap.domain.singleagent.mountaincar.MCState
-
- set(Object, Object) - Method in class burlap.domain.singleagent.pomdp.tiger.TigerObservation
-
- set(Object, Object) - Method in class burlap.domain.singleagent.pomdp.tiger.TigerState
-
- set(Object, Object) - Method in class burlap.domain.stochasticgames.gridgame.state.GGAgent
-
- set(Object, Object) - Method in class burlap.domain.stochasticgames.gridgame.state.GGGoal
-
- set(Object, Object) - Method in class burlap.domain.stochasticgames.gridgame.state.GGWall
-
- set(Object, Object) - Method in class burlap.domain.stochasticgames.normalform.NFGameState
-
- set(SingleStageNormalFormGame.StrategyProfile, double) - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.AgentPayoutFunction
-
Sets the payout for a given strategy profile.
- set(Object, Object) - Method in class burlap.mdp.core.oo.state.generic.DeepOOState
-
- set(Object, Object) - Method in class burlap.mdp.core.oo.state.generic.GenericOOState
-
- set(Object, Object) - Method in interface burlap.mdp.core.state.MutableState
-
Sets the value for the given variable key.
- set(Object, Object) - Method in class burlap.mdp.singleagent.pomdp.beliefstate.TabularBeliefState
-
- set1DEastWall(int, int) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Sets a specified location to have a 1D east wall.
- set1DNorthWall(int, int) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Sets a specified location to have a 1D north wall.
- setActingAgent(int) - Method in class burlap.behavior.stochasticgames.PolicyFromJointPolicy
-
Sets the acting agent's name
- setAction(int, Action) - Method in class burlap.mdp.stochasticgames.JointAction
-
Sets the action for the specified agent.
- setAction(int, Action) - Method in class burlap.shell.command.world.JointActionCommand
-
Sets the action for a single agent in the joint action this shell command controls
- setAction - Variable in class burlap.shell.command.world.ManualAgentsCommands
-
- setActionNameGlyphPainter(String, ActionGlyphPainter) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.PolicyGlyphPainter2D
-
Sets which glyph painter to use for an action with the given name
- setActionOffset(Map<Action, Integer>) - Method in class burlap.behavior.functionapproximation.dense.DenseLinearVFA
-
Sets the Map
of feature index offsets into the full feature vector for each action
- setActionOffset(Action, int) - Method in class burlap.behavior.functionapproximation.dense.DenseLinearVFA
-
Sets the Map
of feature index offset into the full feature vector for the given action
- setActions(List<Action>) - Method in class burlap.mdp.stochasticgames.JointAction
-
- setActionSequence(List<Action>) - Method in class burlap.behavior.singleagent.options.MacroAction
-
- setActionsTypes(List<ActionType>) - Method in class burlap.behavior.singleagent.learning.modellearning.rmax.RMaxModel
-
- setActionTypes(List<ActionType>) - Method in class burlap.behavior.singleagent.MDPSolver
-
- setActionTypes(List<ActionType>) - Method in interface burlap.behavior.singleagent.MDPSolverInterface
-
Sets the action set the solver should use.
- setActionTypes(ActionType...) - Method in class burlap.mdp.singleagent.SADomain
-
- setActionTypes(List<ActionType>) - Method in class burlap.mdp.singleagent.SADomain
-
- SetAgentAction() - Constructor for class burlap.shell.command.world.ManualAgentsCommands.SetAgentAction
-
- setAgentDefinitions - Variable in class burlap.behavior.stochasticgames.agents.madp.MultiAgentDPPlanningAgent
-
Whether the agent definitions for this valueFunction have been set yet.
- setAgentDefinitions(List<SGAgentType>) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.MADynamicProgramming
-
Sets/changes the agent definitions to use in planning.
- setAgentDetails(String, SGAgentType) - Method in class burlap.behavior.stochasticgames.agents.naiveq.history.SGQWActionHistory
-
- setAgentDetails(String, SGAgentType) - Method in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent
-
- setAgentDetails(String, SGAgentType) - Method in class burlap.mdp.stochasticgames.agent.SGAgentBase
-
- setAgents(List<MultiAgentQLearning>) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.AgentQSourceMap.MAQLControlledQSourceMap
-
Initializes with a list of agents that each keep their own Q_source.
- setAgentsInJointPolicy(List<SGAgent>) - Method in class burlap.behavior.stochasticgames.JointPolicy
-
Sets the agent definitions by querying the agent names and
SGAgentType
objects from a list of agents.
- setAgentsInJointPolicyFromWorld(World) - Method in class burlap.behavior.stochasticgames.JointPolicy
-
Sets the agent definitions by querying the agents that exist in a
World
object.
- setAgentTypesInJointPolicy(List<SGAgentType>) - Method in class burlap.behavior.stochasticgames.JointPolicy
-
Sets the agent definitions that define the set of possible joint actions in each state.
- setAlias(String, String) - Method in class burlap.shell.BurlapShell
-
- setAlias(String, String, boolean) - Method in class burlap.shell.BurlapShell
-
- setAllowActionFromTerminalStates(boolean) - Method in class burlap.mdp.singleagent.environment.SimulatedEnvironment
-
Sets whether the environment will respond to actions from a terminal state.
- setAnginc(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
-
- setAnginc(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Sets how many radians the agent will rotate from its current orientation when a turn/rotate action is applied
- setAngmax(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
-
- setAngmax(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Sets the maximum rotate angle (in radians) that the lander can be rotated from the vertical orientation in either
clockwise or counterclockwise direction.
- setAuxInfoTo(PrioritizedSearchNode) - Method in class burlap.behavior.singleagent.planning.deterministic.informed.PrioritizedSearchNode
-
This method rewires the generating node information and priority to that specified in a different PrioritizedSearchNode.
- setBelief(State, double) - Method in class burlap.mdp.singleagent.pomdp.beliefstate.TabularBeliefState
-
Sets the probability mass (belief) associated with the underlying MDP state.
- setBelief(int, double) - Method in class burlap.mdp.singleagent.pomdp.beliefstate.TabularBeliefState
-
Sets the probability mass (belief) associated with the underlying MDP state.
- setBeliefState(BeliefState) - Method in class burlap.mdp.singleagent.pomdp.BeliefAgent
-
Sets this agent's current belief
- setBeliefValues(Map<Integer, Double>) - Method in class burlap.mdp.singleagent.pomdp.beliefstate.TabularBeliefState
-
- setBeliefVector(double[]) - Method in interface burlap.mdp.singleagent.pomdp.beliefstate.DenseBeliefVector
-
Sets this belief state to the provided belief vector.
- setBeliefVector(double[]) - Method in class burlap.mdp.singleagent.pomdp.beliefstate.TabularBeliefState
-
Sets this belief state to the provided belief vector.
- setBeta(double) - Method in class burlap.behavior.singleagent.planning.stochastic.dpoperator.SoftmaxOperator
-
- setBgColor(Color) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
-
Sets the canvas background color
- setBGColor(Color) - Method in class burlap.visualizer.MultiLayerRenderer
-
Sets the color that will fill the canvas before rendering begins
- setBGColor(Color) - Method in class burlap.visualizer.Visualizer
-
Sets the background color of the canvas
- setBoltzmannBeta(double) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRLRequest
-
- setBoundaryWalls(GenericOOState, int, int) - Static method in class burlap.domain.stochasticgames.gridgame.GridGame
-
Sets boundary walls of a domain.
- setBreakTiesRandomly(boolean) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EGreedyMaxWellfare
-
Whether to break ties randomly or deterministically.
- setC(int) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling
-
Sets the number of state transition samples used.
- setC(int) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
-
Sets the number of state transition samples used.
- setCellWallState(int, int, int) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Sets the map at the specified location to have the specified wall configuration.
- setClassName(String) - Method in class burlap.domain.singleagent.blockdude.state.BlockDudeCell
-
- setCoefficientVectors(List<short[]>) - Method in class burlap.behavior.functionapproximation.dense.fourier.FourierBasis
-
Forces the set of coefficient vectors (and thereby Fourier basis functions) used.
- setCollisionReward(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderRF
-
- setColorBlend(ColorBlend) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.StateValuePainter2D
-
Sets the color blending used for the value function.
- setColorsForPFs(BlocksWorld.NamedColor...) - Method in class burlap.domain.singleagent.blocksworld.BlocksWorld
-
- setComputeExactValueFunction(boolean) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
-
Sets whether this valueFunction will compute the exact finite-horizon value function (using the full transition dynamics) or
estimate the value function by sampling.
- setConfig(MaskedConfig) - Method in class burlap.statehashing.masked.MaskedHashableStateFactory
-
- setControlDepth(int) - Method in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
-
- setCorrectDoorReward(double) - Method in class burlap.domain.singleagent.pomdp.tiger.TigerDomain
-
- setCorrelatedQObjective(CorrelatedEquilibriumSolver.CorrelatedEquilibriumObjective) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.ECorrelatedQJointPolicy
-
Sets the correlated equilibrium objective to be solved.
- setCurObservationTo(State) - Method in class burlap.mdp.singleagent.pomdp.SimulatedPOEnvironment
-
Overrides the current observation of this environment to the specified value
- setCurrentState(State) - Method in class burlap.mdp.stochasticgames.world.World
-
Sets the world state to the provided state if a game is not currently running.
- setCurStateTo(State) - Method in class burlap.mdp.singleagent.environment.extensions.EnvironmentServer.StateSettableEnvironmentServer
-
- setCurStateTo(State) - Method in interface burlap.mdp.singleagent.environment.extensions.StateSettableEnvironment
-
Sets the current state of the environment to the specified state.
- setCurStateTo(State) - Method in class burlap.mdp.singleagent.environment.SimulatedEnvironment
-
- setCurTime(int) - Method in class burlap.behavior.singleagent.learning.actorcritic.critics.TimeIndexedTDLambda
-
Sets the time/depth of the current episode.
- setDataset(SARSData) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
-
Sets the SARS dataset this object will use for LSPI
- setDebugCode(int) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling
-
Sets the debug code used for logging plan results with
DPrint
.
- setDebugCode(int) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRL
-
Sets the debug code used for printing to the terminal
- setDebugCode(int) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MultipleIntentionsMLIRL
-
Sets the debug code used for printing to the terminal
- setDebugCode(int) - Method in class burlap.behavior.singleagent.MDPSolver
-
- setDebugCode(int) - Method in interface burlap.behavior.singleagent.MDPSolverInterface
-
Sets the debug code to be used by calls to
DPrint
- setDebugCode(int) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
-
Sets the debug code used for logging plan results with
DPrint
.
- setDebugId(int) - Method in class burlap.mdp.stochasticgames.world.World
-
Sets the debug code that is use for printing with
DPrint
.
- setDefaultFloorDiscretizingMultiple(double) - Method in class burlap.statehashing.discretized.DiscConfig
-
Sets the default multiple to use for continuous values that do not have specific multiples set
for them.
- setDefaultFloorDiscretizingMultiple(double) - Method in class burlap.statehashing.discretized.DiscretizingHashableStateFactory
-
Sets the default multiple to use for continuous values that do not have specific multiples set
for them.
- setDefaultFloorDiscretizingMultiple(double) - Method in class burlap.statehashing.maskeddiscretized.DiscMaskedConfig
-
Sets the default multiple to use for continuous values that do not have specific multiples set
for them.
- setDefaultFloorDiscretizingMultiple(double) - Method in class burlap.statehashing.maskeddiscretized.DiscretizingMaskedHashableStateFactory
-
Sets the default multiple to use for continuous values that do not have specific multiples set
for them.
- setDefaultReward(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderRF
-
- setDefaultReward(double) - Method in class burlap.mdp.singleagent.common.GoalBasedRF
-
- setDefaultValueFunctionAfterARollout(boolean) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
-
Sets which value function (the lower bound or the upper bound) to use after a planning rollout is complete.
- setDeterministicTransitionDynamics() - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Sets the domain to use deterministic action transitions.
- setDomain(SADomain) - Method in class burlap.behavior.singleagent.learnfromdemo.IRLRequest
-
- setDomain(SADomain) - Method in class burlap.behavior.singleagent.MDPSolver
-
- setDomain(SADomain) - Method in interface burlap.behavior.singleagent.MDPSolverInterface
-
Sets the domain of this solver.
- setDomain(PODomain) - Method in class burlap.mdp.singleagent.pomdp.beliefstate.TabularBeliefState
-
- setDomain(PODomain) - Method in class burlap.mdp.singleagent.pomdp.beliefstate.TabularBeliefUpdate
-
- setDomain(SGDomain) - Method in class burlap.mdp.stochasticgames.world.World
-
- setDomain(Domain) - Method in class burlap.shell.BurlapShell
-
- setEnv(Environment) - Method in class burlap.shell.EnvironmentShell
-
- setEnvironment(Environment) - Method in class burlap.mdp.singleagent.pomdp.BeliefAgent
-
Sets the POMDP environment
- setEnvironmentDelegate(Environment) - Method in interface burlap.mdp.singleagent.environment.extensions.EnvironmentDelegation
-
- setEnvironmentDelegate(Environment) - Method in class burlap.mdp.singleagent.environment.extensions.EnvironmentServer
-
- setEpisodeWeights(double[]) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRLRequest
-
- setEpsilon(double) - Method in class burlap.behavior.policy.EpsilonGreedy
-
Sets the epsilon value, where epsilon is the probability of taking a random action.
- setEpsilon(double) - Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest
-
- setEpsilon(double) - Method in class burlap.behavior.stochasticgames.agents.naiveq.history.SGQWActionHistoryFactory
-
Sets the epsilon parameter (for the epsilon-greedy policy).
- setExpertEpisodes(List<Episode>) - Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest
-
- setExpertEpisodes(List<Episode>) - Method in class burlap.behavior.singleagent.learnfromdemo.IRLRequest
-
- setFeatureGenerator(DenseStateFeatures) - Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest
-
- setFeaturesAreForNextState(boolean) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs.LinearStateDifferentiableRF
-
Sets whether features for the reward function are generated from the next state or previous state.
- setFlag(int, int) - Static method in class burlap.debugtools.DebugFlags
-
Creates/sets a debug flag
- setForgetPreviousPlanResults(boolean) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling
-
Sets whether previous planning results should be forgotten or reused in subsequent planning.
- setForgetPreviousPlanResults(boolean) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
-
Sets whether previous planning results should be forgotten or reused in subsequent planning.
- setFrameDelay(long) - Method in class burlap.mdp.singleagent.common.VisualActionObserver
-
Sets how long to wait in ms for a state to be rendered before returning control to the agent.
- setFrameDelay(long) - Method in class burlap.mdp.stochasticgames.common.VisualWorldObserver
-
Sets how long to wait in ms for a state to be rendered before returning control to the world.
- setGamma(double) - Method in class burlap.behavior.singleagent.learnfromdemo.IRLRequest
-
- setGamma(double) - Method in class burlap.behavior.singleagent.learning.modellearning.rmax.RMaxModel
-
- setGamma(double) - Method in class burlap.behavior.singleagent.MDPSolver
-
- setGamma(double) - Method in interface burlap.behavior.singleagent.MDPSolverInterface
-
Sets gamma, the discount factor used by this solver
- setGoalReward(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderRF
-
- setGoalReward(double) - Method in class burlap.mdp.singleagent.common.GoalBasedRF
-
- setGravity(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
-
- setGravity(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Sets the gravity of the domain
- setH(int) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling
-
Sets the height of the tree.
- setH(int) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
-
Sets the height of the tree.
- setHalfTrackLength(double) - Method in class burlap.domain.singleagent.cartpole.CartPoleDomain.CartPoleRewardFunction
-
- setHalfTrackLength(double) - Method in class burlap.domain.singleagent.cartpole.CartPoleDomain.CartPoleTerminalFunction
-
- setHAndCByMDPError(double, double, int) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
-
Sets the height and number of transition dynamics samples in a way that ensures epsilon optimality.
- setHashingFactory(HashableStateFactory) - Method in class burlap.behavior.singleagent.MDPSolver
-
- setHashingFactory(HashableStateFactory) - Method in interface burlap.behavior.singleagent.MDPSolverInterface
-
- setHelpText(String) - Method in class burlap.shell.BurlapShell
-
- setId(int) - Method in class burlap.domain.singleagent.graphdefined.GraphStateNode
-
- setIdentityScalar(double) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
-
Sets the initial LSPI identity matrix scalar used.
- setIncludeDoNothing(boolean) - Method in class burlap.domain.singleagent.pomdp.tiger.TigerDomain
-
- setInitiationTest(StateConditionTest) - Method in class burlap.behavior.singleagent.options.SubgoalOption
-
- setInternalRewardFunction(JointRewardFunction) - Method in class burlap.mdp.stochasticgames.agent.SGAgentBase
-
- setIs(InputStream) - Method in class burlap.shell.BurlapShell
-
- setIterationListData() - Method in class burlap.behavior.singleagent.auxiliary.EpisodeSequenceVisualizer
-
- setJointActionModel(JointModel) - Method in class burlap.mdp.stochasticgames.SGDomain
-
Sets the joint action model associated with this domain.
- setJointPolicy(JointPolicy) - Method in class burlap.behavior.stochasticgames.PolicyFromJointPolicy
-
Sets the underlying joint policy
- setK(int) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MultipleIntentionsMLIRLRequest
-
Sets the number of clusters
- setLearningPolicy(Policy) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
-
- setLearningPolicy(Policy) - Method in class burlap.behavior.singleagent.learning.tdmethods.QLearning
-
Sets which policy this agent should use for learning.
- setLearningPolicy(Policy) - Method in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
-
Sets which policy this agent should use for learning.
- setLearningPolicy(PolicyFromJointPolicy) - Method in class burlap.behavior.stochasticgames.agents.maql.MultiAgentQLearning
-
Sets the learning policy to be followed by the agent.
- setLearningRate(LearningRate) - Method in class burlap.behavior.singleagent.learning.actorcritic.actor.BoltzmannActor
-
Sets the learning rate function to use.
- setLearningRate(LearningRate) - Method in class burlap.behavior.singleagent.learning.actorcritic.critics.TDLambda
-
Sets the learning rate function to use.
- setLearningRate(LearningRate) - Method in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
-
Sets the learning rate function to use.
- setLearningRate(LearningRate) - Method in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent
-
- setLearningRateFunction(LearningRate) - Method in class burlap.behavior.singleagent.learning.tdmethods.QLearning
-
Sets the learning rate function to use
- setListenAccuracy(double) - Method in class burlap.domain.singleagent.pomdp.tiger.TigerDomain
-
- setListenReward(double) - Method in class burlap.domain.singleagent.pomdp.tiger.TigerDomain
-
- setManualAgent(String, ManualAgentsCommands.ManualSGAgent) - Method in class burlap.shell.command.world.ManualAgentsCommands
-
- setManualAgents(Map<String, ManualAgentsCommands.ManualSGAgent>) - Method in class burlap.shell.command.world.ManualAgentsCommands
-
- setMap(int[][]) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Sets the map of the world.
- setMapToFourRooms() - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Will set the map of the world to the classic Four Rooms map used in the original options work (Sutton, R.S.
- setMaxAbsoluteAngle(double) - Method in class burlap.domain.singleagent.cartpole.CartPoleDomain.CartPoleRewardFunction
-
- setMaxAbsoluteAngle(double) - Method in class burlap.domain.singleagent.cartpole.CartPoleDomain.CartPoleTerminalFunction
-
- setMaxChange(double) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
-
- setMaxDelta(double) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.RTDP
-
Sets the maximum delta state value update in a rollout that will cause planning to terminate
- setMaxDifference(double) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
-
Sets the max permitted difference in value function margin to permit planning termination.
- setMaxDim(int) - Method in class burlap.domain.stochasticgames.gridgame.GridGame
-
Sets the maximum dimension of the world; its width and height.
- setMaxDynamicDepth(int) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.RTDP
-
Sets the maximum depth of a rollout to use until it is prematurely terminated to update the value function.
- setMaxGT(int) - Method in class burlap.domain.stochasticgames.gridgame.GridGame
-
Sets the maximum goal types
- setMaximumEpisodesForPlanning(int) - Method in class burlap.behavior.singleagent.learning.tdmethods.QLearning
-
- setMaximumEpisodesForPlanning(int) - Method in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
-
- setMaxIterations(int) - Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest
-
- setMaxLearningSteps(int) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
-
- setMaxNumberOfRollouts(int) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
-
Sets the maximum number of rollouts permitted before planning is forced to terminate.
- setMaxNumPlanningIterations(int) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
-
- setMaxPlyrs(int) - Method in class burlap.domain.stochasticgames.gridgame.GridGame
-
Sets the max number of players
- setMaxQChangeForPlanningTerminaiton(double) - Method in class burlap.behavior.singleagent.learning.tdmethods.QLearning
-
- setMaxRolloutDepth(int) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
-
Sets the maximum rollout depth of any rollout.
- setMaxVFAWeightChangeForPlanningTerminaiton(double) - Method in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
-
- setMaxWT(int) - Method in class burlap.domain.stochasticgames.gridgame.GridGame
-
Sets the maximum number of wall types
- setMaxx(int) - Method in class burlap.domain.singleagent.blockdude.BlockDude
-
- setMaxy(int) - Method in class burlap.domain.singleagent.blockdude.BlockDude
-
- setMinNewStepsForLearningPI(int) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
-
Sets the minimum number of new learning observations before policy iteration is run again.
- setMinNumRolloutsWithSmallValueChange(int) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.RTDP
-
Sets the minimum number of consecutive rollouts with a value function change less than the maxDelta value that will cause RTDP
to stop.
- setMinProb(double) - Method in class burlap.behavior.singleagent.options.model.BFSMarkovOptionModel
-
- setModel(SampleModel) - Method in class burlap.behavior.singleagent.MDPSolver
-
- setModel(SampleModel) - Method in interface burlap.behavior.singleagent.MDPSolverInterface
-
Sets the model to use for this solver
- setModel(SampleModel) - Method in class burlap.mdp.singleagent.SADomain
-
- setName(String) - Method in class burlap.behavior.singleagent.options.MacroAction
-
- setName(String) - Method in class burlap.behavior.singleagent.options.SubgoalOption
-
- setName(String) - Method in class burlap.domain.singleagent.blockdude.state.BlockDudeCell
-
- setName(String) - Method in class burlap.domain.singleagent.blocksworld.BlocksWorldBlock
-
- setName(String) - Method in class burlap.domain.singleagent.frostbite.state.FrostbitePlatform
-
- setName(String) - Method in class burlap.domain.singleagent.gridworld.state.GridAgent
-
- setName(String) - Method in class burlap.domain.singleagent.gridworld.state.GridLocation
-
- setName(String) - Method in class burlap.domain.singleagent.lunarlander.state.LLBlock
-
- setName(String) - Method in class burlap.domain.stochasticgames.gridgame.state.GGAgent
-
- setName(String) - Method in class burlap.domain.stochasticgames.gridgame.state.GGGoal
-
- setName(String) - Method in class burlap.domain.stochasticgames.gridgame.state.GGWall
-
- setName(String) - Method in class burlap.mdp.core.action.SimpleAction
-
- setNextAction(Action) - Method in class burlap.shell.command.world.ManualAgentsCommands.ManualSGAgent
-
- setNothingReward(double) - Method in class burlap.domain.singleagent.pomdp.tiger.TigerDomain
-
- setNumActions(int) - Method in class burlap.behavior.functionapproximation.dense.DenseCrossProductFeatures
-
- setNumberOfLocationTypes(int) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Sets the number of possible location types to which a location object can belong.
- setNumberPlatformCol(int) - Method in class burlap.domain.singleagent.frostbite.FrostbiteModel
-
- setNumEpisodesToStore(int) - Method in class burlap.behavior.singleagent.learning.actorcritic.ActorCritic
-
- setNumEpisodesToStore(int) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
-
- setNumEpisodesToStore(int) - Method in class burlap.behavior.singleagent.learning.modellearning.artdp.ARTDP
-
- setNumEpisodesToStore(int) - Method in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax
-
- setNumPasses(int) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.RTDP
-
Sets the number of rollouts to perform when planning is started (unless the value function delta is small enough).
- setNumSamplesForPlanning(int) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
-
- setNumXCells(int) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.StateValuePainter2D
-
Sets the number of states that will be rendered along a row
- setNumYCells(int) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.StateValuePainter2D
-
Sets the number of states that will be rendered along a column
- setObjectParameters(String[]) - Method in interface burlap.mdp.core.oo.ObjectParameterizedAction
-
Sets the object parameters for this Action.
- setObjectParameters(String[]) - Method in class burlap.mdp.singleagent.oo.ObjectParameterizedActionType.SAObjectParameterizedAction
-
- setObjectsByClass(Map<String, List<ObjectInstance>>) - Method in class burlap.mdp.core.oo.state.generic.GenericOOState
-
Setter method for underlying data to support serialization
- setObjectsMap(Map<String, ObjectInstance>) - Method in class burlap.mdp.core.oo.state.generic.GenericOOState
-
Setter method for underlying data to support serialization
- setObs(Observation) - Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueState
-
- setObservationFunction(ObservationFunction) - Method in class burlap.mdp.singleagent.pomdp.PODomain
-
- setObstacleInCell(int, int) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Sets a complete cell obstacle in the designated location.
- setOperator(DPOperator) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableDP
-
- setOperator(DifferentiableDPOperator) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling
-
- setOperator(DPOperator) - Method in class burlap.behavior.singleagent.planning.stochastic.DynamicProgramming
-
Sets the dynamic programming operator to use.
- setOperator(DPOperator) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
-
- setOperator(DPOperator) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
-
- setOptionsFirst() - Method in class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.DFS
-
Sets the valueFunction to explore nodes generated by options first.
- setOs(PrintStream) - Method in class burlap.shell.BurlapShell
-
- setPainter(Visualizer) - Method in class burlap.mdp.singleagent.common.VisualActionObserver
-
- setParameter(int, double) - Method in class burlap.behavior.functionapproximation.dense.DenseLinearVFA
-
- setParameter(int, double) - Method in class burlap.behavior.functionapproximation.dense.DenseStateActionLinearVFA
-
- setParameter(int, double) - Method in interface burlap.behavior.functionapproximation.ParametricFunction
-
Sets the value of the ith parameter to the given value.
- setParameter(int, double) - Method in class burlap.behavior.functionapproximation.sparse.LinearVFA
-
- setParameter(int, double) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs.LinearStateActionDifferentiableRF
-
- setParameter(int, double) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs.LinearStateDifferentiableRF
-
- setParameter(int, double) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.DiffVFRF
-
- setParameter(int, double) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
-
- setParameter(int, double) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearStateDiffVF
-
- setParameter(int, double) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.VanillaDiffVinit
-
- setPayoff(int, int, double, double) - Method in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.EquilibriumPlayingSGAgent.BimatrixTuple
-
Sets the payoffs for a given row and column.
- setPayout(int, double, String...) - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
-
Sets the payout that player number playerNumber receives for a given strategy profile.
- setPayout(int, double, int...) - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
-
Sets the payout that player number playerNumber receives for a given strategy profile.
- setPhysParams(LunarLanderDomain.LLPhysicsParams) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
- setPlanner(Planner) - Method in class burlap.behavior.singleagent.learnfromdemo.IRLRequest
-
- setPlanner(Planner) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRLRequest
-
- setPlannerFactory(QGradientPlannerFactory) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MultipleIntentionsMLIRLRequest
-
Sets the QGradientPlannerFactory to use and also sets this request object's valueFunction instance to a valueFunction generated from it, if it has not already been set.
- setPlannerReference(MADynamicProgramming) - Method in class burlap.behavior.stochasticgames.agents.madp.MADPPlannerFactory.ConstantMADPPlannerFactory
-
Changes the valueFunction reference
- setPlanningAndControlDepth(int) - Method in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
-
- setPlanningCollector(SARSCollector) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
-
- setPlanningDepth(int) - Method in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
-
Sets the Bellman operator depth used during planning.
- setPlotCISignificance(double) - Method in class burlap.behavior.singleagent.auxiliary.performance.LearningAlgorithmExperimenter
-
Sets the significance used for confidence intervals.
- setPlotCISignificance(double) - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentExperimenter
-
Sets the significance used for confidence intervals.
- setPlotRefreshDelay(int) - Method in class burlap.behavior.singleagent.auxiliary.performance.LearningAlgorithmExperimenter
-
Sets the delay in milliseconds between automatic plot refreshes
- setPlotRefreshDelay(int) - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentExperimenter
-
Sets the delay in milliseconds between automatic plot refreshes
- setPolicy(Policy) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.PolicyRenderLayer
-
Sets the policy to render
- setPolicy(Policy) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
-
Sets the policy to render
- setPolicy(SolverDerivedPolicy) - Method in class burlap.behavior.singleagent.learning.modellearning.artdp.ARTDP
-
Sets the policy to the provided one.
- setPolicy(Policy) - Method in class burlap.behavior.singleagent.options.SubgoalOption
-
- setPolicy(PolicyFromJointPolicy) - Method in class burlap.behavior.stochasticgames.agents.madp.MultiAgentDPPlanningAgent
-
Sets the policy derived from this agent's valueFunction to follow.
- setPolicyCount(int) - Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest
-
- setPolicyToEvaluate(EnumerablePolicy) - Method in class burlap.behavior.singleagent.planning.stochastic.policyiteration.PolicyIteration
-
Sets the initial policy that will be evaluated when planning with policy iteration begins.
- setPolynomialDegree(double) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.LandmarkColorBlendInterpolation
-
Sets the color blend to raise the normalized distance of values to the given degree.
- setPotentialFunction(PotentialFunction) - Method in class burlap.behavior.singleagent.learning.modellearning.rmax.RMaxModel
-
- setPreference(int, double) - Method in class burlap.datastructures.BoltzmannDistribution
-
Sets the preference for the ith element.
- setPreferences(double[]) - Method in class burlap.datastructures.BoltzmannDistribution
-
Sets the input preferences
- setProbSucceedTransitionDynamics(double) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Sets the domain to use probabilistic transitions.
- setQInitFunction(QFunction) - Method in class burlap.behavior.singleagent.learning.tdmethods.QLearning
-
Sets how to initialize Q-values for previously unexperienced state-action pairs.
- setQSourceMap(Map<Integer, QSourceForSingleAgent>) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.AgentQSourceMap.HashMapAgentQSourceMap
-
Sets the Q-source hash map to be used.
- setQSourceProvider(MultiAgentQSourceProvider) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.MAQSourcePolicy
-
- setQSourceProvider(MultiAgentQSourceProvider) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.ECorrelatedQJointPolicy
-
- setQSourceProvider(MultiAgentQSourceProvider) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EGreedyJointPolicy
-
- setQSourceProvider(MultiAgentQSourceProvider) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EGreedyMaxWellfare
-
- setQSourceProvider(MultiAgentQSourceProvider) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EMinMaxPolicy
-
- setQValueInitializer(QFunction) - Method in class burlap.behavior.stochasticgames.agents.naiveq.history.SGQWActionHistoryFactory
-
Sets the Q-value initialization function that will be used by the agent.
- setQValueInitializer(QFunction) - Method in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent
-
- setRandom(Random) - Method in class burlap.datastructures.StochasticTree
-
Sets the tree to use a specific random object when performing sampling
- setRandomGenerator(Random) - Method in class burlap.behavior.policy.RandomPolicy
-
Sets the random generator used for action selection.
- setRandomObject(Random) - Method in class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator
-
Sets the random object used for generating states
- setRefreshDelay(int) - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
Sets the delay in milliseconds between automatic refreshes of the plots.
- setRefreshDelay(int) - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
-
Sets the delay in milliseconds between automatic refreshes of the plots.
- setRenderStyle(PolicyGlyphPainter2D.PolicyGlyphRenderStyle) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.PolicyGlyphPainter2D
-
Sets the rendering style
- setRepaintOnActionInitiation(boolean) - Method in class burlap.mdp.singleagent.common.VisualActionObserver
-
- setRepaintStateOnEnvironmentInteraction(boolean) - Method in class burlap.mdp.singleagent.common.VisualActionObserver
-
- setRequest(MLIRLRequest) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRL
-
- setReward(int, int, double) - Method in class burlap.domain.singleagent.gridworld.GridWorldRewardFunction
-
Sets the reward the agent will receive for transitioning to position x, y.
- setRf(DifferentiableRF) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRLRequest
-
- setRf(RewardFunction) - Method in class burlap.domain.singleagent.blockdude.BlockDude
-
- setRf(RewardFunction) - Method in class burlap.domain.singleagent.blocksworld.BlocksWorld
-
- setRf(RewardFunction) - Method in class burlap.domain.singleagent.cartpole.CartPoleDomain
-
- setRf(RewardFunction) - Method in class burlap.domain.singleagent.cartpole.InvertedPendulum
-
- setRf(RewardFunction) - Method in class burlap.domain.singleagent.frostbite.FrostbiteDomain
-
- setRf(RewardFunction) - Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain
-
- setRf(RewardFunction) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
- setRf(RewardFunction) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
- setRf(RewardFunction) - Method in class burlap.domain.singleagent.mountaincar.MountainCar
-
- setRf(RewardFunction) - Method in class burlap.mdp.singleagent.model.FactoredModel
-
- setRfDim(int) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
-
- setRfFeaturesAreForNextState(boolean) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
-
- setRfFvGen(DenseStateFeatures) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
-
- setRollOutPolicy(Policy) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.RTDP
-
Sets the rollout policy to use.
- setRunRolloutsInRevere(boolean) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
-
Sets whether each rollout should be run in reverse after completion.
- setS(State) - Method in class burlap.statehashing.WrappedHashableState
-
Setter for Java Bean serialization purposes.
- setSaFeatures(DenseStateActionFeatures) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
-
Sets the state-action features to use.
- setSamples(List<State>) - Method in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
-
Sets the state samples to which the value function will be fit.
- setScale(int) - Method in class burlap.domain.singleagent.frostbite.FrostbiteDomain
-
- setScale(int) - Method in class burlap.domain.singleagent.frostbite.FrostbiteModel
-
- setSemiWallPassableProbability(double) - Method in class burlap.domain.stochasticgames.gridgame.GridGame
-
Sets the probability that an agent can pass through a semi-wall.
- setSetRenderLayer(StateRenderLayer) - Method in class burlap.visualizer.Visualizer
-
- setSignificanceForCI(double) - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
Sets the significance used for confidence intervals.
- setSignificanceForCI(double) - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
-
Sets the significance used for confidence intervals.
- setSoftTieRenderStyleDelta(double) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.PolicyGlyphPainter2D
-
Sets the soft difference between max actions to determine ties when the MAXACTIONSOFSOFTTIE render style is used.
- setSolver(MDPSolverInterface) - Method in class burlap.behavior.policy.BoltzmannQPolicy
-
- setSolver(MDPSolverInterface) - Method in class burlap.behavior.policy.EpsilonGreedy
-
- setSolver(MDPSolverInterface) - Method in class burlap.behavior.policy.GreedyDeterministicQPolicy
-
- setSolver(MDPSolverInterface) - Method in class burlap.behavior.policy.GreedyQPolicy
-
- setSolver(MDPSolverInterface) - Method in interface burlap.behavior.policy.SolverDerivedPolicy
-
Sets the valueFunction whose results affect this policy.
- setSolver(MDPSolverInterface) - Method in class burlap.behavior.singleagent.planning.deterministic.DDPlannerPolicy
-
- setSolver(MDPSolverInterface) - Method in class burlap.behavior.singleagent.planning.deterministic.SDPlannerPolicy
-
- setSolver(MDPSolverInterface) - Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTTreeWalkPolicy
-
- setSourceModel(KWIKModel) - Method in class burlap.behavior.singleagent.learning.modellearning.rmax.RMaxModel
-
- setSparseStateFeatures(SparseStateFeatures) - Method in class burlap.behavior.functionapproximation.dense.SparseToDenseFeatures
-
- setSpp(StatePolicyPainter) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.PolicyRenderLayer
-
Sets the state-wise policy painter
- setSpp(StatePolicyPainter) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
-
Sets the state-wise policy painter
- setStartStateGenerator(StateGenerator) - Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest
-
- setStateActionRenderLayer(StateActionRenderLayer, boolean) - Method in class burlap.visualizer.Visualizer
-
- setStateContext(State) - Method in interface burlap.mdp.auxiliary.stateconditiontest.StateConditionTestIterable
-
- setStateEnumerator(StateEnumerator) - Method in class burlap.mdp.singleagent.pomdp.beliefstate.TabularBeliefState
-
- setStateEnumerator(StateEnumerator) - Method in class burlap.mdp.singleagent.pomdp.beliefstate.TabularBeliefUpdate
-
- setStateEnumerator(StateEnumerator) - Method in class burlap.mdp.singleagent.pomdp.PODomain
-
Sets the StateEnumerator used by this domain to enumerate all underlying MDP states.
- setStateFeatures(DenseStateFeatures) - Method in class burlap.behavior.functionapproximation.dense.DenseCrossProductFeatures
-
- setStateGenerator(StateGenerator) - Method in class burlap.mdp.singleagent.environment.SimulatedEnvironment
-
- setStateModel(SampleStateModel) - Method in class burlap.mdp.singleagent.model.FactoredModel
-
- setStateSelectionMode(BoundedRTDP.StateSelectionMode) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
-
Sets the state selection mode used when choosing next states to expand.
- setStatesToVisualize(Collection<State>) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.PolicyRenderLayer
-
Sets the states to visualize
- setStatesToVisualize(Collection<State>) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionRenderLayer
-
Sets the states to visualize
- setStoredAbstraction(StateMapping) - Method in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQFactory
-
Sets the factory to provide Q-learning algorithms with the given state abstraction.
- setStoredMapAbstraction(StateMapping) - Method in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent
-
Sets the state abstraction that this agent will use
- setStrategy(Policy) - Method in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent
-
Sets the Q-learning policy that this agent will use (e.g., epsilon greedy)
- SetStrategyAgentFactory(SGDomain, Policy) - Constructor for class burlap.behavior.stochasticgames.agents.SetStrategySGAgent.SetStrategyAgentFactory
-
- SetStrategySGAgent - Class in burlap.behavior.stochasticgames.agents
-
A class for an agent who makes decisions by following a specified strategy and does not respond to the other player's actions.
- SetStrategySGAgent(SGDomain, Policy, String, SGAgentType) - Constructor for class burlap.behavior.stochasticgames.agents.SetStrategySGAgent
-
Initializes for the given domain in which the agent will play and the strategy that they will follow.
- SetStrategySGAgent.SetStrategyAgentFactory - Class in burlap.behavior.stochasticgames.agents
-
- setSvp(StateValuePainter) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionRenderLayer
-
Sets the state-wise value function painter
- setSvp(StateValuePainter) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
-
Sets the state-wise value function painter
- setSynchronizeJointActionSelectionAmongAgents(boolean) - Method in class burlap.behavior.stochasticgames.PolicyFromJointPolicy
-
Sets whether action selection of this agent's policy should be synchronized with the action selection of other agents
following the same underlying joint policy.
- setTargetAgent(int) - Method in class burlap.behavior.stochasticgames.JointPolicy
-
Sets the target privileged agent from which this joint policy is defined.
- setTargetAgent(int) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.ECorrelatedQJointPolicy
-
- setTargetAgent(int) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EGreedyJointPolicy
-
- setTargetAgent(int) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EGreedyMaxWellfare
-
- setTargetAgent(int) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EMinMaxPolicy
-
- setTemperature(double) - Method in class burlap.datastructures.BoltzmannDistribution
-
Sets the temperature value to use.
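As an illustrative sketch (not BURLAP's implementation), the temperature parameter of a Boltzmann distribution controls how sharply probability mass concentrates on the highest-preference entries: probabilities are proportional to exp(preference / temperature), so a large temperature flattens the distribution toward uniform.

```java
// Conceptual sketch of a Boltzmann (softmax) distribution over preferences.
// Illustrates only the role of temperature; not burlap.datastructures.BoltzmannDistribution itself.
public class BoltzmannSketch {

    // Returns probabilities proportional to exp(preference / temperature).
    public static double[] probabilities(double[] prefs, double temperature) {
        double max = Double.NEGATIVE_INFINITY;
        for (double v : prefs) max = Math.max(max, v); // subtract max for numerical stability
        double[] p = new double[prefs.length];
        double sum = 0.0;
        for (int i = 0; i < prefs.length; i++) {
            p[i] = Math.exp((prefs[i] - max) / temperature);
            sum += p[i];
        }
        for (int i = 0; i < p.length; i++) p[i] /= sum;
        return p;
    }
}
```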
- setTerminalStates(Set<Integer>) - Method in class burlap.domain.singleagent.graphdefined.GraphTF
-
- setTerminateOnTrue(boolean) - Method in class burlap.mdp.auxiliary.common.SinglePFTF
-
Sets whether a true grounded version of this class' propositional function is required for a state to be terminal,
or whether a false grounded version is required instead.
- setTerminationStates(StateConditionTest) - Method in class burlap.behavior.singleagent.options.SubgoalOption
-
- setTf(TerminalFunction) - Method in class burlap.domain.singleagent.blockdude.BlockDude
-
- setTf(TerminalFunction) - Method in class burlap.domain.singleagent.blocksworld.BlocksWorld
-
- setTf(TerminalFunction) - Method in class burlap.domain.singleagent.cartpole.CartPoleDomain
-
- setTf(TerminalFunction) - Method in class burlap.domain.singleagent.cartpole.InvertedPendulum
-
- setTf(TerminalFunction) - Method in class burlap.domain.singleagent.frostbite.FrostbiteDomain
-
- setTf(TerminalFunction) - Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain
-
- setTf(TerminalFunction) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
- setTf(TerminalFunction) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
- setTf(TerminalFunction) - Method in class burlap.domain.singleagent.mountaincar.MountainCar
-
- setTf(TerminalFunction) - Method in class burlap.mdp.auxiliary.stateconditiontest.TFGoalCondition
-
- setTf(TerminalFunction) - Method in class burlap.mdp.singleagent.model.FactoredModel
-
- setTheTaskSpec(TaskSpec) - Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueDomain
-
- setTHistory(double[]) - Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest
-
- setToCorrectModel() - Method in class burlap.domain.singleagent.cartpole.CartPoleDomain
-
Sets to use the correct physics model by Florian.
- setToIncorrectClassicModel() - Method in class burlap.domain.singleagent.cartpole.CartPoleDomain
-
Sets to the use the classic model by Barto, Sutton, and Anderson, which has incorrect friction forces and gravity
in the wrong direction
- setToIncorrectClassicModelWithCorrectGravity() - Method in class burlap.domain.singleagent.cartpole.CartPoleDomain
-
Sets to use the classic model by Barto, Sutton, and Anderson which has incorrect friction forces, but will use
correct gravity.
- setToStandardLunarLander() - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Sets the domain to use a standard set of physics and with a standard set of two thrust actions.
- setTransition(int, int, int, double) - Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain
-
Sets the probability p
for transitioning to state node tNode
after taking action number action
in state node srcNode
.
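The kind of tabular transition data that setTransition populates can be pictured as a three-dimensional table indexed by source node, action, and destination node, where each (source, action) slice must form a probability distribution. The sketch below is illustrative only, not GraphDefinedDomain's internal representation.

```java
// Sketch of a tabular transition model: prob[src][action][dst] = P(dst | src, action).
// Illustrative only; GraphDefinedDomain stores transitions differently internally.
public class GraphTransitionSketch {

    public static double[][][] makeTable(int numNodes, int numActions) {
        return new double[numNodes][numActions][numNodes];
    }

    // Returns true if the outgoing distribution for (src, action) sums to 1.
    public static boolean isNormalized(double[][][] prob, int src, int action) {
        double sum = 0.0;
        for (double v : prob[src][action]) sum += v;
        return Math.abs(sum - 1.0) < 1e-9;
    }
}
```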
- setTransitionDynamics(Map<Integer, Map<Integer, Set<GraphDefinedDomain.NodeTransitionProbability>>>) - Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain.GraphStateModel
-
- setTransitionDynamics(double[][]) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Will set the movement direction probabilities based on the action chosen.
- setup() - Method in class burlap.testing.TestBlockDude
-
- setup() - Method in class burlap.testing.TestGridWorld
-
- setup() - Method in class burlap.testing.TestHashing
-
- setup() - Method in class burlap.testing.TestPlanning
-
- setUpdater(BeliefUpdate) - Method in class burlap.mdp.singleagent.pomdp.BeliefAgent
-
- setupForNewEpisode() - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.Trial
-
Completes the last episode and sets up the data structures for the next episode
- setupForNewEpisode() - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.Trial
-
Completes the last episode and sets up the data structures for the next episode
- setUpPlottingConfiguration(int, int, int, int, TrialMode, PerformanceMetric...) - Method in class burlap.behavior.singleagent.auxiliary.performance.LearningAlgorithmExperimenter
-
Sets up the plotting configuration.
- setUpPlottingConfiguration(int, int, int, int, TrialMode, PerformanceMetric...) - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentExperimenter
-
Sets up the plotting configuration.
- setUseFeatureWiseLearningRate(boolean) - Method in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
-
Sets whether learning rate polls should be based on the VFA state feature ids, or the OO-MDP state.
- setUseMaxHeap(boolean) - Method in class burlap.datastructures.HashIndexedHeap
-
Sets whether this heap is a max heap or a min heap
- setUseReplaceTraces(boolean) - Method in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
-
Sets whether to use replacing eligibility traces rather than accumulating traces.
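The difference between the two trace styles can be sketched for a single visited state feature: an accumulating trace adds 1 on each visit, while a replacing trace resets to 1; both then decay by gamma * lambda each step. This is a conceptual sketch, not GradientDescentSarsaLam's code.

```java
// Sketch of accumulating vs. replacing eligibility traces for one visited feature.
// Illustrative only; not BURLAP's GradientDescentSarsaLam implementation.
public class TraceSketch {

    public static double accumulate(double trace) { return trace + 1.0; } // e <- e + 1
    public static double replace(double trace)    { return 1.0; }         // e <- 1

    // Every trace decays by gamma * lambda on each time step.
    public static double decay(double trace, double gamma, double lambda) {
        return trace * gamma * lambda;
    }
}
```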
- setUseVariableCSize(boolean) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling
-
Sets whether the number of state transition samples (C) should be variable with respect to the depth of the node.
- setUseVariableCSize(boolean) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
-
Sets whether the number of state transition samples (C) should be variable with respect to the depth of the node.
- setUsingMaxMargin(boolean) - Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest
-
- setV(DifferentiableSparseSampling.QAndQGradient) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling.DiffStateNode
-
- setValue(HashableState, double) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.MADynamicProgramming.BackupBasedQSource
-
Sets the value of the state in this objects value function map.
- setValueForLeafNodes(ValueFunction) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling
-
Sets the
ValueFunction
object to use for setting the value of leaf nodes.
- setValueForLeafNodes(ValueFunction) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
-
Sets the
ValueFunction
object to use for setting the value of leaf nodes.
- setValueFunctionInitialization(ValueFunction) - Method in class burlap.behavior.singleagent.planning.stochastic.DynamicProgramming
-
Sets the value function initialization to use.
- setValueFunctionToLowerBound() - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
-
Sets the value function to use to be the lower bound.
- setValueFunctionToUpperBound() - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
-
Sets the value function to use to be the upper bound.
- setValueStringRenderingFormat(int, Color, int, float, float) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.StateValuePainter2D
-
Sets the rendering format of the string displaying the value of each state.
- SetVarCommand - Class in burlap.shell.command.env
-
- SetVarCommand() - Constructor for class burlap.shell.command.env.SetVarCommand
-
- SetVarSGCommand - Class in burlap.shell.command.world
-
- SetVarSGCommand() - Constructor for class burlap.shell.command.world.SetVarSGCommand
-
- setVGrad(DifferentiableSparseSampling.QAndQGradient) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling.DiffStateNode
-
- setVInit(ValueFunction) - Method in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
-
Sets the value function initialization used at the start of planning.
- setVinitDim(int) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
-
- setVinitFvGen(DenseStateFeatures) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
-
- setVisualizer(Visualizer) - Method in class burlap.shell.BurlapShell
-
- setVmax(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
-
- setVmax(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Sets the maximum velocity of the agent (the agent cannot move faster than this value).
- setVmax(double) - Method in class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator
-
Sets the maximum velocity that a generated state can have.
- setVmin(double) - Method in class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator
-
Sets the minimum velocity that a generated state can have.
- setVRange(double, double) - Method in class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator
-
Sets the random velocity range that a generated state can have.
- setW(World) - Method in class burlap.shell.visual.SGVisualExplorer
-
Sets the
World
associated with this visual explorer and shell.
- setWelcomeMessage(String) - Method in class burlap.shell.BurlapShell
-
- setWorld(World) - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
-
- setWorld(World) - Method in class burlap.shell.SGWorldShell
-
- setWrongDoorReward(double) - Method in class burlap.domain.singleagent.pomdp.tiger.TigerDomain
-
- setXmax(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
-
- setXmax(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Sets the maximum x position of the lander (the agent cannot cross this boundary)
- setXmax(double) - Method in class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator
-
Sets the maximum x-value that a generated state can have.
- setXmin(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
-
- setXmin(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Sets the minimum x position of the lander (the agent cannot cross this boundary)
- setXmin(double) - Method in class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator
-
Sets the minimum x-value that a generated state can have.
- setXRange(double, double) - Method in class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator
-
Sets the random x-value range that a generated state can have.
- setXY(int, int) - Method in class burlap.domain.singleagent.blockdude.state.BlockDudeCell
-
- setXYKeys(Object, Object, VariableDomain, VariableDomain, double, double) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.PolicyGlyphPainter2D
-
Sets the variable keys for the x and y variables in the state and the width of cells along those domains.
- setXYKeys(Object, Object, VariableDomain, VariableDomain, double, double) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.StateValuePainter2D
-
Sets the variable keys for the x and y variables in the state and the width of cells along those domains.
- setYmax(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
-
- setYmax(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Sets the maximum y position of the lander (the agent cannot cross this boundary)
- setYmin(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
-
- setYmin(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Sets the minimum y position of the lander (the agent cannot cross this boundary)
- sFeatures - Variable in class burlap.behavior.functionapproximation.sparse.SparseCrossProductFeatures
-
- sg - Variable in class burlap.domain.singleagent.pomdp.tiger.TigerModel
-
- SGAgent - Interface in burlap.mdp.stochasticgames.agent
-
This interface defines the shell code for creating agents
that can make decisions in multi-agent stochastic game worlds.
- SGAgentBase - Class in burlap.mdp.stochasticgames.agent
-
- SGAgentBase() - Constructor for class burlap.mdp.stochasticgames.agent.SGAgentBase
-
- SGAgentType - Class in burlap.mdp.stochasticgames.agent
-
This class specifies the type of agent a stochastic games agent can be.
- SGAgentType(String, List<ActionType>) - Constructor for class burlap.mdp.stochasticgames.agent.SGAgentType
-
Creates a new agent type with a given name, and actions available to the agent.
- SGBackupOperator - Interface in burlap.behavior.stochasticgames.madynamicprogramming
-
A stochastic games backup operator to be used in multi-agent Q-learning or value function planning.
- SGDomain - Class in burlap.mdp.stochasticgames
-
This class is used to define Stochastic Games Domains.
- SGDomain() - Constructor for class burlap.mdp.stochasticgames.SGDomain
-
- SGNaiveQFactory - Class in burlap.behavior.stochasticgames.agents.naiveq
-
- SGNaiveQFactory(SGDomain, double, double, double, HashableStateFactory) - Constructor for class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQFactory
-
Initializes the factory.
- SGNaiveQFactory(SGDomain, double, double, double, HashableStateFactory, StateMapping) - Constructor for class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQFactory
-
Initializes the factory.
- SGNaiveQLAgent - Class in burlap.behavior.stochasticgames.agents.naiveq
-
A Tabular Q-learning [1] algorithm for stochastic games formalisms.
- SGNaiveQLAgent(SGDomain, double, double, HashableStateFactory) - Constructor for class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent
-
Initializes with a default Q-value of 0 and a 0.1 epsilon greedy policy/strategy
- SGNaiveQLAgent(SGDomain, double, double, double, HashableStateFactory) - Constructor for class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent
-
Initializes with a default 0.1 epsilon greedy policy/strategy
- SGNaiveQLAgent(SGDomain, double, double, QFunction, HashableStateFactory) - Constructor for class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent
-
Initializes with a default 0.1 epsilon greedy policy/strategy
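The tabular Q-learning rule that agents such as SGNaiveQLAgent build on is Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)). The sketch below shows only this update step, not BURLAP's agent code.

```java
// Sketch of the standard tabular Q-learning update.
// Illustrative only; not SGNaiveQLAgent's implementation.
public class QUpdateSketch {

    public static double update(double q, double alpha, double reward,
                                double gamma, double[] nextQs) {
        double maxNext = Double.NEGATIVE_INFINITY;
        for (double v : nextQs) maxNext = Math.max(maxNext, v);
        return q + alpha * (reward + gamma * maxNext - q);
    }
}
```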
- SGQWActionHistory - Class in burlap.behavior.stochasticgames.agents.naiveq.history
-
A Tabular Q-learning [1] algorithm for stochastic games formalisms that augments states with the actions each agent took in n
previous time steps.
- SGQWActionHistory(SGDomain, double, double, HashableStateFactory, int) - Constructor for class burlap.behavior.stochasticgames.agents.naiveq.history.SGQWActionHistory
-
Initializes the learning algorithm using a 0.1 epsilon greedy learning strategy/policy
- SGQWActionHistoryFactory - Class in burlap.behavior.stochasticgames.agents.naiveq.history
-
An agent factory for Q-learning with history agents.
- SGQWActionHistoryFactory(SGDomain, double, double, HashableStateFactory, int) - Constructor for class burlap.behavior.stochasticgames.agents.naiveq.history.SGQWActionHistoryFactory
-
Initializes the factory
- SGVisualExplorer - Class in burlap.shell.visual
-
This class allows you act as all of the agents in a stochastic game (controlled by a
World
object)
by choosing actions for each of them to take in specific states.
- SGVisualExplorer(SGDomain, Visualizer, State) - Constructor for class burlap.shell.visual.SGVisualExplorer
-
Initializes the data members for the visual explorer.
- SGVisualExplorer(SGDomain, Visualizer, State, int, int) - Constructor for class burlap.shell.visual.SGVisualExplorer
-
Initializes the data members for the visual explorer.
- SGVisualExplorer(SGDomain, World, Visualizer, int, int) - Constructor for class burlap.shell.visual.SGVisualExplorer
-
Initializes the data members for the visual explorer.
- SGWorldShell - Class in burlap.shell
-
- SGWorldShell(Domain, InputStream, PrintStream, World) - Constructor for class burlap.shell.SGWorldShell
-
- SGWorldShell(Domain, World) - Constructor for class burlap.shell.SGWorldShell
-
Creates a SGWorldShell for std in and std out
- SGWorldShell(SGDomain, State) - Constructor for class burlap.shell.SGWorldShell
-
Creates an SGWorldShell with a new world using the domain and using std in and std out.
- sh - Variable in class burlap.behavior.singleagent.learning.actorcritic.critics.TDLambda.StateEligibilityTrace
-
The hashed state with which the eligibility value is associated.
- sh - Variable in class burlap.behavior.singleagent.learning.tdmethods.SarsaLam.EligibilityTrace
-
The state for this trace
- sh - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP.StateSelectionAndExpectedGap
-
The selected state
- sh - Variable in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling.HashedHeightState
-
The hashed state
- sh - Variable in class burlap.behavior.singleagent.planning.stochastic.valueiteration.PrioritizedSweeping.BPTRNode
-
- ShallowCopyState - Annotation Type in burlap.mdp.core.state.annotations
-
A marker for
State
implementations that indicates that their copy operation is shallow.
- ShallowIdentityStateMapping - Class in burlap.mdp.auxiliary.common
-
A StateMapping class that returns the input state without copying it.
- ShallowIdentityStateMapping() - Constructor for class burlap.mdp.auxiliary.common.ShallowIdentityStateMapping
-
- shape - Variable in class burlap.domain.singleagent.gridworld.GridWorldVisualizer.CellPainter
-
- ShapedRewardFunction - Class in burlap.behavior.singleagent.shaping
-
This abstract class is used to define shaped reward functions.
- ShapedRewardFunction(RewardFunction) - Constructor for class burlap.behavior.singleagent.shaping.ShapedRewardFunction
-
Initializes with the base objective task reward function.
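A shaped reward adds an auxiliary term to the base objective reward. The classic safe form is potential-based shaping, r'(s,a,s') = r(s,a,s') + gamma * phi(s') - phi(s), which preserves the optimal policy; the sketch below illustrates only that form and is not BURLAP's (more general) ShapedRewardFunction.

```java
// Sketch of potential-based reward shaping: the shaped reward is the base
// reward plus the discounted change in a potential function phi.
// Illustrative only; ShapedRewardFunction supports arbitrary shaping terms.
public class ShapingSketch {

    public static double shaped(double baseReward, double gamma,
                                double phiS, double phiSPrime) {
        return baseReward + gamma * phiSPrime - phiS;
    }
}
```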
- shell - Variable in class burlap.shell.visual.SGVisualExplorer
-
- shell - Variable in class burlap.shell.visual.VisualExplorer
-
- ShellCommand - Interface in burlap.shell.command
-
An interface for implementing shell commands.
- ShellCommandEvent(String, ShellCommand, int) - Constructor for class burlap.shell.ShellObserver.ShellCommandEvent
-
Initializes.
- ShellObserver - Interface in burlap.shell
-
- ShellObserver.ShellCommandEvent - Class in burlap.shell
-
Stores information about a command event in various public data members.
- shouldDecomposeOptions - Variable in class burlap.behavior.singleagent.learning.tdmethods.QLearning
-
Whether options should be decomposed into actions in the returned
Episode
objects.
- shouldDecomposeOptions - Variable in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
-
Whether options should be decomposed into actions in the returned
Episode
objects.
- shouldRereunPolicyIteration(Episode) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
-
Returns whether LSPI should be rerun given the latest learning episode results.
- shouldRescaleValues - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.StateValuePainter
-
Indicates whether this painter should scale its rendering of values to whatever it is told the minimum and maximum values are.
- showPolicy - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
-
The button to enable the visualization of the policy
- shuffleGroundedActions(List<Action>, int, int) - Method in class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.DFS
-
Shuffles the order of actions on the index range [s, e)
- significance - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
the significance level used for confidence intervals.
- significance - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
-
the significance level used for confidence intervals.
- SimpleAction - Class in burlap.mdp.core.action
-
A simple implementation of
Action
for unparameterized actions.
- SimpleAction() - Constructor for class burlap.mdp.core.action.SimpleAction
-
- SimpleAction(String) - Constructor for class burlap.mdp.core.action.SimpleAction
-
- SimpleHashableStateFactory - Class in burlap.statehashing.simple
-
- SimpleHashableStateFactory() - Constructor for class burlap.statehashing.simple.SimpleHashableStateFactory
-
Default constructor: object identifier independent and no hash code caching.
- SimpleHashableStateFactory(boolean) - Constructor for class burlap.statehashing.simple.SimpleHashableStateFactory
-
Initializes with no hash code caching.
- SimulatedEnvironment - Class in burlap.mdp.singleagent.environment
-
- SimulatedEnvironment(SADomain) - Constructor for class burlap.mdp.singleagent.environment.SimulatedEnvironment
-
- SimulatedEnvironment(SADomain, State) - Constructor for class burlap.mdp.singleagent.environment.SimulatedEnvironment
-
- SimulatedEnvironment(SADomain, StateGenerator) - Constructor for class burlap.mdp.singleagent.environment.SimulatedEnvironment
-
- SimulatedEnvironment(SampleModel) - Constructor for class burlap.mdp.singleagent.environment.SimulatedEnvironment
-
- SimulatedEnvironment(SampleModel, State) - Constructor for class burlap.mdp.singleagent.environment.SimulatedEnvironment
-
- SimulatedEnvironment(SampleModel, StateGenerator) - Constructor for class burlap.mdp.singleagent.environment.SimulatedEnvironment
-
- SimulatedPOEnvironment - Class in burlap.mdp.singleagent.pomdp
-
- SimulatedPOEnvironment(PODomain) - Constructor for class burlap.mdp.singleagent.pomdp.SimulatedPOEnvironment
-
- SimulatedPOEnvironment(PODomain, State) - Constructor for class burlap.mdp.singleagent.pomdp.SimulatedPOEnvironment
-
- SimulatedPOEnvironment(PODomain, StateGenerator) - Constructor for class burlap.mdp.singleagent.pomdp.SimulatedPOEnvironment
-
- sIndex - Variable in class burlap.behavior.stochasticgames.agents.naiveq.history.HistoryState
-
- SingleGoalPFRF - Class in burlap.mdp.singleagent.common
-
This class defines a reward function that returns a goal reward when any grounded form of a propositional
function is true in the resulting state and a default non-goal reward otherwise.
- SingleGoalPFRF(PropositionalFunction) - Constructor for class burlap.mdp.singleagent.common.SingleGoalPFRF
-
Initializes the reward function to return 1 when any grounded form of pf is true in the resulting
state.
- SingleGoalPFRF(PropositionalFunction, double, double) - Constructor for class burlap.mdp.singleagent.common.SingleGoalPFRF
-
Initializes the reward function to return the specified goal reward when any grounded form of pf is true in the resulting
state and the specified non-goal reward otherwise.
- SinglePFSCT - Class in burlap.mdp.auxiliary.stateconditiontest
-
A state condition class that returns true when ever any grounded version of a specified
propositional function is true in a state.
- SinglePFSCT(PropositionalFunction) - Constructor for class burlap.mdp.auxiliary.stateconditiontest.SinglePFSCT
-
Initializes with the propositional function that is checked for state satisfaction
- SinglePFTF - Class in burlap.mdp.auxiliary.common
-
This class defines a terminal function that terminates in states where there exists a grounded version of a specified
propositional function that is true in the state or alternatively, when there is a grounded version that is false in the state.
- SinglePFTF(PropositionalFunction) - Constructor for class burlap.mdp.auxiliary.common.SinglePFTF
-
Initializes the propositional function that will cause the state to be terminal when any Grounded version of
pf is true.
- SinglePFTF(PropositionalFunction, boolean) - Constructor for class burlap.mdp.auxiliary.common.SinglePFTF
-
Initializes the propositional function that will cause the state to be terminal when any Grounded version of
pf is true or alternatively false.
- SingleStageNormalFormGame - Class in burlap.domain.stochasticgames.normalform
-
This stochastic game domain generator provides methods to create N-player single stage games.
- SingleStageNormalFormGame(String[][], double[][][]) - Constructor for class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
-
A constructor for bimatrix games with specified action names.
- SingleStageNormalFormGame(double[][], double[][]) - Constructor for class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
-
A constructor for a bimatrix game where the row player payoffs and column player payoffs are provided in two different 2D double matrices.
- SingleStageNormalFormGame(double[][]) - Constructor for class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
-
A constructor for a bimatrix zero sum game.
- SingleStageNormalFormGame(String[][]) - Constructor for class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
-
A constructor for games with a symmetric number of actions for each player.
- SingleStageNormalFormGame(List<List<String>>) - Constructor for class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
-
A constructor for games with an asymmetric number of actions for each player.
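The single-matrix zero-sum constructor above implies the usual bimatrix convention: the column player's payoff is the negation of the row player's. As a hedged sketch (not SingleStageNormalFormGame's code), that derivation looks like:

```java
// Sketch of the zero-sum bimatrix convention: column payoffs are the
// negated row payoffs. Illustrative only.
public class ZeroSumSketch {

    public static double[][] columnPayoffs(double[][] rowPayoffs) {
        int rows = rowPayoffs.length, cols = rowPayoffs[0].length;
        double[][] col = new double[rows][cols];
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                col[i][j] = -rowPayoffs[i][j];
        return col;
    }
}
```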
- SingleStageNormalFormGame.ActionNameMap - Class in burlap.domain.stochasticgames.normalform
-
A wrapper for a HashMap from strings to ints used to map action names to their action index.
- SingleStageNormalFormGame.AgentPayoutFunction - Class in burlap.domain.stochasticgames.normalform
-
A class for defining a payout function for a single agent for each possible strategy profile.
- SingleStageNormalFormGame.MatrixAction - Class in burlap.domain.stochasticgames.normalform
-
- SingleStageNormalFormGame.SingleStageNormalFormJointRewardFunction - Class in burlap.domain.stochasticgames.normalform
-
A Joint Reward Function class that uses the parent domain generator's payout matrix to determine payouts for any given strategy profile.
- SingleStageNormalFormGame.StrategyProfile - Class in burlap.domain.stochasticgames.normalform
-
A strategy profile represented as an array of action indices that is hashable.
- SingleStageNormalFormJointRewardFunction(int, SingleStageNormalFormGame.ActionNameMap[], SingleStageNormalFormGame.AgentPayoutFunction[]) - Constructor for class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.SingleStageNormalFormJointRewardFunction
-
- size() - Method in class burlap.behavior.singleagent.learning.lspi.SARSData
-
The number of SARS tuples stored.
- size() - Method in class burlap.datastructures.HashedAggregator
-
Returns the number of keys stored.
- size - Variable in class burlap.datastructures.HashIndexedHeap
-
Number of objects in the heap
- size() - Method in class burlap.datastructures.HashIndexedHeap
-
Returns the size of the heap
- size() - Method in class burlap.datastructures.StochasticTree
-
Returns the number of objects in this tree
- size - Variable in class burlap.domain.singleagent.frostbite.state.FrostbitePlatform
-
- size() - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.ActionNameMap
-
- size() - Method in class burlap.mdp.stochasticgames.JointAction
-
Returns the number of actions in this joint action.
- SoftmaxOperator - Class in burlap.behavior.singleagent.planning.stochastic.dpoperator
-
A softmax/Boltzmann operator.
- SoftmaxOperator() - Constructor for class burlap.behavior.singleagent.planning.stochastic.dpoperator.SoftmaxOperator
-
Initializes with beta = 1.0
- SoftmaxOperator(double) - Constructor for class burlap.behavior.singleagent.planning.stochastic.dpoperator.SoftmaxOperator
-
Initializes.
- softTieDelta - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.PolicyGlyphPainter2D
-
The max probability difference from the most likely action for which an action that is not the most likely will still be rendered under the
MAXACTIONSOFTTIE rendering style.
- SoftTimeInverseDecayLR - Class in burlap.behavior.learningrate
-
Implements a learning rate decay schedule where the learning rate at time t is alpha_0 * (n_0 + 1) / (n_0 + t), where alpha_0 is the initial learning rate and n_0 is a parameter.
- SoftTimeInverseDecayLR(double, double) - Constructor for class burlap.behavior.learningrate.SoftTimeInverseDecayLR
-
Initializes with an initial learning rate and decay constant shift for a state independent learning rate.
- SoftTimeInverseDecayLR(double, double, double) - Constructor for class burlap.behavior.learningrate.SoftTimeInverseDecayLR
-
Initializes with an initial learning rate and decay constant shift (n_0) for a state independent learning rate that will decay to a value no smaller than minimumLearningRate
- SoftTimeInverseDecayLR(double, double, HashableStateFactory, boolean) - Constructor for class burlap.behavior.learningrate.SoftTimeInverseDecayLR
-
Initializes with an initial learning rate and decay constant shift (n_0) for a state or state-action (or state feature-action) dependent learning rate.
- SoftTimeInverseDecayLR(double, double, double, HashableStateFactory, boolean) - Constructor for class burlap.behavior.learningrate.SoftTimeInverseDecayLR
-
Initializes with an initial learning rate and decay constant shift (n_0) for a state or state-action (or state feature-action) dependent learning rate that will decay to a value no smaller than minimumLearningRate.
If this learning rate function is to be used for state features, rather than states,
then the hashing factory can be null.
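The decay schedule described above, alpha_t = alpha_0 * (n_0 + 1) / (n_0 + t) clipped below at a minimum rate, can be sketched directly; this is an illustrative formula evaluation, not SoftTimeInverseDecayLR's implementation.

```java
// Sketch of the soft time-inverse decay schedule:
// alpha_t = alpha_0 * (n_0 + 1) / (n_0 + t), floored at minRate.
// Illustrative only; not BURLAP's SoftTimeInverseDecayLR.
public class DecaySketch {

    public static double rate(double alpha0, double n0, int t, double minRate) {
        double a = alpha0 * (n0 + 1.0) / (n0 + t);
        return Math.max(a, minRate);
    }
}
```

With n_0 = 9, the rate stays at alpha_0 for t = 1 and roughly halves by t = 10, illustrating how n_0 delays the onset of decay.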
- SoftTimeInverseDecayLR.MutableInt - Class in burlap.behavior.learningrate
-
A class for storing a mutable int value object
- SoftTimeInverseDecayLR.StateWiseTimeIndex - Class in burlap.behavior.learningrate
-
A class for storing a time index for a state, or a time index for each action for a given state
- solve(double[][], double[][]) - Method in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.BimatrixEquilibriumSolver
-
Solves and caches the solution for the given bimatrix.
- solver - Variable in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.EquilibriumPlayingSGAgent
-
The solution concept to be solved for the immediate rewards.
- SolverDerivedPolicy - Interface in burlap.behavior.policy
-
An interface for defining policies that refer to
MDPSolverInterface
objects to define the policy.
- solverInit(SADomain, double, HashableStateFactory) - Method in class burlap.behavior.singleagent.MDPSolver
-
- solverInit(SADomain, double, HashableStateFactory) - Method in interface burlap.behavior.singleagent.MDPSolverInterface
-
Initializes the solver with the common elements.
- someGroundingIsTrue(OOState) - Method in class burlap.mdp.core.oo.propositional.PropositionalFunction
-
- sortActionsWithOptionsFirst() - Method in class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.DFS
-
Reorders the planner's action list so that options are in the front of the list.
- sourceDomain - Variable in class burlap.behavior.singleagent.learning.modellearning.models.TabularModel
-
The source actual domain object for which actions will be modeled.
- sourceLearningRateFunction - Variable in class burlap.behavior.functionapproximation.dense.fourier.FourierBasisLearningRateWrapper
-
- sourceModel - Variable in class burlap.behavior.singleagent.learning.modellearning.rmax.RMaxModel
-
- sourcePolicy - Variable in class burlap.behavior.policy.CachedPolicy
-
The source policy that gets cached
- sourcePolicy - Variable in class burlap.behavior.singleagent.learning.modellearning.rmax.UnmodeledFavoredPolicy
-
- sp - Variable in class burlap.behavior.singleagent.learning.lspi.SARSData.SARS
-
The next state
- span() - Method in class burlap.mdp.core.state.vardomain.VariableDomain
-
Returns the spanning size of the domain; that is, upper - lower
- SparseCrossProductFeatures - Class in burlap.behavior.functionapproximation.sparse
-
- SparseCrossProductFeatures(SparseStateFeatures) - Constructor for class burlap.behavior.functionapproximation.sparse.SparseCrossProductFeatures
-
- SparseCrossProductFeatures(SparseStateFeatures, Map<Action, SparseCrossProductFeatures.FeaturesMap>, int) - Constructor for class burlap.behavior.functionapproximation.sparse.SparseCrossProductFeatures
-
- SparseCrossProductFeatures.FeaturesMap - Class in burlap.behavior.functionapproximation.sparse
-
- SparseGradient() - Constructor for class burlap.behavior.functionapproximation.FunctionGradient.SparseGradient
-
Initializes with the gradient unspecified for any weights.
- SparseGradient(int) - Constructor for class burlap.behavior.functionapproximation.FunctionGradient.SparseGradient
-
Initializes with the gradient unspecified, but reserves space for the given capacity
- SparseSampling - Class in burlap.behavior.singleagent.planning.stochastic.sparsesampling
-
An implementation of the Sparse Sampling (SS) [1] planning algorithm.
- SparseSampling(SADomain, double, HashableStateFactory, int, int) - Constructor for class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
-
Initializes.
- SparseSampling.HashedHeightState - Class in burlap.behavior.singleagent.planning.stochastic.sparsesampling
-
Tuple for a state and its height in a tree that can be hashed for quick retrieval.
- SparseSampling.StateNode - Class in burlap.behavior.singleagent.planning.stochastic.sparsesampling
-
A class for state nodes.
- SparseStateActionFeatures - Interface in burlap.behavior.functionapproximation.sparse
-
- sparseStateFeatures - Variable in class burlap.behavior.functionapproximation.dense.SparseToDenseFeatures
-
- sparseStateFeatures - Variable in class burlap.behavior.functionapproximation.sparse.LinearVFA
-
The state features
- SparseStateFeatures - Interface in burlap.behavior.functionapproximation.sparse
-
An interface for defining a database of state features that can be returned for any given input state or input state-action pair.
- SparseToDenseFeatures - Class in burlap.behavior.functionapproximation.dense
-
- SparseToDenseFeatures(SparseStateFeatures) - Constructor for class burlap.behavior.functionapproximation.dense.SparseToDenseFeatures
-
Initializes.
- specificObjectPainters - Variable in class burlap.visualizer.OOStatePainter
-
Map of painters that define how to paint specific objects; if an object appears in both the specific and general lists, the specific painter is used
- specs() - Method in class burlap.behavior.singleagent.auxiliary.gridset.FlatStateGridder
-
Returns the set of all grid specs defined.
- spp - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.PolicyRenderLayer
-
Painter used to visualize the policy
- spp - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
-
Painter used to visualize the policy
- sprime - Variable in class burlap.behavior.singleagent.learning.actorcritic.CritiqueResult
-
The state to which the agent transitioned when it took action a in state s.
- sPrimeActionFeatures - Variable in class burlap.behavior.singleagent.learning.lspi.LSPI.SSFeatures
-
Next state-action features.
- src - Variable in class burlap.mdp.auxiliary.common.ConstantStateGenerator
-
- srcAction - Variable in class burlap.behavior.policy.support.AnnotatedAction
-
- srcTerminateStates - Variable in class burlap.behavior.singleagent.options.model.BFSMarkovOptionModel
-
- srender - Variable in class burlap.visualizer.Visualizer
-
- SSFeatures(double[], double[]) - Constructor for class burlap.behavior.singleagent.learning.lspi.LSPI.SSFeatures
-
Initializes.
- stack(BlocksWorldState, ObjectParameterizedAction) - Method in class burlap.domain.singleagent.blocksworld.BWModel
-
- StackActionType(String) - Constructor for class burlap.domain.singleagent.blocksworld.BlocksWorld.StackActionType
-
- start() - Method in class burlap.debugtools.MyTimer
-
Starts the timer if it is not running.
- start() - Method in class burlap.shell.BurlapShell
-
- startExperiment() - Method in class burlap.behavior.singleagent.auxiliary.performance.LearningAlgorithmExperimenter
-
Starts the experiment and runs all trials for all agents.
- startExperiment() - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentExperimenter
-
Starts the experiment and runs all trials for all agents.
- startGUI() - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
Launches the GUI and automatic refresh thread.
- startGUI() - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
-
Launches the GUI and automatic refresh thread.
- startLiveStatePolling(int) - Method in class burlap.shell.visual.VisualExplorer
-
Starts a thread that polls this explorer's
Environment
every
msPollDelay milliseconds for its current state and updates the visualizer to that state.
- startNewAgent(String) - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
Informs the plotter that data collection for a new agent should begin.
- startNewExperiment() - Method in interface burlap.behavior.singleagent.auxiliary.performance.ExperimentalEnvironment
-
- startNewTrial() - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
Informs the plotter that a new trial of the current agent is beginning.
- startNewTrial() - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.DatasetsAndTrials
-
Creates a new trial object and adds it to the end of the list of trials.
- startNewTrial() - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
-
Initializes the datastructures for a new trial.
- startStateGenerator - Variable in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest
-
The initial state generator that models the initial states from which the expert trajectories were drawn
- state(int) - Method in class burlap.behavior.singleagent.Episode
-
Returns the state observed at time step t.
- state - Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTStateNode
-
The (hashed) state this node wraps
- state(int) - Method in class burlap.behavior.stochasticgames.GameEpisode
-
Returns the state stored at time step t where t=0 refers to the initial state
- State - Interface in burlap.mdp.core.state
-
A State instance is used to define the state of an environment or an observation from the environment.
- stateActionFeatures - Variable in class burlap.behavior.functionapproximation.sparse.LinearVFA
-
The State-action features based on the cross product of state features and actions
- StateActionRenderLayer - Class in burlap.visualizer
-
A class for rendering state-action events.
- StateActionRenderLayer() - Constructor for class burlap.visualizer.StateActionRenderLayer
-
- stateActionWeights - Variable in class burlap.behavior.functionapproximation.dense.DenseLinearVFA
-
The function weights when performing Q-value function approximation.
- stateActionWeights - Variable in class burlap.behavior.functionapproximation.dense.DenseStateActionLinearVFA
-
The function weights when performing Q-value function approximation.
- StateBelief(State, double) - Constructor for class burlap.mdp.singleagent.pomdp.beliefstate.EnumerableBeliefState.StateBelief
-
Initializes
- stateClass(String) - Method in interface burlap.mdp.core.oo.OODomain
-
Returns the Java class used to define an OO-MDP object class with the given name.
- stateClass(String) - Method in class burlap.mdp.singleagent.oo.OOSADomain
-
- stateClass(String) - Method in class burlap.mdp.stochasticgames.oo.OOSGDomain
-
- stateClasses() - Method in interface burlap.mdp.core.oo.OODomain
-
Returns the Java classes used to define OO-MDP object classes.
- stateClasses() - Method in class burlap.mdp.singleagent.oo.OOSADomain
-
- stateClasses() - Method in class burlap.mdp.stochasticgames.oo.OOSGDomain
-
- stateClassesMap - Variable in class burlap.mdp.singleagent.oo.OOSADomain
-
- stateClassesMap - Variable in class burlap.mdp.stochasticgames.oo.OOSGDomain
-
- StateConditionTest - Interface in burlap.mdp.auxiliary.stateconditiontest
-
An interface for defining classes that check for certain conditions in states.
- StateConditionTestIterable - Interface in burlap.mdp.auxiliary.stateconditiontest
-
An extension of the StateConditionTest that is iterable.
- stateConsole - Variable in class burlap.shell.visual.SGVisualExplorer
-
- stateConsole - Variable in class burlap.shell.visual.VisualExplorer
-
- stateDepthIndex - Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
-
- StateDomain - Interface in burlap.mdp.core.state.vardomain
-
An interface extension for when a
State
can specify the numeric domain of one or more of its variables.
- StateEligibilityTrace(HashableState, double, TDLambda.VValue) - Constructor for class burlap.behavior.singleagent.learning.actorcritic.critics.TDLambda.StateEligibilityTrace
-
Initializes with hashed state, eligibility value and the value function value associated with the state.
- StateEnumerator - Class in burlap.behavior.singleagent.auxiliary
-
For some algorithms, it is useful to have an explicit unique state identifier for each possible state, since the hashcode of a state cannot reliably give
a unique number.
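The enumeration idea above can be sketched by assigning each distinct state a sequential id the first time it is seen, instead of trusting `hashCode()` to be unique. This is a hypothetical illustration: the class name `EnumeratorSketch` and the use of plain `String` keys are assumptions; BURLAP's StateEnumerator works with HashableState objects produced by a HashableStateFactory.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of state enumeration: each new distinct state gets
// the next sequential id; repeated states map back to their existing id.
public class EnumeratorSketch {

    private final Map<String, Integer> ids = new HashMap<>();

    // States are simplified to String keys here; a real enumerator would
    // key on hashed state objects.
    public int getEnumeratedID(String state) {
        // Reading ids.size() inside the mapping function is safe because
        // computeIfAbsent evaluates it before inserting the new entry.
        return ids.computeIfAbsent(state, s -> ids.size());
    }

    public int numStatesEnumerated() {
        return ids.size();
    }
}
```

The key property is that ids are dense (0, 1, 2, ...), so they can index directly into arrays such as a belief vector.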
- StateEnumerator(Domain, HashableStateFactory) - Constructor for class burlap.behavior.singleagent.auxiliary.StateEnumerator
-
Constructs
- stateEnumerator - Variable in class burlap.mdp.singleagent.pomdp.beliefstate.TabularBeliefState
-
A state enumerator for determining the index of MDP states in the belief vector.
- stateEnumerator - Variable in class burlap.mdp.singleagent.pomdp.beliefstate.TabularBeliefUpdate
-
- stateEnumerator - Variable in class burlap.mdp.singleagent.pomdp.PODomain
-
The underlying MDP state enumerator
- StateFeature - Class in burlap.behavior.functionapproximation.sparse
-
A class for associating a state feature identifier with a value of that state feature
- StateFeature(int, double) - Constructor for class burlap.behavior.functionapproximation.sparse.StateFeature
-
Initializes.
- stateFeatures - Variable in class burlap.behavior.functionapproximation.dense.DenseCrossProductFeatures
-
The state features
- stateFeatures - Variable in class burlap.behavior.functionapproximation.dense.DenseLinearVFA
-
The state feature vector generator used for linear value function approximation.
- stateFlattener - Variable in class burlap.domain.singleagent.rlglue.RLGlueEnvironment
-
Used to flatten states into a vector representation
- stateForId(int) - Method in class burlap.mdp.singleagent.pomdp.beliefstate.TabularBeliefState
-
Returns the corresponding MDP state for the provided unique identifier.
- stateFromObservation(Observation) - Static method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueDomain
-
Creates a BURLAP
State
from a RLGlue
Observation
.
- stateGenerator - Variable in class burlap.domain.singleagent.rlglue.RLGlueEnvironment
-
The state generator for generating states for each episode
- StateGenerator - Interface in burlap.mdp.auxiliary
-
An interface for generating State objects.
- stateGenerator - Variable in class burlap.mdp.singleagent.environment.SimulatedEnvironment
-
- stateHash(State) - Method in class burlap.behavior.singleagent.MDPSolver
-
A shorthand method for hashing a state.
- stateHash - Variable in class burlap.behavior.stochasticgames.agents.naiveq.history.SGQWActionHistoryFactory
-
The state hashing factory the Q-learning algorithm will use
- stateHash - Variable in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQFactory
-
The state hashing factory the Q-learning algorithm will use
- stateHash(State) - Method in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent
-
First abstracts state s, and then returns the
HashableState
object for the abstracted state.
- StateMapping - Interface in burlap.mdp.auxiliary
-
A state mapping interface that maps one state into another state.
- stateModel - Variable in class burlap.mdp.singleagent.model.FactoredModel
-
- StateNode(HashableState, int) - Constructor for class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling.StateNode
-
Creates a node for the given hashed state at the given height
- stateNodeConstructor - Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
-
- stateNodes - Variable in class burlap.behavior.singleagent.learning.modellearning.models.TabularModel
-
A mapping from (hashed) states to state nodes that store transition statistics
- StatePainter - Interface in burlap.visualizer
-
This class paints general properties of a state/domain that may not be represented
by any specific object instance data.
- statePainters - Variable in class burlap.visualizer.StateRenderLayer
-
List of painters that paint static, non-object-defined properties of the domain
- StatePolicyPainter - Interface in burlap.behavior.singleagent.auxiliary.valuefunctionvis
-
An interface for painting a representation of the policy for a specific state onto a 2D Graphics context.
- StateReachability - Class in burlap.behavior.singleagent.auxiliary
-
This class provides methods for finding the set of reachable states from a source state.
- StateReference() - Constructor for class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgent.StateReference
-
- StateReference() - Constructor for class burlap.behavior.stochasticgames.agents.interfacing.singleagent.LearningAgentToSGAgentInterface.StateReference
-
- StateRenderLayer - Class in burlap.visualizer
-
This class provides 2D visualization of states by being provided a set of state painters to iteratively call to paint
onto the canvas.
- StateRenderLayer() - Constructor for class burlap.visualizer.StateRenderLayer
-
- stateRepresentations - Variable in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent
-
A map from hashed states to the internal state representation for the states stored in the q-table.
- states - Variable in class burlap.behavior.stochasticgames.GameEpisode
-
The sequence of states
- states - Variable in class burlap.behavior.stochasticgames.madynamicprogramming.dpplanners.MAValueIteration
-
The set of states that have been found
- StateSelectionAndExpectedGap(HashableState, double) - Constructor for class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP.StateSelectionAndExpectedGap
-
Initializes.
- statesEqual(State, State) - Method in class burlap.statehashing.simple.IDSimpleHashableState
-
Returns true if the two input states are equal.
- statesEqual(State, State) - Method in class burlap.statehashing.simple.IISimpleHashableState
-
Returns true if the two input states are equal.
- stateSequence - Variable in class burlap.behavior.singleagent.Episode
-
The sequence of states observed
- StateSettableEnvironment - Interface in burlap.mdp.singleagent.environment.extensions
-
An interface to be used with
Environment
instances that allow
the environment to have its state set to a client-specified state.
- StateSettableEnvironmentServer(StateSettableEnvironment, EnvironmentObserver...) - Constructor for class burlap.mdp.singleagent.environment.extensions.EnvironmentServer.StateSettableEnvironmentServer
-
- statesToStateNodes - Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
-
- statesToVisualize - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.PolicyRenderLayer
-
The states to visualize
- statesToVisualize - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionRenderLayer
-
The states to visualize
- statesToVisualize - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
-
- StateTimeElibilityTrace(HashableState, int, double, TDLambda.VValue) - Constructor for class burlap.behavior.singleagent.learning.actorcritic.critics.TimeIndexedTDLambda.StateTimeElibilityTrace
-
Initializes with hashed state, eligibility value, time/depth of the state, and the value function value associated with the state.
- stateToString(State) - Static method in class burlap.mdp.core.state.StateUtilities
-
A standard method for turning an arbitrary
State
into a
String
representation.
- StateTransitionProb - Class in burlap.mdp.core
-
A tuple for a
State
and a double specifying the probability of transitioning to that state.
- StateTransitionProb() - Constructor for class burlap.mdp.core.StateTransitionProb
-
- StateTransitionProb(State, double) - Constructor for class burlap.mdp.core.StateTransitionProb
-
- stateTransitions(State, Action) - Method in class burlap.domain.singleagent.blockdude.BlockDudeModel
-
- stateTransitions(State, Action) - Method in class burlap.domain.singleagent.blocksworld.BWModel
-
- stateTransitions(State, Action) - Method in class burlap.domain.singleagent.cartpole.model.CPClassicModel
-
- stateTransitions(State, Action) - Method in class burlap.domain.singleagent.cartpole.model.CPCorrectModel
-
- stateTransitions(State, Action) - Method in class burlap.domain.singleagent.cartpole.model.IPModel
-
- stateTransitions(State, Action) - Method in class burlap.domain.singleagent.frostbite.FrostbiteModel
-
- stateTransitions(State, Action) - Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain.GraphStateModel
-
- stateTransitions(State, Action) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain.GridWorldModel
-
- stateTransitions(State, Action) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderModel
-
- stateTransitions(State, Action) - Method in class burlap.domain.singleagent.mountaincar.MountainCar.MCModel
-
- stateTransitions(State, JointAction) - Method in class burlap.domain.stochasticgames.gridgame.GridGameStandardMechanics
-
- stateTransitions(State, Action) - Method in interface burlap.mdp.singleagent.model.statemodel.FullStateModel
-
Returns the set of possible transitions when
Action
is applied in
State
s.
- stateTransitions(State, JointAction) - Method in class burlap.mdp.stochasticgames.common.StaticRepeatedGameModel
-
- stateTransitions(State, JointAction) - Method in interface burlap.mdp.stochasticgames.model.FullJointModel
-
Returns the transition probabilities for applying the provided
JointAction
action in the given state.
- stateTransitionsModeled(KWIKModel, List<ActionType>, State) - Static method in class burlap.behavior.singleagent.learning.modellearning.KWIKModel.Helper
-
- StateUtilities - Class in burlap.mdp.core.state
-
A class with static methods for common tasks with states.
- StateValuePainter - Class in burlap.behavior.singleagent.auxiliary.valuefunctionvis
-
An abstract class for defining the interface and common methods to paint the representation of the value function for a specific state onto
a 2D graphics context.
- StateValuePainter() - Constructor for class burlap.behavior.singleagent.auxiliary.valuefunctionvis.StateValuePainter
-
- StateValuePainter2D - Class in burlap.behavior.singleagent.auxiliary.valuefunctionvis.common
-
A class for rendering the value of states as colored 2D cells on the canvas.
- StateValuePainter2D() - Constructor for class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.StateValuePainter2D
-
- StateValuePainter2D(ColorBlend) - Constructor for class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.StateValuePainter2D
-
Initializes the value painter.
- stateWeights - Variable in class burlap.behavior.functionapproximation.dense.DenseLinearVFA
-
The function weights when performing state value function approximation.
- StateWiseLearningRate() - Constructor for class burlap.behavior.learningrate.ExponentialDecayLR.StateWiseLearningRate
-
- stateWiseMap - Variable in class burlap.behavior.learningrate.ExponentialDecayLR
-
The state dependent or state-action dependent learning rates
- stateWiseMap - Variable in class burlap.behavior.learningrate.SoftTimeInverseDecayLR
-
The state dependent or state-action dependent learning rate time indices
- StateWiseTimeIndex() - Constructor for class burlap.behavior.learningrate.SoftTimeInverseDecayLR.StateWiseTimeIndex
-
- StaticDomainPainter - Interface in burlap.behavior.singleagent.auxiliary.valuefunctionvis
-
An interface for painting general domain information to a 2D graphics context.
- StaticRepeatedGameModel - Class in burlap.mdp.stochasticgames.common
-
This action model can be used to take a single stage game, and cause it to repeat itself.
- StaticRepeatedGameModel() - Constructor for class burlap.mdp.stochasticgames.common.StaticRepeatedGameModel
-
- StaticWeightedAStar - Class in burlap.behavior.singleagent.planning.deterministic.informed.astar
-
Statically weighted A* [1] implementation.
- StaticWeightedAStar(SADomain, StateConditionTest, HashableStateFactory, Heuristic, double) - Constructor for class burlap.behavior.singleagent.planning.deterministic.informed.astar.StaticWeightedAStar
-
Initializes.
- stepEpisode - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.Trial
-
Stores the steps by episode
- stepEpisode - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.Trial
-
Stores the steps by episode
- stepEpisodeSeries - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.AgentDatasets
-
Most recent trial's steps-per-episode data
- stepEpisodeSeries - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.AgentDatasets
-
Most recent trial's steps-per-episode data
- stepIncrement(double) - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.Trial
-
Updates all datastructures with the reward received from the last step
- stepIncrement(double) - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.Trial
-
Updates all datastructures with the reward received from the last step
- stepSize - Variable in class burlap.domain.singleagent.frostbite.FrostbiteModel
-
- STNode(T, double, StochasticTree<T>.STNode) - Constructor for class burlap.datastructures.StochasticTree.STNode
-
Initializes a leaf node with the given weight and parent
- STNode(double) - Constructor for class burlap.datastructures.StochasticTree.STNode
-
Initializes a node with a weight only
- STNode(double, StochasticTree<T>.STNode) - Constructor for class burlap.datastructures.StochasticTree.STNode
-
Initializes a node with a given weight and parent node
- StochasticTree<T> - Class in burlap.datastructures
-
A class for performing sampling of a set of objects in O(lg(n)) time.
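Weighted sampling in O(lg(n)) time can be illustrated without BURLAP's tree structure: a cumulative-weight array plus binary search gives the same complexity per sample. This is a sketch under that assumption; the class name `WeightedSamplerSketch` is hypothetical and the real StochasticTree also supports dynamic weight updates, which this array version does not.

```java
import java.util.Random;

// Illustrative O(lg(n)) weighted sampling via a cumulative-weight array.
// BURLAP's StochasticTree achieves the same bound with a tree of partial sums.
public class WeightedSamplerSketch {

    private final double[] cumulative; // cumulative[i] = w0 + ... + wi
    private final Random rng;

    public WeightedSamplerSketch(double[] weights, long seed) {
        cumulative = new double[weights.length];
        double sum = 0.0;
        for (int i = 0; i < weights.length; i++) {
            sum += weights[i];
            cumulative[i] = sum;
        }
        rng = new Random(seed);
    }

    // Returns the index of an element sampled proportionally to its weight.
    public int sample() {
        double r = rng.nextDouble() * cumulative[cumulative.length - 1];
        int lo = 0, hi = cumulative.length - 1;
        while (lo < hi) { // binary search: first index with cumulative > r
            int mid = (lo + hi) / 2;
            if (cumulative[mid] > r) {
                hi = mid;
            } else {
                lo = mid + 1;
            }
        }
        return lo;
    }
}
```

Because `Random.nextDouble()` returns a value in [0, 1), an element with all of the total weight is always the one selected, and zero-weight elements are never selected.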
- StochasticTree() - Constructor for class burlap.datastructures.StochasticTree
-
Initializes with an empty tree.
- StochasticTree(List<Double>, List<T>) - Constructor for class burlap.datastructures.StochasticTree
-
Initializes a tree for objects with the given weights
- StochasticTree.STNode - Class in burlap.datastructures
-
A class for storing a stochastic tree node.
- stop() - Method in class burlap.debugtools.MyTimer
-
Stops the timer.
- stopLivePolling() - Method in class burlap.shell.visual.VisualExplorer
-
Stops this class from live polling this explorer's
Environment
.
- stopPlanning() - Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
-
Returns true if rollouts and planning should cease.
- stopReachabilityFromTerminalStates - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableVI
-
When the reachability analysis to find the state space is performed, a breadth first search-like pass
(spreading over all stochastic transitions) is performed.
- stopReachabilityFromTerminalStates - Variable in class burlap.behavior.singleagent.planning.stochastic.valueiteration.ValueIteration
-
When the reachability analysis to find the state space is performed, a breadth first search-like pass
(spreading over all stochastic transitions) is performed.
- storage - Variable in class burlap.datastructures.HashedAggregator
-
The backing hash map
- storedAbstraction - Variable in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQFactory
-
The state abstraction the Q-learning algorithm will use
- storedMapAbstraction - Variable in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent
-
A state abstraction to use.
- storedQ(State, Action) - Method in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent
-
- StrategyProfile(int...) - Constructor for class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.StrategyProfile
-
- stringOrBoolean(Object) - Static method in class burlap.mdp.core.state.StateUtilities
-
Takes an input object, typically a value to which a variable should be set, that is either a String representation of a boolean, or a
Boolean
, and returns the corresponding Boolean
.
- stringOrNumber(Object) - Static method in class burlap.mdp.core.state.StateUtilities
-
Takes an input object, typically a value to which a variable should be set, that is either a String representation of a number, or a
Number
, and returns the corresponding Number
.
- SubDifferentiableMaxOperator - Class in burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.dpoperator
-
- SubDifferentiableMaxOperator() - Constructor for class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.dpoperator.SubDifferentiableMaxOperator
-
- SubgoalOption - Class in burlap.behavior.singleagent.options
-
A class for a classic subgoal Markov option.
- SubgoalOption() - Constructor for class burlap.behavior.singleagent.options.SubgoalOption
-
A default constructor for serialization purposes.
- SubgoalOption(String, Policy, StateConditionTest, StateConditionTest) - Constructor for class burlap.behavior.singleagent.options.SubgoalOption
-
Initializes.
- successorStates - Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTActionNode
-
The possible successor states.
- sumReturn - Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTActionNode
-
The sum return observed for this action node
- SupervisedVFA - Interface in burlap.behavior.functionapproximation.supervised
-
An interface for learning value function approximation via a supervised learning algorithm.
- SupervisedVFA.SupervisedVFAInstance - Class in burlap.behavior.functionapproximation.supervised
-
A pair for a state and its target value function value.
- SupervisedVFAInstance(State, double) - Constructor for class burlap.behavior.functionapproximation.supervised.SupervisedVFA.SupervisedVFAInstance
-
Initializes
- svp - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionRenderLayer
-
Painter used to visualize the value function
- svp - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
-
Painter used to visualize the value function
- synchronizeJointActionSelectionAmongAgents - Variable in class burlap.behavior.stochasticgames.PolicyFromJointPolicy
-