- s - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.support.QGradientTuple
-
The state
- s - Variable in class burlap.behavior.singleagent.learning.actorcritic.CritiqueResult
-
The source state
- s - Variable in class burlap.behavior.singleagent.learning.lspi.SARSData.SARS
-
The previous state
- s - Variable in class burlap.behavior.singleagent.learning.tdmethods.QLearningStateNode
-
A hashed state entry for which Q-values will be stored.
- s - Variable in class burlap.behavior.singleagent.planning.deterministic.SearchNode
-
The (hashed) state of this node
- s - Variable in class burlap.behavior.singleagent.planning.vfa.fittedvi.SupervisedVFA.SupervisedVFAInstance
-
The state
- s - Variable in class burlap.behavior.singleagent.vfa.cmac.Tiling.StateTile
-
The state the tile is for
- s - Variable in class burlap.behavior.stochasticgames.madynamicprogramming.JAQValue
-
- s - Variable in class burlap.behavior.valuefunction.QValue
-
The state with which this Q-value is associated.
- s - Variable in class burlap.oomdp.core.TransitionProbability
-
The state to which the agent may transition.
- s - Variable in class burlap.oomdp.singleagent.pomdp.beliefstate.EnumerableBeliefState.StateBelief
-
The MDP state defined by a
State
instance.
- s - Variable in class burlap.oomdp.statehashing.HashableState
-
- saAfterStateRL - Variable in class burlap.oomdp.visualizer.Visualizer
-
- sActionFeatures - Variable in class burlap.behavior.singleagent.learning.lspi.LSPI.SSFeatures
-
State-action features
- SADomain - Class in burlap.oomdp.singleagent
-
A domain subclass for single agent domains.
- SADomain() - Constructor for class burlap.oomdp.singleagent.SADomain
-
- sample() - Method in class burlap.datastructures.BoltzmannDistribution
-
Samples the output probability distribution.
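The Boltzmann (softmax) distribution referenced above turns a set of preference values into sampling probabilities controlled by a temperature. A minimal self-contained sketch of that idea (not BURLAP's implementation; class and method names here are illustrative only):

```java
import java.util.Random;

public class BoltzmannSketch {

    // Converts preference values into a softmax distribution with inverse
    // temperature beta. Subtracting the max preference avoids overflow.
    public static double[] probabilities(double[] prefs, double beta) {
        double max = Double.NEGATIVE_INFINITY;
        for (double p : prefs) max = Math.max(max, p);
        double[] probs = new double[prefs.length];
        double sum = 0.0;
        for (int i = 0; i < prefs.length; i++) {
            probs[i] = Math.exp(beta * (prefs[i] - max));
            sum += probs[i];
        }
        for (int i = 0; i < probs.length; i++) probs[i] /= sum;
        return probs;
    }

    // Samples an index according to the softmax probabilities.
    public static int sample(double[] prefs, double beta, Random rng) {
        double[] probs = probabilities(prefs, beta);
        double r = rng.nextDouble();
        double cumulative = 0.0;
        for (int i = 0; i < probs.length; i++) {
            cumulative += probs[i];
            if (r < cumulative) return i;
        }
        return probs.length - 1; // guard against floating-point round-off
    }
}
```

With equal preferences each index is equally likely; as beta grows, sampling concentrates on the highest-preference index.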
- sample() - Method in class burlap.datastructures.StochasticTree
-
Samples an element according to a probability defined by the relative weight of objects from the tree and returns it
- sampleBasicMovement(State, GridGameStandardMechanics.Location2, GridGameStandardMechanics.Location2, List<GridGameStandardMechanics.Location2>) - Method in class burlap.domain.stochasticgames.gridgame.GridGameStandardMechanics
-
Returns a movement result of the agent.
- sampledBellmanQEstimate(GroundedAction, DifferentiableSparseSampling.QAndQGradient) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling.DiffStateNode
-
- sampledBellmanQEstimate(GroundedAction) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling.StateNode
-
Estimates the Q-value using sampling from the transition dynamics.
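A sampled Bellman Q estimate of the kind described above averages, over c sampled next states, the immediate reward plus the discounted value of the next state. A toy sketch under hypothetical array-indexed states (not BURLAP's StateNode machinery):

```java
import java.util.Random;

// Q(s,a) is estimated as (1/c) * sum over c sampled next states s' of
// [ r + gamma * maxQ(s') ], with next states drawn from a given distribution.
public class SampledBellmanSketch {

    public static double estimateQ(int c, double gamma, double[] maxQNext,
                                   double[] nextStateProbs, double reward, Random rng) {
        double sum = 0.0;
        for (int i = 0; i < c; i++) {
            int sp = sampleIndex(nextStateProbs, rng); // sample a next state
            sum += reward + gamma * maxQNext[sp];
        }
        return sum / c;
    }

    // Draws an index from a discrete probability distribution.
    static int sampleIndex(double[] probs, Random rng) {
        double r = rng.nextDouble(), cum = 0.0;
        for (int i = 0; i < probs.length; i++) {
            cum += probs[i];
            if (r < cum) return i;
        }
        return probs.length - 1;
    }
}
```

With deterministic dynamics the estimate is exact regardless of c; with stochastic dynamics its variance shrinks as c grows.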
- sampleFromActionDistribution(State) - Method in class burlap.behavior.policy.Policy
-
This is a helper method for stochastic policies.
- sampleHelper(StochasticTree<T>.STNode, double) - Method in class burlap.datastructures.StochasticTree
-
A recursive method for performing sampling
- sampleModel(State, GroundedAction) - Method in class burlap.behavior.singleagent.learning.modellearning.Model
-
A method to sample this model's transition dynamics for the given state and action.
- sampleModelHelper(State, GroundedAction) - Method in class burlap.behavior.singleagent.learning.modellearning.Model
-
A helper method to sample this model's transition dynamics for the given state and action.
- sampleModelHelper(State, GroundedAction) - Method in class burlap.behavior.singleagent.learning.modellearning.models.TabularModel
-
- sampleObservation(State, GroundedAction) - Method in class burlap.domain.singleagent.pomdp.tiger.TigerDomain.TigerObservations
-
- sampleObservation(State, GroundedAction) - Method in class burlap.oomdp.singleagent.pomdp.ObservationFunction
-
Samples an observation given the true MDP state and action taken in the previous step that led to the MDP state.
- sampleObservationByEnumeration(State, GroundedAction) - Method in class burlap.oomdp.singleagent.pomdp.ObservationFunction
-
- samples - Variable in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
-
The set of samples on which to perform value iteration.
- sampleStateFromBelief() - Method in interface burlap.oomdp.singleagent.pomdp.beliefstate.BeliefState
-
Samples an MDP state from this belief distribution.
- sampleStateFromBelief() - Method in class burlap.oomdp.singleagent.pomdp.beliefstate.tabular.TabularBeliefState
-
- sampleStrategy(double[]) - Method in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.EquilibriumPlayingSGAgent
-
Samples an action from a strategy, where a strategy is defined as probability distribution over actions.
- sampleTransitionFromEnumeratedDistribution(State, GroundedAction) - Static method in class burlap.oomdp.singleagent.FullActionModel.FullActionModelHelper
-
- sampleTransitionFromTransitionProbabilities(State, GroundedAction) - Method in class burlap.behavior.singleagent.learning.modellearning.Model
-
- sampleWallCollision(GridGameStandardMechanics.Location2, GridGameStandardMechanics.Location2, List<ObjectInstance>, boolean) - Method in class burlap.domain.stochasticgames.gridgame.GridGameStandardMechanics
-
Returns true if the agent is able to move in the desired location; false if the agent moves into a solid wall
or if the agent randomly fails to move through a semi-wall that is in the way.
- saOnlyReward(State, GroundedAction) - Method in class burlap.oomdp.singleagent.pomdp.BeliefMDPGenerator.BeliefRF
-
Returns the belief MDP reward when the POMDP reward function is independent from the next state transition.
- sarender - Variable in class burlap.oomdp.visualizer.Visualizer
-
- SarsaLam - Class in burlap.behavior.singleagent.learning.tdmethods
-
Tabular SARSA(\lambda) implementation [1].
- SarsaLam(Domain, double, HashableStateFactory, double, double, double) - Constructor for class burlap.behavior.singleagent.learning.tdmethods.SarsaLam
-
Initializes SARSA(\lambda) with 0.1 epsilon greedy policy, the same Q-value initialization everywhere, and places no limit on the number of steps the
agent can take in an episode.
- SarsaLam(Domain, double, HashableStateFactory, double, double, int, double) - Constructor for class burlap.behavior.singleagent.learning.tdmethods.SarsaLam
-
Initializes SARSA(\lambda) with 0.1 epsilon greedy policy, the same Q-value initialization everywhere.
- SarsaLam(Domain, double, HashableStateFactory, double, double, Policy, int, double) - Constructor for class burlap.behavior.singleagent.learning.tdmethods.SarsaLam
-
Initializes SARSA(\lambda) with the same Q-value initialization everywhere.
- SarsaLam(Domain, double, HashableStateFactory, ValueFunctionInitialization, double, Policy, int, double) - Constructor for class burlap.behavior.singleagent.learning.tdmethods.SarsaLam
-
Initializes SARSA(\lambda).
- SarsaLam.EligibilityTrace - Class in burlap.behavior.singleagent.learning.tdmethods
-
A data structure for maintaining eligibility trace values
- SarsaLam.EligibilityTrace(HashableState, QValue, double) - Constructor for class burlap.behavior.singleagent.learning.tdmethods.SarsaLam.EligibilityTrace
-
Creates a new eligibility trace to track for an episode.
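The eligibility-trace entries above are the heart of SARSA(λ): every visited state-action pair keeps a decaying trace, and each TD error updates all traced pairs in proportion to their eligibility. A minimal tabular sketch (illustrative only; BURLAP's SarsaLam manages this internally):

```java
import java.util.HashMap;
import java.util.Map;

public class TraceSketch {
    public final Map<String, Double> q = new HashMap<>();      // Q-values keyed by "state:action"
    public final Map<String, Double> traces = new HashMap<>(); // eligibility traces

    // One SARSA(lambda) backup: set the visited pair's trace to 1
    // ("replacing traces"), apply the TD error to every traced pair
    // weighted by its eligibility, then decay all traces by gamma*lambda.
    public void update(String sa, double tdError, double alpha, double gamma, double lambda) {
        traces.put(sa, 1.0);
        for (Map.Entry<String, Double> e : traces.entrySet()) {
            double trace = e.getValue();
            q.merge(e.getKey(), alpha * tdError * trace, Double::sum);
            e.setValue(trace * gamma * lambda);
        }
    }
}
```

With λ = 0 this degenerates to ordinary one-step SARSA; with λ close to 1 credit flows far back along the episode.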
- sarsalamInit(double) - Method in class burlap.behavior.singleagent.learning.tdmethods.SarsaLam
-
- sarsaLearningExample(String) - Method in class burlap.tutorials.bpl.BasicBehavior
-
- SARSCollector - Class in burlap.behavior.singleagent.learning.lspi
-
This object is used to collect
SARSData
(state-action-reward-state tuples) that can then be used by algorithms like LSPI for learning.
- SARSCollector(Domain) - Constructor for class burlap.behavior.singleagent.learning.lspi.SARSCollector
-
Initializes the collector's action set using the actions that are part of the domain.
- SARSCollector(List<Action>) - Constructor for class burlap.behavior.singleagent.learning.lspi.SARSCollector
-
Initializes this collector's action set to use for collecting data.
- SARSCollector.UniformRandomSARSCollector - Class in burlap.behavior.singleagent.learning.lspi
-
Collects SARS data from source states generated by a
StateGenerator
by choosing actions uniformly at random.
- SARSCollector.UniformRandomSARSCollector(Domain) - Constructor for class burlap.behavior.singleagent.learning.lspi.SARSCollector.UniformRandomSARSCollector
-
Initializes the collector's action set using the actions that are part of the domain.
- SARSCollector.UniformRandomSARSCollector(List<Action>) - Constructor for class burlap.behavior.singleagent.learning.lspi.SARSCollector.UniformRandomSARSCollector
-
Initializes this collector's action set to use for collecting data.
- SARSData - Class in burlap.behavior.singleagent.learning.lspi
-
Class that provides a wrapper for a List holding a bunch of state-action-reward-state (
SARSData.SARS
) tuples.
- SARSData() - Constructor for class burlap.behavior.singleagent.learning.lspi.SARSData
-
Initializes with an empty dataset
- SARSData(int) - Constructor for class burlap.behavior.singleagent.learning.lspi.SARSData
-
Initializes an empty dataset with the given initial capacity.
- SARSData.SARS - Class in burlap.behavior.singleagent.learning.lspi
-
State-action-reward-state tuple.
- SARSData.SARS(State, GroundedAction, double, State) - Constructor for class burlap.behavior.singleagent.learning.lspi.SARSData.SARS
-
Initializes.
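The SARS tuple and collector entries above describe gathering (s, a, r, s') data by acting in an environment. A toy sketch of uniform-random SARS collection on a hypothetical 5-state chain MDP (the dynamics and types here are invented for illustration, not BURLAP's):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class SARSCollectionSketch {

    // A minimal state-action-reward-state tuple over integer states.
    public static class SARS {
        public final int s, sp;  // previous and next state
        public final int a;      // action: 0 = left, 1 = right
        public final double r;   // reward received
        SARS(int s, int a, double r, int sp) { this.s = s; this.a = a; this.r = r; this.sp = sp; }
    }

    // Collects n tuples by acting uniformly at random on a 5-state chain,
    // rewarding arrival at the right end and resetting there.
    public static List<SARS> collect(int n, long seed) {
        Random rng = new Random(seed);
        List<SARS> data = new ArrayList<>();
        int s = 0;
        for (int i = 0; i < n; i++) {
            int a = rng.nextInt(2);
            int sp = Math.max(0, Math.min(4, s + (a == 1 ? 1 : -1)));
            double r = (sp == 4) ? 1.0 : 0.0;
            data.add(new SARS(s, a, r, sp));
            s = (sp == 4) ? 0 : sp; // reset at the goal
        }
        return data;
    }
}
```

The resulting list plays the role of SARSData: a batch of experience that a batch learner such as LSPI can fit against.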
- sasReward(State, GroundedAction) - Method in class burlap.oomdp.singleagent.pomdp.BeliefMDPGenerator.BeliefRF
-
Returns the belief MDP reward when the POMDP reward function is dependent on the next state transition.
- saThread - Variable in class burlap.behavior.stochasticgames.agents.interfacing.singleagent.LearningAgentToSGAgentInterface
-
The thread that runs the single agent learning algorithm
- satisfies(State) - Method in class burlap.oomdp.auxiliary.stateconditiontest.SinglePFSCT
-
- satisfies(State) - Method in interface burlap.oomdp.auxiliary.stateconditiontest.StateConditionTest
-
- satisfies(State) - Method in class burlap.oomdp.auxiliary.stateconditiontest.TFGoalCondition
-
- satisfiesGoal(State) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BFSRTDP
-
Returns whether a state is a goal state.
- satisifiesHeap() - Method in class burlap.datastructures.HashIndexedHeap
-
This method returns whether the data structure stored is in fact a heap (costs linear time).
- scanner - Variable in class burlap.shell.BurlapShell
-
- sdp - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
-
Painter used to visualize general state-independent domain information
- SDPlannerPolicy - Class in burlap.behavior.singleagent.planning.deterministic
-
This is a static deterministic valueFunction policy: if the source deterministic valueFunction has not already computed
and cached the plan for a query state, then this policy
is undefined for that state and will throw a corresponding
Policy.PolicyUndefinedException
exception.
- SDPlannerPolicy() - Constructor for class burlap.behavior.singleagent.planning.deterministic.SDPlannerPolicy
-
- SDPlannerPolicy(DeterministicPlanner) - Constructor for class burlap.behavior.singleagent.planning.deterministic.SDPlannerPolicy
-
- SearchNode - Class in burlap.behavior.singleagent.planning.deterministic
-
The SearchNode class is used for classic deterministic forward search planners.
- SearchNode(HashableState) - Constructor for class burlap.behavior.singleagent.planning.deterministic.SearchNode
-
Constructs a SearchNode for the input state.
- SearchNode(HashableState, GroundedAction, SearchNode) - Constructor for class burlap.behavior.singleagent.planning.deterministic.SearchNode
-
Constructs a SearchNode for the input state and sets the generating action and back pointer to the provided elements.
- seAvgSeries - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.AgentDatasets
-
All trials' average steps per episode series data
- seAvgSeries - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.AgentDatasets
-
All trials' average steps per episode series data
- seedDefault(long) - Static method in class burlap.debugtools.RandomFactory
-
Sets the seed of the default random number generator
- seedMapped(int, long) - Static method in class burlap.debugtools.RandomFactory
-
Seeds and returns the random generator with the associated id or creates it if it does not yet exist
- seedMapped(String, long) - Static method in class burlap.debugtools.RandomFactory
-
Seeds and returns the random generator with the associated String id or creates it if it does not yet exist
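The seedDefault/seedMapped entries above describe a registry of named random generators that can be (re)seeded for reproducible experiments. A small sketch of the idea (a stand-in, not BURLAP's RandomFactory):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

public class RandomRegistry {

    private static final Map<String, Random> generators = new HashMap<>();

    // Returns the generator associated with the key, creating it if it
    // does not yet exist, and seeds it so repeated runs are reproducible.
    public static Random seedMapped(String key, long seed) {
        Random r = generators.computeIfAbsent(key, k -> new Random());
        r.setSeed(seed);
        return r;
    }
}
```

Because every component that asks for the same key gets the same (reseeded) instance, an experiment can be replayed deterministically by reseeding once at the start.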
- selectActionNode(UCTStateNode) - Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
-
Selects which action to take.
- selectionMode - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
-
Which state selection mode is used.
- selector - Variable in class burlap.oomdp.stochasticgames.tournament.Tournament
-
- semiDeepCopy(String...) - Method in class burlap.oomdp.core.states.MutableState
-
Performs a semi-deep copy of the state in which only the objects with the names in deepCopyObjectNames are deep copied and the rest of the
objects are shallow copied.
- semiDeepCopy(ObjectInstance...) - Method in class burlap.oomdp.core.states.MutableState
-
Performs a semi-deep copy of the state in which only the objects in deepCopyObjects are deep copied and the rest of the
objects are shallow copied.
- semiDeepCopy(Set<ObjectInstance>) - Method in class burlap.oomdp.core.states.MutableState
-
Performs a semi-deep copy of the state in which only the objects in deepCopyObjects are deep copied and the rest of the
objects are shallow copied.
- semiWallProb - Variable in class burlap.domain.stochasticgames.gridgame.GridGame
-
The probability that an agent will pass through a semi-wall.
- SerializableCartPoleStateFactory - Class in burlap.domain.singleagent.cartpole
-
- SerializableCartPoleStateFactory() - Constructor for class burlap.domain.singleagent.cartpole.SerializableCartPoleStateFactory
-
- SerializableCartPoleStateFactory.CartPoleStateParser - Class in burlap.domain.singleagent.cartpole
-
- SerializableCartPoleStateFactory.CartPoleStateParser(Domain) - Constructor for class burlap.domain.singleagent.cartpole.SerializableCartPoleStateFactory.CartPoleStateParser
-
- SerializableCartPoleStateFactory.SerializableCartPoleState - Class in burlap.domain.singleagent.cartpole
-
- SerializableCartPoleStateFactory.SerializableCartPoleState() - Constructor for class burlap.domain.singleagent.cartpole.SerializableCartPoleStateFactory.SerializableCartPoleState
-
- SerializableCartPoleStateFactory.SerializableCartPoleState(State) - Constructor for class burlap.domain.singleagent.cartpole.SerializableCartPoleStateFactory.SerializableCartPoleState
-
- SerializableFrostbiteStateFactory - Class in burlap.domain.singleagent.frostbite
-
- SerializableFrostbiteStateFactory() - Constructor for class burlap.domain.singleagent.frostbite.SerializableFrostbiteStateFactory
-
- SerializableFrostbiteStateFactory.FrostbiteStateParser - Class in burlap.domain.singleagent.frostbite
-
- SerializableFrostbiteStateFactory.FrostbiteStateParser(Domain) - Constructor for class burlap.domain.singleagent.frostbite.SerializableFrostbiteStateFactory.FrostbiteStateParser
-
- SerializableFrostbiteStateFactory.SerializableFrostbiteState - Class in burlap.domain.singleagent.frostbite
-
- SerializableFrostbiteStateFactory.SerializableFrostbiteState() - Constructor for class burlap.domain.singleagent.frostbite.SerializableFrostbiteStateFactory.SerializableFrostbiteState
-
- SerializableFrostbiteStateFactory.SerializableFrostbiteState(State) - Constructor for class burlap.domain.singleagent.frostbite.SerializableFrostbiteStateFactory.SerializableFrostbiteState
-
- SerializableGridWorldStateFactory - Class in burlap.domain.singleagent.gridworld
-
- SerializableGridWorldStateFactory() - Constructor for class burlap.domain.singleagent.gridworld.SerializableGridWorldStateFactory
-
- SerializableGridWorldStateFactory.GridWorldStateParser - Class in burlap.domain.singleagent.gridworld
-
- SerializableGridWorldStateFactory.GridWorldStateParser(Domain) - Constructor for class burlap.domain.singleagent.gridworld.SerializableGridWorldStateFactory.GridWorldStateParser
-
- SerializableGridWorldStateFactory.SerializableGridWorldState - Class in burlap.domain.singleagent.gridworld
-
- SerializableGridWorldStateFactory.SerializableGridWorldState() - Constructor for class burlap.domain.singleagent.gridworld.SerializableGridWorldStateFactory.SerializableGridWorldState
-
- SerializableGridWorldStateFactory.SerializableGridWorldState(State) - Constructor for class burlap.domain.singleagent.gridworld.SerializableGridWorldStateFactory.SerializableGridWorldState
-
- SerializableInvertedPendulumStateFactory - Class in burlap.domain.singleagent.cartpole
-
- SerializableInvertedPendulumStateFactory() - Constructor for class burlap.domain.singleagent.cartpole.SerializableInvertedPendulumStateFactory
-
- SerializableInvertedPendulumStateFactory.InvertedPendulumStateParser - Class in burlap.domain.singleagent.cartpole
-
- SerializableInvertedPendulumStateFactory.InvertedPendulumStateParser(Domain) - Constructor for class burlap.domain.singleagent.cartpole.SerializableInvertedPendulumStateFactory.InvertedPendulumStateParser
-
- SerializableInvertedPendulumStateFactory.SerializableInvertedPendulumState - Class in burlap.domain.singleagent.cartpole
-
- SerializableInvertedPendulumStateFactory.SerializableInvertedPendulumState() - Constructor for class burlap.domain.singleagent.cartpole.SerializableInvertedPendulumStateFactory.SerializableInvertedPendulumState
-
- SerializableInvertedPendulumStateFactory.SerializableInvertedPendulumState(State) - Constructor for class burlap.domain.singleagent.cartpole.SerializableInvertedPendulumStateFactory.SerializableInvertedPendulumState
-
- SerializableLunarLanderStateFactory - Class in burlap.domain.singleagent.lunarlander
-
- SerializableLunarLanderStateFactory() - Constructor for class burlap.domain.singleagent.lunarlander.SerializableLunarLanderStateFactory
-
- SerializableLunarLanderStateFactory.LunarLanderStateParser - Class in burlap.domain.singleagent.lunarlander
-
- SerializableLunarLanderStateFactory.LunarLanderStateParser(Domain) - Constructor for class burlap.domain.singleagent.lunarlander.SerializableLunarLanderStateFactory.LunarLanderStateParser
-
- SerializableLunarLanderStateFactory.SerializableLunarLanderState - Class in burlap.domain.singleagent.lunarlander
-
- SerializableLunarLanderStateFactory.SerializableLunarLanderState() - Constructor for class burlap.domain.singleagent.lunarlander.SerializableLunarLanderStateFactory.SerializableLunarLanderState
-
- SerializableLunarLanderStateFactory.SerializableLunarLanderState(State) - Constructor for class burlap.domain.singleagent.lunarlander.SerializableLunarLanderStateFactory.SerializableLunarLanderState
-
- SerializableMountainCarStateFactory - Class in burlap.domain.singleagent.mountaincar
-
- SerializableMountainCarStateFactory() - Constructor for class burlap.domain.singleagent.mountaincar.SerializableMountainCarStateFactory
-
- SerializableMountainCarStateFactory.MountainCarStateParser - Class in burlap.domain.singleagent.mountaincar
-
- SerializableMountainCarStateFactory.MountainCarStateParser(Domain) - Constructor for class burlap.domain.singleagent.mountaincar.SerializableMountainCarStateFactory.MountainCarStateParser
-
- SerializableMountainCarStateFactory.SerializableMountainCarState - Class in burlap.domain.singleagent.mountaincar
-
- SerializableMountainCarStateFactory.SerializableMountainCarState() - Constructor for class burlap.domain.singleagent.mountaincar.SerializableMountainCarStateFactory.SerializableMountainCarState
-
- SerializableMountainCarStateFactory.SerializableMountainCarState(State) - Constructor for class burlap.domain.singleagent.mountaincar.SerializableMountainCarStateFactory.SerializableMountainCarState
-
- SerializableState - Class in burlap.oomdp.stateserialization
-
- SerializableState() - Constructor for class burlap.oomdp.stateserialization.SerializableState
-
- SerializableState(State) - Constructor for class burlap.oomdp.stateserialization.SerializableState
-
- SerializableStateFactory - Interface in burlap.oomdp.stateserialization
-
- serialize() - Method in class burlap.behavior.singleagent.EpisodeAnalysis
-
- serialize(SerializableStateFactory) - Method in class burlap.behavior.singleagent.EpisodeAnalysis
-
- serialize() - Method in class burlap.behavior.stochasticgames.GameAnalysis
-
- serialize(SerializableStateFactory) - Method in class burlap.behavior.stochasticgames.GameAnalysis
-
- serialize(State) - Method in class burlap.domain.singleagent.cartpole.SerializableCartPoleStateFactory.SerializableCartPoleState
-
- serialize(State) - Method in class burlap.domain.singleagent.cartpole.SerializableCartPoleStateFactory
-
- serialize(State) - Method in class burlap.domain.singleagent.cartpole.SerializableInvertedPendulumStateFactory.SerializableInvertedPendulumState
-
- serialize(State) - Method in class burlap.domain.singleagent.cartpole.SerializableInvertedPendulumStateFactory
-
- serialize(State) - Method in class burlap.domain.singleagent.frostbite.SerializableFrostbiteStateFactory.SerializableFrostbiteState
-
- serialize(State) - Method in class burlap.domain.singleagent.frostbite.SerializableFrostbiteStateFactory
-
- serialize(State) - Method in class burlap.domain.singleagent.gridworld.SerializableGridWorldStateFactory.SerializableGridWorldState
-
- serialize(State) - Method in class burlap.domain.singleagent.gridworld.SerializableGridWorldStateFactory
-
- serialize(State) - Method in class burlap.domain.singleagent.lunarlander.SerializableLunarLanderStateFactory.SerializableLunarLanderState
-
- serialize(State) - Method in class burlap.domain.singleagent.lunarlander.SerializableLunarLanderStateFactory
-
- serialize(State) - Method in class burlap.domain.singleagent.mountaincar.SerializableMountainCarStateFactory.SerializableMountainCarState
-
- serialize(State) - Method in class burlap.domain.singleagent.mountaincar.SerializableMountainCarStateFactory
-
- serialize(State) - Method in class burlap.oomdp.stateserialization.SerializableState
-
Causes this object to be a serializable representation of the input
State
- serialize(State) - Method in interface burlap.oomdp.stateserialization.SerializableStateFactory
-
- serialize(State) - Method in class burlap.oomdp.stateserialization.simple.SimpleSerializableState
-
- serialize(State) - Method in class burlap.oomdp.stateserialization.simple.SimpleSerializableStateFactory
-
- set(SingleStageNormalFormGame.StrategyProfile, double) - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.AgentPayoutFunction
-
Sets the payout for a given strategy profile.
- set1DEastWall(int, int) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Sets a specified location to have a 1D east wall.
- set1DNorthWall(int, int) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Sets a specified location to have a 1D north wall.
- setActingAgentName(String) - Method in class burlap.behavior.stochasticgames.PolicyFromJointPolicy
-
Sets the acting agent's name.
- setAction - Variable in class burlap.shell.command.world.ManualAgentsCommands
-
- setActionNameGlyphPainter(String, ActionGlyphPainter) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.PolicyGlyphPainter2D
-
Sets which glyph painter to use for an action with the given name
- setActionObserverForAllAction(ActionObserver) - Method in class burlap.oomdp.singleagent.SADomain
-
Clears all action observers for all actions in this domain and then sets them to have the single action observer provided
- setActionOffset(Map<AbstractGroundedAction, Integer>) - Method in class burlap.behavior.singleagent.vfa.common.LinearFVVFA
-
Sets the Map
of feature index offsets into the full feature vector for each action
- setActionOffset(AbstractGroundedAction, int) - Method in class burlap.behavior.singleagent.vfa.common.LinearFVVFA
-
Sets the Map
of feature index offset into the full feature vector for the given action
- setActions(List<Action>) - Method in class burlap.behavior.singleagent.MDPSolver
-
- setActions(List<Action>) - Method in interface burlap.behavior.singleagent.MDPSolverInterface
-
Sets the action set the solver should use.
- setAgent(State, int, int, int, boolean) - Static method in class burlap.domain.singleagent.blockdude.BlockDude
-
Sets the agent object's x, y, direction, and holding attribute to the specified values.
- setAgent(State, int, int) - Static method in class burlap.domain.singleagent.frostbite.FrostbiteDomain
-
Sets the agent's position, with a height of 0 (on the ground).
- setAgent(State, int, int, int) - Static method in class burlap.domain.singleagent.frostbite.FrostbiteDomain
-
Sets the agent's position and height.
- setAgent(State, int, int) - Static method in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Sets the first agent object in s to the specified x and y position.
- setAgent(State, double, double, double) - Static method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Sets the agent/lander position/orientation and zeros out the lander velocity.
- setAgent(State, double, double, double, double, double) - Static method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Sets the agent/lander position/orientation and the velocity.
- setAgent(State, double, double) - Static method in class burlap.domain.singleagent.mountaincar.MountainCar
-
Sets the agent position in the provided state to the given position and with the given velocity.
- setAgent(State, int, int, int, int) - Static method in class burlap.domain.stochasticgames.gridgame.GridGame
-
Sets an agent's attribute values
- setAgentDefinitions - Variable in class burlap.behavior.stochasticgames.agents.madp.MultiAgentDPPlanningAgent
-
Whether the agent definitions for this valueFunction have been set yet.
- setAgentDefinitions(Map<String, SGAgentType>) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.MADynamicProgramming
-
Sets/changes the agent definitions to use in planning.
- setAgents(List<MultiAgentQLearning>) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.AgentQSourceMap.MAQLControlledQSourceMap
-
Initializes with a list of agents that each keep their own Q_source.
- setAgentsInJointPolicy(Map<String, SGAgentType>) - Method in class burlap.behavior.stochasticgames.JointPolicy
-
Sets the agent definitions that define the set of possible joint actions in each state.
- setAgentsInJointPolicy(List<SGAgent>) - Method in class burlap.behavior.stochasticgames.JointPolicy
-
Sets the agent definitions by querying the agent names and
SGAgentType
objects from a list of agents.
- setAgentsInJointPolicyFromWorld(World) - Method in class burlap.behavior.stochasticgames.JointPolicy
-
Sets the agent definitions by querying the agents that exist in a
World
object.
- setAlias(String, String) - Method in class burlap.shell.BurlapShell
-
- setAlias(String, String, boolean) - Method in class burlap.shell.BurlapShell
-
- setAllowActionFromTerminalStates(boolean) - Method in class burlap.oomdp.singleagent.environment.SimulatedEnvironment
-
Sets whether the environment will respond to actions from a terminal state.
- setAnginc(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
-
- setAnginc(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Sets how many radians the agent will rotate from its current orientation when a turn/rotate action is applied
- setAngmax(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
-
- setAngmax(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Sets the maximum rotate angle (in radians) that the lander can be rotated from the vertical orientation in either
clockwise or counterclockwise direction.
- SetAttributeCommand - Class in burlap.shell.command.env
-
- SetAttributeCommand() - Constructor for class burlap.shell.command.env.SetAttributeCommand
-
- setAttributes(List<Attribute>) - Method in class burlap.oomdp.core.ObjectClass
-
Sets the attributes used to define this object class
- SetAttributeSGCommand - Class in burlap.shell.command.world
-
- SetAttributeSGCommand() - Constructor for class burlap.shell.command.world.SetAttributeSGCommand
-
- setAuxInfoTo(PrioritizedSearchNode) - Method in class burlap.behavior.singleagent.planning.deterministic.informed.PrioritizedSearchNode
-
This method rewires the generating node information and priority to that specified in a different PrioritizedSearchNode.
- setBase(State) - Method in class burlap.oomdp.stochasticgames.explorers.HardStateResetSpecialAction
-
Sets the base state to reset to
- setBaseStateGenerator(StateGenerator) - Method in class burlap.oomdp.stochasticgames.explorers.HardStateResetSpecialAction
-
Sets the state generator to draw from on reset
- setBelief(State, double) - Method in class burlap.oomdp.singleagent.pomdp.beliefstate.tabular.TabularBeliefState
-
Sets the probability mass (belief) associated with the underlying MDP state.
- setBelief(int, double) - Method in class burlap.oomdp.singleagent.pomdp.beliefstate.tabular.TabularBeliefState
-
Sets the probability mass (belief) associated with the underlying MDP state.
- setBeliefState(BeliefState) - Method in class burlap.oomdp.singleagent.pomdp.BeliefAgent
-
Sets this agent's current belief
- setBeliefVector(double[]) - Method in interface burlap.oomdp.singleagent.pomdp.beliefstate.DenseBeliefVector
-
Sets this belief state to the provided belief vector.
- setBeliefVector(double[]) - Method in class burlap.oomdp.singleagent.pomdp.beliefstate.tabular.TabularBeliefState
-
Sets this belief state to the provided belief vector.
- setBgColor(Color) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
-
Sets the canvas background color
- setBGColor(Color) - Method in class burlap.oomdp.visualizer.MultiLayerRenderer
-
Sets the color that will fill the canvas before rendering begins
- setBGColor(Color) - Method in class burlap.oomdp.visualizer.Visualizer
-
Sets the background color of the canvas
- setBlock(State, int, int, int) - Static method in class burlap.domain.singleagent.blockdude.BlockDude
-
Sets the ith block's x and y position in a state.
- setBlock(State, int, String, int, int, String) - Static method in class burlap.domain.singleagent.blocksworld.BlocksWorld
-
Use this method to quickly set the various values of a block
- setBlock(State, String, String, int, int, String) - Static method in class burlap.domain.singleagent.blocksworld.BlocksWorld
-
Use this method to quickly set the various values of a block
- setBlock(ObjectInstance, String, int, int, String) - Static method in class burlap.domain.singleagent.blocksworld.BlocksWorld
-
Use this method to quickly set the various values of a block
- setBlockColor(State, int, String) - Static method in class burlap.domain.singleagent.blocksworld.BlocksWorld
-
Use this method to quickly set the color of a block
- setBoltzmannBeta(double) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRLRequest
-
- setBoltzmannBetaParameter(double) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableDP
-
- setBoltzmannBetaParameter(double) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling
-
- setBoltzmannBetaParameter(double) - Method in interface burlap.behavior.singleagent.learnfromdemo.mlirl.support.QGradientPlanner
-
Sets this valueFunction's Boltzmann beta parameter used to compute gradients.
- setBoundaryWalls(State, int, int) - Static method in class burlap.domain.stochasticgames.gridgame.GridGame
-
Sets boundary walls of a domain.
- setBreakTiesRandomly(boolean) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EGreedyMaxWellfare
-
Sets whether to break ties randomly or deterministically.
- setBrickMap(State, int[][]) - Static method in class burlap.domain.singleagent.blockdude.BlockDude
-
Sets the state to use the provided brick map.
- setBrickValue(State, int, int, int) - Static method in class burlap.domain.singleagent.blockdude.BlockDude
-
Sets the brick value in grid location x, y.
- setC(int) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling
-
Sets the number of state transition samples used.
- setC(int) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
-
Sets the number of state transition samples used.
- setCellWallState(int, int, int) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Sets the map at the specified location to have the specified wall configuration.
- setClassName(String) - Method in class burlap.oomdp.core.PropositionalFunction
-
Sets the class name for this propositional function.
- setCoefficientVectors(List<short[]>) - Method in class burlap.behavior.singleagent.vfa.fourier.FourierBasis
-
Forces the set of coefficient vectors (and thereby Fourier basis functions) used.
- setCollisionReward(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderRF
-
- setColorBlend(ColorBlend) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.StateValuePainter2D
-
Sets the color blending used for the value function.
- setComputeExactValueFunction(boolean) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
-
Sets whether this valueFunction will compute the exact finite horizon value function (using the full transition dynamics) or use sampling
to estimate the value function.
- setControlDepth(int) - Method in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
-
- setCorrelatedQObjective(CorrelatedEquilibriumSolver.CorrelatedEquilibriumObjective) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.ECorrelatedQJointPolicy
-
Sets the correlated equilibrium objective to be solved.
- setCurObservationTo(State) - Method in class burlap.oomdp.singleagent.pomdp.SimulatedPOEnvironment
-
Overrides the current observation of this environment to the specified value
- setCurrentState(State) - Method in class burlap.oomdp.stochasticgames.World
-
Sets the world state to the provided state if a game is not currently running.
- setCurStateTo(State) - Method in class burlap.oomdp.singleagent.environment.EnvironmentServer.StateSettableEnvironmentServer
-
- setCurStateTo(State) - Method in class burlap.oomdp.singleagent.environment.SimulatedEnvironment
-
- setCurStateTo(State) - Method in interface burlap.oomdp.singleagent.environment.StateSettableEnvironment
-
Sets the current state of the environment to the specified state.
- setCurTime(int) - Method in class burlap.behavior.singleagent.learning.actorcritic.critics.TimeIndexedTDLambda
-
Sets the time/depth of the current episode.
- setDataset(SARSData) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
-
Sets the SARS dataset this object will use for LSPI
- setDebugCode(int) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling
-
Sets the debug code used for logging plan results with
DPrint
.
- setDebugCode(int) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRL
-
Sets the debug code used for printing to the terminal
- setDebugCode(int) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MultipleIntentionsMLIRL
-
Sets the debug code used for printing to the terminal
- setDebugCode(int) - Method in class burlap.behavior.singleagent.MDPSolver
-
- setDebugCode(int) - Method in interface burlap.behavior.singleagent.MDPSolverInterface
-
Sets the debug code to be used by calls to
DPrint
- setDebugCode(int) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
-
Sets the debug code used for logging plan results with
DPrint
.
- setDebugCode(int) - Method in class burlap.oomdp.core.Domain
-
Sets the debug code used for printing debug messages.
- setDebugId(int) - Method in class burlap.oomdp.stochasticgames.World
-
Sets the debug code that is use for printing with
DPrint
.
- setDefaultFloorDiscretizingMultiple(double) - Method in class burlap.oomdp.statehashing.DiscretizingHashableStateFactory
-
Sets the default multiple to use for continuous attributes that do not have specific multiples set
for them.
- setDefaultFloorDiscretizingMultiple(double) - Method in class burlap.oomdp.statehashing.DiscretizingMaskedHashableStateFactory
-
Sets the default multiple to use for continuous attributes that do not have specific multiples set
for them.
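As a sketch of what the floor-discretizing multiple in the two entries above does conceptually, the snippet below (hypothetical class name `DiscretizeSketch`, not BURLAP API) floors a continuous value to the nearest lower multiple so that nearby values hash and compare as equal:

```java
// Hypothetical sketch (not BURLAP API): how a floor-discretizing multiple
// maps continuous attribute values onto a coarse grid before hashing/equality.
public class DiscretizeSketch {

    // Floors v to the nearest lower multiple of "multiple".
    // E.g. with multiple = 0.5, both 1.7 and 1.9 land in bucket 1.5.
    public static double floorToMultiple(double v, double multiple) {
        return Math.floor(v / multiple) * multiple;
    }
}
```

Two states whose continuous attributes fall in the same bucket would then produce identical hash values.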
- setDefaultReward(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderRF
-
- setDefaultReward(double) - Method in class burlap.oomdp.singleagent.common.GoalBasedRF
-
- setDefaultValueFunctionAfterARollout(boolean) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
-
Use this method to set which value function (the lower bound or upper bound) to use after a planning rollout is complete.
- setDeterministicTransitionDynamics() - Method in class burlap.domain.singleagent.frostbite.FrostbiteDomain
-
Will set the domain to use deterministic action transitions.
- setDeterministicTransitionDynamics() - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Will set the domain to use deterministic action transitions.
- setDiscValues(List<String>) - Method in class burlap.oomdp.core.Attribute
-
Sets a discrete attribute's categorical values
- setDiscValues(String[]) - Method in class burlap.oomdp.core.Attribute
-
Sets a discrete attribute's categorical values.
- setDiscValuesForRange(int, int, int) - Method in class burlap.oomdp.core.Attribute
-
- setDomain(Domain) - Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest
-
- setDomain(Domain) - Method in class burlap.behavior.singleagent.learnfromdemo.IRLRequest
-
- setDomain(Domain) - Method in class burlap.behavior.singleagent.MDPSolver
-
- setDomain(Domain) - Method in interface burlap.behavior.singleagent.MDPSolverInterface
-
Sets the domain of this solver.
- setDomain(Domain) - Method in class burlap.oomdp.singleagent.environment.SimulatedEnvironment
-
- setDomain(Domain) - Method in class burlap.oomdp.singleagent.pomdp.SimulatedPOEnvironment
-
- setDomain(SGDomain) - Method in class burlap.oomdp.stochasticgames.World
-
- setDomain(Domain) - Method in class burlap.shell.BurlapShell
-
- setEnv(Environment) - Method in class burlap.shell.EnvironmentShell
-
- setEnvironment(Environment) - Method in class burlap.oomdp.singleagent.pomdp.BeliefAgent
-
Sets the POMDP environment
- setEnvironmentDelegate(Environment) - Method in interface burlap.oomdp.singleagent.environment.EnvironmentDelegation
-
- setEnvironmentDelegate(Environment) - Method in class burlap.oomdp.singleagent.environment.EnvironmentServer
-
- setEpisodeWeights(double[]) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRLRequest
-
- setEpsilon(double) - Method in class burlap.behavior.policy.EpsilonGreedy
-
Sets the epsilon value, where epsilon is the probability of taking a random action.
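The epsilon semantics described above can be sketched as follows (hypothetical class `EpsilonGreedySketch`, not BURLAP's `EpsilonGreedy`): with probability epsilon a uniformly random action is taken, otherwise the greedy action is chosen.

```java
import java.util.Random;

// Hypothetical sketch (not BURLAP's EpsilonGreedy): epsilon-greedy selection
// over a vector of Q-values.
public class EpsilonGreedySketch {

    public static int selectAction(double[] qValues, double epsilon, Random rng) {
        if (rng.nextDouble() < epsilon) {
            return rng.nextInt(qValues.length); // explore: uniform random action
        }
        int best = 0; // exploit: argmax over Q-values
        for (int a = 1; a < qValues.length; a++) {
            if (qValues[a] > qValues[best]) best = a;
        }
        return best;
    }
}
```

Setting epsilon to 0 yields a purely greedy policy; epsilon of 1 yields a purely random one.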
- setEpsilon(double) - Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest
-
- setEpsilon(double) - Method in class burlap.behavior.stochasticgames.agents.naiveq.history.SGQWActionHistoryFactory
-
Sets the epsilon parameter (for the epsilon-greedy policy).
- setExernalTermination(TerminalFunction) - Method in class burlap.behavior.singleagent.options.Option
-
Sets the external MDP's terminal function, which will cause this option to terminate if it enters one of those terminal states.
- setExit(State, int, int) - Static method in class burlap.domain.singleagent.blockdude.BlockDude
-
Sets the x and y position of the first exit object in the state.
- setExpectationCalculationProbabilityCutoff(double) - Method in class burlap.behavior.singleagent.options.Option
-
Sets the minimum probability of reaching a terminal state for it to be included in the options computed transition dynamics distribution.
- setExpectationHashingFactory(HashableStateFactory) - Method in class burlap.behavior.singleagent.options.Option
-
Sets the option to use the provided hashing factory for caching transition probability results.
- setExpertEpisodes(List<EpisodeAnalysis>) - Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest
-
- setExpertEpisodes(List<EpisodeAnalysis>) - Method in class burlap.behavior.singleagent.learnfromdemo.IRLRequest
-
- setFd(FeatureDatabase) - Method in class burlap.behavior.singleagent.vfa.common.FDFeatureVectorGenerator
-
- setFeatureDatabase(FeatureDatabase) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
-
Sets the feature database defining state features.
- setFeatureGenerator(StateToFeatureVectorGenerator) - Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest
-
- setFeaturesAreForNextState(boolean) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs.LinearStateDifferentiableRF
-
Sets whether features for the reward function are generated from the next state or previous state.
- setFlag(int, int) - Static method in class burlap.debugtools.DebugFlags
-
Creates/sets a debug flag
- setForgetPreviousPlanResults(boolean) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling
-
Sets whether previous planning results should be forgotten or reused in subsequent planning.
- setForgetPreviousPlanResults(boolean) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
-
Sets whether previous planning results should be forgotten or reused in subsequent planning.
- setFrameDelay(long) - Method in class burlap.oomdp.singleagent.common.VisualActionObserver
-
Sets how long to wait in ms for a state to be rendered before returning control to the agent.
- setFrameDelay(long) - Method in class burlap.oomdp.stochasticgames.common.VisualWorldObserver
-
Sets how long to wait in ms for a state to be rendered before returning control to the world.
- setGamma(double) - Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest
-
- setGamma(double) - Method in class burlap.behavior.singleagent.learnfromdemo.IRLRequest
-
- setGamma(double) - Method in class burlap.behavior.singleagent.MDPSolver
-
- setGamma(double) - Method in interface burlap.behavior.singleagent.MDPSolverInterface
-
Sets gamma, the discount factor used by this solver
- setGoal(State, int, int, int, int) - Static method in class burlap.domain.stochasticgames.gridgame.GridGame
-
Sets a goal object's attribute values.
- setGoalCondition(StateConditionTest) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BFSRTDP
-
Sets the goal state that causes the BFS-like pass to stop expanding when found.
- setGoalReward(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderRF
-
- setGoalReward(double) - Method in class burlap.oomdp.singleagent.common.GoalBasedRF
-
- setGravity(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
-
- setGravity(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Sets the gravity of the domain
- setH(int) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling
-
Sets the height of the tree.
- setH(int) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
-
Sets the height of the tree.
- setHAndCByMDPError(double, double, int) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
-
Sets the height and number of transition dynamics samples in a way that ensure epsilon optimality.
- setHashCode(int, boolean) - Method in class burlap.oomdp.core.objects.ImmutableObjectInstance
-
- setHashCode(int) - Method in class burlap.oomdp.core.states.FixedSizeImmutableState
-
- setHashingFactory(HashableStateFactory) - Method in class burlap.behavior.singleagent.MDPSolver
-
- setHashingFactory(HashableStateFactory) - Method in interface burlap.behavior.singleagent.MDPSolverInterface
-
- setHelpText(String) - Method in class burlap.shell.BurlapShell
-
- setHorizontalWall(State, int, int, int, int, int) - Static method in class burlap.domain.stochasticgames.gridgame.GridGame
-
Sets the attribute values for a horizontal wall
- setIdentityScalar(double) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
-
Sets the initial LSPI identity matrix scalar used.
- setIgloo(State, int) - Static method in class burlap.domain.singleagent.frostbite.FrostbiteDomain
-
Sets the igloo building status
- setInternalRewardFunction(JointReward) - Method in class burlap.oomdp.stochasticgames.SGAgent
-
Internal reward functions are optional, but can be useful for purposes like reward shaping.
- setIs(InputStream) - Method in class burlap.shell.BurlapShell
-
- setIterationListData() - Method in class burlap.behavior.singleagent.auxiliary.EpisodeSequenceVisualizer
-
- setJointActionModel(JointActionModel) - Method in class burlap.oomdp.stochasticgames.SGDomain
-
Sets the joint action model associated with this domain.
- setJointPolicy(JointPolicy) - Method in class burlap.behavior.stochasticgames.PolicyFromJointPolicy
-
Sets the underlying joint policy
- setK(int) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MultipleIntentionsMLIRLRequest
-
Sets the number of clusters
- setLearningPolicy(Policy) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
-
- setLearningPolicy(Policy) - Method in class burlap.behavior.singleagent.learning.tdmethods.QLearning
-
Sets which policy this agent should use for learning.
- setLearningPolicy(Policy) - Method in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
-
Sets which policy this agent should use for learning.
- setLearningPolicy(PolicyFromJointPolicy) - Method in class burlap.behavior.stochasticgames.agents.maql.MultiAgentQLearning
-
Sets the learning policy to be followed by the agent.
- setLearningRate(LearningRate) - Method in class burlap.behavior.singleagent.learning.actorcritic.actor.BoltzmannActor
-
Sets the learning rate function to use.
- setLearningRate(LearningRate) - Method in class burlap.behavior.singleagent.learning.actorcritic.critics.TDLambda
-
Sets the learning rate function to use.
- setLearningRate(LearningRate) - Method in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
-
Sets the learning rate function to use.
- setLearningRate(LearningRate) - Method in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent
-
- setLearningRateFunction(LearningRate) - Method in class burlap.behavior.singleagent.learning.tdmethods.QLearning
-
Sets the learning rate function to use
- setLims(double, double) - Method in class burlap.oomdp.core.Attribute
-
Sets the upper and lower bound limits for a bounded real attribute.
- setLocation(State, int, int, int) - Static method in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Sets the i'th location object to the specified x and y position.
- setLocation(State, int, int, int, int) - Static method in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Sets the i'th location object to the specified x and y position and location type.
- setMacroCellHorizontalCount(int) - Method in class burlap.domain.singleagent.gridworld.macro.MacroCellGridWorld
-
Sets the number of columns of macro-cells (cells across the x-axis)
- setMacroCellVerticalCount(int) - Method in class burlap.domain.singleagent.gridworld.macro.MacroCellGridWorld
-
Sets the number of rows of macro-cells (cells across the y-axis)
- setManualAgent(String, ManualAgentsCommands.ManualSGAgent) - Method in class burlap.shell.command.world.ManualAgentsCommands
-
- setManualAgents(Map<String, ManualAgentsCommands.ManualSGAgent>) - Method in class burlap.shell.command.world.ManualAgentsCommands
-
- setMap(int[][]) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Set the map of the world.
- setMapToFourRooms() - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Will set the map of the world to the classic Four Rooms map used in the original options work (Sutton, R.S.
- setMaxCartSpeedToMaxWithMovementFromOneSideToOther() - Method in class burlap.domain.singleagent.cartpole.CartPoleDomain
-
Given the current action force, track length, and masses, sets the max cart speed
to an upper bound of what is possible when moving from one side of the track to the other.
- setMaxChange(double) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
-
- setMaxDelta(double) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.RTDP
-
Sets the maximum delta state value update in a rollout that will cause planning to terminate
- setMaxDifference(double) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
-
Sets the maximum difference in value function margin permitted for planning termination.
- setMaxDim(int) - Method in class burlap.domain.stochasticgames.gridgame.GridGame
-
Sets the maximum dimension of the world; its width and height.
- setMaxDynamicDepth(int) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.RTDP
-
Sets the maximum depth of a rollout to use until it is prematurely terminated to update the value function.
- setMaxGT(int) - Method in class burlap.domain.stochasticgames.gridgame.GridGame
-
Sets the maximum number of goal types
- setMaximumEpisodesForPlanning(int) - Method in class burlap.behavior.singleagent.learning.tdmethods.QLearning
-
- setMaximumEpisodesForPlanning(int) - Method in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
-
- setMaxIterations(int) - Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest
-
- setMaxLearningSteps(int) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
-
- setMaxNumberOfRollouts(int) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
-
Sets the maximum number of rollouts permitted before planning is forced to terminate.
- setMaxNumPlanningIterations(int) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
-
- setMaxPlyrs(int) - Method in class burlap.domain.stochasticgames.gridgame.GridGame
-
Sets the max number of players
- setMaxQChangeForPlanningTerminaiton(double) - Method in class burlap.behavior.singleagent.learning.tdmethods.QLearning
-
- setMaxRolloutDepth(int) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
-
Sets the maximum rollout depth of any rollout.
- setMaxVFAWeightChangeForPlanningTerminaiton(double) - Method in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
-
- setMaxWT(int) - Method in class burlap.domain.stochasticgames.gridgame.GridGame
-
Sets the maximum number of wall types
- setMaxx(int) - Method in class burlap.domain.singleagent.blockdude.BlockDude
-
- setMaxy(int) - Method in class burlap.domain.singleagent.blockdude.BlockDude
-
- setMinNewStepsForLearningPI(int) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
-
Sets the minimum number of new learning observations before policy iteration is run again.
- setMinNumRolloutsWithSmallValueChange(int) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.RTDP
-
Sets the minimum number of consecutive rollouts with a value function change less than the maxDelta value that will cause RTDP
to stop.
- setName(String) - Method in class burlap.oomdp.core.objects.ImmutableObjectInstance
-
Sets the name of this object instance.
- setName(String) - Method in class burlap.oomdp.core.objects.MutableObjectInstance
-
Sets the name of this object instance.
- setName(String) - Method in interface burlap.oomdp.core.objects.ObjectInstance
-
Sets the name of this object instance.
- setName(String) - Method in class burlap.oomdp.statehashing.HashableObject
-
- setNextAction(GroundedSGAgentAction) - Method in class burlap.shell.command.world.ManualAgentsCommands.ManualSGAgent
-
- setNormalizeValues(boolean) - Method in class burlap.behavior.singleagent.vfa.common.ConcatenatedObjectFeatureVectorGenerator
-
Sets whether the object values are normalized in the returned feature vector
- setNumberOfLocationTypes(int) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Sets the number of possible location types to which a location object can belong.
- setNumEpisodesToStore(int) - Method in class burlap.behavior.singleagent.learning.actorcritic.ActorCritic
-
- setNumEpisodesToStore(int) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
-
- setNumEpisodesToStore(int) - Method in class burlap.behavior.singleagent.learning.modellearning.artdp.ARTDP
-
- setNumEpisodesToStore(int) - Method in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax
-
- setNumEpisodesToStore(int) - Method in class burlap.behavior.singleagent.learning.tdmethods.QLearning
-
- setNumEpisodesToStore(int) - Method in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
-
- setNumPasses(int) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.RTDP
-
Sets the number of rollouts to perform when planning is started (unless the value function delta is small enough).
- setNumSamplesForPlanning(int) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
-
- setNumXCells(int) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.PolicyGlyphPainter2D
-
Sets the number of states that will be rendered along a row
- setNumXCells(int) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.StateValuePainter2D
-
Sets the number of states that will be rendered along a row
- setNumYCells(int) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.PolicyGlyphPainter2D
-
Sets the number of states that will be rendered along a column
- setNumYCells(int) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.StateValuePainter2D
-
Sets the number of states that will be rendered along a column
- setObjectClassAttributesToTile(String, StateGridder.AttributeSpecification...) - Method in class burlap.behavior.singleagent.auxiliary.StateGridder
-
Sets the attribute specifications to use for a single
ObjectClass
- setObjectClassMask(boolean, String...) - Method in class burlap.oomdp.statehashing.FixedSizeStateHashableStateFactory
-
Sets the masking value of all provided object classes.
- setObjectClassMask(String...) - Method in class burlap.oomdp.statehashing.ImmutableStateHashableStateFactory
-
Sets all objects of the provided object class to not be hashed, or used in an equality comparison.
- setObjectClassMask(boolean, String...) - Method in class burlap.oomdp.statehashing.ImmutableStateHashableStateFactory
-
Sets the masking value of all provided object classes.
- setObjectIdentiferDependence(boolean) - Method in class burlap.oomdp.core.Domain
-
Sets whether this domain's states are object identifier (name) dependent.
- setObjectMask(boolean, String...) - Method in class burlap.oomdp.statehashing.FixedSizeStateHashableStateFactory
-
Sets the masking value of all provided objects.
- setObjectMask(String...) - Method in class burlap.oomdp.statehashing.ImmutableStateHashableStateFactory
-
Sets all objects provided to not be hashed, or used in an equality comparison
- setObjectMask(boolean, String...) - Method in class burlap.oomdp.statehashing.ImmutableStateHashableStateFactory
-
Sets the masking value of all provided objects.
- setObjectParameters(String[]) - Method in class burlap.behavior.stochasticgames.agents.interfacing.singleagent.SGToSADomain.GroundedSObParamedAAActionWrapper
-
- setObjectParameters(String[]) - Method in interface burlap.oomdp.core.AbstractObjectParameterizedGroundedAction
-
- setObjectParameters(String[]) - Method in class burlap.oomdp.singleagent.ObjectParameterizedAction.ObjectParameterizedGroundedAction
-
- setObjectParameters(String[]) - Method in class burlap.oomdp.singleagent.pomdp.BeliefMDPGenerator.ObjectParameterizedGroundedBeliefAction
-
- setObjectParameters(String[]) - Method in class burlap.oomdp.stochasticgames.agentactions.ObParamSGAgentAction.GroundedObParamSGAgentAction
-
- setObjectsValue(String, String, T) - Method in class burlap.oomdp.core.states.FixedSizeImmutableState
-
- setObjectsValue(String, String, T) - Method in class burlap.oomdp.core.states.ImmutableState
-
- setObjectsValue(String, String, T) - Method in class burlap.oomdp.core.states.OOMDPState
-
Sets an object's value.
- setObjectsValue(String, String, T) - Method in interface burlap.oomdp.core.states.State
-
- setObjectsValue(String, String, T) - Method in class burlap.oomdp.singleagent.pomdp.beliefstate.tabular.TabularBeliefState
-
- setObjectsValue(String, String, T) - Method in class burlap.oomdp.statehashing.HashableState
-
- setObservationFunction(ObservationFunction) - Method in class burlap.oomdp.singleagent.pomdp.PODomain
-
- setObstacle(State, int, double, double, double, double) - Static method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Sets an obstacle's boundaries/position
- setObstacleInCell(int, int) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Sets a complete cell obstacle in the designated location.
- setOptionsFirst() - Method in class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.DFS
-
Sets the valueFunction to explore nodes generated by options first.
- setOs(PrintStream) - Method in class burlap.shell.BurlapShell
-
- setPad(State, double, double, double, double) - Static method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Sets the first landing pad's boundaries/position
- setPad(State, int, double, double, double, double) - Static method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Sets a landing pad's boundaries/position
- setPainter(Visualizer) - Method in class burlap.oomdp.singleagent.common.VisualActionObserver
-
- setParameter(int, double) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs.LinearStateActionDifferentiableRF
-
- setParameter(int, double) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs.LinearStateDifferentiableRF
-
- setParameter(int, double) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.DiffVFRF
-
- setParameter(int, double) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
-
- setParameter(int, double) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearStateDiffVF
-
- setParameter(int, double) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.VanillaDiffVinit
-
- setParameter(int, double) - Method in class burlap.behavior.singleagent.vfa.common.LinearFVVFA
-
- setParameter(int, double) - Method in class burlap.behavior.singleagent.vfa.common.LinearVFA
-
- setParameter(int, double) - Method in interface burlap.behavior.singleagent.vfa.ParametricFunction
-
Sets the value of the ith parameter to the given value
- setPayoff(int, int, double, double) - Method in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.EquilibriumPlayingSGAgent.BimatrixTuple
-
Sets the payoffs for a given row and column.
- setPayout(int, double, String...) - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
-
Sets the payout that player number playerNumber receives for a given strategy profile
- setPayout(int, double, int...) - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
-
Sets the payout that player number playerNumber receives for a given strategy profile
- setPhysParams(LunarLanderDomain.LLPhysicsParams) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.ActionIdle
-
- setPhysParams(LunarLanderDomain.LLPhysicsParams) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.ActionThrust
-
- setPhysParams(LunarLanderDomain.LLPhysicsParams) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.ActionTurn
-
- setPhysParams(LunarLanderDomain.LLPhysicsParams) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
- setPlanner(Planner) - Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest
-
- setPlanner(Planner) - Method in class burlap.behavior.singleagent.learnfromdemo.IRLRequest
-
- setPlanner(Planner) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRLRequest
-
- setPlannerFactory(QGradientPlannerFactory) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MultipleIntentionsMLIRLRequest
-
Sets the
QGradientPlannerFactory
to use and also
sets this request object's valueFunction instance to a valueFunction generated from it, if it has not already been set.
- setPlannerReference(MADynamicProgramming) - Method in class burlap.behavior.stochasticgames.agents.madp.MADPPlannerFactory.ConstantMADPPlannerFactory
-
Changes the valueFunction reference
- setPlanningAndControlDepth(int) - Method in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
-
- setPlanningCollector(SARSCollector) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
-
- setPlanningDepth(int) - Method in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
-
Sets the Bellman operator depth used during planning.
- setPlatform(State, int, int, int, int, boolean) - Static method in class burlap.domain.singleagent.frostbite.FrostbiteDomain
-
Sets a platform position, size and status
- setPlotCISignificance(double) - Method in class burlap.behavior.singleagent.auxiliary.performance.LearningAlgorithmExperimenter
-
Sets the significance used for confidence intervals.
- setPlotCISignificance(double) - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentExperimenter
-
Sets the significance used for confidence intervals.
- setPlotRefreshDelay(int) - Method in class burlap.behavior.singleagent.auxiliary.performance.LearningAlgorithmExperimenter
-
Sets the delay in milliseconds between automatic plot refreshes
- setPlotRefreshDelay(int) - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentExperimenter
-
Sets the delay in milliseconds between automatic plot refreshes
- setPolicy(Policy) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.PolicyRenderLayer
-
Sets the policy to render
- setPolicy(Policy) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
-
Sets the policy to render
- setPolicy(SolverDerivedPolicy) - Method in class burlap.behavior.singleagent.learning.modellearning.artdp.ARTDP
-
Sets the policy to the provided one.
- setPolicy(PolicyFromJointPolicy) - Method in class burlap.behavior.stochasticgames.agents.madp.MultiAgentDPPlanningAgent
-
Sets the policy derived from this agent's valueFunction to follow.
- setPolicyCount(int) - Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest
-
- setPolicyToEvaluate(Policy) - Method in class burlap.behavior.singleagent.planning.stochastic.policyiteration.PolicyIteration
-
Sets the initial policy that will be evaluated when planning with policy iteration begins.
- setPolynomialDegree(double) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.LandmarkColorBlendInterpolation
-
Sets the color blend to raise the normalized distance of values to the given degree.
- setPreference(int, double) - Method in class burlap.datastructures.BoltzmannDistribution
-
Sets the preference for the ith element.
- setPreferences(double[]) - Method in class burlap.datastructures.BoltzmannDistribution
-
Sets the input preferences
- setProbSucceedTransitionDynamics(double) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Sets the domain to use probabilistic transitions.
- setQInitFunction(ValueFunctionInitialization) - Method in class burlap.behavior.singleagent.learning.tdmethods.QLearning
-
Sets how to initialize Q-values for previously unexperienced state-action pairs.
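The idea this setter configures (a configurable default for previously unexperienced state-action pairs, e.g. optimistic values to encourage exploration) can be sketched outside of BURLAP as a plain Q-table with a default. The class and method names below are illustrative, not BURLAP's:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch (not BURLAP code): a Q-table that returns a
// configurable initial value for state-action pairs it has never seen,
// mirroring what a Q-value initialization function provides to QLearning.
class QTableSketch {
    private final Map<String, Double> q = new HashMap<>();
    private double qInit;

    QTableSketch(double qInit) { this.qInit = qInit; }

    // Analogous to setting the Q-value initializer: changes how
    // unexperienced pairs are valued from now on.
    void setQInit(double qInit) { this.qInit = qInit; }

    double get(String state, String action) {
        // unseen pairs fall back to the configured initial value
        return q.getOrDefault(state + "|" + action, qInit);
    }

    void set(String state, String action, double value) {
        q.put(state + "|" + action, value);
    }
}
```

Setting `qInit` above the best achievable return yields optimistic initialization, which drives a greedy learner to try every action at least once.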
- setQSourceMap(Map<String, QSourceForSingleAgent>) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.AgentQSourceMap.HashMapAgentQSourceMap
-
Sets the Q-source hash map to be used.
- setQSourceProvider(MultiAgentQSourceProvider) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.MAQSourcePolicy
-
- setQSourceProvider(MultiAgentQSourceProvider) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.ECorrelatedQJointPolicy
-
- setQSourceProvider(MultiAgentQSourceProvider) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EGreedyJointPolicy
-
- setQSourceProvider(MultiAgentQSourceProvider) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EGreedyMaxWellfare
-
- setQSourceProvider(MultiAgentQSourceProvider) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EMinMaxPolicy
-
- setQValueInitializer(ValueFunctionInitialization) - Method in class burlap.behavior.stochasticgames.agents.naiveq.history.SGQWActionHistoryFactory
-
Sets the Q-value initialization function that will be used by the agent.
- setQValueInitializer(ValueFunctionInitialization) - Method in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent
-
- setRandom(Random) - Method in class burlap.datastructures.StochasticTree
-
Sets the tree to use a specific random object when performing sampling
- setRandomGenerator(Random) - Method in class burlap.behavior.policy.RandomPolicy
-
Sets the random generator used for action selection.
- setRandomObject(Random) - Method in class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator
-
Sets the random object used for generating states
- setRefreshDelay(int) - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
sets the delay in milliseconds between automatic refreshes of the plots
- setRefreshDelay(int) - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
-
sets the delay in milliseconds between automatic refreshes of the plots
- setRenderStyle(PolicyGlyphPainter2D.PolicyGlyphRenderStyle) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.PolicyGlyphPainter2D
-
Sets the rendering style
- setRepaintOnActionInitiation(boolean) - Method in class burlap.oomdp.singleagent.common.VisualActionObserver
-
- setRepaintStateOnEnvironmentInteraction(boolean) - Method in class burlap.oomdp.singleagent.common.VisualActionObserver
-
- setRequest(MLIRLRequest) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRL
-
- setReward(int, int, double) - Method in class burlap.domain.singleagent.gridworld.GridWorldRewardFunction
-
Sets the reward the agent will receive for transitioning to position x, y.
- setRewardFunction(RewardFunction) - Method in class burlap.behavior.singleagent.learning.actorcritic.critics.TDLambda
-
Sets the reward function to use.
- setRf(DifferentiableRF) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRLRequest
-
- setRf(RewardFunction) - Method in class burlap.behavior.singleagent.MDPSolver
-
- setRf(RewardFunction) - Method in interface burlap.behavior.singleagent.MDPSolverInterface
-
Sets the reward function used by this solver
- setRf(RewardFunction) - Method in class burlap.oomdp.singleagent.environment.SimulatedEnvironment
-
- setRf(RewardFunction) - Method in interface burlap.oomdp.singleagent.environment.TaskSettableEnvironment
-
- setRfDim(int) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
-
- setRfFeaturesAreForNextState(boolean) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
-
- setRfFvGen(StateToFeatureVectorGenerator) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
-
- setRollOutPolicy(Policy) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.RTDP
-
Sets the rollout policy to use.
- setRunRolloutsInRevere(boolean) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
-
Sets whether each rollout should be run in reverse after completion.
- setSamples(List<State>) - Method in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
-
Sets the state samples to which the value function will be fit.
- setSemiWallPassableProbability(double) - Method in class burlap.domain.stochasticgames.gridgame.GridGame
-
Sets the probability that an agent can pass through a semi-wall.
- setSetRenderLayer(StateRenderLayer) - Method in class burlap.oomdp.visualizer.Visualizer
-
- setShell(EnvironmentShell) - Method in class burlap.oomdp.singleagent.explorer.TerminalExplorer
-
Deprecated.
- setSignificanceForCI(double) - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
Sets the significance used for confidence intervals.
- setSignificanceForCI(double) - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
-
Sets the significance used for confidence intervals.
- setSoftTieRenderStyleDelta(double) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.PolicyGlyphPainter2D
-
Sets the soft difference between max actions to determine ties when the MAXACTIONSOFSOFTTIE render style is used.
- setSolver(MDPSolverInterface) - Method in class burlap.behavior.policy.BoltzmannQPolicy
-
- setSolver(MDPSolverInterface) - Method in class burlap.behavior.policy.EpsilonGreedy
-
- setSolver(MDPSolverInterface) - Method in class burlap.behavior.policy.GreedyDeterministicQPolicy
-
- setSolver(MDPSolverInterface) - Method in class burlap.behavior.policy.GreedyQPolicy
-
- setSolver(MDPSolverInterface) - Method in interface burlap.behavior.policy.SolverDerivedPolicy
-
Sets the valueFunction whose results affect this policy.
- setSolver(MDPSolverInterface) - Method in class burlap.behavior.singleagent.planning.deterministic.DDPlannerPolicy
-
- setSolver(MDPSolverInterface) - Method in class burlap.behavior.singleagent.planning.deterministic.SDPlannerPolicy
-
- setSolver(MDPSolverInterface) - Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTTreeWalkPolicy
-
- setSpp(StatePolicyPainter) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.PolicyRenderLayer
-
Sets the state-wise policy painter
- setSpp(StatePolicyPainter) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
-
Sets the state-wise policy painter
- setStartStateGenerator(StateGenerator) - Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest
-
- setStateActionRenderLayer(StateActionRenderLayer, boolean) - Method in class burlap.oomdp.visualizer.Visualizer
-
- setStateContext(State) - Method in interface burlap.oomdp.auxiliary.stateconditiontest.StateConditionTestIterable
-
- setStateEnumerator(StateEnumerator) - Method in class burlap.oomdp.singleagent.pomdp.PODomain
-
Sets the StateEnumerator used by this domain to enumerate all underlying MDP states.
- setStateGenerator(StateGenerator) - Method in class burlap.oomdp.singleagent.environment.SimulatedEnvironment
-
- setStateMapping(StateMapping) - Method in class burlap.behavior.singleagent.options.Option
-
Sets this option to use a state mapping that maps from the source MDP states to another state representation that will be used by this option for making
action selections.
- setStateSelectionMode(BoundedRTDP.StateSelectionMode) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
-
Sets the state selection mode used when choosing next states to expand.
- setStatesToVisualize(Collection<State>) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.PolicyRenderLayer
-
Sets the states to visualize
- setStatesToVisualize(Collection<State>) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionRenderLayer
-
Sets the states to visualize
- setStoredAbstraction(StateAbstraction) - Method in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQFactory
-
Sets the factory to provide Q-learning algorithms with the given state abstraction.
- setStoredMapAbstraction(StateAbstraction) - Method in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent
-
Sets the state abstraction that this agent will use
- setStrategy(Policy) - Method in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent
-
Sets the Q-learning policy that this agent will use (e.g., epsilon greedy)
- SetStrategySGAgent - Class in burlap.behavior.stochasticgames.agents
-
A class for an agent who makes decisions by following a specified strategy and does not respond to the other player's actions.
- SetStrategySGAgent(SGDomain, Policy) - Constructor for class burlap.behavior.stochasticgames.agents.SetStrategySGAgent
-
Initializes for the given domain in which the agent will play and the strategy that they will follow.
- SetStrategySGAgent.SetStrategyAgentFactory - Class in burlap.behavior.stochasticgames.agents
-
- SetStrategySGAgent.SetStrategyAgentFactory(SGDomain, Policy) - Constructor for class burlap.behavior.stochasticgames.agents.SetStrategySGAgent.SetStrategyAgentFactory
-
- setSvp(StateValuePainter) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionRenderLayer
-
Sets the state-wise value function painter
- setSvp(StateValuePainter) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
-
Sets the state-wise value function painter
- setSynchronizeJointActionSelectionAmongAgents(boolean) - Method in class burlap.behavior.stochasticgames.PolicyFromJointPolicy
-
Sets whether action selection of this agent's policy should be synchronized with the action selection of other agents
following the same underlying joint policy.
- setTargetAgent(String) - Method in class burlap.behavior.stochasticgames.JointPolicy
-
Sets the target privileged agent from which this joint policy is defined.
- setTargetAgent(String) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.ECorrelatedQJointPolicy
-
- setTargetAgent(String) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EGreedyJointPolicy
-
- setTargetAgent(String) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EGreedyMaxWellfare
-
- setTargetAgent(String) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EMinMaxPolicy
-
- setTemperature(double) - Method in class burlap.datastructures.BoltzmannDistribution
-
Sets the temperature value to use.
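The role of the temperature and preference setters on a Boltzmann distribution can be sketched in plain Java; this is a softmax over preferences, not the BURLAP class itself, and the names are illustrative:

```java
// Illustrative sketch (not BURLAP code): how temperature shapes a
// Boltzmann (softmax) distribution over a preference vector, as
// configured by setPreferences/setTemperature.
class BoltzmannSketch {
    static double[] probabilities(double[] prefs, double temperature) {
        double max = Double.NEGATIVE_INFINITY;
        for (double v : prefs) max = Math.max(max, v);
        double[] p = new double[prefs.length];
        double sum = 0.0;
        for (int i = 0; i < prefs.length; i++) {
            // subtract max before exponentiating for numerical stability
            p[i] = Math.exp((prefs[i] - max) / temperature);
            sum += p[i];
        }
        for (int i = 0; i < p.length; i++) p[i] /= sum;
        return p;
    }
}
```

High temperatures flatten the distribution toward uniform; temperatures near zero concentrate almost all mass on the highest-preference element.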
- setTerminalStates(Set<Integer>) - Method in class burlap.domain.singleagent.graphdefined.GraphTF
-
- setTerminateMapper(DirectOptionTerminateMapper) - Method in class burlap.behavior.singleagent.options.Option
-
Sets this option to determine its execution results using a direct terminal state mapping rather than actually executing each action selected by the option step by step.
- setTerminateOnTrue(boolean) - Method in class burlap.oomdp.auxiliary.common.SinglePFTF
-
Sets whether a state is terminal when there is a true grounded version of this class's propositional function,
or when there is a false grounded version.
- setTf(TerminalFunction) - Method in class burlap.behavior.singleagent.MDPSolver
-
- setTf(TerminalFunction) - Method in interface burlap.behavior.singleagent.MDPSolverInterface
-
Sets the terminal state function used by this solver
- setTf(TerminalFunction) - Method in class burlap.oomdp.auxiliary.stateconditiontest.TFGoalCondition
-
- setTf(TerminalFunction) - Method in class burlap.oomdp.singleagent.environment.SimulatedEnvironment
-
- setTf(TerminalFunction) - Method in interface burlap.oomdp.singleagent.environment.TaskSettableEnvironment
-
- setTheTaskSpec(TaskSpec) - Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueDomain
-
- setTHistory(double[]) - Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest
-
- setThrustValue(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.ActionThrust
-
- setToCorrectModel() - Method in class burlap.domain.singleagent.cartpole.CartPoleDomain
-
Sets to use the correct physics model by Florian.
- setToIncorrectClassicModel() - Method in class burlap.domain.singleagent.cartpole.CartPoleDomain
-
Sets to use the classic model by Barto, Sutton, and Anderson, which has incorrect friction forces and gravity
in the wrong direction.
- setToIncorrectClassicModelWithCorrectGravity() - Method in class burlap.domain.singleagent.cartpole.CartPoleDomain
-
Sets to use the classic model by Barto, Sutton, and Anderson which has incorrect friction forces, but will use
correct gravity.
- setToStandardLunarLander() - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Sets the domain to use a standard set of physics and with a standard set of two thrust actions.
gravity = -0.2
xmin = 0
xmax = 100
ymin = 0
ymax = 50
max velocity component speed = 4
maximum angle of rotation = pi/4
change in angle from turning = pi/20
thrust1 force = 0.32
thrust2 force = 0.2 (opposite gravity)
- setTransition(int, int, int, double) - Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain
-
Sets the probability p for transitioning to state node tNode after taking action number action in state node srcNode.
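The nested bookkeeping a setter like this implies for a graph-defined MDP can be sketched with plain Java maps; the class below is illustrative, not BURLAP's internal representation:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch (not BURLAP code): storing and querying the
// transition dynamics of a graph-defined MDP, keyed by
// source node -> action -> (target node -> probability).
class GraphDynamicsSketch {
    private final Map<Integer, Map<Integer, Map<Integer, Double>>> dynamics =
            new HashMap<>();

    void setTransition(int srcNode, int action, int tNode, double p) {
        dynamics.computeIfAbsent(srcNode, s -> new HashMap<>())
                .computeIfAbsent(action, a -> new HashMap<>())
                .put(tNode, p);
    }

    double probability(int srcNode, int action, int tNode) {
        // unspecified transitions have probability zero
        return dynamics.getOrDefault(srcNode, Map.of())
                       .getOrDefault(action, Map.of())
                       .getOrDefault(tNode, 0.0);
    }
}
```

A caller is responsible for making the probabilities of each (srcNode, action) pair sum to one; this sketch does not validate that.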
- setTransitionDynamics(Map<Integer, Map<Integer, Set<GraphDefinedDomain.NodeTransitionProbability>>>) - Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain.GraphAction
-
- setTransitionDynamics(double[][]) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Will set the movement direction probabilities based on the action chosen.
- setType(Attribute.AttributeType) - Method in class burlap.oomdp.core.Attribute
-
Sets the type for this attribute.
- setup() - Method in class burlap.testing.TestBlockDude
-
- setup() - Method in class burlap.testing.TestGridWorld
-
- setup() - Method in class burlap.testing.TestHashing
-
- setup() - Method in class burlap.testing.TestImmutableState
-
- setup() - Method in class burlap.testing.TestPlanning
-
- setupForNewEpisode() - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.Trial
-
Completes the last episode and sets up the data structures for the next episode.
- setupForNewEpisode() - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.Trial
-
Completes the last episode and sets up the data structures for the next episode.
- setUpPlottingConfiguration(int, int, int, int, TrialMode, PerformanceMetric...) - Method in class burlap.behavior.singleagent.auxiliary.performance.LearningAlgorithmExperimenter
-
Sets up the plotting configuration.
- setUpPlottingConfiguration(int, int, int, int, TrialMode, PerformanceMetric...) - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentExperimenter
-
Sets up the plotting configuration.
- setUseFeatureWiseLearningRate(boolean) - Method in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
-
Sets whether learning rate polls should be based on the VFA state feature ids, or the OO-MDP state.
- setUseMaxHeap(boolean) - Method in class burlap.datastructures.HashIndexedHeap
-
Sets whether this heap is a max heap or a min heap
- setUseReplaceTraces(boolean) - Method in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
-
Sets whether to use replacing eligibility traces rather than accumulating traces.
- setUseSemiDeep(boolean) - Method in class burlap.domain.singleagent.blockdude.BlockDude
-
Sets whether generated domain's actions use semi-deep state copies or full deep copies.
- setUseVariableCSize(boolean) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling
-
Sets whether the number of state transition samples (C) should be variable with respect to the depth of the node.
- setUseVariableCSize(boolean) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
-
Sets whether the number of state transition samples (C) should be variable with respect to the depth of the node.
- setUsingMaxMargin(boolean) - Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest
-
- setV(DifferentiableSparseSampling.QAndQGradient) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling.DiffStateNode
-
- setValue(HashableState, double) - Method in class burlap.behavior.stochasticgames.madynamicprogramming.MADynamicProgramming.BackupBasedQSource
-
Sets the value of the state in this objects value function map.
- setValue(String, String) - Method in class burlap.oomdp.core.objects.ImmutableObjectInstance
-
Sets the value of the attribute named attName for this object instance.
- setValue(String, double) - Method in class burlap.oomdp.core.objects.ImmutableObjectInstance
-
Sets the value of the attribute named attName for this object instance.
- setValue(String, int) - Method in class burlap.oomdp.core.objects.ImmutableObjectInstance
-
Sets the value of the attribute named attName for this object instance.
- setValue(String, boolean) - Method in class burlap.oomdp.core.objects.ImmutableObjectInstance
-
Sets the value of the attribute named attName for this object instance.
- setValue(String, int[]) - Method in class burlap.oomdp.core.objects.ImmutableObjectInstance
-
Sets the value of the attribute named attName for this object instance.
- setValue(String, double[]) - Method in class burlap.oomdp.core.objects.ImmutableObjectInstance
-
Sets the value of the attribute named attName for this object instance.
- setValue(String, String) - Method in class burlap.oomdp.core.objects.MutableObjectInstance
-
Sets the value of the attribute named attName for this object instance.
- setValue(String, double) - Method in class burlap.oomdp.core.objects.MutableObjectInstance
-
Sets the value of the attribute named attName for this object instance.
- setValue(String, int) - Method in class burlap.oomdp.core.objects.MutableObjectInstance
-
Sets the value of the attribute named attName for this object instance.
- setValue(String, boolean) - Method in class burlap.oomdp.core.objects.MutableObjectInstance
-
Sets the value of the attribute named attName for this object instance.
- setValue(String, int[]) - Method in class burlap.oomdp.core.objects.MutableObjectInstance
-
Sets the value of the attribute named attName for this object instance.
- setValue(String, double[]) - Method in class burlap.oomdp.core.objects.MutableObjectInstance
-
Sets the value of the attribute named attName for this object instance.
- setValue(String, T) - Method in interface burlap.oomdp.core.objects.ObjectInstance
-
- setValue(String, String) - Method in interface burlap.oomdp.core.objects.ObjectInstance
-
Sets the value of the attribute named attName for this object instance.
- setValue(String, double) - Method in interface burlap.oomdp.core.objects.ObjectInstance
-
Sets the value of the attribute named attName for this object instance.
- setValue(String, int) - Method in interface burlap.oomdp.core.objects.ObjectInstance
-
Sets the value of the attribute named attName for this object instance.
- setValue(String, boolean) - Method in interface burlap.oomdp.core.objects.ObjectInstance
-
Sets the value of the attribute named attName for this object instance.
- setValue(String, int[]) - Method in interface burlap.oomdp.core.objects.ObjectInstance
-
Sets the value of the attribute named attName for this object instance.
- setValue(String, double[]) - Method in interface burlap.oomdp.core.objects.ObjectInstance
-
Sets the value of the attribute named attName for this object instance.
- setValue(String, T) - Method in class burlap.oomdp.core.objects.OOMDPObjectInstance
-
Sets an object's value based on its java.lang type.
- setValue(int) - Method in class burlap.oomdp.core.values.DiscreteValue
-
- setValue(double) - Method in class burlap.oomdp.core.values.DiscreteValue
-
- setValue(boolean) - Method in class burlap.oomdp.core.values.DiscreteValue
-
- setValue(String) - Method in class burlap.oomdp.core.values.DiscreteValue
-
- setValue(String) - Method in class burlap.oomdp.core.values.DoubleArrayValue
-
- setValue(int[]) - Method in class burlap.oomdp.core.values.DoubleArrayValue
-
- setValue(double[]) - Method in class burlap.oomdp.core.values.DoubleArrayValue
-
- setValue(String) - Method in class burlap.oomdp.core.values.IntArrayValue
-
- setValue(int[]) - Method in class burlap.oomdp.core.values.IntArrayValue
-
- setValue(double[]) - Method in class burlap.oomdp.core.values.IntArrayValue
-
- setValue(int) - Method in class burlap.oomdp.core.values.IntValue
-
- setValue(double) - Method in class burlap.oomdp.core.values.IntValue
-
- setValue(String) - Method in class burlap.oomdp.core.values.IntValue
-
- setValue(boolean) - Method in class burlap.oomdp.core.values.IntValue
-
- setValue(String) - Method in class burlap.oomdp.core.values.MultiTargetRelationalValue
-
- setValue(int) - Method in class burlap.oomdp.core.values.OOMDPValue
-
Sets the internal value representation using an int value
- setValue(double) - Method in class burlap.oomdp.core.values.OOMDPValue
-
Sets the internal value representation using a double value
- setValue(String) - Method in class burlap.oomdp.core.values.OOMDPValue
-
Sets the internal value representation using a string value
- setValue(boolean) - Method in class burlap.oomdp.core.values.OOMDPValue
-
Sets the internal value representation using a boolean value
- setValue(int[]) - Method in class burlap.oomdp.core.values.OOMDPValue
-
Sets the int array value.
- setValue(double[]) - Method in class burlap.oomdp.core.values.OOMDPValue
-
Sets the double array value.
- setValue(int) - Method in class burlap.oomdp.core.values.RealValue
-
- setValue(double) - Method in class burlap.oomdp.core.values.RealValue
-
- setValue(String) - Method in class burlap.oomdp.core.values.RealValue
-
- setValue(String) - Method in class burlap.oomdp.core.values.RelationalValue
-
- setValue(int) - Method in class burlap.oomdp.core.values.StringValue
-
- setValue(double) - Method in class burlap.oomdp.core.values.StringValue
-
- setValue(String) - Method in class burlap.oomdp.core.values.StringValue
-
- setValue(int) - Method in interface burlap.oomdp.core.values.Value
-
Sets the internal value representation using an int value
- setValue(double) - Method in interface burlap.oomdp.core.values.Value
-
Sets the internal value representation using a double value
- setValue(String) - Method in interface burlap.oomdp.core.values.Value
-
Sets the internal value representation using a string value
- setValue(boolean) - Method in interface burlap.oomdp.core.values.Value
-
Sets the internal value representation using a boolean value
- setValue(int[]) - Method in interface burlap.oomdp.core.values.Value
-
Sets the int array value.
- setValue(double[]) - Method in interface burlap.oomdp.core.values.Value
-
Sets the double array value.
- setValue(String, T) - Method in class burlap.oomdp.statehashing.HashableObject
-
- setValue(String, String) - Method in class burlap.oomdp.statehashing.HashableObject
-
- setValue(String, double) - Method in class burlap.oomdp.statehashing.HashableObject
-
- setValue(String, int) - Method in class burlap.oomdp.statehashing.HashableObject
-
- setValue(String, boolean) - Method in class burlap.oomdp.statehashing.HashableObject
-
- setValue(String, int[]) - Method in class burlap.oomdp.statehashing.HashableObject
-
- setValue(String, double[]) - Method in class burlap.oomdp.statehashing.HashableObject
-
- setValueForLeafNodes(ValueFunctionInitialization) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling
-
- setValueForLeafNodes(ValueFunctionInitialization) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
-
- setValueFunctionInitialization(ValueFunctionInitialization) - Method in class burlap.behavior.singleagent.planning.stochastic.DynamicProgramming
-
Sets the value function initialization to use.
- setValueFunctionToLowerBound() - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
-
Sets the value function to use to be the lower bound.
- setValueFunctionToUpperBound() - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
-
Sets the value function to use to be the upper bound.
- setValueStringRenderingFormat(int, Color, int, float, float) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.StateValuePainter2D
-
Sets the rendering format of the string displaying the value of each state.
- setVerticalWall(State, int, int, int, int, int) - Static method in class burlap.domain.stochasticgames.gridgame.GridGame
-
Sets the attribute values for a vertical wall
- setVGrad(DifferentiableSparseSampling.QAndQGradient) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling.DiffStateNode
-
- setVInit(ValueFunctionInitialization) - Method in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
-
Sets the value function initialization used at the start of planning.
- setVinitDim(int) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
-
- setVinitFvGen(StateToFeatureVectorGenerator) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
-
- setVisualizer(Visualizer) - Method in class burlap.shell.BurlapShell
-
- setVmax(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
-
- setVmax(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Sets the maximum velocity of the agent (the agent cannot move faster than this value).
- setVmax(double) - Method in class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator
-
Sets the maximum velocity that a generated state can have.
- setVmin(double) - Method in class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator
-
Sets the minimum velocity that a generated state can have.
- setVRange(double, double) - Method in class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator
-
Sets the random velocity range that a generated state can have.
- setW(World) - Method in class burlap.oomdp.stochasticgames.explorers.SGVisualExplorer
-
Sets the World associated with this visual explorer and shell.
- setWallInstance(ObjectInstance, int, int, int, int) - Static method in class burlap.domain.stochasticgames.gridgame.GridGame
-
Sets the attribute values for a wall instance
- setWelcomeMessage(String) - Method in class burlap.shell.BurlapShell
-
- setWorld(World) - Method in class burlap.shell.SGWorldShell
-
- setXmax(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
-
- setXmax(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Sets the maximum x position of the lander (the agent cannot cross this boundary)
- setXmax(double) - Method in class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator
-
Sets the maximum x-value that a generated state can have.
- setXmin(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
-
- setXmin(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Sets the minimum x position of the lander (the agent cannot cross this boundary)
- setXmin(double) - Method in class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator
-
Sets the minimum x-value that a generated state can have.
- setXRange(double, double) - Method in class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator
-
Sets the random x-value range that a generated state can have.
- setXYAttByObjectClass(String, String, String, String) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.PolicyGlyphPainter2D
-
Will set the x-y attributes to use for cell rendering to the x and y attributes of the first object in the state of the designated classes.
- setXYAttByObjectClass(String, String, String, String) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.StateValuePainter2D
-
Will set the x-y attributes to use for cell rendering to the x and y attributes of the first object in the state of the designated classes.
- setXYAttByObjectReference(String, String, String, String) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.PolicyGlyphPainter2D
-
Will set the x-y attributes to use for cell rendering to the x and y attributes of the designated object references.
- setXYAttByObjectReference(String, String, String, String) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.StateValuePainter2D
-
Will set the x-y attributes to use for cell rendering to the x and y attributes of the designated object references.
- setYmax(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
-
- setYmax(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Sets the maximum y position of the lander (the agent cannot cross this boundary)
- setYmin(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
-
- setYmin(double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Sets the minimum y position of the lander (the agent cannot cross this boundary)
- SGAgent - Class in burlap.oomdp.stochasticgames
-
This abstract class defines the shell code and interface for creating agents
that can make decisions in multi-agent stochastic game worlds.
- SGAgent() - Constructor for class burlap.oomdp.stochasticgames.SGAgent
-
- SGAgentAction - Class in burlap.oomdp.stochasticgames.agentactions
-
An abstract class for providing action definitions that are selectable by agents (SGAgent) in a stochastic game.
- SGAgentAction(SGDomain, String) - Constructor for class burlap.oomdp.stochasticgames.agentactions.SGAgentAction
-
Initializes this single action to be for the given domain and with the given name.
- SGAgentType - Class in burlap.oomdp.stochasticgames
-
This class specifies the type of a stochastic games agent.
- SGAgentType(String, ObjectClass, List<SGAgentAction>) - Constructor for class burlap.oomdp.stochasticgames.SGAgentType
-
Creates a new agent type with a given name, object class describing the agent's world state, and actions available to the agent.
- SGBackupOperator - Interface in burlap.behavior.stochasticgames.madynamicprogramming
-
A stochastic games backup operator to be used in multi-agent Q-learning or value function planning.
- SGDomain - Class in burlap.oomdp.stochasticgames
-
This class is used to define Stochastic Games Domains.
- SGDomain() - Constructor for class burlap.oomdp.stochasticgames.SGDomain
-
- SGNaiveQFactory - Class in burlap.behavior.stochasticgames.agents.naiveq
-
- SGNaiveQFactory(SGDomain, double, double, double, HashableStateFactory) - Constructor for class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQFactory
-
Initializes the factory.
- SGNaiveQFactory(SGDomain, double, double, double, HashableStateFactory, StateAbstraction) - Constructor for class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQFactory
-
Initializes the factory.
- SGNaiveQLAgent - Class in burlap.behavior.stochasticgames.agents.naiveq
-
A Tabular Q-learning [1] algorithm for stochastic games formalisms.
- SGNaiveQLAgent(SGDomain, double, double, HashableStateFactory) - Constructor for class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent
-
Initializes with a default Q-value of 0 and a 0.1 epsilon greedy policy/strategy
- SGNaiveQLAgent(SGDomain, double, double, double, HashableStateFactory) - Constructor for class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent
-
Initializes with a default 0.1 epsilon greedy policy/strategy
- SGNaiveQLAgent(SGDomain, double, double, ValueFunctionInitialization, HashableStateFactory) - Constructor for class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent
-
Initializes with a default 0.1 epsilon greedy policy/strategy
- SGQWActionHistory - Class in burlap.behavior.stochasticgames.agents.naiveq.history
-
A Tabular Q-learning [1] algorithm for stochastic games formalisms that augments states with the actions each agent took in the previous n
time steps.
- SGQWActionHistory(SGDomain, double, double, HashableStateFactory, int, int, ActionIdMap) - Constructor for class burlap.behavior.stochasticgames.agents.naiveq.history.SGQWActionHistory
-
Initializes the learning algorithm using 0.1 epsilon greedy learning strategy/policy
- SGQWActionHistory(SGDomain, double, double, HashableStateFactory, int) - Constructor for class burlap.behavior.stochasticgames.agents.naiveq.history.SGQWActionHistory
-
Initializes the learning algorithm using 0.1 epsilon greedy learning strategy/policy
- SGQWActionHistoryFactory - Class in burlap.behavior.stochasticgames.agents.naiveq.history
-
An agent factory for Q-learning with history agents.
- SGQWActionHistoryFactory(SGDomain, double, double, HashableStateFactory, int, int, ActionIdMap) - Constructor for class burlap.behavior.stochasticgames.agents.naiveq.history.SGQWActionHistoryFactory
-
Initializes the factory
- SGQWActionHistoryFactory(SGDomain, double, double, HashableStateFactory, int) - Constructor for class burlap.behavior.stochasticgames.agents.naiveq.history.SGQWActionHistoryFactory
-
Initializes the factory
- SGStateGenerator - Class in burlap.oomdp.stochasticgames
-
An abstract class defining the interface and common mechanism for generating State objects specifically for stochastic games domains.
- SGStateGenerator() - Constructor for class burlap.oomdp.stochasticgames.SGStateGenerator
-
- SGTerminalExplorer - Class in burlap.oomdp.stochasticgames.explorers
-
Deprecated.
- SGTerminalExplorer(SGDomain, State) - Constructor for class burlap.oomdp.stochasticgames.explorers.SGTerminalExplorer
-
Deprecated.
Initializes the explorer with a domain and action model
- SGTerminalExplorer(World) - Constructor for class burlap.oomdp.stochasticgames.explorers.SGTerminalExplorer
-
Deprecated.
- SGToSADomain - Class in burlap.behavior.stochasticgames.agents.interfacing.singleagent
-
This domain generator is used to produce a single-agent domain version of a stochastic games domain for an agent of a given type
(specified by an
SGAgentType
object) or for a given list of stochastic games agent actions (
SGAgentAction
).
- SGToSADomain(SGDomain, SGAgentType) - Constructor for class burlap.behavior.stochasticgames.agents.interfacing.singleagent.SGToSADomain
-
Initializes.
- SGToSADomain(SGDomain, List<SGAgentAction>) - Constructor for class burlap.behavior.stochasticgames.agents.interfacing.singleagent.SGToSADomain
-
Initializes.
- SGToSADomain.GroundedSAAActionWrapper - Class in burlap.behavior.stochasticgames.agents.interfacing.singleagent
-
- SGToSADomain.GroundedSAAActionWrapper(Action, GroundedSGAgentAction) - Constructor for class burlap.behavior.stochasticgames.agents.interfacing.singleagent.SGToSADomain.GroundedSAAActionWrapper
-
- SGToSADomain.GroundedSObParamedAAActionWrapper - Class in burlap.behavior.stochasticgames.agents.interfacing.singleagent
-
- SGToSADomain.GroundedSObParamedAAActionWrapper(Action, GroundedSGAgentAction) - Constructor for class burlap.behavior.stochasticgames.agents.interfacing.singleagent.SGToSADomain.GroundedSObParamedAAActionWrapper
-
- SGToSADomain.SAActionWrapper - Class in burlap.behavior.stochasticgames.agents.interfacing.singleagent
-
A single agent action wrapper for a stochastic game action.
- SGToSADomain.SAActionWrapper(SGAgentAction, Domain) - Constructor for class burlap.behavior.stochasticgames.agents.interfacing.singleagent.SGToSADomain.SAActionWrapper
-
Initializes for a given stochastic games action.
- SGVisualExplorer - Class in burlap.oomdp.stochasticgames.explorers
-
This class allows you to act as all of the agents in a stochastic game (controlled by a
World
object)
by choosing actions for each of them to take in specific states.
- SGVisualExplorer(SGDomain, Visualizer, State) - Constructor for class burlap.oomdp.stochasticgames.explorers.SGVisualExplorer
-
Initializes the data members for the visual explorer.
- SGVisualExplorer(SGDomain, Visualizer, State, int, int) - Constructor for class burlap.oomdp.stochasticgames.explorers.SGVisualExplorer
-
Initializes the data members for the visual explorer.
- SGVisualExplorer(SGDomain, World, Visualizer, int, int) - Constructor for class burlap.oomdp.stochasticgames.explorers.SGVisualExplorer
-
Initializes the data members for the visual explorer.
- SGWorldShell - Class in burlap.shell
-
- SGWorldShell(Domain, InputStream, PrintStream, World) - Constructor for class burlap.shell.SGWorldShell
-
- sh - Variable in class burlap.behavior.singleagent.learning.actorcritic.critics.TDLambda.StateEligibilityTrace
-
The hashed state with which the eligibility value is associated.
- sh - Variable in class burlap.behavior.singleagent.learning.tdmethods.SarsaLam.EligibilityTrace
-
The state for this trace
- sh - Variable in class burlap.behavior.singleagent.planning.stochastic.HashedTransitionProbability
-
- sh - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP.StateSelectionAndExpectedGap
-
The selected state
- sh - Variable in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling.HashedHeightState
-
The hashed state
- sh - Variable in class burlap.behavior.singleagent.planning.stochastic.valueiteration.PrioritizedSweeping.BPTRNode
-
- shape - Variable in class burlap.domain.singleagent.gridworld.GridWorldVisualizer.CellPainter
-
- ShapedRewardFunction - Class in burlap.behavior.singleagent.shaping
-
This abstract class is used to define shaped reward functions.
- ShapedRewardFunction(RewardFunction) - Constructor for class burlap.behavior.singleagent.shaping.ShapedRewardFunction
-
Initializes with the base objective task reward function.
- shell - Variable in class burlap.oomdp.singleagent.explorer.TerminalExplorer
-
Deprecated.
- shell - Variable in class burlap.oomdp.singleagent.explorer.VisualExplorer
-
- shell - Variable in class burlap.oomdp.stochasticgames.explorers.SGTerminalExplorer
-
Deprecated.
- shell - Variable in class burlap.oomdp.stochasticgames.explorers.SGVisualExplorer
-
- ShellCommand - Interface in burlap.shell.command
-
An interface for implementing shell commands.
- ShellObserver - Interface in burlap.shell
-
- ShellObserver.ShellCommandEvent - Class in burlap.shell
-
Stores information about a command event in various public data members.
- ShellObserver.ShellCommandEvent(String, ShellCommand, int) - Constructor for class burlap.shell.ShellObserver.ShellCommandEvent
-
Initializes.
- shouldAnnotateExecution - Variable in class burlap.behavior.singleagent.options.Option
-
Boolean indicating whether the last option execution recording annotates the selected actions with this option's name
- shouldAnnotateOptions - Variable in class burlap.behavior.singleagent.learning.tdmethods.QLearning
-
Whether decomposed options should have their primitive actions annotated with the option's name in the returned
EpisodeAnalysis
objects.
- shouldAnnotateOptions - Variable in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
-
Whether decomposed options should have their primitive actions annotated with the option's name in the returned
EpisodeAnalysis
objects.
- shouldDecomposeOptions - Variable in class burlap.behavior.singleagent.learning.tdmethods.QLearning
-
Whether options should be decomposed into actions in the returned
EpisodeAnalysis
objects.
- shouldDecomposeOptions - Variable in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
-
Whether options should be decomposed into actions in the returned
EpisodeAnalysis
objects.
- shouldRecordResults - Variable in class burlap.behavior.singleagent.options.Option
-
Boolean indicating whether the last option execution result should be saved
- shouldRereunPolicyIteration(EpisodeAnalysis) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
-
Returns whether LSPI should be rerun given the latest learning episode results.
- shouldRescaleValues - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.StateValuePainter
-
Indicates whether this painter should scale its rendering of values to whatever it is told the minimum and maximum values are.
- showPolicy - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
-
The button to enable the visualization of the policy
- shuffleGroundedActions(List<GroundedAction>, int, int) - Method in class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.DFS
-
Shuffles the order of actions on the index range [s, e)
- significance - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
the significance level used for confidence intervals.
- significance - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
-
the significance level used for confidence intervals.
- SimpleAction - Class in burlap.oomdp.singleagent.common
-
An abstract subclass of
Action
for actions that are not parameterized, are primitive,
and have no preconditions (applicable everywhere).
- SimpleAction() - Constructor for class burlap.oomdp.singleagent.common.SimpleAction
-
- SimpleAction(String, Domain) - Constructor for class burlap.oomdp.singleagent.common.SimpleAction
-
- SimpleAction.SimpleDeterministicAction - Class in burlap.oomdp.singleagent.common
-
An abstract class for deterministic actions that are not parameterized, are primitive,
and have no preconditions (applicable everywhere).
- SimpleAction.SimpleDeterministicAction() - Constructor for class burlap.oomdp.singleagent.common.SimpleAction.SimpleDeterministicAction
-
- SimpleAction.SimpleDeterministicAction(String, Domain) - Constructor for class burlap.oomdp.singleagent.common.SimpleAction.SimpleDeterministicAction
-
- SimpleGroundedAction - Class in burlap.oomdp.singleagent.common
-
- SimpleGroundedAction(Action) - Constructor for class burlap.oomdp.singleagent.common.SimpleGroundedAction
-
- SimpleGroundedSGAgentAction - Class in burlap.oomdp.stochasticgames.agentactions
-
- SimpleGroundedSGAgentAction(String, SGAgentAction) - Constructor for class burlap.oomdp.stochasticgames.agentactions.SimpleGroundedSGAgentAction
-
- SimpleHashableStateFactory - Class in burlap.oomdp.statehashing
-
- SimpleHashableStateFactory() - Constructor for class burlap.oomdp.statehashing.SimpleHashableStateFactory
-
Default constructor: object identifier independent and no hash code caching.
- SimpleHashableStateFactory(boolean) - Constructor for class burlap.oomdp.statehashing.SimpleHashableStateFactory
-
Initializes with no hash code caching.
- SimpleHashableStateFactory(boolean, boolean) - Constructor for class burlap.oomdp.statehashing.SimpleHashableStateFactory
-
Initializes.
- SimpleHashableStateFactory.AttClass - Enum in burlap.oomdp.statehashing
-
- SimpleHashableStateFactory.SimpleCachedHashableState - Class in burlap.oomdp.statehashing
-
- SimpleHashableStateFactory.SimpleCachedHashableState(State) - Constructor for class burlap.oomdp.statehashing.SimpleHashableStateFactory.SimpleCachedHashableState
-
- SimpleHashableStateFactory.SimpleHashableState - Class in burlap.oomdp.statehashing
-
- SimpleHashableStateFactory.SimpleHashableState(State) - Constructor for class burlap.oomdp.statehashing.SimpleHashableStateFactory.SimpleHashableState
-
- SimpleHashableStateFactory.SimpleHashableStateInterface - Interface in burlap.oomdp.statehashing
-
- SimpleSerializableState - Class in burlap.oomdp.stateserialization.simple
-
- SimpleSerializableState() - Constructor for class burlap.oomdp.stateserialization.simple.SimpleSerializableState
-
- SimpleSerializableState(State) - Constructor for class burlap.oomdp.stateserialization.simple.SimpleSerializableState
-
- SimpleSerializableStateFactory - Class in burlap.oomdp.stateserialization.simple
-
- SimpleSerializableStateFactory() - Constructor for class burlap.oomdp.stateserialization.simple.SimpleSerializableStateFactory
-
- SimpleSerializedObjectInstance - Class in burlap.oomdp.stateserialization.simple
-
- SimpleSerializedObjectInstance() - Constructor for class burlap.oomdp.stateserialization.simple.SimpleSerializedObjectInstance
-
- SimpleSerializedObjectInstance(ObjectInstance) - Constructor for class burlap.oomdp.stateserialization.simple.SimpleSerializedObjectInstance
-
- SimpleSerializedValue - Class in burlap.oomdp.stateserialization.simple
-
A serializable representation of
Value
objects.
- SimpleSerializedValue() - Constructor for class burlap.oomdp.stateserialization.simple.SimpleSerializedValue
-
- SimpleSerializedValue(Value) - Constructor for class burlap.oomdp.stateserialization.simple.SimpleSerializedValue
-
Creates a serializable representation for the given
Value
- SimpleSGAgentAction - Class in burlap.oomdp.stochasticgames.agentactions
-
This
SGAgentAction
definition defines a parameter-less agent action
that can be
executed in every state.
- SimpleSGAgentAction(SGDomain, String) - Constructor for class burlap.oomdp.stochasticgames.agentactions.SimpleSGAgentAction
-
Initializes this single action to be for the given domain and with the given name.
- simpleValueFunctionVis(ValueFunction, Policy) - Method in class burlap.tutorials.bpl.BasicBehavior
-
- SimulatedEnvironment - Class in burlap.oomdp.singleagent.environment
-
- SimulatedEnvironment(Domain, RewardFunction, TerminalFunction) - Constructor for class burlap.oomdp.singleagent.environment.SimulatedEnvironment
-
- SimulatedEnvironment(Domain, RewardFunction, TerminalFunction, State) - Constructor for class burlap.oomdp.singleagent.environment.SimulatedEnvironment
-
- SimulatedEnvironment(Domain, RewardFunction, TerminalFunction, StateGenerator) - Constructor for class burlap.oomdp.singleagent.environment.SimulatedEnvironment
-
- SimulatedPOEnvironment - Class in burlap.oomdp.singleagent.pomdp
-
- SimulatedPOEnvironment(PODomain, RewardFunction, TerminalFunction) - Constructor for class burlap.oomdp.singleagent.pomdp.SimulatedPOEnvironment
-
- SimulatedPOEnvironment(PODomain, RewardFunction, TerminalFunction, State) - Constructor for class burlap.oomdp.singleagent.pomdp.SimulatedPOEnvironment
-
- SimulatedPOEnvironment(PODomain, RewardFunction, TerminalFunction, StateGenerator) - Constructor for class burlap.oomdp.singleagent.pomdp.SimulatedPOEnvironment
-
- singleActionMap - Variable in class burlap.oomdp.stochasticgames.SGDomain
-
- SingleGoalPFRF - Class in burlap.oomdp.singleagent.common
-
This class defines a reward function that returns a goal reward when any grounded form of a propositional
function is true in the resulting state and a default non-goal reward otherwise.
- SingleGoalPFRF(PropositionalFunction) - Constructor for class burlap.oomdp.singleagent.common.SingleGoalPFRF
-
Initializes the reward function to return 1 when any grounded form of pf is true in the resulting
state.
- SingleGoalPFRF(PropositionalFunction, double, double) - Constructor for class burlap.oomdp.singleagent.common.SingleGoalPFRF
-
Initializes the reward function to return the specified goal reward when any grounded form of pf is true in the resulting
state and the specified non-goal reward otherwise.
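The goal-reward pattern used by SingleGoalPFRF can be sketched in plain Python (an illustrative sketch only, not BURLAP's Java API; the function name and default rewards are assumptions):

```python
def single_goal_pf_reward(groundings_true, goal_reward=1.0, non_goal_reward=-0.1):
    """Return the goal reward if any grounded propositional function
    evaluates to true in the resulting state, else the non-goal reward.

    groundings_true: iterable of booleans, one per grounded PF evaluation.
    """
    return goal_reward if any(groundings_true) else non_goal_reward
```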
- SinglePFSCT - Class in burlap.oomdp.auxiliary.stateconditiontest
-
A state condition class that returns true whenever any grounded version of a specified
propositional function is true in a state.
- SinglePFSCT(PropositionalFunction) - Constructor for class burlap.oomdp.auxiliary.stateconditiontest.SinglePFSCT
-
Initializes with the propositional function that is checked for state satisfaction
- SinglePFTF - Class in burlap.oomdp.auxiliary.common
-
This class defines a terminal function that terminates in states where there exists a grounded version of a specified
propositional function that is true in the state or alternatively, when there is a grounded version that is false in the state.
- SinglePFTF(PropositionalFunction) - Constructor for class burlap.oomdp.auxiliary.common.SinglePFTF
-
Initializes the propositional function that will cause the state to be terminal when any grounded version of
pf is true.
- SinglePFTF(PropositionalFunction, boolean) - Constructor for class burlap.oomdp.auxiliary.common.SinglePFTF
-
Initializes the propositional function that will cause the state to be terminal when any grounded version of
pf is true or, alternatively, false.
- SingleStageNormalFormGame - Class in burlap.domain.stochasticgames.normalform
-
This stochastic game domain generator provides methods to create N-player single stage games.
- SingleStageNormalFormGame(String[][], double[][][]) - Constructor for class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
-
A constructor for bimatrix games with specified action names.
- SingleStageNormalFormGame(double[][], double[][]) - Constructor for class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
-
A constructor for a bimatrix game where the row player payoffs and column player payoffs are provided in two different 2D double matrices.
- SingleStageNormalFormGame(double[][]) - Constructor for class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
-
A constructor for a bimatrix zero sum game.
- SingleStageNormalFormGame(String[][]) - Constructor for class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
-
A constructor for games with a symmetric number of actions for each player.
- SingleStageNormalFormGame(List<List<String>>) - Constructor for class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
-
A constructor for games with an asymmetric number of actions for each player.
- SingleStageNormalFormGame.ActionNameMap - Class in burlap.domain.stochasticgames.normalform
-
A wrapper for a HashMap from strings to ints used to map action names to their action index.
- SingleStageNormalFormGame.ActionNameMap() - Constructor for class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.ActionNameMap
-
- SingleStageNormalFormGame.AgentPayoutFunction - Class in burlap.domain.stochasticgames.normalform
-
A class for defining a payout function for a single agent for each possible strategy profile.
- SingleStageNormalFormGame.AgentPayoutFunction() - Constructor for class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.AgentPayoutFunction
-
- SingleStageNormalFormGame.NFGAgentAction - Class in burlap.domain.stochasticgames.normalform
-
A SingleAction class that uses the parent domain generator to determine which agent can take which actions and enforces that in the preconditions.
- SingleStageNormalFormGame.NFGAgentAction(SGDomain, String, SingleStageNormalFormGame.ActionNameMap[]) - Constructor for class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.NFGAgentAction
-
- SingleStageNormalFormGame.SingleStageNormalFormJointReward - Class in burlap.domain.stochasticgames.normalform
-
A joint reward function class that uses the parent domain generator's payout matrix to determine payouts for any given strategy profile.
- SingleStageNormalFormGame.SingleStageNormalFormJointReward(int, SingleStageNormalFormGame.ActionNameMap[], SingleStageNormalFormGame.AgentPayoutFunction[]) - Constructor for class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.SingleStageNormalFormJointReward
-
- SingleStageNormalFormGame.StrategyProfile - Class in burlap.domain.stochasticgames.normalform
-
A strategy profile represented as an array of action indices that is hashable.
- SingleStageNormalFormGame.StrategyProfile(int...) - Constructor for class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.StrategyProfile
-
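The strategy-profile-to-payout mapping described by the AgentPayoutFunction and StrategyProfile entries above can be sketched in plain Python (illustrative only, not BURLAP's Java API; matching pennies is an assumed example game):

```python
# A strategy profile is a hashable tuple of per-agent action indices;
# each profile maps to a payoff for every agent. The payoffs below encode
# matching pennies, a zero-sum bimatrix game (0 = heads, 1 = tails).
payoffs = {
    (0, 0): (1.0, -1.0),
    (0, 1): (-1.0, 1.0),
    (1, 0): (-1.0, 1.0),
    (1, 1): (1.0, -1.0),
}

def payout(agent, profile):
    """Payoff for the given agent index under the given strategy profile."""
    return payoffs[profile][agent]
```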
- size() - Method in class burlap.behavior.singleagent.learning.lspi.SARSData
-
The number of SARS tuples stored.
- size() - Method in class burlap.datastructures.HashedAggregator
-
Returns the number of keys stored.
- size - Variable in class burlap.datastructures.HashIndexedHeap
-
Number of objects in the heap
- size() - Method in class burlap.datastructures.HashIndexedHeap
-
Returns the size of the heap
- size() - Method in class burlap.datastructures.StochasticTree
-
Returns the number of objects in this tree
- size() - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.ActionNameMap
-
- size() - Method in class burlap.oomdp.stochasticgames.JointAction
-
Returns the number of actions in this joint action.
- SIZEATTNAME - Static variable in class burlap.domain.singleagent.frostbite.FrostbiteDomain
-
Constant for the name of the size of a frozen platform
- softTieDelta - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.PolicyGlyphPainter2D
-
The maximum probability difference from the most likely action within which a less likely action will still be rendered under the
MAXACTIONSOFTTIE rendering style.
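The soft-tie rule can be sketched numerically (plain Python, not BURLAP code; the function name is illustrative):

```python
def rendered_actions(action_probs, soft_tie_delta):
    """Under a MAXACTIONSOFTTIE-style rendering, return every action whose
    probability is within soft_tie_delta of the most likely action's."""
    best = max(action_probs.values())
    return {a for a, p in action_probs.items() if best - p <= soft_tie_delta}
```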
- SoftTimeInverseDecayLR - Class in burlap.behavior.learningrate
-
Implements a learning rate decay schedule where the learning rate at time t is alpha_0 * (n_0 + 1) / (n_0 + t), where alpha_0 is the initial learning rate and n_0 is the decay constant shift.
- SoftTimeInverseDecayLR(double, double) - Constructor for class burlap.behavior.learningrate.SoftTimeInverseDecayLR
-
Initializes with an initial learning rate and decay constant shift for a state independent learning rate.
- SoftTimeInverseDecayLR(double, double, double) - Constructor for class burlap.behavior.learningrate.SoftTimeInverseDecayLR
-
Initializes with an initial learning rate and decay constant shift (n_0) for a state independent learning rate that will decay to a value no smaller than minimumLearningRate
- SoftTimeInverseDecayLR(double, double, HashableStateFactory, boolean) - Constructor for class burlap.behavior.learningrate.SoftTimeInverseDecayLR
-
Initializes with an initial learning rate and decay constant shift (n_0) for a state or state-action (or state feature-action) dependent learning rate.
- SoftTimeInverseDecayLR(double, double, double, HashableStateFactory, boolean) - Constructor for class burlap.behavior.learningrate.SoftTimeInverseDecayLR
-
Initializes with an initial learning rate and decay constant shift (n_0) for a state or state-action (or state feature-action) dependent learning rate that will decay to a value no smaller than minimumLearningRate
If this learning rate function is to be used for state features, rather than states,
then the hashing factory can be null.
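The decay schedule alpha_0 * (n_0 + 1) / (n_0 + t) can be sketched numerically (plain Python, not BURLAP code; the floor parameter mirrors minimumLearningRate):

```python
def soft_time_inverse_decay(alpha0, n0, t, minimum=0.0):
    """Learning rate at time t: alpha_0 * (n_0 + 1) / (n_0 + t),
    floored at the minimum learning rate."""
    return max(alpha0 * (n0 + 1.0) / (n0 + t), minimum)
```

For example, with alpha0 = 0.5 and n0 = 9 the rate starts at 0.5 at t = 1 and halves to 0.25 by t = 11.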
- SoftTimeInverseDecayLR.MutableInt - Class in burlap.behavior.learningrate
-
A class for storing a mutable int value object
- SoftTimeInverseDecayLR.MutableInt(int) - Constructor for class burlap.behavior.learningrate.SoftTimeInverseDecayLR.MutableInt
-
- SoftTimeInverseDecayLR.StateWiseTimeIndex - Class in burlap.behavior.learningrate
-
A class for storing a time index for a state, or a time index for each action for a given state
- SoftTimeInverseDecayLR.StateWiseTimeIndex() - Constructor for class burlap.behavior.learningrate.SoftTimeInverseDecayLR.StateWiseTimeIndex
-
- solve(double[][], double[][]) - Method in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.BimatrixEquilibriumSolver
-
Solves and caches the solution for the given bimatrix.
- solver - Variable in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.EquilibriumPlayingSGAgent
-
The solution concept to be solved for the immediate rewards.
- SolverDerivedPolicy - Interface in burlap.behavior.policy
-
An interface for defining policies that refer to a
MDPSolverInterface
objects to define the policy.
- solverInit(Domain, RewardFunction, TerminalFunction, double, HashableStateFactory) - Method in class burlap.behavior.singleagent.MDPSolver
-
- solverInit(Domain, RewardFunction, TerminalFunction, double, HashableStateFactory) - Method in interface burlap.behavior.singleagent.MDPSolverInterface
-
Initializes the solver with the common elements.
- somePFGroundingIsTrue(State) - Method in class burlap.oomdp.core.PropositionalFunction
-
- sortActionsWithOptionsFirst() - Method in class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.DFS
-
Reorders the planner's action list so that options are at the front of the list.
- source - Variable in class burlap.oomdp.statehashing.HashableObject
-
- sourceAction - Variable in class burlap.behavior.singleagent.learning.modellearning.ModeledDomainGenerator.ModeledAction
-
The source action this action models
- sourceDomain - Variable in class burlap.behavior.singleagent.learning.modellearning.models.TabularModel
-
The actual source domain object for which actions will be modeled.
- sourceLearningRateFunction - Variable in class burlap.behavior.singleagent.vfa.fourier.FourierBasisLearningRateWrapper
-
- sourcePolicy - Variable in class burlap.behavior.policy.CachedPolicy
-
The source policy that gets cached
- sourcePolicy - Variable in class burlap.behavior.policy.DomainMappedPolicy
-
The source policy that will be mapped into the target domain's actions.
- sourcePolicy - Variable in class burlap.behavior.singleagent.learning.modellearning.rmax.UnmodeledFavoredPolicy
-
- sourceRF - Variable in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax.PotentialShapedRMaxRF
-
The source reward function
- sp - Variable in class burlap.behavior.singleagent.learning.lspi.SARSData.SARS
-
The next state
- SparseSampling - Class in burlap.behavior.singleagent.planning.stochastic.sparsesampling
-
An implementation of the Sparse Sampling (SS) [1] planning algorithm.
- SparseSampling(Domain, RewardFunction, TerminalFunction, double, HashableStateFactory, int, int) - Constructor for class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
-
Initializes.
- SparseSampling.HashedHeightState - Class in burlap.behavior.singleagent.planning.stochastic.sparsesampling
-
Tuple for a state and its height in a tree that can be hashed for quick retrieval.
- SparseSampling.HashedHeightState(HashableState, int) - Constructor for class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling.HashedHeightState
-
Initializes.
- SparseSampling.StateNode - Class in burlap.behavior.singleagent.planning.stochastic.sparsesampling
-
A class for state nodes.
- SparseSampling.StateNode(HashableState, int) - Constructor for class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling.StateNode
-
Creates a node for the given hashed state at the given height
- SpecialExplorerAction - Interface in burlap.oomdp.singleagent.explorer
-
An interface for defining special non-domain actions to take in a visual explorer.
- specification - Variable in class burlap.behavior.singleagent.vfa.cmac.Tiling
-
A map from object class names to attribute tile specifications for attributes of that class
- specificObjectPainters - Variable in class burlap.oomdp.visualizer.StateRenderLayer
-
Map of painters that define how to paint specific objects; if an object appears in both the specific and general lists, the specific painter is used
- spp - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.PolicyRenderLayer
-
Painter used to visualize the policy
- spp - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
-
Painter used to visualize the policy
- sprime - Variable in class burlap.behavior.singleagent.learning.actorcritic.CritiqueResult
-
The state to which the agent transitioned when it took action a in state s.
- sPrimeActionFeatures - Variable in class burlap.behavior.singleagent.learning.lspi.LSPI.SSFeatures
-
Next state-action features.
- src - Variable in class burlap.oomdp.auxiliary.common.ConstantStateGenerator
-
- srcAction - Variable in class burlap.behavior.stochasticgames.agents.interfacing.singleagent.SGToSADomain.SAActionWrapper
-
- srcAction - Variable in class burlap.domain.singleagent.tabularized.TabulatedDomainWrapper.ActionWrapper
-
- srcRFIsNextStateIndependent - Variable in class burlap.oomdp.singleagent.pomdp.BeliefMDPGenerator.BeliefRF
-
A boolean flag indicating whether the POMDP reward function is independent of the next state transition.
- srcState - Variable in class burlap.oomdp.stochasticgames.common.ConstantSGStateGenerator
-
- srender - Variable in class burlap.oomdp.visualizer.Visualizer
-
- start() - Method in class burlap.debugtools.MyTimer
-
Starts the timer if it is not running.
- start() - Method in class burlap.shell.BurlapShell
-
- startExperiment() - Method in class burlap.behavior.singleagent.auxiliary.performance.LearningAlgorithmExperimenter
-
Starts the experiment and runs all trials for all agents.
- startExperiment() - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentExperimenter
-
Starts the experiment and runs all trials for all agents.
- startGUI() - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
Launches the GUI and automatic refresh thread.
- startGUI() - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
-
Launches the GUI and automatic refresh thread.
- startLiveStatePolling(long) - Method in class burlap.oomdp.singleagent.explorer.VisualExplorer
-
Starts a thread that polls this explorer's
Environment
every
msPollDelay milliseconds for its current state and updates the visualizer to that state.
- startNewAgent(String) - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
Informs the plotter that data collection for a new agent should begin.
- startNewExperiment() - Method in interface burlap.behavior.singleagent.auxiliary.performance.ExperimentalEnvironment
-
- startNewTrial() - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
Informs the plotter that a new trial of the current agent is beginning.
- startNewTrial() - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.DatasetsAndTrials
-
Creates a new trial object and adds it to the end of the list of trials.
- startNewTrial() - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
-
Initializes the datastructures for a new trial.
- startStateGenerator - Variable in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest
-
The initial state generator that models the initial states from which the expert trajectories were drawn
- state - Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTStateNode
-
The (hashed) state this node wraps
- State - Interface in burlap.oomdp.core.states
-
A State instance is used to define the state of an environment or an observation from the environment.
- StateAbstraction - Interface in burlap.oomdp.auxiliary
-
An interface for taking an input state and returning a simpler abstracted state representation.
- StateActionRenderLayer - Class in burlap.oomdp.visualizer
-
A class for rendering state-action events.
- StateActionRenderLayer() - Constructor for class burlap.oomdp.visualizer.StateActionRenderLayer
-
- stateActionWeights - Variable in class burlap.behavior.singleagent.vfa.common.LinearFVVFA
-
The function weights when performing Q-value function approximation.
- StateConditionTest - Interface in burlap.oomdp.auxiliary.stateconditiontest
-
An interface for defining classes that check for certain conditions in states.
- StateConditionTestIterable - Interface in burlap.oomdp.auxiliary.stateconditiontest
-
An extension of the StateConditionTest that is iterable.
- stateConsole - Variable in class burlap.oomdp.singleagent.explorer.VisualExplorer
-
- stateConsole - Variable in class burlap.oomdp.stochasticgames.explorers.SGVisualExplorer
-
- stateDepthIndex - Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
-
- StateEnumerator - Class in burlap.behavior.singleagent.auxiliary
-
For some algorithms it is useful to have an explicit unique identifier for each possible state, since a state's hashcode cannot reliably provide
a unique number.
- StateEnumerator(Domain, HashableStateFactory) - Constructor for class burlap.behavior.singleagent.auxiliary.StateEnumerator
-
Constructs
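The idea behind StateEnumerator is simple enough to sketch: assign each distinct (hashable) state a stable integer id the first time it is seen, and keep the reverse list for lookup by id. The class and types below are a simplified illustration of that technique, not BURLAP's actual implementation (which keys on HashableState produced by a HashableStateFactory):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of state enumeration: first sight of a state assigns the
// next free integer id; ids are stable across repeated queries. The generic
// key S stands in for BURLAP's HashableState.
public class SimpleEnumerator<S> {

    private final Map<S, Integer> ids = new HashMap<>();
    private final List<S> states = new ArrayList<>();

    // Returns the id for s, enumerating it first if it is new.
    public int idFor(S s) {
        Integer id = ids.get(s);
        if (id == null) {
            id = states.size();
            ids.put(s, id);
            states.add(s);
        }
        return id;
    }

    // Reverse lookup: the state previously assigned this id.
    public S stateFor(int id) {
        return states.get(id);
    }

    public int numStates() {
        return states.size();
    }
}
```

This is the property the index entries above rely on: TabularBeliefState, for example, uses such an enumerator so that the i-th component of a belief vector always refers to the same underlying MDP state.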
- stateEnumerator - Variable in class burlap.oomdp.singleagent.pomdp.beliefstate.tabular.TabularBeliefState
-
A state enumerator for determining the index of MDP states in the belief vector.
- stateEnumerator - Variable in class burlap.oomdp.singleagent.pomdp.PODomain
-
The underlying MDP state enumerator
- StateFeature - Class in burlap.behavior.singleagent.vfa
-
A class for associating a state feature identifier with a value of that state feature
- StateFeature(int, double) - Constructor for class burlap.behavior.singleagent.vfa.StateFeature
-
Initializes.
- stateForId(int) - Method in class burlap.oomdp.singleagent.pomdp.beliefstate.tabular.TabularBeliefState
-
Returns the corresponding MDP state for the provided unique identifier.
- stateFromObservation(Domain, Observation) - Static method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueDomain
-
Creates a BURLAP
State
from a RLGlue
Observation
.
- StateGenerator - Interface in burlap.oomdp.auxiliary
-
An interface for generating State objects.
- stateGenerator - Variable in class burlap.oomdp.singleagent.environment.SimulatedEnvironment
-
- stateGenerator - Variable in class burlap.oomdp.singleagent.interfaces.rlglue.RLGlueEnvironment
-
The state generator for generating states for each episode
- StateGridder - Class in burlap.behavior.singleagent.auxiliary
-
This class is used for creating a grid of states over a state space domain.
- StateGridder() - Constructor for class burlap.behavior.singleagent.auxiliary.StateGridder
-
- StateGridder.AttributeSpecification - Class in burlap.behavior.singleagent.auxiliary
-
Class for specifying the grid along a single attribute.
- StateGridder.AttributeSpecification(String, double, double, int) - Constructor for class burlap.behavior.singleagent.auxiliary.StateGridder.AttributeSpecification
-
Initializes.
- StateGridder.AttributeSpecification(Attribute, int) - Constructor for class burlap.behavior.singleagent.auxiliary.StateGridder.AttributeSpecification
-
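Per attribute, StateGridder.AttributeSpecification describes n evenly spaced sample points over a [lower, upper] range; the gridder then takes the cross product of these point sets across attributes. A minimal sketch of the per-attribute spacing (an assumption about the arithmetic, not BURLAP's source) looks like:

```java
// Sketch of gridding one continuous attribute range into n evenly spaced
// points, lower and upper bounds inclusive. n == 1 degenerates to the
// lower bound alone.
public class GridPoints {

    public static double[] grid(double lo, double hi, int n) {
        double[] pts = new double[n];
        if (n == 1) {
            pts[0] = lo;
            return pts;
        }
        double step = (hi - lo) / (n - 1);
        for (int i = 0; i < n; i++) {
            pts[i] = lo + i * step;
        }
        return pts;
    }
}
```

For example, gridding [0, 1] with 3 points yields 0, 0.5, and 1; three attributes gridded this way would produce 27 grid states in the cross product.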
- stateHash(State) - Method in class burlap.behavior.singleagent.MDPSolver
-
A shorthand method for hashing a state.
- stateHash(State) - Method in class burlap.behavior.stochasticgames.agents.naiveq.history.SGQWActionHistory
-
- stateHash - Variable in class burlap.behavior.stochasticgames.agents.naiveq.history.SGQWActionHistoryFactory
-
The state hashing factory the Q-learning algorithm will use
- stateHash - Variable in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQFactory
-
The state hashing factory the Q-learning algorithm will use
- stateHash(State) - Method in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent
-
First abstracts state s, and then returns the
HashableState
object for the abstracted state.
- StateJSONParser - Class in burlap.oomdp.legacy
-
A StateParser class that uses the JSON file format and can convert states to JSON strings (and back from them) for any possible input domain.
- StateJSONParser(Domain) - Constructor for class burlap.oomdp.legacy.StateJSONParser
-
Initializes with a given domain object.
- stateMapping - Variable in class burlap.behavior.singleagent.options.Option
-
An option state mapping to use to map from a source MDP state representation to a representation that this option will use
for action selection.
- StateMapping - Interface in burlap.oomdp.auxiliary
-
A state mapping interface that maps one state into another state.
- stateNodeConstructor - Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
-
- stateNodes - Variable in class burlap.behavior.singleagent.learning.modellearning.models.TabularModel
-
A mapping from (hashed) states to state nodes that store transition statistics
- StateParser - Interface in burlap.oomdp.legacy
-
This interface is used for converting states to parsable string representations and parsing those string representations back into states.
- StatePolicyPainter - Interface in burlap.behavior.singleagent.auxiliary.valuefunctionvis
-
An interface for painting a representation of the policy for a specific state onto a 2D Graphics context.
- StateReachability - Class in burlap.behavior.singleagent.auxiliary
-
This class provides methods for finding the set of reachable states from a source state.
- StateReachability() - Constructor for class burlap.behavior.singleagent.auxiliary.StateReachability
-
- StateRenderLayer - Class in burlap.oomdp.visualizer
-
This class provides 2D visualization of states by being provided a set of classes that can paint
ObjectInstances to the canvas, as well as classes that can paint general domain information.
- StateRenderLayer() - Constructor for class burlap.oomdp.visualizer.StateRenderLayer
-
- StateRenderLayer.ObjectPainterAndClassNamePair - Class in burlap.oomdp.visualizer
-
A pair of the name of an object class to paint, and the
ObjectPainter
to
use to paint it.
- StateRenderLayer.ObjectPainterAndClassNamePair(String, ObjectPainter) - Constructor for class burlap.oomdp.visualizer.StateRenderLayer.ObjectPainterAndClassNamePair
-
- stateRepresentations - Variable in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent
-
A map from hashed states to the internal state representation for the states stored in the q-table.
- StateResetSpecialAction - Class in burlap.oomdp.singleagent.explorer
-
- StateResetSpecialAction(Environment) - Constructor for class burlap.oomdp.singleagent.explorer.StateResetSpecialAction
-
Initializes.
- states - Variable in class burlap.behavior.stochasticgames.GameAnalysis
-
The sequence of states
- states - Variable in class burlap.behavior.stochasticgames.madynamicprogramming.dpplanners.MAValueIteration
-
The set of states that have been found
- statesEqual(State, State) - Method in class burlap.oomdp.statehashing.FixedSizeStateHashableStateFactory
-
- statesEqual(State, State) - Method in class burlap.oomdp.statehashing.ImmutableStateHashableStateFactory
-
- statesEqual(State, State) - Method in class burlap.oomdp.statehashing.SimpleHashableStateFactory
-
Returns true if the two input states are equal.
- stateSequence - Variable in class burlap.behavior.singleagent.EpisodeAnalysis
-
The sequence of states observed
- StateSettableEnvironment - Interface in burlap.oomdp.singleagent.environment
-
An interface to be used with
Environment
instances that allows
the environment to have its state set to a client-specified state.
- statesToStateNodes - Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
-
- statesToVisualize - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.PolicyRenderLayer
-
The states to visualize
- statesToVisualize - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionRenderLayer
-
The states to visualize
- statesToVisualize - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
-
- stateTilings - Variable in class burlap.behavior.singleagent.vfa.cmac.CMACFeatureDatabase
-
For each tiling, a map from state tiles to an integer representing their feature identifier
- StateToFeatureVectorGenerator - Interface in burlap.behavior.singleagent.vfa
-
Many function approximation techniques require a fixed feature vector to work, and in many cases, using abstract features derived from
the state attributes is useful.
- stateToString(State) - Method in class burlap.domain.singleagent.cartpole.SerializableCartPoleStateFactory.CartPoleStateParser
-
- stateToString(State) - Static method in class burlap.domain.singleagent.cartpole.SerializableCartPoleStateFactory
-
- stateToString(State) - Method in class burlap.domain.singleagent.cartpole.SerializableInvertedPendulumStateFactory.InvertedPendulumStateParser
-
- stateToString(State) - Static method in class burlap.domain.singleagent.cartpole.SerializableInvertedPendulumStateFactory
-
- stateToString(State) - Method in class burlap.domain.singleagent.frostbite.SerializableFrostbiteStateFactory.FrostbiteStateParser
-
- stateToString(State) - Static method in class burlap.domain.singleagent.frostbite.SerializableFrostbiteStateFactory
-
- stateToString(State) - Method in class burlap.domain.singleagent.gridworld.SerializableGridWorldStateFactory.GridWorldStateParser
-
- stateToString(State) - Static method in class burlap.domain.singleagent.gridworld.SerializableGridWorldStateFactory
-
- stateToString(State) - Method in class burlap.domain.singleagent.lunarlander.SerializableLunarLanderStateFactory.LunarLanderStateParser
-
- stateToString(State) - Static method in class burlap.domain.singleagent.lunarlander.SerializableLunarLanderStateFactory
-
- stateToString(State) - Method in class burlap.domain.singleagent.mountaincar.SerializableMountainCarStateFactory.MountainCarStateParser
-
- stateToString(State) - Static method in class burlap.domain.singleagent.mountaincar.SerializableMountainCarStateFactory
-
- stateToString(State) - Method in class burlap.oomdp.legacy.StateJSONParser
-
- stateToString(State) - Method in interface burlap.oomdp.legacy.StateParser
-
Converts state s into a parsable string representation.
- stateToString(State) - Method in class burlap.oomdp.legacy.StateYAMLParser
-
- stateTransitionsAreModeled(State) - Method in class burlap.behavior.singleagent.learning.modellearning.Model
-
Indicates whether this model "knows" the transition dynamics from the given input state for all applicable actions.
- stateTransitionsAreModeled(State) - Method in class burlap.behavior.singleagent.learning.modellearning.models.TabularModel
-
- StateValuePainter - Class in burlap.behavior.singleagent.auxiliary.valuefunctionvis
-
An abstract class for defining the interface and common methods to paint the representation of the value function for a specific state onto
a 2D graphics context.
- StateValuePainter() - Constructor for class burlap.behavior.singleagent.auxiliary.valuefunctionvis.StateValuePainter
-
- StateValuePainter2D - Class in burlap.behavior.singleagent.auxiliary.valuefunctionvis.common
-
A class for rendering the value of states as colored 2D cells on the canvas.
- StateValuePainter2D() - Constructor for class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.StateValuePainter2D
-
- StateValuePainter2D(ColorBlend) - Constructor for class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.StateValuePainter2D
-
Initializes the value painter.
- stateWeights - Variable in class burlap.behavior.singleagent.vfa.common.LinearFVVFA
-
The function weights when performing state value function approximation.
- stateWiseMap - Variable in class burlap.behavior.learningrate.ExponentialDecayLR
-
The state dependent or state-action dependent learning rates
- stateWiseMap - Variable in class burlap.behavior.learningrate.SoftTimeInverseDecayLR
-
The state dependent or state-action dependent learning rate time indices
- StateYAMLParser - Class in burlap.oomdp.legacy
-
A StateParser class that uses the YAML file format and can convert states to YAML strings (and back from them) for any possible input domain.
- StateYAMLParser(Domain) - Constructor for class burlap.oomdp.legacy.StateYAMLParser
-
Initializes with a given domain object.
- StaticDomainPainter - Interface in burlap.behavior.singleagent.auxiliary.valuefunctionvis
-
An interface for painting general domain information to a 2D graphics context.
- StaticPainter - Interface in burlap.oomdp.visualizer
-
This interface paints general properties of a state/domain that may not be represented
by any specific object instance data.
- staticPainters - Variable in class burlap.oomdp.visualizer.StateRenderLayer
-
List of static painters that paint static, non-object-defined properties of the domain
- StaticRepeatedGameActionModel - Class in burlap.oomdp.stochasticgames.common
-
This action model can be used to take a single-stage game and cause it to repeat itself.
- StaticRepeatedGameActionModel() - Constructor for class burlap.oomdp.stochasticgames.common.StaticRepeatedGameActionModel
-
- StaticWeightedAStar - Class in burlap.behavior.singleagent.planning.deterministic.informed.astar
-
Statically weighted A* [1] implementation.
- StaticWeightedAStar(Domain, RewardFunction, StateConditionTest, HashableStateFactory, Heuristic, double) - Constructor for class burlap.behavior.singleagent.planning.deterministic.informed.astar.StaticWeightedAStar
-
Initializes the valueFunction.
- stepEpisode - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.Trial
-
Stores the steps by episode
- stepEpisode - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.Trial
-
Stores the steps by episode
- stepEpisodeSeries - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.AgentDatasets
-
Most recent trial's steps-per-episode data
- stepEpisodeSeries - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.AgentDatasets
-
Most recent trial's steps-per-episode data
- stepIncrement(double) - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.Trial
-
Updates all datastructures with the reward received from the last step
- stepIncrement(double) - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.Trial
-
Updates all datastructures with the reward received from the last step
- StochasticTree<T> - Class in burlap.datastructures
-
A class for performing sampling of a set of objects in O(lg(n)) time.
- StochasticTree() - Constructor for class burlap.datastructures.StochasticTree
-
Initializes with an empty tree.
- StochasticTree(List<Double>, List<T>) - Constructor for class burlap.datastructures.StochasticTree
-
Initializes a tree for objects with the given weights
- StochasticTree.STNode - Class in burlap.datastructures
-
A class for storing a stochastic tree node.
- StochasticTree.STNode(T, double, StochasticTree<T>.STNode) - Constructor for class burlap.datastructures.StochasticTree.STNode
-
Initializes a leaf node with the given weight and parent
- StochasticTree.STNode(double) - Constructor for class burlap.datastructures.StochasticTree.STNode
-
Initializes a node with a weight only
- StochasticTree.STNode(double, StochasticTree<T>.STNode) - Constructor for class burlap.datastructures.StochasticTree.STNode
-
Initializes a node with a given weight and parent node
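The O(lg(n)) bound above comes from maintaining a balanced tree of partial weight sums, which each sample descends once. As a minimal sketch of the underlying weighted-sampling technique (not BURLAP's StochasticTree code), the same distribution can be drawn with a linear scan over cumulative weights; the class name and fields here are illustrative only:

```java
import java.util.Random;

// Sketch of sampling an index in proportion to its weight. The linear scan
// is O(n); StochasticTree gets O(lg n) by walking a tree whose internal
// nodes cache the total weight of their subtrees.
public class WeightedSampler {

    private final double[] weights;
    private final double total;
    private final Random rng = new Random(0);

    public WeightedSampler(double[] weights) {
        this.weights = weights.clone();
        double t = 0.0;
        for (double w : weights) {
            t += w;
        }
        this.total = t;
    }

    // Draws r uniformly in [0, total) and returns the first index whose
    // cumulative weight exceeds r.
    public int sample() {
        double r = rng.nextDouble() * total;
        double acc = 0.0;
        for (int i = 0; i < weights.length; i++) {
            acc += weights[i];
            if (r < acc) {
                return i;
            }
        }
        return weights.length - 1;
    }
}
```

A tree-based version replaces the loop with a root-to-leaf descent: go left if r falls within the left subtree's cached weight sum, otherwise subtract that sum and go right.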
- stop() - Method in class burlap.debugtools.MyTimer
-
Stops the timer.
- stopLivePolling() - Method in class burlap.oomdp.singleagent.explorer.VisualExplorer
-
Stops this class from live polling this explorer's
Environment
.
- stopPlanning() - Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
-
Returns true if rollouts and planning should cease.
- stopReachabilityFromTerminalStates - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableVI
-
When the reachability analysis to find the state space is performed, a breadth first search-like pass
(spreading over all stochastic transitions) is performed.
- stopReachabilityFromTerminalStates - Variable in class burlap.behavior.singleagent.planning.stochastic.valueiteration.ValueIteration
-
When the reachability analysis to find the state space is performed, a breadth first search-like pass
(spreading over all stochastic transitions) is performed.
- storage - Variable in class burlap.datastructures.HashedAggregator
-
The backing hash map
- storedAbstraction - Variable in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQFactory
-
The state abstraction the Q-learning algorithm will use
- storedMapAbstraction - Variable in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent
-
A state abstraction to use.
- stringRep - Variable in class burlap.domain.singleagent.cartpole.SerializableCartPoleStateFactory.SerializableCartPoleState
-
- stringRep - Variable in class burlap.domain.singleagent.cartpole.SerializableInvertedPendulumStateFactory.SerializableInvertedPendulumState
-
- stringRep - Variable in class burlap.domain.singleagent.frostbite.SerializableFrostbiteStateFactory.SerializableFrostbiteState
-
- stringRep - Variable in class burlap.domain.singleagent.gridworld.SerializableGridWorldStateFactory.SerializableGridWorldState
-
- stringRep - Variable in class burlap.domain.singleagent.lunarlander.SerializableLunarLanderStateFactory.SerializableLunarLanderState
-
- stringRep - Variable in class burlap.domain.singleagent.mountaincar.SerializableMountainCarStateFactory.SerializableMountainCarState
-
- stringToState(String) - Method in class burlap.domain.singleagent.cartpole.SerializableCartPoleStateFactory.CartPoleStateParser
-
- stringToState(Domain, String) - Static method in class burlap.domain.singleagent.cartpole.SerializableCartPoleStateFactory
-
- stringToState(String) - Method in class burlap.domain.singleagent.cartpole.SerializableInvertedPendulumStateFactory.InvertedPendulumStateParser
-
- stringToState(Domain, String) - Static method in class burlap.domain.singleagent.cartpole.SerializableInvertedPendulumStateFactory
-
- stringToState(String) - Method in class burlap.domain.singleagent.frostbite.SerializableFrostbiteStateFactory.FrostbiteStateParser
-
- stringToState(Domain, String) - Static method in class burlap.domain.singleagent.frostbite.SerializableFrostbiteStateFactory
-
- stringToState(String) - Method in class burlap.domain.singleagent.gridworld.SerializableGridWorldStateFactory.GridWorldStateParser
-
- stringToState(Domain, String) - Static method in class burlap.domain.singleagent.gridworld.SerializableGridWorldStateFactory
-
- stringToState(String) - Method in class burlap.domain.singleagent.lunarlander.SerializableLunarLanderStateFactory.LunarLanderStateParser
-
- stringToState(Domain, String) - Static method in class burlap.domain.singleagent.lunarlander.SerializableLunarLanderStateFactory
-
- stringToState(String) - Method in class burlap.domain.singleagent.mountaincar.SerializableMountainCarStateFactory.MountainCarStateParser
-
- stringToState(Domain, String) - Static method in class burlap.domain.singleagent.mountaincar.SerializableMountainCarStateFactory
-
- stringToState(String) - Method in class burlap.oomdp.legacy.StateJSONParser
-
- stringToState(String) - Method in interface burlap.oomdp.legacy.StateParser
-
Converts a string into a State object assuming the string representation was produced using this state parser.
- stringToState(String) - Method in class burlap.oomdp.legacy.StateYAMLParser
-
- stringVal - Variable in class burlap.oomdp.core.values.StringValue
-
The string value
- StringValue - Class in burlap.oomdp.core.values
-
This class provides a value for a string.
- StringValue(Attribute) - Constructor for class burlap.oomdp.core.values.StringValue
-
Initializes for a given attribute.
- StringValue(StringValue) - Constructor for class burlap.oomdp.core.values.StringValue
-
Initializes from an existing value.
- StringValue(Attribute, String) - Constructor for class burlap.oomdp.core.values.StringValue
-
- subgoalReward - Variable in class burlap.behavior.singleagent.options.support.LocalSubgoalRF
-
Defines the reward returned for transitions to subgoal states; default 0.
- subgoalStateTest - Variable in class burlap.behavior.singleagent.options.support.LocalSubgoalRF
-
Defines the set of subgoal states for the option
- subgoalStateTest - Variable in class burlap.behavior.singleagent.options.support.LocalSubgoalTF
-
Defines the set of subgoal states for the option
- successorStates - Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTActionNode
-
The possible successor states.
- sumReturn - Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTActionNode
-
The sum return observed for this action node
- SupervisedVFA - Interface in burlap.behavior.singleagent.planning.vfa.fittedvi
-
An interface for learning value function approximation via a supervised learning algorithm.
- SupervisedVFA.SupervisedVFAInstance - Class in burlap.behavior.singleagent.planning.vfa.fittedvi
-
A pair of a state and its target value function value.
- SupervisedVFA.SupervisedVFAInstance(State, double) - Constructor for class burlap.behavior.singleagent.planning.vfa.fittedvi.SupervisedVFA.SupervisedVFAInstance
-
Initializes
- svp - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionRenderLayer
-
Painter used to visualize the value function
- svp - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
-
Painter used to visualize the value function
- synchronizeJointActionSelectionAmongAgents - Variable in class burlap.behavior.stochasticgames.PolicyFromJointPolicy
-