- p - Variable in class burlap.behavior.singleagent.planning.HashedTransitionProbability
-
- p - Variable in class burlap.oomdp.core.TransitionProbability
-
the probability of transitioning to state s
- p0 - Variable in class burlap.oomdp.stochasticgames.tournament.common.AllPairWiseSameTypeMS
-
- p1 - Variable in class burlap.oomdp.stochasticgames.tournament.common.AllPairWiseSameTypeMS
-
- PADCLASS - Static variable in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Constant for the name of the goal landing pad OO-MDP class
- paint(Graphics2D, float, float) - Method in interface burlap.behavior.singleagent.auxiliary.valuefunctionvis.StaticDomainPainter
-
Use to paint general domain information to a 2D graphics context.
- paint(Graphics2D, State, float, float) - Method in class burlap.domain.singleagent.gridworld.GridWorldVisualizer.MapPainter
-
- paint(Graphics2D, State, float, float) - Method in class burlap.domain.singleagent.gridworld.macro.MacroCellVisualizer.MacroCellRewardWeightPainter
-
- paint(Graphics2D, State, float, float) - Method in class burlap.domain.singleagent.mountaincar.MountainCarVisualizer.HillPainter
-
- paint(Graphics) - Method in class burlap.oomdp.visualizer.MultiLayerRenderer
-
- paint(Graphics2D, State, float, float) - Method in interface burlap.oomdp.visualizer.StaticPainter
-
Paints general state information to graphics context g2
- painter - Variable in class burlap.behavior.singleagent.EpisodeSequenceVisualizer
-
- painter - Variable in class burlap.behavior.stochasticgame.GameSequenceVisualizer
-
- painter - Variable in class burlap.oomdp.singleagent.common.VisualActionObserver
-
The visualizer that will render states
- painter - Variable in class burlap.oomdp.singleagent.explorer.VisualExplorer
-
- painter - Variable in class burlap.oomdp.stochasticgames.common.VisualWorldObserver
-
The visualizer that will render states
- paintGlyph(Graphics2D, float, float, float, float) - Method in interface burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.ActionGlyphPainter
-
Called to paint a glyph in the rectangle defined by the top left origin (x,y) with the given width and height.
- paintGlyph(Graphics2D, float, float, float, float) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.ArrowActionGlyph
-
- paintObject(Graphics2D, State, ObjectInstance, float, float) - Method in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.AgentPainter
-
- paintObject(Graphics2D, State, ObjectInstance, float, float) - Method in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.BlockPainter
-
- paintObject(Graphics2D, State, ObjectInstance, float, float) - Method in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.BricksPainter
-
- paintObject(Graphics2D, State, ObjectInstance, float, float) - Method in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer.ExitPainter
-
- paintObject(Graphics2D, State, ObjectInstance, float, float) - Method in class burlap.domain.singleagent.blocksworld.BlocksWorldVisualizer.BlockPainter
-
- paintObject(Graphics2D, State, ObjectInstance, float, float) - Method in class burlap.domain.singleagent.cartpole.CartPoleVisualizer.CartPoleObjectPainter
-
- paintObject(Graphics2D, State, ObjectInstance, float, float) - Method in class burlap.domain.singleagent.cartpole.InvertedPendulumVisualizer.PendulumObjectPainter
-
- paintObject(Graphics2D, State, ObjectInstance, float, float) - Method in class burlap.domain.singleagent.frostbite.FrostbiteVisualizer.AgentPainter
-
- paintObject(Graphics2D, State, ObjectInstance, float, float) - Method in class burlap.domain.singleagent.frostbite.FrostbiteVisualizer.IglooPainter
-
- paintObject(Graphics2D, State, ObjectInstance, float, float) - Method in class burlap.domain.singleagent.frostbite.FrostbiteVisualizer.PlatformPainter
-
- paintObject(Graphics2D, State, ObjectInstance, float, float) - Method in class burlap.domain.singleagent.gridworld.GridWorldVisualizer.CellPainter
-
- paintObject(Graphics2D, State, ObjectInstance, float, float) - Method in class burlap.domain.singleagent.gridworld.GridWorldVisualizer.LocationPainter
-
- paintObject(Graphics2D, State, ObjectInstance, float, float) - Method in class burlap.domain.singleagent.lunarlander.LLVisualizer.AgentPainter
-
- paintObject(Graphics2D, State, ObjectInstance, float, float) - Method in class burlap.domain.singleagent.lunarlander.LLVisualizer.ObstaclePainter
-
- paintObject(Graphics2D, State, ObjectInstance, float, float) - Method in class burlap.domain.singleagent.lunarlander.LLVisualizer.PadPainter
-
- paintObject(Graphics2D, State, ObjectInstance, float, float) - Method in class burlap.domain.singleagent.mountaincar.MountainCarVisualizer.AgentPainter
-
- paintObject(Graphics2D, State, ObjectInstance, float, float) - Method in interface burlap.oomdp.visualizer.ObjectPainter
-
Paints object instance ob to graphics context g2
- paintStatePolicy(Graphics2D, State, Policy, float, float) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.PolicyGlyphPainter2D
-
- paintStatePolicy(Graphics2D, State, Policy, float, float) - Method in interface burlap.behavior.singleagent.auxiliary.valuefunctionvis.StatePolicyPainter
-
Paints a representation of the given policy for a specific state to a 2D graphics context.
- paintStateValue(Graphics2D, State, double, float, float) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.StateValuePainter2D
-
- paintStateValue(Graphics2D, State, double, float, float) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.StateValuePainter
-
Paints the representation of a value function for a specific state.
- parameterClasses - Variable in class burlap.oomdp.core.PropositionalFunction
-
- parameterClasses - Variable in class burlap.oomdp.singleagent.Action
-
The object classes each parameter of this action can accept; empty list for a parameter-less action (which is the default)
- ParameterNaiveActionIdMap - Class in burlap.behavior.stochasticgame.agents.naiveq.history
-
An action-to-int map that takes the list of possible action names in a domain and assigns an int value to them.
- ParameterNaiveActionIdMap(Domain) - Constructor for class burlap.behavior.stochasticgame.agents.naiveq.history.ParameterNaiveActionIdMap
-
Initializes a mapping from the names of all actions in a given domain to an int value.
- parameterOrderGroup - Variable in class burlap.oomdp.core.PropositionalFunction
-
- parameterOrderGroup - Variable in class burlap.oomdp.singleagent.Action
-
Specifies the parameter order group of each parameter.
- parameterOrderGroups - Variable in class burlap.oomdp.stochasticgames.SingleAction
-
- parameters - Variable in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.diffvinit.DifferentiableVInit.ParamedDiffVInit
-
The parameters of the reward functions.
- parameters - Variable in class burlap.behavior.singleagent.learnbydemo.mlirl.support.DifferentiableRF
-
The parameters of the reward functions.
- parametersAreObjects() - Method in class burlap.oomdp.core.AbstractGroundedAction
-
Returns true if all parameters (if any) for this action represent OO-MDP objects in a state; false otherwise.
- parametersAreObjects() - Method in class burlap.oomdp.singleagent.Action
-
Returns true if all parameters (if any) for this action represent OO-MDP objects in a state; false otherwise.
- parametersAreObjects() - Method in class burlap.oomdp.singleagent.GroundedAction
-
- parametersAreObjects() - Method in class burlap.oomdp.stochasticgames.GroundedSingleAction
-
- parametersAreObjects() - Method in class burlap.oomdp.stochasticgames.JointAction
-
- parametersAreObjects() - Method in class burlap.oomdp.stochasticgames.SingleAction
-
Returns true if all parameters (if any) for this action represent OO-MDP objects in a state; false otherwise.
- parameterTypes - Variable in class burlap.oomdp.stochasticgames.SingleAction
-
- params - Variable in class burlap.oomdp.core.AbstractGroundedAction
-
Object parameters of the action (if any)
- params - Variable in class burlap.oomdp.core.GroundedProp
-
- params - Variable in class burlap.oomdp.singleagent.interfaces.rlglue.RLGlueEnvironment.ActionIndexParameterization
-
The parameters of the action specified as indices into a state
- parseCommand(State, String) - Method in class burlap.oomdp.singleagent.explorer.TerminalExplorer
-
Parses a command and returns the resulting modified state
- parseCommand(State, String) - Method in class burlap.oomdp.stochasticgames.explorers.SGTerminalExplorer
-
Parses a command and returns the resulting modified state
- parseEpisodeFiles(String) - Method in class burlap.behavior.singleagent.EpisodeSequenceVisualizer
-
- parseFileIntoEA(String, Domain, StateParser) - Static method in class burlap.behavior.singleagent.EpisodeAnalysis
-
Reads an episode that was written to a file and turns it into an EpisodeAnalysis object.
- parseFileIntoGA(String, SGDomain, StateParser) - Static method in class burlap.behavior.stochasticgame.GameAnalysis
-
Reads a game that was written to a file and turns it into a GameAnalysis object.
- parseFilesIntoEAList(String, Domain, StateParser) - Static method in class burlap.behavior.singleagent.EpisodeAnalysis
-
Takes a path to a directory containing .episode files and reads them all into a List of EpisodeAnalysis objects.
- parseIntoSingleActions(String) - Method in class burlap.oomdp.stochasticgames.explorers.SGVisualExplorer
-
- parseIntoString(StateParser) - Method in class burlap.behavior.singleagent.EpisodeAnalysis
-
Converts this episode into a string representation.
- parseIntoString(StateParser) - Method in class burlap.behavior.stochasticgame.GameAnalysis
-
Converts this game into a string representation.
- parseStringIntoEA(String, Domain, StateParser) - Static method in class burlap.behavior.singleagent.EpisodeAnalysis
-
Parses a string representation of an episode into an EpisodeAnalysis object.
- parseStringIntoGameAnalysis(String, SGDomain, StateParser) - Static method in class burlap.behavior.stochasticgame.GameAnalysis
-
- payouts - Variable in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
-
The pay out function for each player.
- peek() - Method in class burlap.datastructures.HashIndexedHeap
-
Returns a pointer to the head of the heap without removing it
- peekAtLearningRate(State, AbstractGroundedAction) - Method in class burlap.behavior.learningrate.ConstantLR
-
- peekAtLearningRate(int) - Method in class burlap.behavior.learningrate.ConstantLR
-
- peekAtLearningRate(State, AbstractGroundedAction) - Method in class burlap.behavior.learningrate.ExponentialDecayLR
-
- peekAtLearningRate(int) - Method in class burlap.behavior.learningrate.ExponentialDecayLR
-
- peekAtLearningRate(State, AbstractGroundedAction) - Method in interface burlap.behavior.learningrate.LearningRate
-
A method for looking at the current learning rate for a state-action pair without having it altered.
- peekAtLearningRate(int) - Method in interface burlap.behavior.learningrate.LearningRate
-
A method for looking at the current learning rate for a state (-action) feature without having it altered.
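The learning-rate classes indexed above (ConstantLR, ExponentialDecayLR, SoftTimeInverseDecayLR) all implement this peek/poll pattern. As a rough illustration of the idea behind an exponential decay schedule with a floor, here is a standalone sketch; the class and method names are invented for the example and are not BURLAP's actual code:

```java
// Minimal sketch of an exponentially decaying learning rate with a floor.
// Illustrative only; BURLAP's ExponentialDecayLR tracks rates per state/action.
public class DecayingRate {
    private final double decayRate;   // multiplied in per poll, 0 < decayRate <= 1
    private final double minimumRate; // floor the rate never drops below
    private double currentRate;

    public DecayingRate(double initialRate, double decayRate, double minimumRate) {
        this.decayRate = decayRate;
        this.minimumRate = minimumRate;
        this.currentRate = initialRate;
    }

    /** Returns the current rate without altering it (cf. peekAtLearningRate). */
    public double peek() {
        return currentRate;
    }

    /** Returns the current rate, then decays it for the next query. */
    public double poll() {
        double r = currentRate;
        currentRate = Math.max(minimumRate, currentRate * decayRate);
        return r;
    }
}
```

The separation between peek and poll matters because debugging or plotting code often wants to inspect the rate without advancing the schedule.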
- peekAtLearningRate(State, AbstractGroundedAction) - Method in class burlap.behavior.learningrate.SoftTimeInverseDecayLR
-
- peekAtLearningRate(int) - Method in class burlap.behavior.learningrate.SoftTimeInverseDecayLR
-
- peekAtLearningRate(State, AbstractGroundedAction) - Method in class burlap.behavior.singleagent.vfa.fourier.FourierBasisLearningRateWrapper
-
- peekAtLearningRate(int) - Method in class burlap.behavior.singleagent.vfa.fourier.FourierBasisLearningRateWrapper
-
- percolateWeightChange(StochasticTree<T>.STNode, double) - Method in class burlap.datastructures.StochasticTree
-
A recursive method for percolating a weight change of a node
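The weight-change percolation that StochasticTree performs can be pictured with an array-backed sum tree, where a leaf's weight delta is added to every ancestor up to the root. The sketch below is illustrative and independent of BURLAP's node-based implementation:

```java
// Sketch of percolating a leaf-weight change up an array-backed sum tree:
// the same bookkeeping a stochastic tree must do when a node's weight changes.
public class SumTree {
    private final double[] tree; // tree[0] unused; node i has children 2i and 2i+1
    private final int leafStart;

    public SumTree(int numLeaves) {
        int size = 1;
        while (size < numLeaves) size *= 2; // round capacity up to a power of two
        this.leafStart = size;
        this.tree = new double[2 * size];
    }

    /** Sets leaf i's weight and percolates the delta to every ancestor. */
    public void setWeight(int leaf, double weight) {
        int node = leafStart + leaf;
        double delta = weight - tree[node];
        while (node >= 1) {   // walk to the root, adding the change at each level
            tree[node] += delta;
            node /= 2;
        }
    }

    /** Total weight, stored at the root. */
    public double totalWeight() {
        return tree[1];
    }
}
```

Keeping subtree sums consistent this way is what makes weighted sampling from the tree an O(log n) descent.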
- performAction(State, String[]) - Method in class burlap.domain.singleagent.blockdude.BlockDude.MoveAction
-
- performAction(State, String[]) - Method in class burlap.domain.singleagent.blockdude.BlockDude.MoveUpAction
-
- performAction(State, String[]) - Method in class burlap.domain.singleagent.blockdude.BlockDude.PickupAction
-
- performAction(State, String[]) - Method in class burlap.domain.singleagent.blockdude.BlockDude.PutdownAction
-
- performAction(State, String) - Method in class burlap.oomdp.singleagent.Action
-
Performs this action in the specified state using the specified parameters and returns the resulting state.
- performAction(State, String[]) - Method in class burlap.oomdp.singleagent.Action
-
Performs this action in the specified state using the specified parameters and returns the resulting state.
- performActionHelper(State, String[]) - Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueWrappedDomainGenerator.RLGlueActionWrapper
-
- performActionHelper(State, String[]) - Method in class burlap.behavior.singleagent.learning.modellearning.ModeledDomainGenerator.ModeledAction
-
- performActionHelper(State, String[]) - Method in class burlap.behavior.singleagent.learning.modellearning.ModeledDomainGenerator.RMaxStateAction
-
- performActionHelper(State, String[]) - Method in class burlap.behavior.singleagent.options.Option
-
- performActionHelper(State, String[]) - Method in class burlap.behavior.stochasticgame.agents.interfacing.singleagent.SGToSADomain.SAActionWrapper
-
- performActionHelper(State, String[]) - Method in class burlap.domain.singleagent.blockdude.BlockDude.MoveAction
-
- performActionHelper(State, String[]) - Method in class burlap.domain.singleagent.blockdude.BlockDude.MoveUpAction
-
- performActionHelper(State, String[]) - Method in class burlap.domain.singleagent.blockdude.BlockDude.PickupAction
-
- performActionHelper(State, String[]) - Method in class burlap.domain.singleagent.blockdude.BlockDude.PutdownAction
-
- performActionHelper(State, String[]) - Method in class burlap.domain.singleagent.blocksworld.BlocksWorld.StackAction
-
- performActionHelper(State, String[]) - Method in class burlap.domain.singleagent.blocksworld.BlocksWorld.UnstackAction
-
- performActionHelper(State, String[]) - Method in class burlap.domain.singleagent.cartpole.CartPoleDomain.MovementAction
-
- performActionHelper(State, String[]) - Method in class burlap.domain.singleagent.cartpole.InvertedPendulum.ForceAction
-
- performActionHelper(State, String[]) - Method in class burlap.domain.singleagent.frostbite.FrostbiteDomain.ActionIdle
-
- performActionHelper(State, String[]) - Method in class burlap.domain.singleagent.frostbite.FrostbiteDomain.MovementAction
-
- performActionHelper(State, String[]) - Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain.GraphAction
-
- performActionHelper(State, String[]) - Method in class burlap.domain.singleagent.gridworld.GridWorldDomain.MovementAction
-
- performActionHelper(State, String[]) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.ActionIdle
-
- performActionHelper(State, String[]) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.ActionThrust
-
- performActionHelper(State, String[]) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.ActionTurn
-
- performActionHelper(State, String[]) - Method in class burlap.domain.singleagent.tabularized.TabulatedDomainWrapper.ActionWrapper
-
- performActionHelper(State, String[]) - Method in class burlap.oomdp.singleagent.Action
-
This method determines what happens when an action is applied in the given state with the given parameters.
- performActionHelper(State, String[]) - Method in class burlap.oomdp.singleagent.common.NullAction
-
- PerformanceMetric - Enum in burlap.behavior.singleagent.auxiliary.performance
-
- PerformancePlotter - Class in burlap.behavior.singleagent.auxiliary.performance
-
This class is an action observer used to collect and plot performance data of a learning agent either by itself or against another learning agent.
- PerformancePlotter(String, RewardFunction, int, int, int, int, TrialMode, PerformanceMetric...) - Constructor for class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
Initializes a performance plotter.
- PerformancePlotter.AgentDatasets - Class in burlap.behavior.singleagent.auxiliary.performance
-
A data structure for maintaining the plot series data for the current agent
- PerformancePlotter.AgentDatasets(String) - Constructor for class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.AgentDatasets
-
Initializes the data structures for an agent with the given name
- PerformancePlotter.MutableBoolean - Class in burlap.behavior.singleagent.auxiliary.performance
-
A class for a mutable boolean
- PerformancePlotter.MutableBoolean(boolean) - Constructor for class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.MutableBoolean
-
Initializes with the given Boolean value
- PerformancePlotter.Trial - Class in burlap.behavior.singleagent.auxiliary.performance
-
A data structure for maintaining all the metric stats for a single trial.
- PerformancePlotter.Trial() - Constructor for class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter.Trial
-
- performBackup(State, String, Map<String, AgentType>, AgentQSourceMap) - Method in class burlap.behavior.stochasticgame.mavaluefunction.backupOperators.CoCoQ
-
- performBackup(State, String, Map<String, AgentType>, AgentQSourceMap) - Method in class burlap.behavior.stochasticgame.mavaluefunction.backupOperators.CorrelatedQ
-
- performBackup(State, String, Map<String, AgentType>, AgentQSourceMap) - Method in class burlap.behavior.stochasticgame.mavaluefunction.backupOperators.MaxQ
-
- performBackup(State, String, Map<String, AgentType>, AgentQSourceMap) - Method in class burlap.behavior.stochasticgame.mavaluefunction.backupOperators.MinMaxQ
-
- performBackup(State, String, Map<String, AgentType>, AgentQSourceMap) - Method in interface burlap.behavior.stochasticgame.mavaluefunction.SGBackupOperator
-
- performBellmanUpdateOn(StateHashTuple) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.DifferentiableVFPlanner
-
Overrides the superclass method to perform a Boltzmann backup operator instead of a Bellman backup operator.
- performBellmanUpdateOn(State) - Method in class burlap.behavior.singleagent.planning.ValueFunctionPlanner
-
Performs a Bellman value function update on the provided state.
- performBellmanUpdateOn(StateHashTuple) - Method in class burlap.behavior.singleagent.planning.ValueFunctionPlanner
-
Performs a Bellman value function update on the provided (hashed) state.
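The Bellman update these methods perform sets V(s) = max over a of the sum over s' of T(s,a,s')[R(s,a,s') + gamma * V(s')]. A toy tabular version of that single-state backup, using an invented nested-map MDP encoding rather than BURLAP's State/Action types:

```java
import java.util.Map;

// Toy tabular Bellman backup: V(s) <- max_a sum_s' T(s,a,s') * (R(s,a) + gamma * V(s')).
// The nested-map encoding is illustrative, not BURLAP's representation.
public class BellmanBackup {
    /**
     * transitions: action -> (nextState -> probability);
     * rewards: action -> expected immediate reward;
     * value: current value estimates (missing states default to 0).
     */
    public static double backup(
            Map<String, Map<String, Double>> transitions,
            Map<String, Double> rewards,
            Map<String, Double> value,
            double gamma) {
        double best = Double.NEGATIVE_INFINITY;
        for (String a : transitions.keySet()) {
            double q = rewards.get(a);
            for (Map.Entry<String, Double> e : transitions.get(a).entrySet()) {
                q += gamma * e.getValue() * value.getOrDefault(e.getKey(), 0.0);
            }
            best = Math.max(best, q);
        }
        return best;
    }
}
```

Value iteration simply sweeps this backup over all (hashed) states until the largest change falls below a threshold.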
- performDPValueGradientUpdateOn(StateHashTuple) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.DifferentiableVFPlanner
-
Performs the Boltzmann value function gradient backup for the given StateHashTuple.
- performedInitialPlan - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.BFSRTDP
-
indicates whether the BFS-like pass has already been performed.
- performFixedPolicyBellmanUpdateOn(State, Policy) - Method in class burlap.behavior.singleagent.planning.ValueFunctionPlanner
-
Performs a fixed-policy Bellman value function update (i.e., policy evaluation) on the provided state.
- performFixedPolicyBellmanUpdateOn(StateHashTuple, Policy) - Method in class burlap.behavior.singleagent.planning.ValueFunctionPlanner
-
Performs a fixed-policy Bellman value function update (i.e., policy evaluation) on the provided (hashed) state.
- performInitialPassFromState(State) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BFSRTDP
-
Performs a BFS-like pass to either all reachable states or to depth at which a goal state is found and then performs the Bellman update on all those states.
- performIRL() - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.MLIRL
-
Runs gradient ascent.
- performIRL() - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.MultipleIntentionsMLIRL
-
Performs multiple intention inverse reinforcement learning.
- performJointAction(State, JointAction) - Method in class burlap.oomdp.stochasticgames.JointActionModel
-
- performOrderedBellmanUpdates(List<StateHashTuple>) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.RTDP
-
Performs ordered Bellman updates on the list of (hashed) states provided to it.
- performReachabilityFrom(State) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.DifferentiableVI
-
This method will find all reachable states that will be used by the DifferentiableVI.runVI() method and will cache all the transition dynamics.
- performReachabilityFrom(State) - Method in class burlap.behavior.singleagent.planning.stochastic.policyiteration.PolicyIteration
-
This method will find all reachable states that will be used when computing the value function.
- performReachabilityFrom(State) - Method in class burlap.behavior.singleagent.planning.stochastic.valueiteration.PrioritizedSweeping
-
- performReachabilityFrom(State) - Method in class burlap.behavior.singleagent.planning.stochastic.valueiteration.ValueIteration
-
This method will find all reachable states that will be used by the ValueIteration.runVI() method and will cache all the transition dynamics.
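The reachability passes indexed here can be sketched as a breadth-first expansion over a state's successors, collecting every state the planner must later back up. The generic version below is illustrative, not BURLAP's implementation; the successor function stands in for the domain's transition dynamics:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Function;

// Sketch of a reachability pass: BFS from a start state over a successor
// function, returning the full set of reachable states.
public class Reachability {
    public static <S> Set<S> reachableFrom(S start, Function<S, List<S>> successors) {
        Set<S> seen = new HashSet<>();
        Deque<S> frontier = new ArrayDeque<>();
        seen.add(start);
        frontier.add(start);
        while (!frontier.isEmpty()) {
            S s = frontier.poll();
            for (S next : successors.apply(s)) {
                if (seen.add(next)) {   // add returns true only for unseen states
                    frontier.add(next);
                }
            }
        }
        return seen;
    }
}
```

In the planners above, each expansion would also cache the transition probabilities so value iteration does not have to re-query the model.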
- performRecahabilityAnalysisFrom(State) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BFSRTDP
-
Finds either all reachable states from si or all states up to the depth that the first goal state is found from si.
- performStateReachabilityFrom(State) - Method in class burlap.behavior.stochasticgame.mavaluefunction.vfplanners.MAValueIteration
-
Finds and stores all states that are reachable from input state s.
- pf - Variable in class burlap.oomdp.core.GroundedProp
-
- PFATEXIT - Static variable in class burlap.domain.singleagent.blockdude.BlockDude
-
Name for the propositional function that tests whether the agent is at an exit
- PFATLOCATION - Static variable in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Constant for the name of the at location propositional function
- pfClass - Variable in class burlap.oomdp.core.PropositionalFunction
-
optional; allows propositional functions to be grouped by class names
- PFCLEAR - Static variable in class burlap.domain.singleagent.blocksworld.BlocksWorld
-
Constant for the propositional function "clear" name
- pfCopy(Domain, PropositionalFunction) - Method in class burlap.oomdp.singleagent.environment.DomainEnvironmentWrapper
-
- PFFeatureVectorGenerator - Class in burlap.behavior.singleagent.vfa.common
-
- PFFeatureVectorGenerator(Domain) - Constructor for class burlap.behavior.singleagent.vfa.common.PFFeatureVectorGenerator
-
Initializes using all propositional functions that belong to the domain
- PFFeatureVectorGenerator(List<PropositionalFunction>) - Constructor for class burlap.behavior.singleagent.vfa.common.PFFeatureVectorGenerator
-
Initializes using the list of given propositional functions.
- PFFeatureVectorGenerator(PropositionalFunction[]) - Constructor for class burlap.behavior.singleagent.vfa.common.PFFeatureVectorGenerator
-
Initializes using the array of given propositional functions.
- PFHOLDINGBLOCK - Static variable in class burlap.domain.singleagent.blockdude.BlockDude
-
Name for the propositional function that tests whether the agent is holding a block.
- PFIGLOOBUILT - Static variable in class burlap.domain.singleagent.frostbite.FrostbiteDomain
-
Constant for the name of the propositional function "igloo is built"
- PFINPGOAL - Static variable in class burlap.domain.stochasticgames.gridgame.GridGame
-
A constant for the name of a propositional function that evaluates whether an agent is in a personal goal location for just them.
- PFINUGOAL - Static variable in class burlap.domain.stochasticgames.gridgame.GridGame
-
A constant for the name of a propositional function that evaluates whether an agent is in a universal goal location.
- PFINWATER - Static variable in class burlap.domain.singleagent.frostbite.FrostbiteDomain
-
Constant for the name of the propositional function "agent is in water"
- PFONBLOCK - Static variable in class burlap.domain.singleagent.blocksworld.BlocksWorld
-
Constant for the propositional function "on" name
- PFONGROUND - Static variable in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Constant for the name of the propositional function that indicates whether the agent/lander is on the ground
- PFONICE - Static variable in class burlap.domain.singleagent.frostbite.FrostbiteDomain
-
Constant for the name of the propositional function "agent is on ice"
- PFONPAD - Static variable in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Constant for the name of the propositional function that indicates whether the agent/lander is on a landing pad
- PFONPLATFORM - Static variable in class burlap.domain.singleagent.frostbite.FrostbiteDomain
-
Constant for the name of the propositional function "agent is on platform"
- PFONTABLE - Static variable in class burlap.domain.singleagent.blocksworld.BlocksWorld
-
Constant for the propositional function "on table" name
- PFPLATFORMACTIVE - Static variable in class burlap.domain.singleagent.frostbite.FrostbiteDomain
-
Constant for the name of the propositional function "platform is active"
- pfsToUse - Variable in class burlap.behavior.singleagent.vfa.common.PFFeatureVectorGenerator
-
- PFTOUCHSURFACE - Static variable in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Constant for the name of the propositional function that indicates whether the agent/lander is touching an obstacle surface.
- PFTPAD - Static variable in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Constant for the name of the propositional function that indicates whether the agent/lander is *touching* a landing pad.
- PFWALLEAST - Static variable in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Constant for the name of the wall to east propositional function
- PFWALLNORTH - Static variable in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Constant for the name of the wall to north propositional function
- PFWALLSOUTH - Static variable in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Constant for the name of the wall to south propositional function
- PFWALLWEST - Static variable in class burlap.domain.singleagent.gridworld.GridWorldDomain
-
Constant for the name of the wall to west propositional function
- phiConstructor(List<ActionFeaturesQuery>, int) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
-
Constructs the state-action feature vector as a SimpleMatrix.
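A common LSPI construction for state-action features gives each action its own block within one long vector, so each action gets an independent weight segment. The standalone sketch below illustrates that block layout with invented names and plain arrays rather than SimpleMatrix; it is not BURLAP's code:

```java
// Sketch of a block state-action feature vector: the state features are
// copied into the segment of a zero vector belonging to the chosen action.
public class SAFeatures {
    public static double[] phi(double[] stateFeatures, int actionIndex, int numActions) {
        double[] v = new double[stateFeatures.length * numActions];
        System.arraycopy(stateFeatures, 0,
                v, actionIndex * stateFeatures.length, stateFeatures.length);
        return v;
    }
}
```

With this layout, a single linear weight vector encodes a separate Q-function approximation per action.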
- physParams - Variable in class burlap.domain.singleagent.cartpole.CartPoleDomain
-
An object specifying the physics parameters for the cart pole domain.
- physParams - Variable in class burlap.domain.singleagent.cartpole.InvertedPendulum.ForceAction
-
The physics parameters to use
- physParams - Variable in class burlap.domain.singleagent.cartpole.InvertedPendulum
-
- physParams - Variable in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
An object for holding the physics parameters of this domain.
- physParams - Variable in class burlap.domain.singleagent.mountaincar.MountainCar
-
The physics parameters for mountain car.
- pickupBlock(State, int) - Static method in class burlap.domain.singleagent.blockdude.BlockDude
-
Modifies state s to be the result of the pick up action.
- planContainsOption(SearchNode) - Method in class burlap.behavior.singleagent.planning.deterministic.DeterministicPlanner
-
Returns true if a solution path uses an option in its solution.
- planEndNode(SearchNode) - Method in class burlap.behavior.singleagent.planning.deterministic.informed.astar.IDAStar
-
Returns true if the search node wraps a goal state.
- planFromState(State) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.DifferentiableSparseSampling
-
- planFromState(State) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.DifferentiableVI
-
- planFromState(State) - Method in class burlap.behavior.singleagent.learning.actorcritic.ActorCritic
-
- planFromState(State) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
-
- planFromState(State) - Method in class burlap.behavior.singleagent.learning.modellearning.artdp.ARTDP.ARTDPPlanner
-
- planFromState(State) - Method in class burlap.behavior.singleagent.learning.modellearning.artdp.ARTDP
-
- planFromState(State) - Method in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax
-
- planFromState(State) - Method in class burlap.behavior.singleagent.learning.tdmethods.QLearning
-
- planFromState(State) - Method in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
-
- planFromState(State) - Method in class burlap.behavior.singleagent.planning.deterministic.informed.astar.DynamicWeightedAStar
-
This method is overridden because, to avoid reopening closed states that are not actually better due to the dynamic h weight, the reopen check needs to be based on the g score, not the f score.
- planFromState(State) - Method in class burlap.behavior.singleagent.planning.deterministic.informed.astar.IDAStar
-
- planFromState(State) - Method in class burlap.behavior.singleagent.planning.deterministic.informed.BestFirst
-
- planFromState(State) - Method in class burlap.behavior.singleagent.planning.deterministic.uninformed.bfs.BFS
-
- planFromState(State) - Method in class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.DFS
-
- planFromState(State) - Method in class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.LimitedMemoryDFS
-
- planFromState(State) - Method in class burlap.behavior.singleagent.planning.OOMDPPlanner
-
This method will cause the planner to begin planning from the specified initial state
- planFromState(State) - Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
-
- planFromState(State) - Method in class burlap.behavior.singleagent.planning.stochastic.policyiteration.PolicyIteration
-
- planFromState(State) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BFSRTDP
-
- planFromState(State) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
-
- planFromState(State) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.RTDP
-
- planFromState(State) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
-
- planFromState(State) - Method in class burlap.behavior.singleagent.planning.stochastic.valueiteration.PrioritizedSweeping
-
- planFromState(State) - Method in class burlap.behavior.singleagent.planning.stochastic.valueiteration.ValueIteration
-
- planFromState(State) - Method in class burlap.behavior.singleagent.planning.ValueFunctionPlanner
-
- planFromState(State) - Method in class burlap.behavior.singleagent.planning.ValueFunctionPlanner.StaticVFPlanner
-
- planFromState(State) - Method in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
-
- planFromState(State) - Method in class burlap.behavior.stochasticgame.mavaluefunction.MAValueFunctionPlanner
-
Calling this method causes planning to be performed from State s.
- planFromState(State) - Method in class burlap.behavior.stochasticgame.mavaluefunction.vfplanners.MAValueIteration
-
- planHasDupilicateStates(SearchNode) - Method in class burlap.behavior.singleagent.planning.deterministic.DeterministicPlanner
-
Returns true if a solution path visits the same state multiple times.
- planner - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionRenderLayer
-
The QComputable planner to use for finding the value function
- planner - Variable in class burlap.behavior.singleagent.learnbydemo.IRLRequest
-
The planning algorithm used to compute the policy for a given reward function
- planner - Variable in class burlap.behavior.stochasticgame.agents.mavf.MultiAgentVFPlanningAgent
-
The planner this agent will use to estimate the value function and thereby determine its policy.
- PlannerDerivedPolicy - Interface in burlap.behavior.singleagent.planning
-
An interface for defining policies that refer to a planner object to produce
the policy
- plannerFactory - Variable in class burlap.behavior.singleagent.learnbydemo.mlirl.MultipleIntentionsMLIRLRequest
-
- plannerFactory - Variable in class burlap.behavior.stochasticgame.agents.mavf.MAVFPlanAgentFactory
-
- plannerInit(Domain, RewardFunction, TerminalFunction, double, StateHashFactory) - Method in class burlap.behavior.singleagent.planning.OOMDPPlanner
-
Initializes the planner with the common planning elements
- plannerReferece - Variable in class burlap.behavior.stochasticgame.agents.mavf.MAVFPlannerFactory.ConstantMAVFPlannerFactory
-
- planningCollector - Variable in class burlap.behavior.singleagent.learning.lspi.LSPI
-
- planningDepth - Variable in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
-
The SparseSampling planning depth used for computing Bellman operators during value iteration.
- planningStarted - Variable in class burlap.behavior.stochasticgame.mavaluefunction.MAValueFunctionPlanner
-
Whether planning has begun or not.
- PLATFORMCLASS - Static variable in class burlap.domain.singleagent.frostbite.FrostbiteDomain
-
Constant for the name of the obstacle OO-MDP class
- pLayer - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI
-
The policy renderer
- plotCISignificance - Variable in class burlap.behavior.singleagent.auxiliary.performance.LearningAlgorithmExperimenter
-
The significance value for the confidence interval in the plots.
- plotCISignificance - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentExperimenter
-
The significance value for the confidence interval in the plots.
- plotRefresh - Variable in class burlap.behavior.singleagent.auxiliary.performance.LearningAlgorithmExperimenter
-
The delay in milliseconds between automatic refreshes of the plots
- plotRefresh - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentExperimenter
-
The delay in milliseconds between automatic refreshes of the plots
- plotter - Variable in class burlap.behavior.singleagent.auxiliary.performance.LearningAlgorithmExperimenter
-
The PerformancePlotter used to collect and plot results
- plotter - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentExperimenter
-
The performance plotter object
- poleFriction - Variable in class burlap.domain.singleagent.cartpole.CartPoleDomain.CPPhysicsParams
-
The friction between the pole and the joint on the cart.
- poleLength - Variable in class burlap.domain.singleagent.cartpole.InvertedPendulum.IPPhysicsParams
-
The length of the pole
- poleMass - Variable in class burlap.domain.singleagent.cartpole.CartPoleDomain.CPPhysicsParams
-
The mass of the pole.
- poleMass - Variable in class burlap.domain.singleagent.cartpole.InvertedPendulum.IPPhysicsParams
-
The mass of the pole.
- policy - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.PolicyRenderLayer
-
The policy to use for visualizing the policy
- policy - Variable in class burlap.behavior.singleagent.learning.modellearning.artdp.ARTDP
-
the policy to follow
- Policy - Class in burlap.behavior.singleagent
-
This abstract class is used to store a policy for a domain that can be queried and perform common operations with the policy.
- Policy() - Constructor for class burlap.behavior.singleagent.Policy
-
- policy - Variable in class burlap.behavior.stochasticgame.agents.mavf.MAVFPlanAgentFactory
-
- policy - Variable in class burlap.behavior.stochasticgame.agents.mavf.MultiAgentVFPlanningAgent
-
The policy that this agent will follow, derived from the joint policy computed from the planner's value function estimate.
- policy - Variable in class burlap.behavior.stochasticgame.agents.naiveq.SGNaiveQLAgent
-
The policy this agent follows
- policy - Variable in class burlap.behavior.stochasticgame.agents.SetStrategyAgent
-
The policy encoding the strategy this agent will follow
- policy - Variable in class burlap.behavior.stochasticgame.agents.SetStrategyAgent.SetStrategyAgentFactory
-
The strategy this agent will follow
- Policy.ActionProb - Class in burlap.behavior.singleagent
-
Class for storing an action and probability tuple.
- Policy.ActionProb(AbstractGroundedAction, double) - Constructor for class burlap.behavior.singleagent.Policy.ActionProb
-
Initializes the action, probability tuple.
- Policy.PolicyUndefinedException - Exception in burlap.behavior.singleagent
-
RuntimeException to be thrown when a Policy is queried for a state in which the policy is undefined.
- Policy.PolicyUndefinedException() - Constructor for exception burlap.behavior.singleagent.Policy.PolicyUndefinedException
-
- Policy.RandomPolicy - Class in burlap.behavior.singleagent
-
A uniform random policy for single agent domains.
- Policy.RandomPolicy(Domain) - Constructor for class burlap.behavior.singleagent.Policy.RandomPolicy
-
Initializes by copying all the primitive action references defined for the domain into an internal action
list for this policy.
- Policy.RandomPolicy(List<Action>) - Constructor for class burlap.behavior.singleagent.Policy.RandomPolicy
-
Initializes by copying all the action references defined in the provided list into an internal action
list for this policy.
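The uniform random policy described above can be sketched independently of BURLAP as follows; the class and method names here are hypothetical stand-ins (BURLAP's own `Policy.RandomPolicy` operates on `Action` references, not strings), and this is only an illustration of the uniform-selection idea:

```java
import java.util.List;
import java.util.Random;

public class UniformRandomPolicy {
    private final List<String> actions; // stand-in for the internal action list
    private final Random rng;

    public UniformRandomPolicy(List<String> actions, long seed) {
        this.actions = actions;
        this.rng = new Random(seed);
    }

    // Each action is selected with equal probability 1/|A|.
    public String getAction() {
        return actions.get(rng.nextInt(actions.size()));
    }

    // Probability of a given action under the uniform policy.
    public double actionProb(String a) {
        return actions.contains(a) ? 1.0 / actions.size() : 0.0;
    }
}
```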
- policyCount - Variable in class burlap.behavior.singleagent.learnbydemo.apprenticeship.ApprenticeshipLearningRequest
-
The maximum number of times a policy is rolled out and evaluated
- PolicyDefinedSubgoalOption - Class in burlap.behavior.singleagent.options
-
This is a subgoal option whose initiation states are the states in which the policy is defined.
- PolicyDefinedSubgoalOption(String, Policy, StateConditionTest) - Constructor for class burlap.behavior.singleagent.options.PolicyDefinedSubgoalOption
-
Initializes.
- PolicyFromJointPolicy - Class in burlap.behavior.stochasticgame
-
This class defines a single agent's policy that is derived from a joint policy.
- PolicyFromJointPolicy(JointPolicy) - Constructor for class burlap.behavior.stochasticgame.PolicyFromJointPolicy
-
Initializes with the underlying joint policy.
- PolicyFromJointPolicy(JointPolicy, boolean) - Constructor for class burlap.behavior.stochasticgame.PolicyFromJointPolicy
-
Initializes with the underlying joint policy and whether actions should be synchronized with other agents following the same underlying joint policy.
- PolicyFromJointPolicy(String, JointPolicy) - Constructor for class burlap.behavior.stochasticgame.PolicyFromJointPolicy
-
Initializes with the acting agent name whose actions from the underlying joint policy will be returned.
- PolicyFromJointPolicy(String, JointPolicy, boolean) - Constructor for class burlap.behavior.stochasticgame.PolicyFromJointPolicy
-
Initializes with the acting agent name whose actions from the underlying joint policy will be returned and
whether actions should be synchronized with other agents following the same underlying joint policy.
- PolicyGlyphPainter2D - Class in burlap.behavior.singleagent.auxiliary.valuefunctionvis.common
-
A class for rendering the policy for states by painting different glyphs for different actions.
- PolicyGlyphPainter2D() - Constructor for class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.PolicyGlyphPainter2D
-
- PolicyGlyphPainter2D.PolicyGlyphRenderStyle - Enum in burlap.behavior.singleagent.auxiliary.valuefunctionvis.common
-
MAXACTION paints glyphs only for those actions that have the highest likelihood;
MAXACTIONSOFTTIE paints glyphs for all actions whose likelihood is within some threshold of the most likely action;
DISTSCALED paints glyphs for all actions and scales them by the likelihood of the action.
- PolicyIteration - Class in burlap.behavior.singleagent.planning.stochastic.policyiteration
-
- PolicyIteration(Domain, RewardFunction, TerminalFunction, double, StateHashFactory, double, int, int) - Constructor for class burlap.behavior.singleagent.planning.stochastic.policyiteration.PolicyIteration
-
Initializes the planner.
- PolicyIteration(Domain, RewardFunction, TerminalFunction, double, StateHashFactory, double, double, int, int) - Constructor for class burlap.behavior.singleagent.planning.stochastic.policyiteration.PolicyIteration
-
Initializes the planner.
- PolicyRenderLayer - Class in burlap.behavior.singleagent.auxiliary.valuefunctionvis
-
- PolicyRenderLayer(Collection<State>, StatePolicyPainter, Policy) - Constructor for class burlap.behavior.singleagent.auxiliary.valuefunctionvis.PolicyRenderLayer
-
- poll() - Method in class burlap.datastructures.HashIndexedHeap
-
Returns the head element of the heap and removes it
- poll() - Method in class burlap.datastructures.StochasticTree
-
Samples an element according to a probability defined by the relative weight of objects, removes it from the tree, and returns it.
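The weight-proportional sampling that `StochasticTree.poll()` performs can be sketched with a flat list instead of a tree; this is an illustrative, BURLAP-independent sketch (a real stochastic tree does the same selection in logarithmic rather than linear time):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class WeightedSampler<T> {
    private final List<T> elements = new ArrayList<>();
    private final List<Double> weights = new ArrayList<>();
    private final Random rng = new Random();

    public void add(T element, double weight) {
        elements.add(element);
        weights.add(weight);
    }

    // Samples an element with probability proportional to its weight,
    // removes it, and returns it (null if empty), mirroring poll().
    public T poll() {
        if (elements.isEmpty()) return null;
        double total = 0.0;
        for (double w : weights) total += w;
        double r = rng.nextDouble() * total;
        double cum = 0.0;
        for (int i = 0; i < elements.size(); i++) {
            cum += weights.get(i);
            if (r < cum) {
                weights.remove(i);
                return elements.remove(i);
            }
        }
        // fallback for floating-point rounding at the upper boundary
        weights.remove(weights.size() - 1);
        return elements.remove(elements.size() - 1);
    }

    public int size() { return elements.size(); }
}
```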
- pollLearningRate(int, State, AbstractGroundedAction) - Method in class burlap.behavior.learningrate.ConstantLR
-
- pollLearningRate(int, int) - Method in class burlap.behavior.learningrate.ConstantLR
-
- pollLearningRate(int, State, AbstractGroundedAction) - Method in class burlap.behavior.learningrate.ExponentialDecayLR
-
- pollLearningRate(int, int) - Method in class burlap.behavior.learningrate.ExponentialDecayLR
-
- pollLearningRate(int, State, AbstractGroundedAction) - Method in interface burlap.behavior.learningrate.LearningRate
-
A method for returning the learning rate for a given state action pair and then decaying the learning rate as defined by this class.
- pollLearningRate(int, int) - Method in interface burlap.behavior.learningrate.LearningRate
-
A method for returning the learning rate for a given state (-action) feature and then decaying the learning rate as defined by this class.
- pollLearningRate(int, State, AbstractGroundedAction) - Method in class burlap.behavior.learningrate.SoftTimeInverseDecayLR
-
- pollLearningRate(int, int) - Method in class burlap.behavior.learningrate.SoftTimeInverseDecayLR
-
- pollLearningRate(int, State, AbstractGroundedAction) - Method in class burlap.behavior.singleagent.vfa.fourier.FourierBasisLearningRateWrapper
-
- pollLearningRate(int, int) - Method in class burlap.behavior.singleagent.vfa.fourier.FourierBasisLearningRateWrapper
-
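The poll-then-decay contract of the `LearningRate` interface can be illustrated with a minimal exponential-decay sketch; the class below is a hypothetical stand-in, not BURLAP's `ExponentialDecayLR` (which is keyed per state-action or per feature), and simply shows the rate sequence initial * decay^t floored at a minimum:

```java
public class DecayingRate {
    private final double initialRate;
    private final double decayRate;   // in (0, 1]
    private final double minimumRate;
    private int timeStep = 0;

    public DecayingRate(double initialRate, double decayRate, double minimumRate) {
        this.initialRate = initialRate;
        this.decayRate = decayRate;
        this.minimumRate = minimumRate;
    }

    // Returns the current rate initial * decay^t (floored at the minimum)
    // and then advances the decay one step, mirroring pollLearningRate.
    public double poll() {
        double rate = Math.max(initialRate * Math.pow(decayRate, timeStep), minimumRate);
        timeStep++;
        return rate;
    }
}
```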
- polyDegree - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.LandmarkColorBlendInterpolation
-
The power to which the normalized distance is raised
- postPlanPrep() - Method in class burlap.behavior.singleagent.planning.deterministic.informed.astar.AStar
-
- postPlanPrep() - Method in class burlap.behavior.singleagent.planning.deterministic.informed.astar.DynamicWeightedAStar
-
- postPlanPrep() - Method in class burlap.behavior.singleagent.planning.deterministic.informed.BestFirst
-
This method is called at the end of the BestFirst.planFromState(State) method and can be used to clean up any special data structures needed by the subclass.
- potential - Variable in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax.PotentialShapedRMaxRF
-
The state potential function
- PotentialFunction - Interface in burlap.behavior.singleagent.shaping.potential
-
Defines an interface for reward potential functions.
- potentialFunction - Variable in class burlap.behavior.singleagent.shaping.potential.PotentialShapedRF
-
The potential function that can be used to return the potential reward from input states.
- PotentialShapedRF - Class in burlap.behavior.singleagent.shaping.potential
-
This class is used to implement Potential-based reward shaping [1] which is guaranteed to preserve the optimal policy.
- PotentialShapedRF(RewardFunction, PotentialFunction, double) - Constructor for class burlap.behavior.singleagent.shaping.potential.PotentialShapedRF
-
Initializes the shaping with the objective reward function, the potential function, and the discount of the MDP.
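The shaping rule that `PotentialShapedRF` implements is the standard potential-based form R'(s,a,s') = R(s,a,s') + γΦ(s') − Φ(s); the helper below is an assumed illustrative sketch of that formula (not BURLAP's own API, which works on `State` objects and a `PotentialFunction`):

```java
public class ShapedReward {
    // Potential-based shaping: R'(s,a,s') = R(s,a,s') + gamma * Phi(s') - Phi(s).
    // Because the shaping term telescopes along trajectories, the optimal
    // policy of the shaped MDP matches that of the original.
    public static double shaped(double baseReward,
                                double potentialS,
                                double potentialSPrime,
                                double gamma) {
        return baseReward + gamma * potentialSPrime - potentialS;
    }
}
```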
- PotentialShapedRMax - Class in burlap.behavior.singleagent.learning.modellearning.rmax
-
Potential Shaped RMax [1] is a generalization of RMax in which a potential-shaped reward function is used to provide less (but still admissible)
optimistic views of unknown state transitions.
- PotentialShapedRMax(Domain, RewardFunction, TerminalFunction, double, StateHashFactory, double, int, double, int) - Constructor for class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax
-
Initializes for a tabular model, VI planner, and standard RMax paradigm
- PotentialShapedRMax(Domain, RewardFunction, TerminalFunction, double, StateHashFactory, PotentialFunction, int, double, int) - Constructor for class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax
-
Initializes for a tabular model, VI planner, and potential shaped function.
- PotentialShapedRMax(Domain, RewardFunction, TerminalFunction, double, StateHashFactory, PotentialFunction, Model, ModelPlanner.ModelPlannerGenerator) - Constructor for class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax
-
Initializes for a given model, model planner, and potential shaped function.
- PotentialShapedRMax.PotentialShapedRMaxRF - Class in burlap.behavior.singleagent.learning.modellearning.rmax
-
This class is a special version of a potential-shaped reward function that does not remove the potential value for transitions into states whose action transitions are still unknown.
- PotentialShapedRMax.PotentialShapedRMaxRF(RewardFunction, PotentialFunction) - Constructor for class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax.PotentialShapedRMaxRF
-
Initializes.
- PotentialShapedRMax.PotentialShapedRMaxTerminal - Class in burlap.behavior.singleagent.learning.modellearning.rmax
-
A terminal function that treats transitions to RMax fictitious nodes as terminal states, in addition to what the model reports as terminal states.
- PotentialShapedRMax.PotentialShapedRMaxTerminal(TerminalFunction) - Constructor for class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax.PotentialShapedRMaxTerminal
-
Initializes with a modeled terminal function
- PotentialShapedRMax.RMaxPotential - Class in burlap.behavior.singleagent.learning.modellearning.rmax
-
A potential function for vanilla RMax; all states have a potential value of R_max/(1-gamma)
- PotentialShapedRMax.RMaxPotential(double, double) - Constructor for class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax.RMaxPotential
-
Initializes for a given maximum reward and discount factor.
- PotentialShapedRMax.RMaxPotential(double) - Constructor for class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax.RMaxPotential
-
Initializes using the given maximum value function value
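The vanilla-RMax potential documented above assigns every state the optimistic value R_max/(1−γ); a minimal, BURLAP-independent sketch of that computation (class name hypothetical):

```java
public class RMaxPotentialSketch {
    private final double potentialValue;

    // All states share the optimistic potential R_max / (1 - gamma),
    // an upper bound on the achievable discounted return.
    public RMaxPotentialSketch(double rMax, double gamma) {
        this.potentialValue = rMax / (1.0 - gamma);
    }

    public double potentialValue() {
        return potentialValue;
    }
}
```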
- potentialValue(State) - Method in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax.RMaxPotential
-
- potentialValue(State) - Method in interface burlap.behavior.singleagent.shaping.potential.PotentialFunction
-
Returns the reward potential from the given state.
- predictedValue - Variable in class burlap.behavior.singleagent.vfa.ApproximationResult
-
The predicted value
- preferenceLength() - Method in class burlap.datastructures.BoltzmannDistribution
-
Returns the number of elements on which there are preferences
- preferences - Variable in class burlap.behavior.singleagent.learning.actorcritic.actor.BoltzmannActor
-
A map from (hashed) states to Policy nodes; the latter of which contains the action preferences
for each applicable action in the state.
- preferences - Variable in class burlap.datastructures.BoltzmannDistribution
-
The preference values to turn into probabilities
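The conversion from preference values to probabilities that `BoltzmannDistribution` performs is the softmax p_i = exp(pref_i/T) / Σ_j exp(pref_j/T); the sketch below illustrates it outside BURLAP (names hypothetical), including the usual max-subtraction for numerical stability:

```java
public class BoltzmannSketch {
    // Converts preference values into a softmax (Boltzmann) distribution:
    // p_i = exp(pref_i / temperature) / sum_j exp(pref_j / temperature).
    public static double[] probabilities(double[] preferences, double temperature) {
        double max = Double.NEGATIVE_INFINITY;
        for (double p : preferences) max = Math.max(max, p);
        double[] probs = new double[preferences.length];
        double sum = 0.0;
        for (int i = 0; i < preferences.length; i++) {
            // subtracting the max leaves the distribution unchanged
            // but avoids overflow in Math.exp
            probs[i] = Math.exp((preferences[i] - max) / temperature);
            sum += probs[i];
        }
        for (int i = 0; i < probs.length; i++) probs[i] /= sum;
        return probs;
    }
}
```

Lower temperatures concentrate probability on the highest-preference element; higher temperatures approach a uniform distribution.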
- prePlanPrep() - Method in class burlap.behavior.singleagent.planning.deterministic.informed.astar.AStar
-
- prePlanPrep() - Method in class burlap.behavior.singleagent.planning.deterministic.informed.astar.DynamicWeightedAStar
-
- prePlanPrep() - Method in class burlap.behavior.singleagent.planning.deterministic.informed.BestFirst
-
This method is called at the start of the BestFirst.planFromState(State) method and can be used to initialize any special data structures needed by the subclass.
- PrimitiveOption - Class in burlap.behavior.singleagent.options
-
This class is just an option wrapper of a standard primitive action.
- PrimitiveOption(Action) - Constructor for class burlap.behavior.singleagent.options.PrimitiveOption
-
Creates an option wrapper for a given primitive action.
- printDebug - Variable in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgentShell
-
Whether to print debug statements.
- printOutResults() - Method in class burlap.oomdp.stochasticgames.tournament.Tournament
-
Prints the tournament results by agent index and their cumulative reward received in the tournament.
- printState(State) - Method in class burlap.oomdp.singleagent.explorer.TerminalExplorer
-
Prints the state s to the terminal.
- printState(State) - Method in class burlap.oomdp.stochasticgames.explorers.SGTerminalExplorer
-
Prints the given state to the terminal.
- PrioritizedSearchNode - Class in burlap.behavior.singleagent.planning.deterministic.informed
-
An extension of the SearchNode class that includes a priority value.
- PrioritizedSearchNode(StateHashTuple, double) - Constructor for class burlap.behavior.singleagent.planning.deterministic.informed.PrioritizedSearchNode
-
Initializes a PrioritizedSearchNode for a given (hashed) input state and priority value.
- PrioritizedSearchNode(StateHashTuple, GroundedAction, SearchNode, double) - Constructor for class burlap.behavior.singleagent.planning.deterministic.informed.PrioritizedSearchNode
-
Constructs a SearchNode for the input state and priority and sets the generating action and back pointer to the provided elements.
- PrioritizedSearchNode.PSNComparator - Class in burlap.behavior.singleagent.planning.deterministic.informed
-
A class for comparing the priority of two PrioritizedSearchNodes.
- PrioritizedSearchNode.PSNComparator() - Constructor for class burlap.behavior.singleagent.planning.deterministic.informed.PrioritizedSearchNode.PSNComparator
-
- PrioritizedSweeping - Class in burlap.behavior.singleagent.planning.stochastic.valueiteration
-
An implementation of Prioritized Sweeping as a DP planning algorithm, as described by Li and Littman [1].
- PrioritizedSweeping(Domain, RewardFunction, TerminalFunction, double, StateHashFactory, double, int) - Constructor for class burlap.behavior.singleagent.planning.stochastic.valueiteration.PrioritizedSweeping
-
Initializes
- PrioritizedSweeping.BPTR - Class in burlap.behavior.singleagent.planning.stochastic.valueiteration
-
A back pointer and its max action probability of transition.
- PrioritizedSweeping.BPTR(PrioritizedSweeping.BPTRNode, StateHashTuple) - Constructor for class burlap.behavior.singleagent.planning.stochastic.valueiteration.PrioritizedSweeping.BPTR
-
Stores back pointer information.
- PrioritizedSweeping.BPTRNode - Class in burlap.behavior.singleagent.planning.stochastic.valueiteration
-
A node for a state that contains a list of its back pointers, their max probability of transitioning to this state, and the priority of this node's state.
- PrioritizedSweeping.BPTRNode(StateHashTuple) - Constructor for class burlap.behavior.singleagent.planning.stochastic.valueiteration.PrioritizedSweeping.BPTRNode
-
Creates a back pointer for the given state with no back pointers and a priority of Double.MAX_VALUE (ensures one sweep of the state space to start)
- PrioritizedSweeping.BPTRNodeComparator - Class in burlap.behavior.singleagent.planning.stochastic.valueiteration
-
Comparator for the priority of BPTRNodes
- PrioritizedSweeping.BPTRNodeComparator() - Constructor for class burlap.behavior.singleagent.planning.stochastic.valueiteration.PrioritizedSweeping.BPTRNodeComparator
-
- priority - Variable in class burlap.behavior.singleagent.planning.deterministic.informed.PrioritizedSearchNode
-
The priority of the node used to order it for expansion.
- priority - Variable in class burlap.behavior.singleagent.planning.stochastic.valueiteration.PrioritizedSweeping.BPTRNode
-
- priorityCompare - Variable in class burlap.datastructures.HashIndexedHeap
-
A comparator to compare objects
- priorityNodes - Variable in class burlap.behavior.singleagent.planning.stochastic.valueiteration.PrioritizedSweeping
-
The priority queue of states
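The priority-queue bookkeeping described across the PrioritizedSweeping entries above can be sketched with the standard library; this is a hypothetical simplification (BURLAP keys nodes by hashed states and updates priorities from Bellman error), showing only the ordering idea, including the Double.MAX_VALUE initialization that guarantees one initial sweep:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class SweepQueueSketch {
    static final class Node {
        final String state;
        final double priority; // e.g. magnitude of the last Bellman error
        Node(String state, double priority) {
            this.state = state;
            this.priority = priority;
        }
    }

    // Highest-priority states are backed up first; new states enter with
    // Double.MAX_VALUE so every state is swept at least once.
    private final PriorityQueue<Node> queue =
            new PriorityQueue<>(Comparator.comparingDouble((Node n) -> n.priority).reversed());

    public void push(String state, double priority) {
        queue.add(new Node(state, priority));
    }

    public String pollHighest() {
        Node n = queue.poll();
        return n == null ? null : n.state;
    }
}
```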
- probabilityOfTermination(State, String[]) - Method in class burlap.behavior.singleagent.options.DeterminisitcTerminationOption
-
- probabilityOfTermination(State, String[]) - Method in class burlap.behavior.singleagent.options.MacroAction
-
- probabilityOfTermination(State, String[]) - Method in class burlap.behavior.singleagent.options.Option
-
Returns the probability that this option (executed with the given parameters) will terminate in the given state
- probabilityOfTermination(State, String[]) - Method in class burlap.behavior.singleagent.options.PolicyDefinedSubgoalOption
-
- probabilityOfTermination(State, String[]) - Method in class burlap.behavior.singleagent.options.PrimitiveOption
-
- probabiltiy - Variable in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain.NodeTransitionProbibility
-
The probability of transitioning to the resulting state
- probs - Variable in class burlap.datastructures.BoltzmannDistribution
-
The output probabilities
- produceRandomOffset(boolean[], double[]) - Method in class burlap.behavior.singleagent.vfa.cmac.FVCMACFeatureDatabase
-
Creates and returns a random tiling offset for the given widths and required dimensions.
- produceUniformTilingsOffset(boolean[], double[], int, int) - Method in class burlap.behavior.singleagent.vfa.cmac.FVCMACFeatureDatabase
-
Creates and returns an offset that is uniformly spaced from other tilings.
- propFunctionMap - Variable in class burlap.oomdp.core.Domain
-
- propFunctions - Variable in class burlap.oomdp.core.Domain
-
- PropositionalFunction - Class in burlap.oomdp.core
-
The propositional function class defines evaluations of object instances in an OO-MDP state and is part of the definition of an OO-MDP domain.
- PropositionalFunction(String, Domain, String) - Constructor for class burlap.oomdp.core.PropositionalFunction
-
Initializes a propositional function with the given name, domain and parameter object classes.
- PropositionalFunction(String, Domain, String, String) - Constructor for class burlap.oomdp.core.PropositionalFunction
-
Initializes a propositional function with the given name, domain, parameter object classes, and propositional function class name.
- PropositionalFunction(String, Domain, String[]) - Constructor for class burlap.oomdp.core.PropositionalFunction
-
Initializes a propositional function with the given name, domain and parameter object classes.
- PropositionalFunction(String, Domain, String[], String) - Constructor for class burlap.oomdp.core.PropositionalFunction
-
Initializes a propositional function with the given name, domain, parameter object classes, and propositional function class name.
- PropositionalFunction(String, Domain, String[], String[]) - Constructor for class burlap.oomdp.core.PropositionalFunction
-
Initializes a propositional function with the given name, domain, parameter object classes, and the parameter order groups of the parameters.
- PropositionalFunction(String, Domain, String[], String[], String) - Constructor for class burlap.oomdp.core.PropositionalFunction
-
Initializes a propositional function with the given name, domain, parameter object classes,
the parameter order groups of the parameters, and the propositional function class name.
- propositionalFunctions - Variable in class burlap.domain.singleagent.gridworld.macro.MacroCellGridWorld.LinearInPFRewardFunction
-
- propViewer - Variable in class burlap.behavior.singleagent.EpisodeSequenceVisualizer
-
- propViewer - Variable in class burlap.behavior.stochasticgame.GameSequenceVisualizer
-
- propViewer - Variable in class burlap.oomdp.singleagent.explorer.VisualExplorer
-
- pSelection - Variable in class burlap.behavior.singleagent.Policy.ActionProb
-
The probability of the action being selected.
- put(int, double) - Method in class burlap.behavior.singleagent.vfa.WeightGradient
-
Adds the partial derivative for a given weight
- put(String, int) - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.ActionNameMap
-
- putdownBlock(State, int) - Static method in class burlap.domain.singleagent.blockdude.BlockDude
-
Modifies state s to put down the block the agent is holding.