
L

lambda - Variable in class burlap.behavior.singleagent.learning.actorcritic.critics.TDLambda
Indicates the strength of eligibility traces.
lambda - Variable in class burlap.behavior.singleagent.learning.tdmethods.SarsaLam
the strength of eligibility traces (0 for one step, 1 for full propagation)
lambda - Variable in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
the strength of eligibility traces (0 for one step, 1 for full propagation)
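Both lambda fields above are set through their learners' constructors. A minimal sketch, assuming the common BURLAP 2 SarsaLam constructor form (Domain, discount factor, HashableStateFactory, initial Q-value, learning rate, lambda); the constructor signature and the HashableStateFactory import path are assumptions, not taken from this index.

```java
import burlap.behavior.singleagent.learning.tdmethods.SarsaLam;
import burlap.oomdp.core.Domain;
import burlap.oomdp.statehashing.HashableStateFactory;

public class SarsaLamSketch {
    public static SarsaLam makeAgent(Domain domain, HashableStateFactory hashingFactory) {
        // lambda = 0.9: strong eligibility traces (0.0 reduces to one-step Sarsa,
        // 1.0 gives full propagation back along the episode)
        return new SarsaLam(domain, 0.99, hashingFactory, 0., 0.1, 0.9);
    }
}
```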
LandmarkColorBlendInterpolation - Class in burlap.behavior.singleagent.auxiliary.valuefunctionvis.common
A ColorBlend instance that takes as input a set of "landmark" Color objects and their corresponding values, and interpolates colors for values that fall between the landmarks.
LandmarkColorBlendInterpolation() - Constructor for class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.LandmarkColorBlendInterpolation
 
LandmarkColorBlendInterpolation(double) - Constructor for class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.LandmarkColorBlendInterpolation
Initializes the color blend with a power to raise the normalized distance of values.
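A minimal usage sketch, following the pattern in the BURLAP value-function visualization tutorials; the addNextLandMark method name is taken from those tutorials and is assumed here, since it is not listed in this index.

```java
import java.awt.Color;

import burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.LandmarkColorBlendInterpolation;

public class ColorBlendSketch {
    public static LandmarkColorBlendInterpolation makeBlend() {
        // Raise normalized distances to the power 2 so colors change more sharply near landmarks.
        LandmarkColorBlendInterpolation blend = new LandmarkColorBlendInterpolation(2.);
        // Assumed registration method: value 0 maps to red, value 1 maps to blue,
        // and values in between are interpolated.
        blend.addNextLandMark(0., Color.RED);
        blend.addNextLandMark(1., Color.BLUE);
        return blend;
    }
}
```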
landmarkColors - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.LandmarkColorBlendInterpolation
The landmark colors
landmarkValues - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.LandmarkColorBlendInterpolation
The value position of each landmark
lastAction - Variable in class burlap.behavior.singleagent.vfa.common.LinearVFA
 
lastColPlayerPayoff - Variable in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.BimatrixEquilibriumSolver
The last cached column player payoff matrix
lastColsStrategy - Variable in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.BimatrixEquilibriumSolver
The last cached column player strategy
lastComputedCumR - Variable in class burlap.behavior.singleagent.planning.deterministic.informed.astar.AStar
Stores the most recent cumulative reward received when reaching some node.
lastComputedDepth - Variable in class burlap.behavior.singleagent.planning.deterministic.informed.astar.DynamicWeightedAStar
maintains the depth of the last explored node
lastCumulativeReward - Variable in class burlap.behavior.singleagent.options.Option
the cumulative reward received during the last execution of this option
lastEpisode - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
The last episode at which the plot's series data was updated
lastEpisode - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
The last episode at which the plot's series data was updated
lastJointAction - Variable in class burlap.oomdp.stochasticgames.World
 
LastJointActionCommand - Class in burlap.shell.command.world
A ShellCommand for printing the last joint action taken in a World.
LastJointActionCommand() - Constructor for class burlap.shell.command.world.LastJointActionCommand
 
lastNumSteps - Variable in class burlap.behavior.singleagent.options.Option
How many steps were taken in the option's last execution
lastOpponentMove - Variable in class burlap.behavior.stochasticgames.agents.twoplayer.repeatedsinglestage.TitForTat
The opponent's last move
lastOptionExecutionResults - Variable in class burlap.behavior.singleagent.options.Option
Stores the last execution results of an option from the initiation state to the state in which it terminated
lastPollTime - Variable in class burlap.behavior.learningrate.ExponentialDecayLR
The last agent time at which the agent polled the learning rate
lastPollTime - Variable in class burlap.behavior.learningrate.SoftTimeInverseDecayLR
The last agent time at which the agent polled the learning rate
lastReward - Variable in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueEnvironmentInterface
The last reward received
lastReward - Variable in class burlap.behavior.stochasticgames.agents.interfacing.singleagent.LearningAgentToSGAgentInterface
The last reward received by this agent
lastReward - Variable in class burlap.oomdp.singleagent.environment.SimulatedEnvironment
The last reward generated from this environment.
lastReward - Variable in class burlap.oomdp.singleagent.explorer.VisualExplorer
 
lastRewards - Variable in class burlap.oomdp.stochasticgames.World
 
lastRowPlayerPayoff - Variable in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.BimatrixEquilibriumSolver
The last cached row player payoff matrix
lastRowStrategy - Variable in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.BimatrixEquilibriumSolver
The last cached row player strategy
lastState - Variable in class burlap.behavior.singleagent.vfa.cmac.CMACFeatureDatabase
 
lastState - Variable in class burlap.behavior.singleagent.vfa.common.LinearFVVFA
 
lastState - Variable in class burlap.behavior.singleagent.vfa.common.LinearVFA
 
lastStateOnPathIsNew(PrioritizedSearchNode) - Method in class burlap.behavior.singleagent.planning.deterministic.informed.astar.IDAStar
Returns true if the search node has not been visited previously on the current search path.
lastSyncedState - Variable in class burlap.behavior.stochasticgames.JointPolicy
The last state in which synchronized actions were queried.
lastSynchronizedJointAction - Variable in class burlap.behavior.stochasticgames.JointPolicy
The last synchronized joint action that was selected
lastTimeStepUpdate - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
The last time step at which the plots' series data was updated
lastTimeStepUpdate - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
The last time step at which the plots' series data was updated
lastWeights - Variable in class burlap.behavior.singleagent.learning.lspi.LSPI
The last weight values set from LSTDQ
LATTNAME - Static variable in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
Constant for the name of the left boundary attribute for rectangular obstacles and landing pads
launchThread() - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
Launches the automatic plot refresh thread.
launchThread() - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
Launches the automatic plot refresh thread.
leafNodeInit - Variable in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
This class computes the Bellman operator by using an instance of SparseSampling and setting its leaf node values to the current value function approximation.
LearningAgent - Interface in burlap.behavior.singleagent.learning
This is the standard interface for defining an agent that learns how to behave in the world through experience.
learningAgent - Variable in class burlap.behavior.stochasticgames.agents.interfacing.singleagent.LearningAgentToSGAgentInterface
The single agent LearningAgent that will be learning in this stochastic game as if the other players are part of the environment.
LearningAgentFactory - Interface in burlap.behavior.singleagent.learning
A factory interface for generating learning agents.
LearningAgentToSGAgentInterface - Class in burlap.behavior.stochasticgames.agents.interfacing.singleagent
A stochastic games SGAgent that takes as input a single agent LearningAgent to handle behavior.
LearningAgentToSGAgentInterface(SGDomain, LearningAgent) - Constructor for class burlap.behavior.stochasticgames.agents.interfacing.singleagent.LearningAgentToSGAgentInterface
Initializes.
LearningAgentToSGAgentInterface.ActionReference - Class in burlap.behavior.stochasticgames.agents.interfacing.singleagent
A wrapper that maintains a reference to a GroundedSGAgentAction or null.
LearningAgentToSGAgentInterface.ActionReference() - Constructor for class burlap.behavior.stochasticgames.agents.interfacing.singleagent.LearningAgentToSGAgentInterface.ActionReference
 
LearningAgentToSGAgentInterface.StateReference - Class in burlap.behavior.stochasticgames.agents.interfacing.singleagent
A wrapper that maintains a reference to a State or null.
LearningAgentToSGAgentInterface.StateReference() - Constructor for class burlap.behavior.stochasticgames.agents.interfacing.singleagent.LearningAgentToSGAgentInterface.StateReference
 
LearningAlgorithmExperimenter - Class in burlap.behavior.singleagent.auxiliary.performance
This class is used to simplify the comparison of different learning algorithms.
LearningAlgorithmExperimenter(Environment, int, int, LearningAgentFactory...) - Constructor for class burlap.behavior.singleagent.auxiliary.performance.LearningAlgorithmExperimenter
Initializes.
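A minimal sketch of the comparison workflow: the experimenter constructor is the one listed above, while the QLearning constructor form (Domain, discount factor, HashableStateFactory, initial Q-value, learning rate) and the startExperiment() method are assumptions about the wider API.

```java
import burlap.behavior.singleagent.auxiliary.performance.LearningAlgorithmExperimenter;
import burlap.behavior.singleagent.learning.LearningAgent;
import burlap.behavior.singleagent.learning.LearningAgentFactory;
import burlap.behavior.singleagent.learning.tdmethods.QLearning;
import burlap.oomdp.core.Domain;
import burlap.oomdp.singleagent.environment.Environment;
import burlap.oomdp.statehashing.HashableStateFactory;

public class ExperimenterSketch {

    public static void runComparison(final Domain domain,
                                      final HashableStateFactory hashingFactory,
                                      Environment env) {

        // One factory per algorithm being compared; each trial receives a fresh agent.
        LearningAgentFactory qFactory = new LearningAgentFactory() {
            public String getAgentName() { return "Q-learning"; }
            public LearningAgent generateAgent() {
                // Assumed constructor: discount 0.99, initial Q-value 0, learning rate 0.1.
                return new QLearning(domain, 0.99, hashingFactory, 0., 0.1);
            }
        };

        // 10 trials of 100 episodes each (constructor listed in this index).
        LearningAlgorithmExperimenter exp =
                new LearningAlgorithmExperimenter(env, 10, 100, qFactory);
        exp.startExperiment(); // assumed method that runs all trials and updates the plots
    }
}
```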
learningPolicy - Variable in class burlap.behavior.singleagent.learning.lspi.LSPI
learningPolicy - Variable in class burlap.behavior.singleagent.learning.tdmethods.QLearning
The learning policy to use.
learningPolicy - Variable in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
The learning policy to use.
learningPolicy - Variable in class burlap.behavior.stochasticgames.agents.maql.MAQLFactory
 
learningPolicy - Variable in class burlap.behavior.stochasticgames.agents.maql.MultiAgentQLearning
The learning policy to be followed
learningRate - Variable in class burlap.behavior.learningrate.ConstantLR
 
LearningRate - Interface in burlap.behavior.learningrate
Provides an interface for different methods of learning rate decay schedules.
learningRate(int) - Method in class burlap.behavior.learningrate.SoftTimeInverseDecayLR
 
learningRate - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRL
The gradient ascent learning rate
learningRate - Variable in class burlap.behavior.singleagent.learning.actorcritic.actor.BoltzmannActor
The learning rate used to update action preferences in response to critiques.
learningRate - Variable in class burlap.behavior.singleagent.learning.actorcritic.critics.TDLambda
The learning rate function that affects how quickly the estimated value function changes.
learningRate - Variable in class burlap.behavior.singleagent.learning.tdmethods.QLearning
The learning rate function used.
learningRate - Variable in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
A learning rate function to use
learningRate - Variable in class burlap.behavior.stochasticgames.agents.maql.MAQLFactory
 
learningRate - Variable in class burlap.behavior.stochasticgames.agents.maql.MultiAgentQLearning
The learning rate for updating Q-values
learningRate - Variable in class burlap.behavior.stochasticgames.agents.naiveq.history.SGQWActionHistoryFactory
The learning rate the Q-learning algorithm will use
learningRate - Variable in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQFactory
The learning rate the Q-learning algorithm will use
learningRate - Variable in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent
the learning rate
legacyParseFileIntoEA(String, Domain, StateParser) - Static method in class burlap.behavior.singleagent.EpisodeAnalysis
Parses a legacy BURLAP 1.0 string representation from a file and turns it into an EpisodeAnalysis object.
legacyParseFileIntoGA(String, SGDomain, StateParser) - Static method in class burlap.behavior.stochasticgames.GameAnalysis
Using the legacy BURLAP 1.0 format, reads a game that was written to a file and turns it into a GameAnalysis object.
legacyParseFilesIntoEAList(String, Domain, StateParser) - Static method in class burlap.behavior.singleagent.EpisodeAnalysis
Parses episode files using the legacy BURLAP 1.0 string representation.
legacyParseStringIntoEA(String, Domain, StateParser) - Static method in class burlap.behavior.singleagent.EpisodeAnalysis
Parses a legacy BURLAP 1.0 string representation and turns it into an EpisodeAnalysis object.
legacyParseStringIntoGameAnalysis(String, SGDomain, StateParser) - Static method in class burlap.behavior.stochasticgames.GameAnalysis
Using the legacy BURLAP 1.0 GameAnalysis format, parses a string representing a GameAnalysis object into an actual GameAnalysis object.
LimitedMemoryDFS - Class in burlap.behavior.singleagent.planning.deterministic.uninformed.dfs
This is a modified version of DFS that maintains a memory of the last n states it has expanded.
LimitedMemoryDFS(Domain, StateConditionTest, HashableStateFactory, int, boolean, boolean, int) - Constructor for class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.LimitedMemoryDFS
Constructor for memory limited DFS
LinearDiffRFVInit - Class in burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit
A class for creating a DifferentiableRF and a DifferentiableVInit when the reward function and value function initialization are linear functions over some set of features.
LinearDiffRFVInit(StateToFeatureVectorGenerator, StateToFeatureVectorGenerator, int, int) - Constructor for class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
Initializes a linear reward function for a given feature vector of a given dimension and linear value function initialization for a given feature vector and set of dimensions.
LinearDiffRFVInit(StateToFeatureVectorGenerator, StateToFeatureVectorGenerator, int, int, boolean) - Constructor for class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
Initializes a linear reward function for a given feature vector of a given dimension and linear value function initialization for a given feature vector and set of dimensions.
LinearFVVFA - Class in burlap.behavior.singleagent.vfa.common
This class can be used to perform linear value function approximation, either for states or for state-actions (Q-values).
LinearFVVFA(StateToFeatureVectorGenerator, double) - Constructor for class burlap.behavior.singleagent.vfa.common.LinearFVVFA
Initializes.
LinearStateActionDifferentiableRF - Class in burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs
A class for defining a state-action linear DifferentiableRF.
LinearStateActionDifferentiableRF(StateToFeatureVectorGenerator, int, GroundedAction...) - Constructor for class burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs.LinearStateActionDifferentiableRF
Initializes.
LinearStateDifferentiableRF - Class in burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs
A class for defining a linear state DifferentiableRF.
LinearStateDifferentiableRF(StateToFeatureVectorGenerator, int) - Constructor for class burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs.LinearStateDifferentiableRF
Initializes.
LinearStateDifferentiableRF(StateToFeatureVectorGenerator, int, boolean) - Constructor for class burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs.LinearStateDifferentiableRF
Initializes.
LinearStateDiffVF - Class in burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit
A class for defining a (differentiable) linear function over state features for value function initialization.
LinearStateDiffVF(StateToFeatureVectorGenerator, int) - Constructor for class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearStateDiffVF
Initializes with the state feature vector generator over which the linear function is defined and its dimensionality.
LinearVFA - Class in burlap.behavior.singleagent.vfa.common
This class is used for general purpose linear VFA.
LinearVFA(FeatureDatabase) - Constructor for class burlap.behavior.singleagent.vfa.common.LinearVFA
Initializes with a feature database; the default weight value will be zero
LinearVFA(FeatureDatabase, double) - Constructor for class burlap.behavior.singleagent.vfa.common.LinearVFA
Initializes
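A minimal sketch of wiring a LinearVFA into a gradient-descent learner; the two-argument LinearVFA constructor is listed above, while the FeatureDatabase import path and the GradientDescentSarsaLam constructor order (Domain, discount factor, value function approximation, learning rate, lambda) are assumptions.

```java
import burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam;
import burlap.behavior.singleagent.vfa.FeatureDatabase;
import burlap.behavior.singleagent.vfa.common.LinearVFA;
import burlap.oomdp.core.Domain;

public class LinearVFASketch {
    public static GradientDescentSarsaLam makeLearner(Domain domain, FeatureDatabase features) {
        // All feature weights start at 0.0 (the two-argument constructor listed above).
        LinearVFA vfa = new LinearVFA(features, 0.0);
        // Assumed parameter order: discount 0.99, learning rate 0.02, lambda 0.5.
        return new GradientDescentSarsaLam(domain, 0.99, vfa, 0.02, 0.5);
    }
}
```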
ListActionsCommand - Class in burlap.shell.command.env
A ShellCommand for listing the set of applicable actions given the current environment observation.
ListActionsCommand() - Constructor for class burlap.shell.command.env.ListActionsCommand
 
listen - Variable in class burlap.domain.singleagent.pomdp.tiger.TigerDomain.TigerRF
The reward for listening
listenAccuracy - Variable in class burlap.domain.singleagent.pomdp.tiger.TigerDomain
The probability of hearing accurately where the tiger is
listenAccuracy - Variable in class burlap.domain.singleagent.pomdp.tiger.TigerDomain.TigerObservations
 
ListPropFunctions - Class in burlap.shell.command.env
A ShellCommand for listing the true (or false) GroundedProp objects given the current environment observation.
ListPropFunctions() - Constructor for class burlap.shell.command.env.ListPropFunctions
 
lld - Variable in class burlap.domain.singleagent.lunarlander.LLVisualizer.AgentPainter
 
lld - Variable in class burlap.domain.singleagent.lunarlander.LLVisualizer.ObstaclePainter
 
lld - Variable in class burlap.domain.singleagent.lunarlander.LLVisualizer.PadPainter
 
LLSARSA() - Static method in class burlap.tutorials.scd.ContinuousDomainTutorial
 
LLVisualizer - Class in burlap.domain.singleagent.lunarlander
Class for creating a 2D visualizer for a LunarLanderDomain Domain.
LLVisualizer() - Constructor for class burlap.domain.singleagent.lunarlander.LLVisualizer
 
LLVisualizer.AgentPainter - Class in burlap.domain.singleagent.lunarlander
Object painter for a lunar lander agent class.
LLVisualizer.AgentPainter(LunarLanderDomain.LLPhysicsParams) - Constructor for class burlap.domain.singleagent.lunarlander.LLVisualizer.AgentPainter
 
LLVisualizer.ObstaclePainter - Class in burlap.domain.singleagent.lunarlander
Object painter for obstacles of a lunar lander domain, rendered as black rectangles.
LLVisualizer.ObstaclePainter(LunarLanderDomain.LLPhysicsParams) - Constructor for class burlap.domain.singleagent.lunarlander.LLVisualizer.ObstaclePainter
 
LLVisualizer.PadPainter - Class in burlap.domain.singleagent.lunarlander
Object painter for landing pads of a lunar lander domain, rendered as blue rectangles.
LLVisualizer.PadPainter(LunarLanderDomain.LLPhysicsParams) - Constructor for class burlap.domain.singleagent.lunarlander.LLVisualizer.PadPainter
 
load() - Method in class burlap.oomdp.singleagent.interfaces.rlglue.RLGlueEnvironment
Loads this environment into RLGlue
load(String, String) - Method in class burlap.oomdp.singleagent.interfaces.rlglue.RLGlueEnvironment
Loads this environment into RLGlue with the specified host address and port
loadAgent() - Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueEnvironmentInterface
Loads this RLGlue AgentInterface into RLGlue and runs its event loop in a separate thread.
loadAgent(String, String) - Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueEnvironmentInterface
Loads this RLGlue AgentInterface into RLGlue using the specified host address and port and runs its event loop in a separate thread.
LocalSubgoalRF - Class in burlap.behavior.singleagent.options.support
Options are typically defined to follow a policy to a subgoal, and it is often useful to use a planning or learning algorithm to define that policy; in that case, a subgoal reward function for the option needs to be specified.
LocalSubgoalRF(StateConditionTest) - Constructor for class burlap.behavior.singleagent.options.support.LocalSubgoalRF
Initializes with a given set of subgoal states.
LocalSubgoalRF(StateConditionTest, double, double) - Constructor for class burlap.behavior.singleagent.options.support.LocalSubgoalRF
Initializes with a given set of subgoal states, a default reward and a subgoal reward.
LocalSubgoalRF(StateConditionTest, StateConditionTest) - Constructor for class burlap.behavior.singleagent.options.support.LocalSubgoalRF
Initializes with a set of states in which an option is applicable (and which the agent should not leave) and a set of subgoal states
LocalSubgoalRF(StateConditionTest, StateConditionTest, double, double, double) - Constructor for class burlap.behavior.singleagent.options.support.LocalSubgoalRF
Initializes
LocalSubgoalTF - Class in burlap.behavior.singleagent.options.support
Options are typically defined to follow a policy to a subgoal, and it is often useful to use a planning or learning algorithm to define that policy; in that case, a terminal function for the option needs to be specified in order to learn or plan for its policy.
LocalSubgoalTF(StateConditionTest) - Constructor for class burlap.behavior.singleagent.options.support.LocalSubgoalTF
Initializes with a set of subgoal states.
LocalSubgoalTF(StateConditionTest, StateConditionTest) - Constructor for class burlap.behavior.singleagent.options.support.LocalSubgoalTF
Initializes with a set of states in which the option is applicable and the option's subgoal states.
logbase(double, double) - Static method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
Returns the logarithm of the given value in the given base.
logLikelihood() - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRL
Computes and returns the log-likelihood of all expert trajectories under the current reward function parameters.
logLikelihoodGradient() - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRL
Computes and returns the gradient of the log-likelihood of all trajectories
logLikelihoodOfTrajectory(EpisodeAnalysis, double) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRL
Computes and returns the log-likelihood of the given trajectory under the current reward function parameters and weights it by the given weight.
logPolicyGrad(State, GroundedAction) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRL
Computes and returns the gradient of the Boltzmann policy for the given state and action.
logSum(double[], double, double) - Static method in class burlap.behavior.singleagent.learnfromdemo.mlirl.support.BoltzmannPolicyGradient
Computes the log sum of exponentiated Q-values (scaled by beta).
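A hedged note on the computation: assuming the second argument is the precomputed maximum of the beta-scaled Q-values, this is the standard numerically stable log-sum-exp,

```latex
\log \sum_i e^{\beta q_i} \;=\; m + \log \sum_i e^{\beta q_i - m},
\qquad m = \max_i \beta q_i .
```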
lostReward - Variable in class burlap.domain.singleagent.frostbite.FrostbiteRF
 
lowerBoundV - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
The lower bound value function
lowerLim - Variable in class burlap.oomdp.core.Attribute
lowest value for a non-relational attribute
lowerVal - Variable in class burlap.behavior.singleagent.auxiliary.StateGridder.AttributeSpecification
The lower value of the attribute on the grid
lowerVInit - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
The lower bound value function initialization
lsActions - Variable in class burlap.shell.command.world.ManualAgentsCommands
 
lsAgents - Variable in class burlap.shell.command.world.ManualAgentsCommands
 
LSPI - Class in burlap.behavior.singleagent.learning.lspi
This class implements the optimized version of least-squares policy iteration [1] (runs in quadratic time in the number of state features).
LSPI(Domain, double, FeatureDatabase) - Constructor for class burlap.behavior.singleagent.learning.lspi.LSPI
Initializes.
LSPI(Domain, double, FeatureDatabase, SARSData) - Constructor for class burlap.behavior.singleagent.learning.lspi.LSPI
Initializes.
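A minimal construction sketch using the four-argument constructor listed above; it assumes a SARSData dataset has already been collected and a state-action FeatureDatabase has been defined, and the import paths for those two types are assumptions.

```java
import burlap.behavior.singleagent.learning.lspi.LSPI;
import burlap.behavior.singleagent.learning.lspi.SARSData;
import burlap.behavior.singleagent.vfa.FeatureDatabase;
import burlap.oomdp.core.Domain;

public class LSPISketch {
    public static LSPI makeLSPI(Domain domain, FeatureDatabase stateActionFeatures, SARSData dataset) {
        // Discount factor 0.99; LSTDQ() (listed below) then fits weights against the stored dataset.
        return new LSPI(domain, 0.99, stateActionFeatures, dataset);
    }
}
```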
LSPI.SSFeatures - Class in burlap.behavior.singleagent.learning.lspi
Pair of the state-action features and the next state-action features.
LSPI.SSFeatures(List<ActionFeaturesQuery>, List<ActionFeaturesQuery>) - Constructor for class burlap.behavior.singleagent.learning.lspi.LSPI.SSFeatures
Initializes.
LSTDQ() - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
Runs LSTDQ on this object's current SARSData dataset.
LunarLanderDomain - Class in burlap.domain.singleagent.lunarlander
This domain generator is used for producing a Lunar Lander domain, based on the classic Lunar Lander Atari game (1979).
LunarLanderDomain() - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderDomain
Initializes with no thrust actions set.
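A minimal sketch of generating the domain with its default reward and terminal functions; the generator is created with no thrust actions (adding them requires configuration methods not listed in this index, so that step is omitted here), and only constructors listed in this index plus the standard generateDomain() call are used.

```java
import burlap.domain.singleagent.lunarlander.LunarLanderDomain;
import burlap.domain.singleagent.lunarlander.LunarLanderRF;
import burlap.domain.singleagent.lunarlander.LunarLanderTF;
import burlap.oomdp.core.Domain;

public class LunarLanderSketch {
    public static void main(String[] args) {
        LunarLanderDomain lld = new LunarLanderDomain(); // no thrust actions set yet
        Domain domain = lld.generateDomain();

        // Default reward values: -1 per step in the air, -100 on collision, +1000 on the pad.
        LunarLanderRF rf = new LunarLanderRF(domain);
        LunarLanderTF tf = new LunarLanderTF(domain);
    }
}
```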
LunarLanderDomain.ActionIdle - Class in burlap.domain.singleagent.lunarlander
An action class for having the agent idle (its current velocity and the force of gravity will be all that acts on the lander).
LunarLanderDomain.ActionIdle(String, Domain, LunarLanderDomain.LLPhysicsParams) - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderDomain.ActionIdle
Initializes the idle action.
LunarLanderDomain.ActionThrust - Class in burlap.domain.singleagent.lunarlander
An action class for exerting a thrust.
LunarLanderDomain.ActionThrust(String, Domain, double, LunarLanderDomain.LLPhysicsParams) - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderDomain.ActionThrust
Initializes a thrust action for a given thrust force
LunarLanderDomain.ActionTurn - Class in burlap.domain.singleagent.lunarlander
An action class for turning the lander
LunarLanderDomain.ActionTurn(String, Domain, double, LunarLanderDomain.LLPhysicsParams) - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderDomain.ActionTurn
Creates a turn action for the indicated direction.
LunarLanderDomain.LLPhysicsParams - Class in burlap.domain.singleagent.lunarlander
A class for holding the physics parameters
LunarLanderDomain.LLPhysicsParams() - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
 
LunarLanderDomain.OnPadPF - Class in burlap.domain.singleagent.lunarlander
A propositional function that evaluates to true if the agent has landed on the top surface of a landing pad.
LunarLanderDomain.OnPadPF(String, Domain) - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderDomain.OnPadPF
Initializes to be evaluated on an agent object and landing pad object.
LunarLanderDomain.TouchGroundPF - Class in burlap.domain.singleagent.lunarlander
A propositional function that evaluates to true if the agent is touching the ground.
LunarLanderDomain.TouchGroundPF(String, Domain) - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderDomain.TouchGroundPF
Initializes to be evaluated on an agent object.
LunarLanderDomain.TouchPadPF - Class in burlap.domain.singleagent.lunarlander
A propositional function that evaluates to true if the agent is touching any part of the landing pad, including its side boundaries.
LunarLanderDomain.TouchPadPF(String, Domain) - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderDomain.TouchPadPF
Initializes to be evaluated on an agent object and landing pad object.
LunarLanderDomain.TouchSurfacePF - Class in burlap.domain.singleagent.lunarlander
A propositional function that evaluates to true if the agent is touching any part of an obstacle, including its side boundaries.
LunarLanderDomain.TouchSurfacePF(String, Domain) - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderDomain.TouchSurfacePF
Initializes to be evaluated on an agent object and obstacle object.
LunarLanderRF - Class in burlap.domain.singleagent.lunarlander
A reward function for the LunarLanderDomain.
LunarLanderRF(Domain) - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderRF
Initializes with default reward values (move through air = -1; collision = -100; land on pad = +1000)
LunarLanderRF(Domain, double, double, double) - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderRF
Initializes with custom reward condition values.
LunarLanderTF - Class in burlap.domain.singleagent.lunarlander
LunarLanderTF(Domain) - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderTF
Initializes.