- lambda - Variable in class burlap.behavior.singleagent.interfaces.rlglue.common.RLGlueCMACSarsaLambdaFactory
-
The lambda value; defaults to 0.5
- lambda - Variable in class burlap.behavior.singleagent.interfaces.rlglue.common.RLGlueSARSALambdaFactory
-
The lambda value; defaults to 0.5
- lambda - Variable in class burlap.behavior.singleagent.learning.actorcritic.critics.TDLambda
-
Indicates the strength of eligibility traces.
- lambda - Variable in class burlap.behavior.singleagent.learning.tdmethods.SarsaLam
-
the strength of eligibility traces (0 for one step, 1 for full propagation)
- lambda - Variable in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
-
the strength of eligibility traces (0 for one step, 1 for full propagation)
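The role of lambda in the Sarsa(lambda) entries above can be sketched outside of BURLAP. The following is a minimal, hypothetical illustration (class and method names are ours, not BURLAP's): each step, every stored trace decays by gamma * lambda, so lambda = 0 reduces to a one-step backup and lambda = 1 propagates credit along the full trajectory.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch (not BURLAP code) of eligibility-trace decay in Sarsa(lambda).
// Each step, every stored trace is multiplied by gamma * lambda: with
// lambda = 0 only the current step keeps credit (one-step backup); with
// lambda = 1 credit persists along the whole trajectory.
public class TraceDecayDemo {

    // Decays every state-action trace in place by gamma * lambda.
    public static void decayTraces(Map<String, Double> traces,
                                   double gamma, double lambda) {
        for (Map.Entry<String, Double> e : traces.entrySet()) {
            e.setValue(e.getValue() * gamma * lambda);
        }
    }

    public static void main(String[] args) {
        Map<String, Double> traces = new HashMap<>();
        traces.put("s0,a0", 1.0);          // trace set when (s0, a0) was taken
        decayTraces(traces, 0.9, 0.0);     // lambda = 0: trace vanishes
        System.out.println(traces.get("s0,a0")); // prints 0.0
    }
}
```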
- LandmarkColorBlendInterpolation - Class in burlap.behavior.singleagent.auxiliary.valuefunctionvis.common
-
A ColorBlend instance that takes as input a set of "landmark" Color objects and corresponding values between them.
- LandmarkColorBlendInterpolation() - Constructor for class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.LandmarkColorBlendInterpolation
-
- LandmarkColorBlendInterpolation(double) - Constructor for class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.LandmarkColorBlendInterpolation
-
Initializes the color blend with a power to raise the normalized distance of values.
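The landmark blending idea can be sketched as follows; this is a hypothetical illustration (names are ours, not BURLAP's). A queried value is normalized between its two surrounding landmark values, and each color channel is linearly interpolated by that normalized distance, which the class can additionally raise to a power before blending.

```java
// Hypothetical sketch (names are ours) of landmark-based color blending:
// a queried value is normalized between its two surrounding landmark
// values and each RGB channel is linearly interpolated by that distance.
// LandmarkColorBlendInterpolation can additionally raise the normalized
// distance to a power before blending.
public class LandmarkBlendDemo {

    // Linearly interpolates one color channel between two landmark channels.
    // t is the normalized distance in [0, 1] from the first landmark.
    public static int blendChannel(int c0, int c1, double t) {
        return (int) Math.round(c0 + t * (c1 - c0));
    }

    public static void main(String[] args) {
        // A value halfway between a black landmark (0) and a white one (255).
        System.out.println(blendChannel(0, 255, 0.5)); // prints 128
    }
}
```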
- landmarkColors - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.LandmarkColorBlendInterpolation
-
The landmark colors
- landmarkValues - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.LandmarkColorBlendInterpolation
-
The value position of each landmark
- lastAction - Variable in class burlap.oomdp.singleagent.explorer.TerminalExplorer
-
- lastAction - Variable in class burlap.oomdp.singleagent.explorer.VisualExplorer
-
- lastColPlayerPayoff - Variable in class burlap.behavior.stochasticgame.agents.twoplayer.singlestage.equilibriumplayer.BimatrixEquilibriumSolver
-
The last cached column player payoff matrix
- lastColsStrategy - Variable in class burlap.behavior.stochasticgame.agents.twoplayer.singlestage.equilibriumplayer.BimatrixEquilibriumSolver
-
The last cached column player strategy
- lastComputedCumR - Variable in class burlap.behavior.singleagent.planning.deterministic.informed.astar.AStar
-
Stores the most recent cumulative reward received in reaching a node.
- lastComputedDepth - Variable in class burlap.behavior.singleagent.planning.deterministic.informed.astar.DynamicWeightedAStar
-
maintains the depth of the last explored node
- lastCumulativeReward - Variable in class burlap.behavior.singleagent.options.Option
-
the cumulative reward received during the last execution of this option
- lastEpisode - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
The last episode at which the plot's series data was updated
- lastEpisode - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter
-
The last episode at which the plot's series data was updated
- lastJointAction - Variable in class burlap.oomdp.stochasticgames.World
-
- lastNumSteps - Variable in class burlap.behavior.singleagent.options.Option
-
How many steps were taken in the option's last execution
- lastOpponentMove - Variable in class burlap.behavior.stochasticgame.agents.twoplayer.repeatedsinglestage.TitForTat
-
The opponent's last move
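The rule this field supports can be sketched in a few lines; the following is a minimal illustration (not BURLAP's implementation): cooperate on the first round, then mirror whatever the opponent played last.

```java
// Minimal sketch (not BURLAP's implementation) of the tit-for-tat rule
// the lastOpponentMove field supports: cooperate on the first round,
// then mirror whatever the opponent played last.
public class TitForTatDemo {

    private String lastOpponentMove = null; // null until the opponent moves

    // Returns this agent's next move.
    public String nextMove() {
        return lastOpponentMove == null ? "cooperate" : lastOpponentMove;
    }

    // Records the opponent's most recent move.
    public void observeOpponent(String move) {
        this.lastOpponentMove = move;
    }

    public static void main(String[] args) {
        TitForTatDemo agent = new TitForTatDemo();
        System.out.println(agent.nextMove()); // prints cooperate
        agent.observeOpponent("defect");
        System.out.println(agent.nextMove()); // prints defect
    }
}
```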
- lastOptionExecutionResults - Variable in class burlap.behavior.singleagent.options.Option
-
Stores the last execution results of an option from the initiation state to the state in which it terminated
- lastPollTime - Variable in class burlap.behavior.learningrate.ExponentialDecayLR
-
The last agent time at which they polled the learning rate
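The schedule that ExponentialDecayLR describes can be sketched as below; this is a hedged illustration with our own names, not BURLAP's API: the rate after t polls is initial * decay^t, optionally floored at a minimum value.

```java
// Hedged sketch of an exponentially decaying learning rate schedule of
// the kind ExponentialDecayLR describes: rate(t) = initial * decay^t,
// optionally floored at a minimum. Names are illustrative, not BURLAP's.
public class ExpDecayDemo {

    // Returns the learning rate after t polls, never below min.
    public static double rate(double initial, double decay, int t, double min) {
        return Math.max(min, initial * Math.pow(decay, t));
    }

    public static void main(String[] args) {
        System.out.println(rate(0.1, 0.5, 2, 0.0)); // ~0.025
    }
}
```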
- lastPollTime - Variable in class burlap.behavior.learningrate.SoftTimeInverseDecayLR
-
The last agent time at which they polled the learning rate
- lastReward - Variable in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgentShell
-
The last reward returned by RLGlue
- lastReward - Variable in class burlap.behavior.stochasticgame.agents.interfacing.singleagent.SingleAgentInterface
-
The last reward received by this agent
- lastReward - Variable in class burlap.oomdp.singleagent.explorer.VisualExplorer
-
- lastRewards - Variable in class burlap.oomdp.stochasticgames.explorers.SGVisualExplorer
-
- lastRowPlayerPayoff - Variable in class burlap.behavior.stochasticgame.agents.twoplayer.singlestage.equilibriumplayer.BimatrixEquilibriumSolver
-
The last cached row player payoff matrix
- lastRowStrategy - Variable in class burlap.behavior.stochasticgame.agents.twoplayer.singlestage.equilibriumplayer.BimatrixEquilibriumSolver
-
The last cached row player strategy
- lastStateIsTerminal - Variable in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgentShell
-
Whether the last state was a terminal state
- lastStateIsTerminal - Variable in class burlap.behavior.stochasticgame.agents.interfacing.singleagent.SingleAgentInterface
-
Whether the last state was a terminal state
- lastStateOnPathIsNew(PrioritizedSearchNode) - Method in class burlap.behavior.singleagent.planning.deterministic.informed.astar.IDAStar
-
Returns true if the search node has not been visited previously on the current search path.
- lastSyncedState - Variable in class burlap.behavior.stochasticgame.JointPolicy
-
The last state in which synchronized actions were queried.
- lastSynchronizedJointAction - Variable in class burlap.behavior.stochasticgame.JointPolicy
-
The last synchronized joint action that was selected
- lastTimeStepUpdate - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
The last time step at which the plots' series data was updated
- lastTimeStepUpdate - Variable in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter
-
The last time step at which the plots' series data was updated
- lastWeights - Variable in class burlap.behavior.singleagent.learning.lspi.LSPI
-
The last weight values set from LSTDQ
- LATTNAME - Static variable in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Constant for the name of the left boundary attribute for rectangular obstacles and landing pads
- launchThread() - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
-
Launches the automatic plot refresh thread.
- launchThread() - Method in class burlap.behavior.stochasticgame.auxiliary.performance.MultiAgentPerformancePlotter
-
Launches the automatic plot refresh thread.
- leafNodeInit - Variable in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
-
This class computes the Bellman operator by using an instance of SparseSampling and setting its leaf node values to the current value function approximation.
- LearningAgent - Interface in burlap.behavior.singleagent.learning
-
This is the standard interface for defining an agent that learns how to behave in the world through experience.
- LearningAgent.LearningAgentBookKeeping - Class in burlap.behavior.singleagent.learning
-
Because LearningAgent is an interface, default methods for managing the history of experienced episodes are not provided.
- LearningAgent.LearningAgentBookKeeping() - Constructor for class burlap.behavior.singleagent.learning.LearningAgent.LearningAgentBookKeeping
-
- LearningAgentFactory - Interface in burlap.behavior.singleagent.learning
-
A factory interface for generating learning agents.
- LearningAlgorithmExperimenter - Class in burlap.behavior.singleagent.auxiliary.performance
-
This class is used to simplify the comparison of different learning algorithms.
- LearningAlgorithmExperimenter(SADomain, RewardFunction, StateGenerator, int, int, LearningAgentFactory...) - Constructor for class burlap.behavior.singleagent.auxiliary.performance.LearningAlgorithmExperimenter
-
Initializes.
- learningPolicy - Variable in class burlap.behavior.singleagent.interfaces.rlglue.common.RLGlueCMACSarsaLambdaFactory
-
The learning policy to use.
- learningPolicy - Variable in class burlap.behavior.singleagent.interfaces.rlglue.common.RLGLueQlearningFactory
-
The learning policy to use.
- learningPolicy - Variable in class burlap.behavior.singleagent.learning.lspi.LSPI
-
- learningPolicy - Variable in class burlap.behavior.singleagent.learning.tdmethods.QLearning
-
The learning policy to use.
- learningPolicy - Variable in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
-
The learning policy to use.
- learningPolicy - Variable in class burlap.behavior.stochasticgame.agents.maql.MAQLFactory
-
- learningPolicy - Variable in class burlap.behavior.stochasticgame.agents.maql.MultiAgentQLearning
-
The learning policy to be followed
- learningRate - Variable in class burlap.behavior.learningrate.ConstantLR
-
- LearningRate - Interface in burlap.behavior.learningrate
-
Provides an interface for different methods of learning rate decay schedules.
- learningRate(int) - Method in class burlap.behavior.learningrate.SoftTimeInverseDecayLR
-
- learningRate - Variable in class burlap.behavior.singleagent.interfaces.rlglue.common.RLGlueCMACSarsaLambdaFactory
-
The learning rate function used.
- learningRate - Variable in class burlap.behavior.singleagent.interfaces.rlglue.common.RLGLueQlearningFactory
-
The learning rate function used.
- learningRate - Variable in class burlap.behavior.singleagent.learnbydemo.mlirl.MLIRL
-
The gradient ascent learning rate
- learningRate - Variable in class burlap.behavior.singleagent.learning.actorcritic.actor.BoltzmannActor
-
The learning rate used to update action preferences in response to critiques.
- learningRate - Variable in class burlap.behavior.singleagent.learning.actorcritic.critics.TDLambda
-
The learning rate function that affects how quickly the estimated value function changes.
- learningRate - Variable in class burlap.behavior.singleagent.learning.tdmethods.QLearning
-
The learning rate function used.
- learningRate - Variable in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
-
A learning rate function to use
- learningRate - Variable in class burlap.behavior.stochasticgame.agents.maql.MAQLFactory
-
- learningRate - Variable in class burlap.behavior.stochasticgame.agents.maql.MultiAgentQLearning
-
The learning rate for updating Q-values
- learningRate - Variable in class burlap.behavior.stochasticgame.agents.naiveq.history.SGQWActionHistoryFactory
-
The learning rate the Q-learning algorithm will use
- learningRate - Variable in class burlap.behavior.stochasticgame.agents.naiveq.SGNaiveQFactory
-
The learning rate the Q-learning algorithm will use
- learningRate - Variable in class burlap.behavior.stochasticgame.agents.naiveq.SGNaiveQLAgent
-
the learning rate
- LimitedMemoryDFS - Class in burlap.behavior.singleagent.planning.deterministic.uninformed.dfs
-
This is a modified version of DFS that maintains a memory of the last n states it has previously expanded.
- LimitedMemoryDFS(Domain, StateConditionTest, StateHashFactory, int, boolean, boolean, int) - Constructor for class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.LimitedMemoryDFS
-
Constructor for memory limited DFS
- LinearDiffRFVInit - Class in burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.diffvinit
-
- LinearDiffRFVInit(StateToFeatureVectorGenerator, StateToFeatureVectorGenerator, int, int) - Constructor for class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
-
Initializes a linear reward function for a given feature vector of a given dimension and linear
value function initialization for a given feature vector and set of dimensions.
- LinearDiffRFVInit(StateToFeatureVectorGenerator, StateToFeatureVectorGenerator, int, int, boolean) - Constructor for class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
-
Initializes a linear reward function for a given feature vector of a given dimension and linear
value function initialization for a given feature vector and set of dimensions.
- LinearFVVFA - Class in burlap.behavior.singleagent.vfa.common
-
This class can be used to perform linear value function approximation, either for states or state-action pairs (Q-values).
- LinearFVVFA(StateToFeatureVectorGenerator, double) - Constructor for class burlap.behavior.singleagent.vfa.common.LinearFVVFA
-
Initializes.
- LinearStateActionDifferentiableRF - Class in burlap.behavior.singleagent.learnbydemo.mlirl.commonrfs
-
- LinearStateActionDifferentiableRF(StateToFeatureVectorGenerator, int, GroundedAction...) - Constructor for class burlap.behavior.singleagent.learnbydemo.mlirl.commonrfs.LinearStateActionDifferentiableRF
-
Initializes.
- LinearStateDifferentiableRF - Class in burlap.behavior.singleagent.learnbydemo.mlirl.commonrfs
-
- LinearStateDifferentiableRF(StateToFeatureVectorGenerator, int) - Constructor for class burlap.behavior.singleagent.learnbydemo.mlirl.commonrfs.LinearStateDifferentiableRF
-
Initializes.
- LinearStateDifferentiableRF(StateToFeatureVectorGenerator, int, boolean) - Constructor for class burlap.behavior.singleagent.learnbydemo.mlirl.commonrfs.LinearStateDifferentiableRF
-
Initializes.
- LinearStateDiffVF - Class in burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.diffvinit
-
A class for defining a (differentiable) linear function over state features for value function initialization.
- LinearStateDiffVF(StateToFeatureVectorGenerator, int) - Constructor for class burlap.behavior.singleagent.learnbydemo.mlirl.differentiableplanners.diffvinit.LinearStateDiffVF
-
Initializes with the state feature vector generator over which the linear function is defined and the dimensionality of it.
- LinearVFA - Class in burlap.behavior.singleagent.vfa.common
-
This class is used for general purpose linear VFA.
- LinearVFA(FeatureDatabase) - Constructor for class burlap.behavior.singleagent.vfa.common.LinearVFA
-
Initializes with a feature database; the default weight value will be zero
- LinearVFA(FeatureDatabase, double) - Constructor for class burlap.behavior.singleagent.vfa.common.LinearVFA
-
Initializes
- lld - Variable in class burlap.domain.singleagent.lunarlander.LLVisualizer.AgentPainter
-
- lld - Variable in class burlap.domain.singleagent.lunarlander.LLVisualizer.ObstaclePainter
-
- lld - Variable in class burlap.domain.singleagent.lunarlander.LLVisualizer.PadPainter
-
- LLStateParser - Class in burlap.domain.singleagent.lunarlander
-
A state parser specifically for the lunar lander domain.
- LLStateParser(Domain) - Constructor for class burlap.domain.singleagent.lunarlander.LLStateParser
-
Initializes for the specific lunar lander domain instance.
- LLVisualizer - Class in burlap.domain.singleagent.lunarlander
-
- LLVisualizer() - Constructor for class burlap.domain.singleagent.lunarlander.LLVisualizer
-
- LLVisualizer.AgentPainter - Class in burlap.domain.singleagent.lunarlander
-
Object painter for a lunar lander agent class.
- LLVisualizer.AgentPainter(LunarLanderDomain.LLPhysicsParams) - Constructor for class burlap.domain.singleagent.lunarlander.LLVisualizer.AgentPainter
-
- LLVisualizer.ObstaclePainter - Class in burlap.domain.singleagent.lunarlander
-
Object painter for obstacles of a lunar lander domain, rendered as black rectangles.
- LLVisualizer.ObstaclePainter(LunarLanderDomain.LLPhysicsParams) - Constructor for class burlap.domain.singleagent.lunarlander.LLVisualizer.ObstaclePainter
-
- LLVisualizer.PadPainter - Class in burlap.domain.singleagent.lunarlander
-
Object painter for landing pads of a lunar lander domain, rendered as blue rectangles.
- LLVisualizer.PadPainter(LunarLanderDomain.LLPhysicsParams) - Constructor for class burlap.domain.singleagent.lunarlander.LLVisualizer.PadPainter
-
- load() - Method in class burlap.oomdp.singleagent.interfaces.rlglue.RLGlueEnvironment
-
Loads this environment into RLGlue
- load(String, String) - Method in class burlap.oomdp.singleagent.interfaces.rlglue.RLGlueEnvironment
-
Loads this environment into RLGlue with the specified host address and port
- loadAgent() - Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgentShell
-
Loads this agent into RLGlue using the default host and port.
- loadAgent(String, String) - Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgentShell
-
Loads this agent into RLGlue using the specified host address and port.
- LocalSubgoalRF - Class in burlap.behavior.singleagent.options
-
It is typical for options to be defined for following policies to subgoals and it is often useful
to use a planning or learning algorithm to define these policies, in which case a subgoal reward
function for the option would need to be specified.
- LocalSubgoalRF(StateConditionTest) - Constructor for class burlap.behavior.singleagent.options.LocalSubgoalRF
-
Initializes with a given set of subgoal states.
- LocalSubgoalRF(StateConditionTest, double, double) - Constructor for class burlap.behavior.singleagent.options.LocalSubgoalRF
-
Initializes with a given set of subgoal states, a default reward and a subgoal reward.
- LocalSubgoalRF(StateConditionTest, StateConditionTest) - Constructor for class burlap.behavior.singleagent.options.LocalSubgoalRF
-
Initializes with a set of states in which an option is applicable and which the agent should not enter, and a set of subgoal states
- LocalSubgoalRF(StateConditionTest, StateConditionTest, double, double, double) - Constructor for class burlap.behavior.singleagent.options.LocalSubgoalRF
-
Initializes
- LocalSubgoalTF - Class in burlap.behavior.singleagent.options
-
It is typical for options to be defined for following policies to subgoals and it is often useful
to use a planning or learning algorithm to define these policies, in which case a terminal
function for the option would need to be specified in order to learn or plan for its policy.
- LocalSubgoalTF(StateConditionTest) - Constructor for class burlap.behavior.singleagent.options.LocalSubgoalTF
-
Initializes with a set of subgoal states.
- LocalSubgoalTF(StateConditionTest, StateConditionTest) - Constructor for class burlap.behavior.singleagent.options.LocalSubgoalTF
-
Initializes with a set of states in which the option is applicable and the option's subgoal states.
- logbase(double, double) - Static method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
-
Returns the logarithm of the given value in the given base.
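An arbitrary-base logarithm like this rests on the change-of-base identity log_b(x) = ln(x) / ln(b); the sketch below is illustrative (our own class and method), not SparseSampling's actual code.

```java
// Sketch of the change-of-base identity that a logbase utility relies on:
// log_b(x) = ln(x) / ln(b). The class and method here are illustrative,
// not SparseSampling's actual code.
public class LogBaseDemo {

    // Returns the logarithm of x in the given base.
    public static double logbase(double base, double x) {
        return Math.log(x) / Math.log(base);
    }

    public static void main(String[] args) {
        System.out.println(logbase(2, 8)); // ~3.0
    }
}
```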
- logLikelihood() - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.MLIRL
-
Computes and returns the log-likelihood of all expert trajectories under the current reward function parameters.
- logLikelihoodGradient() - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.MLIRL
-
Computes and returns the gradient of the log-likelihood of all trajectories
- logLikelihoodOfTrajectory(EpisodeAnalysis, double) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.MLIRL
-
Computes and returns the log-likelihood of the given trajectory under the current reward function parameters and weights it by the given weight.
- logPolicyGrad(State, GroundedAction) - Method in class burlap.behavior.singleagent.learnbydemo.mlirl.MLIRL
-
Computes and returns the gradient of the Boltzmann policy for the given state and action.
- logSum(double[], double, double) - Static method in class burlap.behavior.singleagent.learnbydemo.mlirl.support.BoltzmannPolicyGradient
-
Computes the log of the sum of exponentiated Q-values (scaled by beta)
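The quantity logSum describes is the classic numerically stable log-sum-exp; the sketch below uses our own names and a simplified signature, not BURLAP's exact one: log(sum_i exp(beta * q_i)), stabilized by factoring out the maximum scaled Q-value so the exponentials cannot overflow.

```java
// Hedged sketch (our names, simplified signature) of the numerically
// stable log-sum-exp that logSum describes:
// log( sum_i exp(beta * q_i) ), stabilized by factoring out the max
// scaled Q-value so the exponentials cannot overflow.
public class LogSumDemo {

    public static double logSum(double[] qs, double beta) {
        double max = Double.NEGATIVE_INFINITY;
        for (double q : qs) {
            max = Math.max(max, beta * q);
        }
        double sum = 0.0;
        for (double q : qs) {
            sum += Math.exp(beta * q - max); // every exponent is <= 0
        }
        return max + Math.log(sum);
    }

    public static void main(String[] args) {
        // Two equal Q-values of 0: log(e^0 + e^0) = log 2.
        System.out.println(logSum(new double[]{0.0, 0.0}, 1.0)); // ~0.693
    }
}
```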
- lostReward - Variable in class burlap.domain.singleagent.frostbite.FrostbiteRF
-
- lowerBoundV - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
-
The lower bound value function
- lowerLim - Variable in class burlap.oomdp.core.Attribute
-
lowest value for a non-relational attribute
- lowerVal - Variable in class burlap.behavior.singleagent.auxiliary.StateGridder.AttributeSpecification
-
The lower value of the attribute on the grid
- lowerVInit - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
-
The lower bound value function initialization
- LSPI - Class in burlap.behavior.singleagent.learning.lspi
-
This class implements the optimized version of least-squares policy iteration [1] (runs in time quadratic in the number of state features).
- LSPI(Domain, RewardFunction, TerminalFunction, double, FeatureDatabase) - Constructor for class burlap.behavior.singleagent.learning.lspi.LSPI
-
Initializes for the given domain, reward function, terminal state function, discount factor and the feature database that provides the state features used by LSPI.
- LSPI.SSFeatures - Class in burlap.behavior.singleagent.learning.lspi
-
Pair of the state-action features and the next state-action features.
- LSPI.SSFeatures(List<ActionFeaturesQuery>, List<ActionFeaturesQuery>) - Constructor for class burlap.behavior.singleagent.learning.lspi.LSPI.SSFeatures
-
Initializes.
- LSTDQ() - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
-
Runs LSTDQ on this object's current SARSData dataset.
- LunarLanderDomain - Class in burlap.domain.singleagent.lunarlander
-
This domain generator is used for producing a Lunar Lander domain, based on the classic
Lunar Lander Atari game (1979).
- LunarLanderDomain() - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderDomain
-
Initializes with no thrust actions set.
- LunarLanderDomain.ActionIdle - Class in burlap.domain.singleagent.lunarlander
-
An action class for having the agent idle (its current velocity and the force of gravity will be all that acts on the lander).
- LunarLanderDomain.ActionIdle(String, Domain, LunarLanderDomain.LLPhysicsParams) - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderDomain.ActionIdle
-
Initializes the idle action.
- LunarLanderDomain.ActionThrust - Class in burlap.domain.singleagent.lunarlander
-
An action class for exerting a thrust.
- LunarLanderDomain.ActionThrust(String, Domain, double, LunarLanderDomain.LLPhysicsParams) - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderDomain.ActionThrust
-
Initializes a thrust action for a given thrust force
- LunarLanderDomain.ActionTurn - Class in burlap.domain.singleagent.lunarlander
-
An action class for turning the lander
- LunarLanderDomain.ActionTurn(String, Domain, double, LunarLanderDomain.LLPhysicsParams) - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderDomain.ActionTurn
-
Creates a turn action for the indicated direction.
- LunarLanderDomain.LLPhysicsParams - Class in burlap.domain.singleagent.lunarlander
-
A class for holding the physics parameters
- LunarLanderDomain.LLPhysicsParams() - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
-
- LunarLanderDomain.OnPadPF - Class in burlap.domain.singleagent.lunarlander
-
A propositional function that evaluates to true if the agent has landed on the top surface of a landing pad.
- LunarLanderDomain.OnPadPF(String, Domain) - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderDomain.OnPadPF
-
Initializes to be evaluated on an agent object and landing pad object.
- LunarLanderDomain.TouchGroundPF - Class in burlap.domain.singleagent.lunarlander
-
A propositional function that evaluates to true if the agent is touching the ground.
- LunarLanderDomain.TouchGroundPF(String, Domain) - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderDomain.TouchGroundPF
-
Initializes to be evaluated on an agent object.
- LunarLanderDomain.TouchPadPF - Class in burlap.domain.singleagent.lunarlander
-
A propositional function that evaluates to true if the agent is touching any part of the landing pad, including its
side boundaries.
- LunarLanderDomain.TouchPadPF(String, Domain) - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderDomain.TouchPadPF
-
Initializes to be evaluated on an agent object and landing pad object.
- LunarLanderDomain.TouchSurfacePF - Class in burlap.domain.singleagent.lunarlander
-
A propositional function that evaluates to true if the agent is touching any part of an obstacle, including its
side boundaries.
- LunarLanderDomain.TouchSurfacePF(String, Domain) - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderDomain.TouchSurfacePF
-
Initializes to be evaluated on an agent object and obstacle object.
- LunarLanderRF - Class in burlap.domain.singleagent.lunarlander
-
- LunarLanderRF(Domain) - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderRF
-
Initializes with default reward values (move through air = -1; collision = -100; land on pad = +1000)
- LunarLanderRF(Domain, double, double, double) - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderRF
-
Initializes with custom reward condition values.
- LunarLanderTF - Class in burlap.domain.singleagent.lunarlander
-
- LunarLanderTF(Domain) - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderTF
-
Initializes.