
L

lambda - Variable in class burlap.behavior.singleagent.learning.actorcritic.critics.TDLambda
Indicates the strength of eligibility traces.
lambda - Variable in class burlap.behavior.singleagent.learning.tdmethods.SarsaLam
the strength of eligibility traces (0 for one step, 1 for full propagation)
lambda - Variable in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
the strength of eligibility traces (0 for one step, 1 for full propagation)
LandmarkColorBlendInterpolation - Class in burlap.behavior.singleagent.auxiliary.valuefunctionvis.common
A ColorBlend instance that takes as input a set of "landmark" Color objects and their corresponding values, and interpolates colors between them.
LandmarkColorBlendInterpolation() - Constructor for class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.LandmarkColorBlendInterpolation
 
LandmarkColorBlendInterpolation(double) - Constructor for class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.LandmarkColorBlendInterpolation
Initializes the color blend with a power to raise the normalized distance of values.
landmarkColors - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.LandmarkColorBlendInterpolation
The landmark colors
landmarkValues - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.LandmarkColorBlendInterpolation
The value position of each landmark
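
A minimal usage sketch for LandmarkColorBlendInterpolation, following the pattern in BURLAP's value-function visualization tutorial. The addNextLandMark method name is taken from that tutorial and is an assumption here; verify it against your BURLAP version.

    import java.awt.Color;
    import burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.LandmarkColorBlendInterpolation;

    public class ValueColorBlendSketch {
        public static LandmarkColorBlendInterpolation redToBlue() {
            // Blend from red (value 0) to blue (value 1); the 0.5 argument is the
            // power applied to the normalized distance between landmark values.
            LandmarkColorBlendInterpolation rb = new LandmarkColorBlendInterpolation(0.5);
            rb.addNextLandMark(0., Color.RED);   // assumed method name
            rb.addNextLandMark(1., Color.BLUE);
            return rb;
        }
    }
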
lastAction - Variable in class burlap.behavior.functionapproximation.sparse.LinearVFA
 
lastColPlayerPayoff - Variable in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.BimatrixEquilibriumSolver
The last cached column player payoff matrix
lastColsStrategy - Variable in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.BimatrixEquilibriumSolver
The last cached column player strategy
lastComputedCumR - Variable in class burlap.behavior.singleagent.planning.deterministic.informed.astar.AStar
Stores the most recently computed cumulative reward to each node.
lastComputedDepth - Variable in class burlap.behavior.singleagent.planning.deterministic.informed.astar.DynamicWeightedAStar
maintains the depth of the last explored node
lastEpisode - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
The last episode at which the plot's series data was updated
lastEpisode - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
The last episode at which the plot's series data was updated
lastJointAction - Variable in class burlap.mdp.stochasticgames.world.World
 
LastJointActionCommand - Class in burlap.shell.command.world
A ShellCommand for printing the last joint action taken in a World.
LastJointActionCommand() - Constructor for class burlap.shell.command.world.LastJointActionCommand
 
lastOpponentMove - Variable in class burlap.behavior.stochasticgames.agents.twoplayer.repeatedsinglestage.TitForTat
The opponent's last move
lastPollTime - Variable in class burlap.behavior.learningrate.ExponentialDecayLR
The last agent time at which the learning rate was polled
lastPollTime - Variable in class burlap.behavior.learningrate.SoftTimeInverseDecayLR
The last agent time at which the learning rate was polled
lastReward - Variable in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgent
The last reward received
lastReward() - Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgent
 
lastReward - Variable in class burlap.behavior.stochasticgames.agents.interfacing.singleagent.LearningAgentToSGAgentInterface
The last reward received by this agent
lastReward() - Method in class burlap.behavior.stochasticgames.agents.interfacing.singleagent.LearningAgentToSGAgentInterface
 
lastReward() - Method in interface burlap.mdp.singleagent.environment.Environment
Returns the last reward returned by the environment
lastReward() - Method in class burlap.mdp.singleagent.environment.extensions.EnvironmentServer
 
lastReward - Variable in class burlap.mdp.singleagent.environment.SimulatedEnvironment
The last reward generated from this environment.
lastReward() - Method in class burlap.mdp.singleagent.environment.SimulatedEnvironment
 
lastRewards - Variable in class burlap.mdp.stochasticgames.world.World
 
lastRowPlayerPayoff - Variable in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.BimatrixEquilibriumSolver
The last cached row player payoff matrix
lastRowStrategy - Variable in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.BimatrixEquilibriumSolver
The last cached row player strategy
lastState - Variable in class burlap.behavior.functionapproximation.dense.DenseLinearVFA
 
lastState - Variable in class burlap.behavior.functionapproximation.dense.DenseStateActionLinearVFA
 
lastState - Variable in class burlap.behavior.functionapproximation.sparse.LinearVFA
 
lastStateOnPathIsNew(PrioritizedSearchNode) - Method in class burlap.behavior.singleagent.planning.deterministic.informed.astar.IDAStar
Returns true if the search node has not been visited previously on the current search path.
lastSyncedState - Variable in class burlap.behavior.stochasticgames.JointPolicy
The last state in which synchronized actions were queried.
lastSynchronizedJointAction - Variable in class burlap.behavior.stochasticgames.JointPolicy
The last synchronized joint action that was selected
lastTimeStepUpdate - Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
The last time step at which the plots' series data was updated
lastTimeStepUpdate - Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
The last time step at which the plots' series data was updated
lastWeights - Variable in class burlap.behavior.singleagent.learning.lspi.LSPI
The last weight values set from LSTDQ
launchThread() - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
Launches the automatic plot refresh thread.
launchThread() - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
Launches the automatic plot refresh thread.
leafNodeInit - Variable in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
This class computes the Bellman operator by using an instance of SparseSampling and setting its leaf node values to the current value function approximation.
LearnedModel - Interface in burlap.behavior.singleagent.learning.modellearning
An interface extension of FullModel for models that are learned from data.
LearningAgent - Interface in burlap.behavior.singleagent.learning
This is the standard interface for defining an agent that learns how to behave in the world through experience.
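
A hedged sketch of the standard loop for running any LearningAgent against an Environment; it relies only on the runLearningEpisode, resetEnvironment, and Episode methods found in BURLAP's core interfaces.

    import burlap.behavior.singleagent.Episode;
    import burlap.behavior.singleagent.learning.LearningAgent;
    import burlap.mdp.singleagent.environment.Environment;

    public class LearningLoopSketch {
        // Run the agent for a fixed number of learning episodes, resetting the
        // environment between episodes.
        public static void run(LearningAgent agent, Environment env, int episodes) {
            for (int i = 0; i < episodes; i++) {
                Episode e = agent.runLearningEpisode(env);
                System.out.println("episode " + i + ": " + e.maxTimeStep() + " steps");
                env.resetEnvironment();
            }
        }
    }
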
learningAgent - Variable in class burlap.behavior.stochasticgames.agents.interfacing.singleagent.LearningAgentToSGAgentInterface
The single agent LearningAgent that will be learning in this stochastic game as if the other players are part of the environment.
LearningAgentFactory - Interface in burlap.behavior.singleagent.learning
A factory interface for generating learning agents.
LearningAgentToSGAgentInterface - Class in burlap.behavior.stochasticgames.agents.interfacing.singleagent
A stochastic games SGAgent that takes as input a single agent LearningAgent to handle behavior.
LearningAgentToSGAgentInterface(SGDomain, LearningAgent, String, SGAgentType) - Constructor for class burlap.behavior.stochasticgames.agents.interfacing.singleagent.LearningAgentToSGAgentInterface
Initializes.
LearningAgentToSGAgentInterface.ActionReference - Class in burlap.behavior.stochasticgames.agents.interfacing.singleagent
A wrapper that maintains a reference to an Action or null.
LearningAgentToSGAgentInterface.StateReference - Class in burlap.behavior.stochasticgames.agents.interfacing.singleagent
A wrapper that maintains a reference to a State or null.
LearningAlgorithmExperimenter - Class in burlap.behavior.singleagent.auxiliary.performance
This class is used to simplify the comparison of different learning algorithms.
LearningAlgorithmExperimenter(Environment, int, int, LearningAgentFactory...) - Constructor for class burlap.behavior.singleagent.auxiliary.performance.LearningAlgorithmExperimenter
Initializes.
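
A hedged sketch of comparing learning agents with LearningAlgorithmExperimenter, modeled on BURLAP's performance-plotting tutorial. The plotting configuration numbers are illustrative, and the QLearning constructor arguments (domain, discount, hashing factory, Q-value initialization, learning rate) are assumptions to be checked against your version.

    import burlap.behavior.singleagent.auxiliary.performance.LearningAlgorithmExperimenter;
    import burlap.behavior.singleagent.auxiliary.performance.PerformanceMetric;
    import burlap.behavior.singleagent.auxiliary.performance.TrialMode;
    import burlap.behavior.singleagent.learning.LearningAgent;
    import burlap.behavior.singleagent.learning.LearningAgentFactory;
    import burlap.behavior.singleagent.learning.tdmethods.QLearning;
    import burlap.mdp.singleagent.SADomain;
    import burlap.mdp.singleagent.environment.Environment;
    import burlap.statehashing.HashableStateFactory;

    public class ExperimentSketch {
        public static void compare(final SADomain domain, Environment env,
                                   final HashableStateFactory hashingFactory) {
            LearningAgentFactory qFactory = new LearningAgentFactory() {
                public String getAgentName() { return "Q-learning"; }
                public LearningAgent generateAgent() {
                    return new QLearning(domain, 0.99, hashingFactory, 0., 0.1);
                }
            };
            // 10 trials of 100 episodes each, plotted with two metrics.
            LearningAlgorithmExperimenter exp =
                    new LearningAlgorithmExperimenter(env, 10, 100, qFactory);
            exp.setUpPlottingConfiguration(500, 250, 2, 1000,
                    TrialMode.MOST_RECENT_AND_AVERAGE,
                    PerformanceMetric.CUMULATIVE_STEPS_PER_EPISODE,
                    PerformanceMetric.AVERAGE_EPISODE_REWARD);
            exp.startExperiment();
        }
    }
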
learningPolicy - Variable in class burlap.behavior.singleagent.learning.lspi.LSPI
learningPolicy - Variable in class burlap.behavior.singleagent.learning.tdmethods.QLearning
The learning policy to use.
learningPolicy - Variable in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
The learning policy to use.
learningPolicy - Variable in class burlap.behavior.stochasticgames.agents.maql.MAQLFactory
 
learningPolicy - Variable in class burlap.behavior.stochasticgames.agents.maql.MultiAgentQLearning
The learning policy to be followed
learningRate - Variable in class burlap.behavior.learningrate.ConstantLR
 
LearningRate - Interface in burlap.behavior.learningrate
Provides an interface for different methods of learning rate decay schedules.
learningRate(int) - Method in class burlap.behavior.learningrate.SoftTimeInverseDecayLR
 
learningRate - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRL
The gradient ascent learning rate
learningRate - Variable in class burlap.behavior.singleagent.learning.actorcritic.actor.BoltzmannActor
The learning rate used to update action preferences in response to critiques.
learningRate - Variable in class burlap.behavior.singleagent.learning.actorcritic.critics.TDLambda
 
learningRate - Variable in class burlap.behavior.singleagent.learning.tdmethods.QLearning
The learning rate function used.
learningRate - Variable in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
A learning rate function to use
learningRate - Variable in class burlap.behavior.stochasticgames.agents.maql.MAQLFactory
 
learningRate - Variable in class burlap.behavior.stochasticgames.agents.maql.MultiAgentQLearning
The learning rate for updating Q-values
learningRate - Variable in class burlap.behavior.stochasticgames.agents.naiveq.history.SGQWActionHistoryFactory
The learning rate the Q-learning algorithm will use
learningRate - Variable in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQFactory
The learning rate the Q-learning algorithm will use
learningRate - Variable in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent
the learning rate
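
A hedged sketch of attaching a decaying learning rate schedule to QLearning. The ExponentialDecayLR constructor arguments (initial rate, decay factor) and the setLearningRateFunction mutator are assumptions based on the learning rate classes listed in this index; check them against your BURLAP version.

    import burlap.behavior.learningrate.ExponentialDecayLR;
    import burlap.behavior.singleagent.learning.tdmethods.QLearning;
    import burlap.mdp.singleagent.SADomain;
    import burlap.statehashing.HashableStateFactory;

    public class DecayingLRSketch {
        public static QLearning agentWithDecay(SADomain domain, HashableStateFactory hf) {
            QLearning q = new QLearning(domain, 0.99, hf, 0., 0.1);
            // Start at 0.1 and multiply by 0.999 each time the agent polls the rate.
            q.setLearningRateFunction(new ExponentialDecayLR(0.1, 0.999)); // assumed mutator
            return q;
        }
    }
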
left - Variable in class burlap.domain.singleagent.lunarlander.state.LLBlock
 
LimitedMemoryDFS - Class in burlap.behavior.singleagent.planning.deterministic.uninformed.dfs
This is a modified version of DFS that maintains a memory of the last n states it has previously expanded.
LimitedMemoryDFS(SADomain, StateConditionTest, HashableStateFactory, int, boolean, boolean, int) - Constructor for class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.LimitedMemoryDFS
Constructor for memory limited DFS
LinearDiffRFVInit - Class in burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit
A class for creating a DifferentiableRF and a DifferentiableVInit when the reward function and value function initialization are linear functions over some set of features.
LinearDiffRFVInit(DenseStateFeatures, DenseStateFeatures, int, int) - Constructor for class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
Initializes a linear reward function for a given feature vector of a given dimension and linear value function initialization for a given feature vector and set of dimensions.
LinearDiffRFVInit(DenseStateFeatures, DenseStateFeatures, int, int, boolean) - Constructor for class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
Initializes a linear reward function for a given feature vector of a given dimension and linear value function initialization for a given feature vector and set of dimensions.
LinearStateActionDifferentiableRF - Class in burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs
A class for defining a state-action linear DifferentiableRF.
LinearStateActionDifferentiableRF(DenseStateFeatures, int, Action...) - Constructor for class burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs.LinearStateActionDifferentiableRF
Initializes.
LinearStateDifferentiableRF - Class in burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs
A class for defining a linear state DifferentiableRF.
LinearStateDifferentiableRF(DenseStateFeatures, int) - Constructor for class burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs.LinearStateDifferentiableRF
Initializes.
LinearStateDifferentiableRF(DenseStateFeatures, int, boolean) - Constructor for class burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs.LinearStateDifferentiableRF
Initializes.
LinearStateDiffVF - Class in burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit
A class for defining a (differentiable) linear function over state features for value function initialization.
LinearStateDiffVF(DenseStateFeatures, int) - Constructor for class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearStateDiffVF
Initializes with the state feature vector generator over which the linear function is defined and its dimensionality.
LinearVFA - Class in burlap.behavior.functionapproximation.sparse
This class is used for general purpose linear VFA.
LinearVFA(SparseStateFeatures) - Constructor for class burlap.behavior.functionapproximation.sparse.LinearVFA
Initializes with a feature database; the default weight value will be zero
LinearVFA(SparseStateFeatures, double) - Constructor for class burlap.behavior.functionapproximation.sparse.LinearVFA
Initializes
ListActionsCommand - Class in burlap.shell.command.env
A ShellCommand for listing the set of applicable actions given the current environment observation.
ListActionsCommand() - Constructor for class burlap.shell.command.env.ListActionsCommand
 
listen - Variable in class burlap.domain.singleagent.pomdp.tiger.TigerModel
The reward for listening
listenAccuracy - Variable in class burlap.domain.singleagent.pomdp.tiger.TigerDomain
The probability of hearing accurately where the tiger is
listenAccuracy - Variable in class burlap.domain.singleagent.pomdp.tiger.TigerObservations
 
listenReward - Variable in class burlap.domain.singleagent.pomdp.tiger.TigerDomain
The reward for listening
ListManualAgents() - Constructor for class burlap.shell.command.world.ManualAgentsCommands.ListManualAgents
 
ListPropFunctions - Class in burlap.shell.command.env
A ShellCommand for listing the true (or false) GroundedProp instances given the current environment observation.
ListPropFunctions() - Constructor for class burlap.shell.command.env.ListPropFunctions
 
LivePollCommand() - Constructor for class burlap.shell.visual.VisualExplorer.LivePollCommand
 
livePollingTimer - Variable in class burlap.shell.visual.VisualExplorer
 
LLAgent - Class in burlap.domain.singleagent.lunarlander.state
 
LLAgent() - Constructor for class burlap.domain.singleagent.lunarlander.state.LLAgent
 
LLAgent(double, double, double) - Constructor for class burlap.domain.singleagent.lunarlander.state.LLAgent
 
LLAgent(double, double, double, double, double) - Constructor for class burlap.domain.singleagent.lunarlander.state.LLAgent
 
LLBlock - Class in burlap.domain.singleagent.lunarlander.state
 
LLBlock() - Constructor for class burlap.domain.singleagent.lunarlander.state.LLBlock
 
LLBlock(double, double, double, double, String) - Constructor for class burlap.domain.singleagent.lunarlander.state.LLBlock
 
LLBlock.LLObstacle - Class in burlap.domain.singleagent.lunarlander.state
 
LLBlock.LLPad - Class in burlap.domain.singleagent.lunarlander.state
 
lld - Variable in class burlap.domain.singleagent.lunarlander.LLVisualizer.AgentPainter
 
lld - Variable in class burlap.domain.singleagent.lunarlander.LLVisualizer.ObstaclePainter
 
lld - Variable in class burlap.domain.singleagent.lunarlander.LLVisualizer.PadPainter
 
LLObstacle() - Constructor for class burlap.domain.singleagent.lunarlander.state.LLBlock.LLObstacle
 
LLObstacle(double, double, double, double, String) - Constructor for class burlap.domain.singleagent.lunarlander.state.LLBlock.LLObstacle
 
LLPad() - Constructor for class burlap.domain.singleagent.lunarlander.state.LLBlock.LLPad
 
LLPad(double, double, double, double, String) - Constructor for class burlap.domain.singleagent.lunarlander.state.LLBlock.LLPad
 
LLPhysicsParams() - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams
 
LLState - Class in burlap.domain.singleagent.lunarlander.state
 
LLState() - Constructor for class burlap.domain.singleagent.lunarlander.state.LLState
 
LLState(LLAgent, LLBlock.LLPad, List<LLBlock.LLObstacle>) - Constructor for class burlap.domain.singleagent.lunarlander.state.LLState
 
LLState(LLAgent, LLBlock.LLPad, LLBlock.LLObstacle...) - Constructor for class burlap.domain.singleagent.lunarlander.state.LLState
 
LLVisualizer - Class in burlap.domain.singleagent.lunarlander
Class for creating a 2D visualizer for a LunarLanderDomain Domain.
LLVisualizer.AgentPainter - Class in burlap.domain.singleagent.lunarlander
Object painter for a lunar lander agent class.
LLVisualizer.ObstaclePainter - Class in burlap.domain.singleagent.lunarlander
Object painter for obstacles of a lunar lander domain, rendered as black rectangles.
LLVisualizer.PadPainter - Class in burlap.domain.singleagent.lunarlander
Object painter for landing pads of a lunar lander domain, rendered as blue rectangles.
load() - Method in class burlap.domain.singleagent.rlglue.RLGlueEnvironment
Loads this environment into RLGlue
load(String, String) - Method in class burlap.domain.singleagent.rlglue.RLGlueEnvironment
Loads this environment into RLGlue with the specified host address and port
loadAgent() - Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgent
Loads this RLGlue AgentInterface into RLGlue and runs its event loop in a separate thread.
loadAgent(String, String) - Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgent
Loads this RLGlue AgentInterface into RLGlue using the specified host address and port and runs its event loop in a separate thread.
loadQTable(String) - Method in class burlap.behavior.singleagent.learning.tdmethods.QLearning
Loads the q-function table located on disk at the specified path.
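
A hedged fragment showing how a learned Q-table might be persisted and restored; writeQTable is assumed to be the saving counterpart of loadQTable, and trainedAgent, domain, and hashingFactory are assumed to exist elsewhere.

    // Save the learned Q-values after training (writeQTable is an assumed method).
    trainedAgent.writeQTable("output/qvalues.yaml");

    // Later, restore them into a fresh agent before further learning or evaluation.
    QLearning restored = new QLearning(domain, 0.99, hashingFactory, 0., 0.1);
    restored.loadQTable("output/qvalues.yaml");
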
loadValueTable(String) - Method in class burlap.behavior.singleagent.planning.stochastic.DynamicProgramming
Loads the value function table located on disk at the specified path.
locationInd(String) - Method in class burlap.domain.singleagent.gridworld.state.GridWorldState
 
LocationPainter(int[][]) - Constructor for class burlap.domain.singleagent.gridworld.GridWorldVisualizer.LocationPainter
Initializes painter
locations - Variable in class burlap.domain.singleagent.gridworld.state.GridWorldState
 
logbase(double, double) - Static method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
Returns the logarithm of the given value in the given base.
logLikelihood() - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRL
Computes and returns the log-likelihood of all expert trajectories under the current reward function parameters.
logLikelihoodGradient() - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRL
Computes and returns the gradient of the log-likelihood of all trajectories
logLikelihoodOfTrajectory(Episode, double) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRL
Computes and returns the log-likelihood of the given trajectory under the current reward function parameters and weights it by the given weight.
logPolicyGrad(State, Action) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRL
Computes and returns the gradient of the Boltzmann policy for the given state and action.
logSum(double[], double, double) - Static method in class burlap.behavior.singleagent.learnfromdemo.mlirl.support.BoltzmannPolicyGradient
Computes the log sum of exponentiated Q-values (scaled by beta)
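
For illustration, a generic, numerically stable log-sum-exp over beta-scaled Q-values, i.e. log of the sum over actions of exp(beta * Q(s,a)); this is a sketch of the technique, not necessarily BURLAP's exact implementation or signature.

    // Stable computation of log(sum_i exp(beta * q[i])) by factoring out the max term.
    static double logSumExp(double[] qs, double beta) {
        double max = Double.NEGATIVE_INFINITY;
        for (double q : qs) max = Math.max(max, beta * q);
        double sum = 0.;
        for (double q : qs) sum += Math.exp(beta * q - max);
        return max + Math.log(sum);
    }
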
lostReward - Variable in class burlap.domain.singleagent.frostbite.FrostbiteRF
 
lower - Variable in class burlap.mdp.core.state.vardomain.VariableDomain
The lower value of the domain
lowerBoundV - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
The lower bound value function
lowerVal - Variable in class burlap.behavior.singleagent.auxiliary.gridset.VariableGridSpec
The lower value of the variable on the grid
lowerVInit - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
The lower-bound value function initialization
lsActions - Variable in class burlap.shell.command.world.ManualAgentsCommands
 
lsAgents - Variable in class burlap.shell.command.world.ManualAgentsCommands
 
LSManualAgentActionsCommands() - Constructor for class burlap.shell.command.world.ManualAgentsCommands.LSManualAgentActionsCommands
 
LSPI - Class in burlap.behavior.singleagent.learning.lspi
This class implements the optimized version of least-squares policy iteration [1] (runs in time quadratic in the number of state features).
LSPI(SADomain, double, DenseStateActionFeatures) - Constructor for class burlap.behavior.singleagent.learning.lspi.LSPI
Initializes.
LSPI(SADomain, double, DenseStateActionFeatures, SARSData) - Constructor for class burlap.behavior.singleagent.learning.lspi.LSPI
Initializes.
LSPI.SSFeatures - Class in burlap.behavior.singleagent.learning.lspi
Pair of the state-action features and the next state-action features.
LSTDQ() - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
Runs LSTDQ on this object's current SARSData dataset.
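
A hedged sketch of running LSPI online: since LSPI implements LearningAgent, it can gather SARS data by acting in an Environment. The features argument is whatever DenseStateActionFeatures you construct elsewhere, and how often LSTDQ is re-run internally is version-dependent.

    import burlap.behavior.functionapproximation.dense.DenseStateActionFeatures;
    import burlap.behavior.singleagent.learning.lspi.LSPI;
    import burlap.mdp.singleagent.SADomain;
    import burlap.mdp.singleagent.environment.Environment;

    public class LSPISketch {
        public static LSPI runOnline(SADomain domain, DenseStateActionFeatures features,
                                     Environment env, int episodes) {
            LSPI lspi = new LSPI(domain, 0.99, features);
            for (int i = 0; i < episodes; i++) {
                lspi.runLearningEpisode(env); // act in the environment, collecting SARS data
                env.resetEnvironment();
            }
            return lspi;
        }
    }
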
LunarLanderDomain - Class in burlap.domain.singleagent.lunarlander
This domain generator is used for producing a Lunar Lander domain, based on the classic Lunar Lander Atari game (1979).
LunarLanderDomain() - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderDomain
Initializes with no thrust actions set.
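
A hedged sketch of generating the lunar lander domain and an initial state from the constructors listed in this index; the generateDomain() return type and the LLAgent/LLPad argument ordering are assumptions drawn from BURLAP's lunar lander example code.

    import burlap.domain.singleagent.lunarlander.LunarLanderDomain;
    import burlap.domain.singleagent.lunarlander.state.LLAgent;
    import burlap.domain.singleagent.lunarlander.state.LLBlock;
    import burlap.domain.singleagent.lunarlander.state.LLState;
    import burlap.mdp.singleagent.oo.OOSADomain;

    public class LunarLanderSketch {
        public static LLState initialState() {
            LunarLanderDomain lld = new LunarLanderDomain();
            OOSADomain domain = lld.generateDomain(); // assumed OOSADomain return type;
                                                      // would be passed to a planner or environment
            // Agent at (x=5, y=0), rotation 0; pad spanning x in [75, 95], y in [0, 10]
            // (argument ordering assumed).
            return new LLState(
                    new LLAgent(5, 0, 0),
                    new LLBlock.LLPad(75, 95, 0, 10, "pad"));
        }
    }
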
LunarLanderDomain.LLPhysicsParams - Class in burlap.domain.singleagent.lunarlander
A class for holding the physics parameters
LunarLanderDomain.OnPadPF - Class in burlap.domain.singleagent.lunarlander
A propositional function that evaluates to true if the agent has landed on the top surface of a landing pad.
LunarLanderDomain.ThrustType - Class in burlap.domain.singleagent.lunarlander
 
LunarLanderDomain.ThrustType.ThrustAction - Class in burlap.domain.singleagent.lunarlander
 
LunarLanderDomain.TouchGroundPF - Class in burlap.domain.singleagent.lunarlander
A propositional function that evaluates to true if the agent is touching the ground.
LunarLanderDomain.TouchPadPF - Class in burlap.domain.singleagent.lunarlander
A propositional function that evaluates to true if the agent is touching any part of the landing pad, including its side boundaries.
LunarLanderDomain.TouchSurfacePF - Class in burlap.domain.singleagent.lunarlander
A propositional function that evaluates to true if the agent is touching any part of an obstacle, including its side boundaries.
LunarLanderModel - Class in burlap.domain.singleagent.lunarlander
 
LunarLanderModel(LunarLanderDomain.LLPhysicsParams) - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderModel
 
LunarLanderRF - Class in burlap.domain.singleagent.lunarlander
A reward function for the LunarLanderDomain.
LunarLanderRF(OODomain) - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderRF
Initializes with default reward values (move through air = -1; collision = -100; land on pad = +1000)
LunarLanderRF(OODomain, double, double, double) - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderRF
Initializes with custom reward condition values.
LunarLanderTF - Class in burlap.domain.singleagent.lunarlander
LunarLanderTF(OODomain) - Constructor for class burlap.domain.singleagent.lunarlander.LunarLanderTF
Initializes.