
R

r - Variable in class burlap.behavior.singleagent.learning.lspi.SARSData.SARS
The resulting reward received
r - Variable in class burlap.mdp.singleagent.environment.EnvironmentOutcome
The reward received
rand - Variable in class burlap.behavior.functionapproximation.sparse.tilecoding.TileCodingFeatures
A random object for jittering the tile alignments.
rand - Variable in class burlap.behavior.policy.EpsilonGreedy
 
rand - Variable in class burlap.behavior.policy.GreedyQPolicy
 
rand - Variable in class burlap.behavior.policy.RandomPolicy
The random factory used to randomly select actions.
rand - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.MultipleIntentionsMLIRL
A random object used for initializing each cluster's RF parameters randomly.
rand - Variable in class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.DFS
A random object for random walks
rand - Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
 
rand - Variable in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.EquilibriumPlayingSGAgent
Random generator for selecting actions according to the solved solution
rand - Variable in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EGreedyJointPolicy
A random object used for sampling
rand - Variable in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EGreedyMaxWellfare
A random object used for sampling
rand - Variable in class burlap.datastructures.BoltzmannDistribution
The random object to use for sampling.
rand - Variable in class burlap.datastructures.StochasticTree
A random object used for sampling.
rand - Variable in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain.GraphActionType
Random object for sampling the stochastic graph transitions
rand - Variable in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain.GraphStateModel
 
rand - Variable in class burlap.domain.singleagent.gridworld.GridWorldDomain.GridWorldModel
 
RandomFactory - Class in burlap.debugtools
Random factory that allows you to logically group various random generators.
RandomFactory() - Constructor for class burlap.debugtools.RandomFactory
Initializes the map structures
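A minimal usage sketch (the static getMapped(int) lookup is an assumption made for illustration; callers that share an id receive the same generator):

    import burlap.debugtools.RandomFactory;
    import java.util.Random;

    public class RandomFactoryExample {
        public static void main(String[] args) {
            // getMapped(int) is assumed to be the id-based lookup; all callers that ask for
            // id 0 share one generator, which keeps sampling reproducible when it is seeded.
            Random r = RandomFactory.getMapped(0);
            System.out.println(r.nextInt(10));
        }
    }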
randomizeParameters(DifferentiableRF) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MultipleIntentionsMLIRL
Randomizes the parameters for a given DifferentiableRF.
randomizeParameters(double[]) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MultipleIntentionsMLIRL
Randomizes parameters in the given vector between -1 and 1.
RandomPolicy - Class in burlap.behavior.policy
A uniform random policy for single agent domains.
RandomPolicy(SADomain) - Constructor for class burlap.behavior.policy.RandomPolicy
Initializes by copying all of the primitive action references defined for the domain into an internal action list for this policy.
RandomPolicy(List<ActionType>) - Constructor for class burlap.behavior.policy.RandomPolicy
Initializes by copying all of the action references defined in the provided list into an internal action list for this policy.
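A minimal sketch of constructing and querying this policy; the GridWorldDomain setup and the Policy.action(State) call are assumptions made for illustration:

    import burlap.behavior.policy.RandomPolicy;
    import burlap.domain.singleagent.gridworld.GridWorldDomain;
    import burlap.domain.singleagent.gridworld.state.GridAgent;
    import burlap.domain.singleagent.gridworld.state.GridWorldState;
    import burlap.mdp.core.action.Action;
    import burlap.mdp.singleagent.SADomain;

    public class RandomPolicyExample {
        public static void main(String[] args) {
            // hypothetical 11x11 grid world, used only to supply a domain and a state
            SADomain domain = new GridWorldDomain(11, 11).generateDomain();
            GridWorldState s = new GridWorldState(new GridAgent(0, 0));
            RandomPolicy policy = new RandomPolicy(domain);
            Action a = policy.action(s); // samples uniformly from the copied action list
            System.out.println(a.actionName());
        }
    }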
RandomSGAgent - Class in burlap.behavior.stochasticgames.agents
Stochastic games agent that chooses actions uniformly randomly.
RandomSGAgent() - Constructor for class burlap.behavior.stochasticgames.agents.RandomSGAgent
 
randomSideStateGenerator() - Static method in class burlap.domain.singleagent.pomdp.tiger.TigerDomain
Returns a StateGenerator that 50% of the time generates a hidden tiger state with the tiger on the left side, and 50% of the time on the right.
randomSideStateGenerator(double) - Static method in class burlap.domain.singleagent.pomdp.tiger.TigerDomain
Returns a StateGenerator that generates a hidden tiger state with the tiger on the left side with the specified probability, and on the right side otherwise.
RandomStartStateGenerator - Class in burlap.mdp.auxiliary.common
This class will return a random state from a set of states that are reachable from a source seed state.
RandomStartStateGenerator(SADomain, State) - Constructor for class burlap.mdp.auxiliary.common.RandomStartStateGenerator
Will discover the reachable states from which to randomly select.
RandomStartStateGenerator(SADomain, State, HashableStateFactory) - Constructor for class burlap.mdp.auxiliary.common.RandomStartStateGenerator
Will discover reachable states from which to randomly select.
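A sketch of pairing this generator with a SimulatedEnvironment so that every reset draws a fresh random start state; the StateGenerator-accepting constructor and the grid world seed state are assumptions:

    import burlap.domain.singleagent.gridworld.GridWorldDomain;
    import burlap.domain.singleagent.gridworld.state.GridAgent;
    import burlap.domain.singleagent.gridworld.state.GridWorldState;
    import burlap.mdp.auxiliary.common.RandomStartStateGenerator;
    import burlap.mdp.singleagent.SADomain;
    import burlap.mdp.singleagent.environment.SimulatedEnvironment;

    public class RandomStartExample {
        public static void main(String[] args) {
            SADomain domain = new GridWorldDomain(11, 11).generateDomain();
            // hypothetical seed state; states reachable from it become the candidate starts
            GridWorldState seed = new GridWorldState(new GridAgent(0, 0));
            RandomStartStateGenerator gen = new RandomStartStateGenerator(domain, seed);
            // assumes the SimulatedEnvironment constructor that accepts a StateGenerator
            SimulatedEnvironment env = new SimulatedEnvironment(domain, gen);
            env.resetEnvironment(); // each reset samples a new reachable start state
        }
    }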
RBF - Class in burlap.behavior.functionapproximation.dense.rbf
A class for defining radial basis functions for states represented with a double array.
RBF(double[], DistanceMetric) - Constructor for class burlap.behavior.functionapproximation.dense.rbf.RBF
Initializes.
RBFFeatures - Class in burlap.behavior.functionapproximation.dense.rbf
A feature database of RBF units that can be used for linear value function approximation.
RBFFeatures(DenseStateFeatures, boolean) - Constructor for class burlap.behavior.functionapproximation.dense.rbf.RBFFeatures
Initializes with an empty list of RBF units.
rbfs - Variable in class burlap.behavior.functionapproximation.dense.rbf.RBFFeatures
The list of RBF units in this database
read(String) - Static method in class burlap.behavior.singleagent.Episode
Reads an episode that was written to a file and turns it into an Episode object.
read(String) - Static method in class burlap.behavior.stochasticgames.GameEpisode
Reads a game that was written to a file and turns it into a GameEpisode object.
read() - Method in class burlap.shell.visual.TextAreaStreams.TextIn
 
readEpisodes(String) - Static method in class burlap.behavior.singleagent.Episode
Takes a path to a directory containing .episode files and reads them all into a List of Episode objects.
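A minimal sketch of the read methods above; the file and directory paths are hypothetical and would have been produced by an earlier recording run:

    import java.util.List;

    import burlap.behavior.singleagent.Episode;

    public class EpisodeIOExample {
        public static void main(String[] args) {
            // hypothetical path to a single serialized episode
            Episode one = Episode.read("episodes/trial0.episode");
            System.out.println(one.numTimeSteps() + " time steps");

            // hypothetical directory containing .episode files
            List<Episode> all = Episode.readEpisodes("episodes");
            System.out.println(all.size() + " episodes loaded");
        }
    }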
recCommand - Variable in class burlap.shell.command.env.EpisodeRecordingCommands
 
receiveInput(String) - Method in class burlap.shell.visual.TextAreaStreams
Adds data to the InputStream
recomputeReachableStates() - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableVI
Calling this method will force the valueFunction to recompute the reachable states when the DifferentiableVI.planFromState(State) method is called next.
recomputeReachableStates() - Method in class burlap.behavior.singleagent.planning.stochastic.policyiteration.PolicyIteration
Calling this method will force the valueFunction to recompute the reachable states when the PolicyIteration.planFromState(State) method is called next.
recomputeReachableStates() - Method in class burlap.behavior.singleagent.planning.stochastic.valueiteration.ValueIteration
Calling this method will force the valueFunction to recompute the reachable states when the ValueIteration.planFromState(State) method is called next.
RecordCommand() - Constructor for class burlap.shell.command.env.EpisodeRecordingCommands.RecordCommand
 
recordedLast - Variable in class burlap.shell.command.env.EpisodeRecordingCommands
 
recording - Variable in class burlap.shell.command.env.EpisodeRecordingCommands
 
referencesSuccessor(UCTStateNode) - Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTActionNode
Returns whether this action node has observed a given successor state node in the past
ReflectiveHashableStateFactory - Class in burlap.statehashing
A HashableState factory to use when the source State objects themselves already implement the HashableState interface.
ReflectiveHashableStateFactory() - Constructor for class burlap.statehashing.ReflectiveHashableStateFactory
 
refreshPriority(T) - Method in class burlap.datastructures.HashIndexedHeap
Calling this method indicates that the priority of the object passed to the method has been modified and that this heap needs to reorder its elements as a result
regCommand - Variable in class burlap.shell.command.world.ManualAgentsCommands
 
RegisterAgentCommand() - Constructor for class burlap.shell.command.world.ManualAgentsCommands.RegisterAgentCommand
 
remove(int) - Method in class burlap.behavior.singleagent.learning.lspi.SARSData
Removes the SARSData.SARS tuple at the ith index
remove(T) - Method in class burlap.datastructures.StochasticTree
Removes the given element from the tree.
removeAction(String) - Method in class burlap.behavior.policy.RandomPolicy
Removes an action from consideration.
removeAlias(String) - Method in class burlap.shell.BurlapShell
 
removeAttributeMasks(Object...) - Method in class burlap.statehashing.masked.MaskedConfig
Removes variable masks.
removeAttributeMasks(Object...) - Method in class burlap.statehashing.masked.MaskedHashableStateFactory
Removes variable masks.
removeCommand(String) - Method in class burlap.shell.BurlapShell
 
removeEdge(int, int, int) - Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain
Removes a given edge from the transition dynamics.
removeHelper(StochasticTree<T>.STNode) - Method in class burlap.datastructures.StochasticTree
A recursive method for removing a node
removeObject(String) - Method in class burlap.domain.singleagent.blockdude.state.BlockDudeState
 
removeObject(String) - Method in class burlap.domain.singleagent.blocksworld.BlocksWorldState
 
removeObject(String) - Method in class burlap.domain.singleagent.frostbite.state.FrostbiteState
 
removeObject(String) - Method in class burlap.domain.singleagent.gridworld.state.GridWorldState
 
removeObject(String) - Method in class burlap.domain.singleagent.lunarlander.state.LLState
 
removeObject(String) - Method in class burlap.mdp.core.oo.state.generic.GenericOOState
 
removeObject(String) - Method in interface burlap.mdp.core.oo.state.MutableOOState
Removes the object instance with the name oname from this state.
removeObjectClassMasks(String...) - Method in class burlap.statehashing.masked.MaskedConfig
Removes masks for OO-MDP object classes
removeObjectClassMasks(String...) - Method in class burlap.statehashing.masked.MaskedHashableStateFactory
Removes masks for OO-MDP object classes
removeObservers(EnvironmentObserver...) - Method in class burlap.mdp.singleagent.environment.extensions.EnvironmentServer
Removes one or more EnvironmentObservers from this server.
removeObservers(EnvironmentObserver...) - Method in interface burlap.mdp.singleagent.environment.extensions.EnvironmentServerInterface
Removes one or more EnvironmentObservers from this server.
removeObservers(EnvironmentObserver...) - Method in class burlap.mdp.singleagent.environment.SimulatedEnvironment
 
removeRenderLayer(int) - Method in class burlap.visualizer.MultiLayerRenderer
Removes the render layer at the specified position.
RemoveStateObjectCommand - Class in burlap.shell.command.env
A ShellCommand for removing an OO-MDP object from the current Environment State.
RemoveStateObjectCommand() - Constructor for class burlap.shell.command.env.RemoveStateObjectCommand
 
RemoveStateObjectSGCommand - Class in burlap.shell.command.world
A ShellCommand for removing an OO-MDP object from the current World State.
RemoveStateObjectSGCommand() - Constructor for class burlap.shell.command.world.RemoveStateObjectSGCommand
 
removeTerminals(int...) - Method in class burlap.domain.singleagent.graphdefined.GraphTF
Removes the given nodes from the set of terminal states
removeWorldObserver(WorldObserver) - Method in class burlap.mdp.stochasticgames.world.World
Removes the specified world observer from this world
removeZeroRows(double[][]) - Static method in class burlap.behavior.stochasticgames.solvers.CorrelatedEquilibriumSolver
Takes an input 2D double matrix and returns a new matrix with all of the all-zero rows removed.
renameObject(String, String) - Method in class burlap.domain.singleagent.blockdude.state.BlockDudeState
 
renameObject(String, String) - Method in class burlap.domain.singleagent.blocksworld.BlocksWorldState
 
renameObject(String, String) - Method in class burlap.domain.singleagent.frostbite.state.FrostbiteState
 
renameObject(String, String) - Method in class burlap.domain.singleagent.gridworld.state.GridWorldState
 
renameObject(String, String) - Method in class burlap.domain.singleagent.lunarlander.state.LLState
 
renameObject(String, String) - Method in class burlap.mdp.core.oo.state.generic.GenericOOState
 
renameObject(String, String) - Method in interface burlap.mdp.core.oo.state.MutableOOState
Renames the identifier for object instance o in this state to newName.
renameObjects(GridWorldState) - Method in class burlap.testing.TestHashing
 
render(Graphics2D, float, float) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.PolicyRenderLayer
 
render(Graphics2D, float, float) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionRenderLayer
 
render(Graphics2D, float, float) - Method in interface burlap.visualizer.RenderLayer
 
render(Graphics2D, float, float) - Method in class burlap.visualizer.StateActionRenderLayer
 
render(Graphics2D, float, float) - Method in class burlap.visualizer.StateRenderLayer
 
renderAction - Variable in class burlap.visualizer.StateActionRenderLayer
The current Action to render
RenderLayer - Interface in burlap.visualizer
A RenderLayer is a 2 dimensional layer that paints to a provided 2D graphics context.
renderLayers - Variable in class burlap.visualizer.MultiLayerRenderer
The layers that will be rendered in order from index 0 to n
renderState - Variable in class burlap.visualizer.StateActionRenderLayer
The current State to render
renderStateAction(Graphics2D, State, Action, float, float) - Method in class burlap.visualizer.StateActionRenderLayer
Method to be implemented by subclasses that will render the input state-action to the given graphics context.
renderStyle - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.PolicyGlyphPainter2D
The render style to use
renderValueString - Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.StateValuePainter2D
Whether the numeric string for the value of the state should be rendered in its cell or not.
repaintOnActionInitiation - Variable in class burlap.mdp.singleagent.common.VisualActionObserver
If true, then a state-action pair is rendered on calls to VisualActionObserver.observeEnvironmentActionInitiation(State, Action) so long as the input Visualizer has a set StateActionRenderLayer.
repaintStateOnEnvironmentInteraction - Variable in class burlap.mdp.singleagent.common.VisualActionObserver
replayGame(GameEpisode) - Method in class burlap.mdp.stochasticgames.common.VisualWorldObserver
Causes the visualizer to be replayed for the given GameEpisode object.
request - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRL
The MLIRL request defining the IRL problem.
request - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.MultipleIntentionsMLIRL
The source problem request defining the problem to be solved.
requireMarkov - Variable in class burlap.behavior.singleagent.options.model.BFSMarkovOptionModel
 
rerunVI() - Method in class burlap.behavior.singleagent.learning.modellearning.modelplanners.VIModelLearningPlanner
Reruns VI on the new updated model.
rescale(double, double) - Method in interface burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.ColorBlend
Tells this object the minimum value and the maximum value it can receive.
rescale(double, double) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.LandmarkColorBlendInterpolation
 
rescale(double, double) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.StateValuePainter2D
 
rescale(double, double) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.StateValuePainter
Used to tell this painter that it should render state values so that the minimum possible value is lowerValue and the maximum is upperValue.
rescaleRect(float, float, float, float, float) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.PolicyGlyphPainter2D
Takes in a rectangle specification and scales it equally along each direction by a scale factor.
reserved - Variable in class burlap.shell.BurlapShell
 
resetAvgs() - Method in class burlap.debugtools.MyTimer
Resets to zero the average and total time recorded over all start-stop calls.
resetData() - Method in class burlap.behavior.singleagent.learning.actorcritic.actor.BoltzmannActor
 
resetData() - Method in class burlap.behavior.singleagent.learning.actorcritic.Actor
Used to reset any data that was created/modified during learning so that learning can begin anew.
resetData() - Method in interface burlap.behavior.singleagent.learning.actorcritic.Critic
Used to reset any data that was created/modified during learning so that learning can begin anew.
resetData() - Method in class burlap.behavior.singleagent.learning.actorcritic.critics.TDLambda
 
resetData() - Method in class burlap.behavior.singleagent.learning.actorcritic.critics.TimeIndexedTDLambda
 
resetDecay() - Method in class burlap.behavior.functionapproximation.dense.fourier.FourierBasisLearningRateWrapper
 
resetDecay() - Method in class burlap.behavior.learningrate.ConstantLR
 
resetDecay() - Method in class burlap.behavior.learningrate.ExponentialDecayLR
 
resetDecay() - Method in interface burlap.behavior.learningrate.LearningRate
Causes any learning rate decay to reset to where it started.
resetDecay() - Method in class burlap.behavior.learningrate.SoftTimeInverseDecayLR
 
ResetEnvCommand - Class in burlap.shell.command.env
A ShellCommand for resetting the Environment.
ResetEnvCommand() - Constructor for class burlap.shell.command.env.ResetEnvCommand
 
resetEnvironment() - Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgent
 
resetEnvironment() - Method in class burlap.behavior.stochasticgames.agents.interfacing.singleagent.LearningAgentToSGAgentInterface
 
resetEnvironment() - Method in interface burlap.mdp.singleagent.environment.Environment
Resets this environment to some initial state, if the functionality exists.
resetEnvironment() - Method in class burlap.mdp.singleagent.environment.extensions.EnvironmentServer
 
resetEnvironment() - Method in class burlap.mdp.singleagent.environment.SimulatedEnvironment
 
resetEnvironment() - Method in class burlap.mdp.singleagent.pomdp.SimulatedPOEnvironment
 
resetMatchSelections() - Method in class burlap.mdp.stochasticgames.tournament.common.AllPairWiseSameTypeMS
 
resetMatchSelections() - Method in interface burlap.mdp.stochasticgames.tournament.MatchSelector
Resets the match selections and causes the MatchSelector.getNextMatch() method to start from the beginning of matches
resetModel() - Method in interface burlap.behavior.singleagent.learning.modellearning.LearnedModel
Resets the model data so that learning can begin anew.
resetModel() - Method in class burlap.behavior.singleagent.learning.modellearning.models.TabularModel
 
resetModel() - Method in class burlap.behavior.singleagent.learning.modellearning.rmax.RMaxModel
 
resetParameters() - Method in class burlap.behavior.functionapproximation.dense.DenseLinearVFA
 
resetParameters() - Method in class burlap.behavior.functionapproximation.dense.DenseStateActionLinearVFA
 
resetParameters() - Method in interface burlap.behavior.functionapproximation.ParametricFunction
Resets the parameters of this function to default values.
resetParameters() - Method in class burlap.behavior.functionapproximation.sparse.LinearVFA
 
resetParameters() - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs.LinearStateActionDifferentiableRF
 
resetParameters() - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs.LinearStateDifferentiableRF
 
resetParameters() - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.DiffVFRF
 
resetParameters() - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
 
resetParameters() - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearStateDiffVF
 
resetParameters() - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.VanillaDiffVinit
 
resetSolver() - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableDP
 
resetSolver() - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling
 
resetSolver() - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableVI
 
resetSolver() - Method in class burlap.behavior.singleagent.learning.actorcritic.ActorCritic
 
resetSolver() - Method in class burlap.behavior.singleagent.learning.actorcritic.critics.TDLambda
 
resetSolver() - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
 
resetSolver() - Method in class burlap.behavior.singleagent.learning.modellearning.artdp.ARTDP
 
resetSolver() - Method in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax
 
resetSolver() - Method in class burlap.behavior.singleagent.learning.tdmethods.QLearning
 
resetSolver() - Method in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
 
resetSolver() - Method in class burlap.behavior.singleagent.MDPSolver
 
resetSolver() - Method in interface burlap.behavior.singleagent.MDPSolverInterface
This method resets all solver results so that a solver can be restarted fresh as if it had never solved the MDP.
resetSolver() - Method in class burlap.behavior.singleagent.planning.deterministic.DeterministicPlanner
 
resetSolver() - Method in class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.DFS
 
resetSolver() - Method in class burlap.behavior.singleagent.planning.stochastic.DynamicProgramming
 
resetSolver() - Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
 
resetSolver() - Method in class burlap.behavior.singleagent.planning.stochastic.policyiteration.PolicyIteration
 
resetSolver() - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
 
resetSolver() - Method in class burlap.behavior.singleagent.planning.stochastic.valueiteration.ValueIteration
 
resetSolver() - Method in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
 
resetSolver() - Method in class burlap.behavior.singleagent.pomdp.qmdp.QMDP
 
resetSolver() - Method in class burlap.behavior.singleagent.pomdp.wrappedmdpalgs.BeliefSparseSampling
 
resetTournamentReward() - Method in class burlap.mdp.stochasticgames.tournament.Tournament
Reset the cumulative reward received by each agent in this tournament.
resolveCollisions(List<GridGameStandardMechanics.Location2>, List<GridGameStandardMechanics.Location2>) - Method in class burlap.domain.stochasticgames.gridgame.GridGameStandardMechanics
Resolves collisions that occur when two or more agents try to enter the same cell, in which case only one agent will make it into the position and the rest will stay in place
resolveCommand(String) - Method in class burlap.shell.BurlapShell
 
resolvePositionSwaps(List<GridGameStandardMechanics.Location2>, List<GridGameStandardMechanics.Location2>) - Method in class burlap.domain.stochasticgames.gridgame.GridGameStandardMechanics
Returns the position of each agent after accounting for collisions that are a result of agents trying to move into each other's previous locations.
responseFor(double[]) - Method in class burlap.behavior.functionapproximation.dense.rbf.functions.GaussianRBF
 
responseFor(double[]) - Method in class burlap.behavior.functionapproximation.dense.rbf.RBF
Returns the RBF response from its center state to the query input state.
returnCode - Variable in class burlap.shell.ShellObserver.ShellCommandEvent
The return code of the command.
reverseEnumerate - Variable in class burlap.behavior.singleagent.auxiliary.StateEnumerator
The reverse enumeration id to state map
reward(int) - Method in class burlap.behavior.singleagent.Episode
Returns the reward received at timestep t.
reward(State, Action, State) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs.LinearStateActionDifferentiableRF
 
reward(State, Action, State) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs.LinearStateDifferentiableRF
 
reward(State, Action, State) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.DiffVFRF
 
reward(State, Action, State) - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
 
reward(State, Action, State) - Method in class burlap.behavior.singleagent.shaping.ShapedRewardFunction
 
reward(State, Action, State) - Method in class burlap.domain.singleagent.cartpole.CartPoleDomain.CartPoleRewardFunction
 
reward(State, Action, State) - Method in class burlap.domain.singleagent.cartpole.InvertedPendulum.InvertedPendulumRewardFunction
 
reward(State, Action, State) - Method in class burlap.domain.singleagent.frostbite.FrostbiteRF
 
reward(State, Action, State) - Method in class burlap.domain.singleagent.graphdefined.GraphRF
 
reward(int, int, int) - Method in class burlap.domain.singleagent.graphdefined.GraphRF
Returns the reward for taking action a in state node s and transitioning to state node sprime.
reward(State, Action, State) - Method in class burlap.domain.singleagent.gridworld.GridWorldRewardFunction
 
reward(State, Action, State) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderRF
 
reward(State, JointAction, State) - Method in class burlap.domain.stochasticgames.gridgame.GridGame.GGJointRewardFunction
 
reward(State, JointAction, State) - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.SingleStageNormalFormJointRewardFunction
 
reward(State, Action, State) - Method in class burlap.mdp.singleagent.common.GoalBasedRF
 
reward(State, Action, State) - Method in class burlap.mdp.singleagent.common.NullRewardFunction
 
reward(State, Action, State) - Method in class burlap.mdp.singleagent.common.SingleGoalPFRF
 
reward(State, Action, State) - Method in class burlap.mdp.singleagent.common.UniformCostRF
 
reward(State, Action, State) - Method in interface burlap.mdp.singleagent.model.RewardFunction
Returns the reward received when action a is executed in state s and the agent transitions to state sprime.
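A minimal sketch of implementing this interface by hand; the constant step cost here simply mirrors what UniformCostRF above already provides and is shown only to illustrate the contract:

    import burlap.mdp.core.action.Action;
    import burlap.mdp.core.state.State;
    import burlap.mdp.singleagent.model.RewardFunction;

    // A constant -1 step-cost reward function, illustrating the reward(State, Action, State) contract.
    public class StepCostRF implements RewardFunction {
        @Override
        public double reward(State s, Action a, State sprime) {
            return -1.;
        }
    }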
reward(State, JointAction, State) - Method in class burlap.mdp.stochasticgames.common.NullJointRewardFunction
 
reward(State, JointAction, State) - Method in interface burlap.mdp.stochasticgames.model.JointRewardFunction
Returns the reward received by each agent specified in the joint action.
RewardCommand - Class in burlap.shell.command.env
A ShellCommand for checking the last reward received from the Environment.
RewardCommand() - Constructor for class burlap.shell.command.env.RewardCommand
 
rewardFunction - Variable in class burlap.behavior.singleagent.learnfromdemo.CustomRewardModel
 
rewardFunction() - Method in class burlap.mdp.singleagent.model.FactoredModel
 
RewardFunction - Interface in burlap.mdp.singleagent.model
Defines the reward function for a task.
rewardFunction() - Method in interface burlap.mdp.singleagent.model.TaskFactoredModel
Returns the RewardFunction this model uses to compute rewards
rewardMatrix - Variable in class burlap.domain.singleagent.gridworld.GridWorldRewardFunction
 
rewardRange - Variable in class burlap.domain.singleagent.rlglue.RLGlueEnvironment
The reward function value range
RewardsCommand - Class in burlap.shell.command.world
A ShellCommand for printing the last joint rewards delivered by a World.
RewardsCommand() - Constructor for class burlap.shell.command.world.RewardsCommand
 
rewardSequence - Variable in class burlap.behavior.singleagent.Episode
The sequence of rewards received.
RewardValueProjection - Class in burlap.behavior.singleagent.learnfromdemo
This class is a QProvider/ValueFunction wrapper to provide the immediate reward signals for a source RewardFunction.
RewardValueProjection(RewardFunction) - Constructor for class burlap.behavior.singleagent.learnfromdemo.RewardValueProjection
Initializes for the given RewardFunction assuming that it only depends on the destination state.
RewardValueProjection(RewardFunction, RewardValueProjection.RewardProjectionType) - Constructor for class burlap.behavior.singleagent.learnfromdemo.RewardValueProjection
Initializes.
RewardValueProjection(RewardFunction, RewardValueProjection.RewardProjectionType, SADomain) - Constructor for class burlap.behavior.singleagent.learnfromdemo.RewardValueProjection
Initializes.
RewardValueProjection.CustomRewardNoTermModel - Class in burlap.behavior.singleagent.learnfromdemo
 
RewardValueProjection.RewardProjectionType - Enum in burlap.behavior.singleagent.learnfromdemo
 
rf - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableDP
The differentiable RF
rf - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling
 
rf - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.VanillaDiffVinit
The differentiable reward function that defines the parameter space over which this value function initialization must differentiate.
rf - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRLRequest
The differentiable reward function model that will be estimated by MLIRL.
rf - Variable in class burlap.behavior.singleagent.learnfromdemo.RewardValueProjection
 
rf - Variable in class burlap.domain.singleagent.blockdude.BlockDude
 
rf - Variable in class burlap.domain.singleagent.blocksworld.BlocksWorld
 
rf - Variable in class burlap.domain.singleagent.cartpole.CartPoleDomain
 
rf - Variable in class burlap.domain.singleagent.cartpole.InvertedPendulum
 
rf - Variable in class burlap.domain.singleagent.frostbite.FrostbiteDomain
 
rf - Variable in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain
 
rf - Variable in class burlap.domain.singleagent.gridworld.GridWorldDomain
 
rf - Variable in class burlap.domain.singleagent.lunarlander.LunarLanderDomain
 
rf - Variable in class burlap.domain.singleagent.mountaincar.MountainCar
 
rf - Variable in class burlap.mdp.singleagent.model.FactoredModel
 
rfDim - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling
The dimensionality of the differentiable reward function
rfDim - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
The dimensionality of the reward function parameters
rfFeaturesAreForNextState - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
Whether features are based on the next state or previous state.
rfFvGen - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit
The state feature vector generator.
right - Variable in class burlap.domain.singleagent.lunarlander.state.LLBlock
 
RLGLueAction(int) - Constructor for class burlap.behavior.singleagent.interfaces.rlglue.RLGlueDomain.RLGlueActionType.RLGLueAction
 
RLGlueActionType(Domain, int) - Constructor for class burlap.behavior.singleagent.interfaces.rlglue.RLGlueDomain.RLGlueActionType
Initializes.
RLGlueAgent - Class in burlap.behavior.singleagent.interfaces.rlglue
An RLGlue Agent that is implemented as a BURLAP Environment, so that any BURLAP RL agent can control the RLGlue agent.
RLGlueAgent() - Constructor for class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgent
 
RLGlueAgent.MutableInt - Class in burlap.behavior.singleagent.interfaces.rlglue
A mutable int wrapper
RLGlueAgent.StateReference - Class in burlap.behavior.singleagent.interfaces.rlglue
A wrapper that maintains a reference to a State or null.
RLGlueDomain - Class in burlap.behavior.singleagent.interfaces.rlglue
A class for generating a BURLAP Domain for an RLGlue TaskSpec.
RLGlueDomain(TaskSpec) - Constructor for class burlap.behavior.singleagent.interfaces.rlglue.RLGlueDomain
 
RLGlueDomain.RLGlueActionType - Class in burlap.behavior.singleagent.interfaces.rlglue
A BURLAP ActionType that corresponds to an RLGlue action that is defined by a single int value.
RLGlueDomain.RLGlueActionType.RLGLueAction - Class in burlap.behavior.singleagent.interfaces.rlglue
 
RLGlueEnvironment - Class in burlap.domain.singleagent.rlglue
This class can be used to take a BURLAP domain and task with discrete actions and turn it into an RLGlue environment with which other RLGlue agents can interact.
RLGlueEnvironment(SADomain, StateGenerator, DenseStateFeatures, DoubleRange[], DoubleRange, boolean, double) - Constructor for class burlap.domain.singleagent.rlglue.RLGlueEnvironment
Constructs with all the BURLAP information necessary for generating an RLGlue Environment.
rlGlueExperimentFinished - Variable in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgent
 
rlGlueExperimentFinished() - Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgent
Returns true if the RLGlue experiment is finished; false otherwise.
RLGlueState - Class in burlap.behavior.singleagent.interfaces.rlglue
A State for RLGlue Observation objects.
RLGlueState() - Constructor for class burlap.behavior.singleagent.interfaces.rlglue.RLGlueState
 
RLGlueState(Observation) - Constructor for class burlap.behavior.singleagent.interfaces.rlglue.RLGlueState
 
RLGlueState.RLGlueVarKey - Class in burlap.behavior.singleagent.interfaces.rlglue
 
RLGlueVarKey(char, int) - Constructor for class burlap.behavior.singleagent.interfaces.rlglue.RLGlueState.RLGlueVarKey
 
RLGlueVarKey(String) - Constructor for class burlap.behavior.singleagent.interfaces.rlglue.RLGlueState.RLGlueVarKey
 
RMaxModel - Class in burlap.behavior.singleagent.learning.modellearning.rmax
 
RMaxModel(KWIKModel, PotentialFunction, double, List<ActionType>) - Constructor for class burlap.behavior.singleagent.learning.modellearning.rmax.RMaxModel
 
RMaxPotential(double, double) - Constructor for class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax.RMaxPotential
Initializes for a given maximum reward and discount factor.
RMaxPotential(double) - Constructor for class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax.RMaxPotential
Initializes using the given maximum value function value
rollout(Policy, State, SampleModel) - Static method in class burlap.behavior.policy.PolicyUtils
This method will return the episode that results from following the given policy from state s.
rollout(Policy, State, SampleModel, int) - Static method in class burlap.behavior.policy.PolicyUtils
This method will return the episode that results from following the given policy from state s.
rollout(Policy, Environment) - Static method in class burlap.behavior.policy.PolicyUtils
Follows the policy in the given Environment.
rollout(Policy, Environment, int) - Static method in class burlap.behavior.policy.PolicyUtils
Follows the policy in the given Environment.
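A sketch of the Environment variant above; the policy and environment are assumed to be constructed elsewhere (for example, a GreedyQPolicy from a planner and a SimulatedEnvironment), and the int argument is the step cap:

    import burlap.behavior.policy.Policy;
    import burlap.behavior.policy.PolicyUtils;
    import burlap.behavior.singleagent.Episode;
    import burlap.mdp.singleagent.environment.Environment;

    public class RolloutExample {
        // policy and env are assumed inputs; 200 is a hypothetical step cap
        static Episode rolloutAndReport(Policy policy, Environment env) {
            Episode e = PolicyUtils.rollout(policy, env, 200);
            System.out.println("steps taken: " + (e.numTimeSteps() - 1));
            return e;
        }
    }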
rolloutJointPolicy(JointPolicy, int) - Method in class burlap.mdp.stochasticgames.world.World
Rolls out a joint policy until a terminal state is reached or a maximum number of stages has elapsed.
rolloutJointPolicyFromState(JointPolicy, State, int) - Method in class burlap.mdp.stochasticgames.world.World
Rolls out a joint policy from a given state until a terminal state is reached or a maximum number of stages has elapsed.
rolloutOneStageOfJointPolicy(JointPolicy) - Method in class burlap.mdp.stochasticgames.world.World
Runs a single stage following a joint policy for the current world state
rollOutPolicy - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.RTDP
The policy to use for episode rollouts
rolloutsDecomposeOptions - Static variable in class burlap.behavior.policy.PolicyUtils
Indicates whether rollout methods will decompose Option selections into the primitive Action objects they execute and annotate them with the name of the calling Option in the returned Episode.
root - Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
 
root - Variable in class burlap.datastructures.StochasticTree
Root node of the stochastic tree
rootLevelQValues - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling
The root state node Q-values that have been estimated by previous planning calls.
rootLevelQValues - Variable in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
The root state node Q-values that have been estimated by previous planning calls.
roundNegativesToZero(double[]) - Static method in class burlap.behavior.stochasticgames.solvers.CorrelatedEquilibriumSolver
Creates a new 1D double array with all negative values rounded to 0.
rowCol(int, int) - Static method in class burlap.behavior.stochasticgames.solvers.CorrelatedEquilibriumSolver
Returns the 2D row-column index, in a matrix with the given number of columns, that corresponds to a given 1D array index.
rowPayoffs - Variable in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.EquilibriumPlayingSGAgent.BimatrixTuple
The row player's payoffs.
RTDP - Class in burlap.behavior.singleagent.planning.stochastic.rtdp
Implementation of Real-time dynamic programming [1].
RTDP(SADomain, double, HashableStateFactory, double, int, double, int) - Constructor for class burlap.behavior.singleagent.planning.stochastic.rtdp.RTDP
Initializes.
RTDP(SADomain, double, HashableStateFactory, ValueFunction, int, double, int) - Constructor for class burlap.behavior.singleagent.planning.stochastic.rtdp.RTDP
Initializes.
runEpisodeBoundTrial(LearningAgentFactory) - Method in class burlap.behavior.singleagent.auxiliary.performance.LearningAlgorithmExperimenter
Runs a trial for an agent generated by the given factory when interpreting trial length as a number of episodes.
runEpisodewiseTrial(World) - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentExperimenter
Runs a trial where trial length is interpreted as the number of episodes in a trial.
runGame() - Method in class burlap.mdp.stochasticgames.world.World
Runs a game until a terminal state is hit.
runGame(int) - Method in class burlap.mdp.stochasticgames.world.World
Runs a game until a terminal state is hit or maxStages stages have occurred
runGame(int, State) - Method in class burlap.mdp.stochasticgames.world.World
Runs a game starting in the input state until a terminal state is hit.
runIteration() - Method in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
Runs a single iteration of value iteration.
runLearningEpisode(Environment) - Method in class burlap.behavior.singleagent.learning.actorcritic.ActorCritic
 
runLearningEpisode(Environment, int) - Method in class burlap.behavior.singleagent.learning.actorcritic.ActorCritic
 
runLearningEpisode(Environment) - Method in interface burlap.behavior.singleagent.learning.LearningAgent
 
runLearningEpisode(Environment, int) - Method in interface burlap.behavior.singleagent.learning.LearningAgent
 
runLearningEpisode(Environment) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
 
runLearningEpisode(Environment, int) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
 
runLearningEpisode(Environment) - Method in class burlap.behavior.singleagent.learning.modellearning.artdp.ARTDP
 
runLearningEpisode(Environment, int) - Method in class burlap.behavior.singleagent.learning.modellearning.artdp.ARTDP
 
runLearningEpisode(Environment) - Method in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax
 
runLearningEpisode(Environment, int) - Method in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax
 
runLearningEpisode(Environment) - Method in class burlap.behavior.singleagent.learning.tdmethods.QLearning
 
runLearningEpisode(Environment, int) - Method in class burlap.behavior.singleagent.learning.tdmethods.QLearning
 
runLearningEpisode(Environment, int) - Method in class burlap.behavior.singleagent.learning.tdmethods.SarsaLam
 
runLearningEpisode(Environment) - Method in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
 
runLearningEpisode(Environment, int) - Method in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
 
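The runLearningEpisode variants above are the main entry point for the library's learning agents; a sketch of a QLearning loop in a simulated environment (the grid world setup and hyperparameters are hypothetical):

    import burlap.behavior.singleagent.learning.tdmethods.QLearning;
    import burlap.domain.singleagent.gridworld.GridWorldDomain;
    import burlap.domain.singleagent.gridworld.state.GridAgent;
    import burlap.domain.singleagent.gridworld.state.GridWorldState;
    import burlap.mdp.singleagent.SADomain;
    import burlap.mdp.singleagent.environment.SimulatedEnvironment;
    import burlap.statehashing.simple.SimpleHashableStateFactory;

    public class QLearningLoopExample {
        public static void main(String[] args) {
            SADomain domain = new GridWorldDomain(11, 11).generateDomain();
            SimulatedEnvironment env = new SimulatedEnvironment(domain,
                    new GridWorldState(new GridAgent(0, 0)));
            // hypothetical hyperparameters: discount 0.99, Q-value init 0, learning rate 0.1
            QLearning agent = new QLearning(domain, 0.99, new SimpleHashableStateFactory(), 0., 0.1);
            for (int i = 0; i < 100; i++) {
                agent.runLearningEpisode(env, 200); // the int form caps each episode at 200 steps
                env.resetEnvironment();             // begin the next episode from the start state
            }
        }
    }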
runLPAndGetJointActionProbs(LinearProgram, int, int) - Static method in class burlap.behavior.stochasticgames.solvers.CorrelatedEquilibriumSolver
Helper method for running the linear program optimization (after its constraints have already been set) and returning the result in the form of the 2D double matrix joint strategy.
runPlannerForAllInitStates(Planner, StateConditionTestIterable) - Static method in class burlap.behavior.singleagent.planning.deterministic.MultiStatePrePlanner
Runs a planning algorithm from multiple initial states to ensure that an adequate plan/policy exists for each of the states.
runPlannerForAllInitStates(Planner, Collection<State>) - Static method in class burlap.behavior.singleagent.planning.deterministic.MultiStatePrePlanner
Runs a planning algorithm from multiple initial states to ensure that an adequate plan/policy exists for each of the states.
runPolicyIteration(int, double) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
Runs LSPI for either numIterations or until the change in the weight matrix is no greater than maxChange.
runRollout(State) - Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
Runs a planning rollout from the provided state.
runRolloutsInReverse - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
Whether each rollout should be run in reverse after completion.
runStage() - Method in class burlap.mdp.stochasticgames.world.World
Runs a single stage of this game.
runStepBoundTrial(LearningAgentFactory) - Method in class burlap.behavior.singleagent.auxiliary.performance.LearningAlgorithmExperimenter
Runs a trial for an agent generated by the given factory when interpreting trial length as a number of total steps.
runStepwiseTrial(World) - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentExperimenter
Runs a trial where the trial length is interpreted as the number of total steps taken.
runTournament() - Method in class burlap.mdp.stochasticgames.tournament.Tournament
Runs the tournament
runVI() - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableVI
Runs VI until the specified termination conditions are met.
runVI() - Method in class burlap.behavior.singleagent.planning.stochastic.valueiteration.PrioritizedSweeping
 
runVI() - Method in class burlap.behavior.singleagent.planning.stochastic.valueiteration.ValueIteration
Runs VI until the specified termination conditions are met.
runVI() - Method in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI
Runs value iteration.
runVI() - Method in class burlap.behavior.stochasticgames.madynamicprogramming.dpplanners.MAValueIteration
Runs Value Iteration over the set of states that have been discovered.
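For the single-agent ValueIteration variant above, runVI() is usually invoked indirectly through planFromState; a sketch with a hypothetical grid world and hypothetical stopping settings:

    import burlap.behavior.policy.Policy;
    import burlap.behavior.singleagent.planning.stochastic.valueiteration.ValueIteration;
    import burlap.domain.singleagent.gridworld.GridWorldDomain;
    import burlap.domain.singleagent.gridworld.state.GridAgent;
    import burlap.domain.singleagent.gridworld.state.GridWorldState;
    import burlap.mdp.core.state.State;
    import burlap.mdp.singleagent.SADomain;
    import burlap.statehashing.simple.SimpleHashableStateFactory;

    public class ValueIterationExample {
        public static void main(String[] args) {
            SADomain domain = new GridWorldDomain(11, 11).generateDomain();
            State s = new GridWorldState(new GridAgent(0, 0));
            // hypothetical settings: discount 0.99, stop when the largest value change falls
            // below 0.001 or after 100 sweeps over the states reachable from s
            ValueIteration vi = new ValueIteration(domain, 0.99,
                    new SimpleHashableStateFactory(), 0.001, 100);
            Policy p = vi.planFromState(s); // invokes runVI() internally
        }
    }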