 ga  Variable in class burlap.behavior.functionapproximation.sparse.tilecoding.TileCodingFeatures.ActionFeatureID

 ga  Variable in class burlap.behavior.policy.support.ActionProb

The action to be considered.
 GameCommand  Class in burlap.shell.command.world

 GameCommand()  Constructor for class burlap.shell.command.world.GameCommand

 gameEnding(State)  Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter

 gameEnding(State)  Method in class burlap.mdp.stochasticgames.common.VisualWorldObserver

 gameEnding(State)  Method in interface burlap.mdp.stochasticgames.world.WorldObserver

This method is called whenever a game in a world ends.
 gameEnding(State)  Method in class burlap.shell.visual.SGVisualExplorer

 GameEpisode  Class in burlap.behavior.stochasticgames

This class provides a means to record all the interactions in a stochastic game; specifically, the sequence of states, joint actions taken, and joint reward received.
 GameEpisode()  Constructor for class burlap.behavior.stochasticgames.GameEpisode

Initializes the data structures.
 GameEpisode(State)  Constructor for class burlap.behavior.stochasticgames.GameEpisode

Initializes with an initial state of the game.
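The record-keeping GameEpisode performs can be sketched as parallel sequences of states, joint actions, and joint rewards. The class below is an illustrative stand-in, not BURLAP's API; it uses plain strings in place of State and JointAction objects.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the bookkeeping GameEpisode performs: a state sequence
// plus, for each transition, the joint action taken and the joint reward
// received. Names and types are illustrative, not BURLAP's API.
class EpisodeSketch {
    final List<String> states = new ArrayList<>();        // stand-in for State
    final List<String> jointActions = new ArrayList<>();  // stand-in for JointAction
    final List<double[]> jointRewards = new ArrayList<>();

    EpisodeSketch(String initialState) {
        states.add(initialState);
    }

    // Record one transition: the joint action, the joint reward (one entry
    // per agent), and the resulting state.
    void transition(String jointAction, double[] jointReward, String nextState) {
        jointActions.add(jointAction);
        jointRewards.add(jointReward);
        states.add(nextState);
    }

    // Number of states visited; always one more than the number of transitions.
    int numTimeSteps() {
        return states.size();
    }
}
```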
 gameHeight  Variable in class burlap.domain.singleagent.frostbite.FrostbiteModel

 gameHeight  Variable in class burlap.domain.singleagent.frostbite.FrostbiteVisualizer.IglooPainter

 gameIsRunning()  Method in class burlap.mdp.stochasticgames.world.World

Returns whether a game in this world is currently running.
 GameSequenceVisualizer  Class in burlap.behavior.stochasticgames.auxiliary

This class is used to visualize a set of games that have been saved to files in a common directory or which are provided to the object as a list of GameEpisode objects.
 GameSequenceVisualizer(Visualizer, SGDomain, String)  Constructor for class burlap.behavior.stochasticgames.auxiliary.GameSequenceVisualizer

Initializes the GameSequenceVisualizer.
 GameSequenceVisualizer(Visualizer, SGDomain, List<GameEpisode>)  Constructor for class burlap.behavior.stochasticgames.auxiliary.GameSequenceVisualizer

Initializes the GameSequenceVisualizer with a programmatically supplied list of GameEpisode objects to view.
 GameSequenceVisualizer(Visualizer, SGDomain, String, int, int)  Constructor for class burlap.behavior.stochasticgames.auxiliary.GameSequenceVisualizer

Initializes the GameSequenceVisualizer.
 GameSequenceVisualizer(Visualizer, SGDomain, List<GameEpisode>, int, int)  Constructor for class burlap.behavior.stochasticgames.auxiliary.GameSequenceVisualizer

Initializes the GameSequenceVisualizer with a programmatically supplied list of GameEpisode objects to view.
 gameStarting(World, int)  Method in class burlap.behavior.stochasticgames.agents.interfacing.singleagent.LearningAgentToSGAgentInterface

 gameStarting(World, int)  Method in class burlap.behavior.stochasticgames.agents.madp.MultiAgentDPPlanningAgent

 gameStarting(World, int)  Method in class burlap.behavior.stochasticgames.agents.maql.MultiAgentQLearning

 gameStarting(World, int)  Method in class burlap.behavior.stochasticgames.agents.naiveq.history.SGQWActionHistory

 gameStarting(World, int)  Method in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent

 gameStarting(World, int)  Method in class burlap.behavior.stochasticgames.agents.RandomSGAgent

 gameStarting(World, int)  Method in class burlap.behavior.stochasticgames.agents.SetStrategySGAgent

 gameStarting(World, int)  Method in class burlap.behavior.stochasticgames.agents.twoplayer.repeatedsinglestage.GrimTrigger

 gameStarting(World, int)  Method in class burlap.behavior.stochasticgames.agents.twoplayer.repeatedsinglestage.TitForTat

 gameStarting(World, int)  Method in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.EquilibriumPlayingSGAgent

 gameStarting(State)  Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter

 gameStarting(World, int)  Method in interface burlap.mdp.stochasticgames.agent.SGAgent

This method is called by the world when a new game is starting.
 gameStarting(State)  Method in class burlap.mdp.stochasticgames.common.VisualWorldObserver

 gameStarting(State)  Method in interface burlap.mdp.stochasticgames.world.WorldObserver

This method is called whenever a new game in a world is starting.
 gameStarting(World, int)  Method in class burlap.shell.command.world.ManualAgentsCommands.ManualSGAgent

 gameStarting(State)  Method in class burlap.shell.visual.SGVisualExplorer

 gameTerminated()  Method in class burlap.behavior.stochasticgames.agents.interfacing.singleagent.LearningAgentToSGAgentInterface

 gameTerminated()  Method in class burlap.behavior.stochasticgames.agents.madp.MultiAgentDPPlanningAgent

 gameTerminated()  Method in class burlap.behavior.stochasticgames.agents.maql.MultiAgentQLearning

 gameTerminated()  Method in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent

 gameTerminated()  Method in class burlap.behavior.stochasticgames.agents.RandomSGAgent

 gameTerminated()  Method in class burlap.behavior.stochasticgames.agents.SetStrategySGAgent

 gameTerminated()  Method in class burlap.behavior.stochasticgames.agents.twoplayer.repeatedsinglestage.GrimTrigger

 gameTerminated()  Method in class burlap.behavior.stochasticgames.agents.twoplayer.repeatedsinglestage.TitForTat

 gameTerminated()  Method in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.EquilibriumPlayingSGAgent

 gameTerminated()  Method in interface burlap.mdp.stochasticgames.agent.SGAgent

This method is called by the world when a game has ended.
 gameTerminated()  Method in class burlap.shell.command.world.ManualAgentsCommands.ManualSGAgent

 gameWidth  Variable in class burlap.domain.singleagent.frostbite.FrostbiteModel

 gameWidth  Variable in class burlap.domain.singleagent.frostbite.FrostbiteVisualizer.IglooPainter

 gamma  Variable in class burlap.behavior.singleagent.learnfromdemo.IRLRequest

The discount factor of the problem.
 gamma  Variable in class burlap.behavior.singleagent.learning.modellearning.rmax.RMaxModel

 gamma  Variable in class burlap.behavior.singleagent.MDPSolver

The MDP discount factor.
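The gamma fields above hold the discount factor used to weight future rewards. As a quick illustration (plain Java, independent of BURLAP), the discounted return of a reward sequence is:

```java
// Discounted return: sum over t of gamma^t * r_t. This is the quantity the
// discount factor gamma controls in MDPSolver and IRLRequest.
class DiscountedReturn {
    static double of(double[] rewards, double gamma) {
        double total = 0.0;
        double discount = 1.0; // gamma^t, starting at gamma^0 = 1
        for (double r : rewards) {
            total += discount * r;
            discount *= gamma;
        }
        return total;
    }
}
```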
 GaussianRBF  Class in burlap.behavior.functionapproximation.dense.rbf.functions

An RBF whose response is dictated by a Gaussian kernel.
 GaussianRBF(double[], DistanceMetric, double)  Constructor for class burlap.behavior.functionapproximation.dense.rbf.functions.GaussianRBF

Initializes.
 GaussianRBF(double[], double)  Constructor for class burlap.behavior.functionapproximation.dense.rbf.functions.GaussianRBF
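A minimal sketch of a Gaussian RBF response over a Euclidean distance, assuming the common kernel form exp(-d²/(2ε²)); BURLAP's GaussianRBF may use a slightly different bandwidth convention, so treat this as illustrative only.

```java
// Illustrative Gaussian radial basis function: response decays with the
// squared Euclidean distance from a center point, scaled by a bandwidth
// epsilon. Assumed kernel form: exp(-d^2 / (2 * epsilon^2)).
class GaussianRbfSketch {
    final double[] center;
    final double epsilon;

    GaussianRbfSketch(double[] center, double epsilon) {
        this.center = center;
        this.epsilon = epsilon;
    }

    double response(double[] input) {
        double d2 = 0.0; // squared Euclidean distance to the center
        for (int i = 0; i < center.length; i++) {
            double diff = input[i] - center[i];
            d2 += diff * diff;
        }
        return Math.exp(-d2 / (2.0 * epsilon * epsilon));
    }
}
```

At the center the response is exactly 1, and it falls off monotonically with distance, which is what makes a set of such functions useful as localized features for linear value-function approximation.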

 gc  Variable in class burlap.behavior.singleagent.planning.deterministic.DeterministicPlanner

This State condition test should return true for goal states and false for non-goal states.
 gc  Variable in class burlap.mdp.singleagent.common.GoalBasedRF

 GDSLInit(SADomain, double, DifferentiableStateActionValue, double, Policy, int, double)  Method in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam

Initializes SARSA(λ). By default, the agent will only save the last learning episode, and a call to the
GradientDescentSarsaLam.planFromState(State)
method will cause the valueFunction to use only one episode for planning; this should probably be changed to a much larger value if you plan on using this
algorithm as a planning algorithm.
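The core update behind gradient-descent SARSA(λ) with linear function approximation can be sketched as follows (illustrative, not GradientDescentSarsaLam's actual code): the TD error scales an eligibility-trace-weighted step on the weight vector.

```java
// One semi-gradient SARSA(lambda) update with accumulating traces and a
// linear value function Q(s,a) = w . phi(s,a). Illustrative sketch only.
class SarsaLamUpdate {
    // Mutates w (weights) and e (eligibility traces) in place.
    static void update(double[] w, double[] e, double[] phi, double[] phiNext,
                       double r, double gamma, double lambda, double alpha) {
        double q = dot(w, phi);
        double qNext = dot(w, phiNext);
        double delta = r + gamma * qNext - q;       // TD error
        for (int i = 0; i < e.length; i++) {
            e[i] = gamma * lambda * e[i] + phi[i];  // accumulating trace
            w[i] += alpha * delta * e[i];           // gradient-descent step
        }
    }

    static double dot(double[] a, double[] b) {
        double s = 0.0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }
}
```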
 GeneralBimatrixSolverTools  Class in burlap.behavior.stochasticgames.solvers

A class holding static methods for performing common operations on bimatrix games.
 generate(Action)  Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTActionNode.UCTActionConstructor

Returns a UCTActionNode object that wraps the given action.
 generate(HashableState, int, List<ActionType>, UCTActionNode.UCTActionConstructor)  Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTStateNode.UCTStateConstructor

 generateAction(String[])  Method in class burlap.mdp.singleagent.oo.ObjectParameterizedActionType

 generateAgent()  Method in interface burlap.behavior.singleagent.learning.LearningAgentFactory

Generates a new LearningAgent object and returns it.
 generateAgent(String, SGAgentType)  Method in class burlap.behavior.stochasticgames.agents.madp.MADPPlanAgentFactory

 generateAgent(String, SGAgentType)  Method in class burlap.behavior.stochasticgames.agents.maql.MAQLFactory

 generateAgent(String, SGAgentType)  Method in class burlap.behavior.stochasticgames.agents.naiveq.history.SGQWActionHistoryFactory

 generateAgent(String, SGAgentType)  Method in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQFactory

 generateAgent(String, SGAgentType)  Method in class burlap.behavior.stochasticgames.agents.SetStrategySGAgent.SetStrategyAgentFactory

 generateAgent(String, SGAgentType)  Method in class burlap.behavior.stochasticgames.agents.twoplayer.repeatedsinglestage.GrimTrigger.GrimTriggerAgentFactory

 generateAgent(String, SGAgentType)  Method in class burlap.behavior.stochasticgames.agents.twoplayer.repeatedsinglestage.TitForTat.TitForTatAgentFactory

 generateAgent(String, SGAgentType)  Method in interface burlap.mdp.stochasticgames.agent.AgentFactory

 generateAgent(String, SGAgentType)  Method in class burlap.mdp.stochasticgames.common.AgentFactoryWithSubjectiveReward

 generateAgentType(int)  Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame

 generateAllLocationSetsHelper(List<List<GridGameStandardMechanics.Location2Prob>>, int, GridGameStandardMechanics.Location2[], double, List<GridGameStandardMechanics.LocationSetProb>)  Method in class burlap.domain.stochasticgames.gridgame.GridGameStandardMechanics

This method will recursively generate all possible joint location outcomes for a list of possible outcomes for each agent.
 generateAllPossibleCollisionWinnerAssignments(List<List<Integer>>, int, int[], List<List<Integer>>)  Method in class burlap.domain.stochasticgames.gridgame.GridGameStandardMechanics

 generateCoefficientVectors()  Method in class burlap.behavior.functionapproximation.dense.fourier.FourierBasis

Generates all coefficient vectors given the number of state variables and the maximum number of nonzero coefficient element entries.
 generateCoefficientVectorsHelper(int, short[], int)  Method in class burlap.behavior.functionapproximation.dense.fourier.FourierBasis

Recursive coefficient generator helper method.
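The coefficient-vector generation described above can be sketched with a simple recursion that enumerates every vector in {0, …, order}^n; each vector c defines one basis feature cos(π c·s). BURLAP's FourierBasis additionally caps the number of nonzero entries, which this illustrative version omits.

```java
import java.util.ArrayList;
import java.util.List;

// Recursive enumeration of Fourier basis coefficient vectors: every vector
// in {0, ..., order}^numVariables. Sketch of the idea behind
// FourierBasis.generateCoefficientVectors, without the nonzero-entry cap.
class FourierCoefficients {
    static List<short[]> enumerate(int numVariables, int order) {
        List<short[]> out = new ArrayList<>();
        helper(new short[numVariables], 0, order, out);
        return out;
    }

    // Fills position `index` with each value 0..order and recurses on the rest.
    private static void helper(short[] vec, int index, int order, List<short[]> out) {
        if (index == vec.length) {
            out.add(vec.clone());
            return;
        }
        for (short v = 0; v <= order; v++) {
            vec[index] = v;
            helper(vec, index + 1, order, out);
        }
    }
}
```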
 generateDifferentiablePlannerForRequest(MLIRLRequest)  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.support.QGradientPlannerFactory.DifferentiableVIFactory

 generateDifferentiablePlannerForRequest(MLIRLRequest)  Method in interface burlap.behavior.singleagent.learnfromdemo.mlirl.support.QGradientPlannerFactory

 generateDomain()  Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueDomain

 generateDomain()  Method in class burlap.domain.singleagent.blockdude.BlockDude

 generateDomain()  Method in class burlap.domain.singleagent.blocksworld.BlocksWorld

 generateDomain()  Method in class burlap.domain.singleagent.cartpole.CartPoleDomain

 generateDomain()  Method in class burlap.domain.singleagent.cartpole.InvertedPendulum

 generateDomain()  Method in class burlap.domain.singleagent.frostbite.FrostbiteDomain

Creates a new frostbite domain.
 generateDomain()  Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain

 generateDomain()  Method in class burlap.domain.singleagent.gridworld.GridWorldDomain

 generateDomain()  Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain

 generateDomain()  Method in class burlap.domain.singleagent.mountaincar.MountainCar

 generateDomain()  Method in class burlap.domain.singleagent.pomdp.tiger.TigerDomain

 generateDomain()  Method in class burlap.domain.stochasticgames.gridgame.GridGame

 generateDomain()  Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame

 generateDomain()  Method in interface burlap.mdp.auxiliary.DomainGenerator

Returns a newly instantiated Domain object.
 generateDomain()  Method in class burlap.mdp.singleagent.pomdp.BeliefMDPGenerator

 generateGaussianRBFsForStates(List<State>, DenseStateFeatures, DistanceMetric, double)  Static method in class burlap.behavior.functionapproximation.dense.rbf.functions.GaussianRBF

 generateGaussianRBFsForStates(List<State>, DenseStateFeatures, double)  Static method in class burlap.behavior.functionapproximation.dense.rbf.functions.GaussianRBF

 generateKey(Object)  Static method in class burlap.mdp.core.oo.state.OOStateUtilities

 generateLargeGW(SADomain, int)  Method in class burlap.testing.TestHashing

 generateNewCurrentState()  Method in class burlap.mdp.stochasticgames.world.World

Causes the world to set the current state to a state generated by the provided StateGenerator object if a game is not currently running.
 generatePfs()  Method in class burlap.domain.singleagent.blockdude.BlockDude

 generatePfs()  Method in class burlap.domain.singleagent.blocksworld.BlocksWorld

 generatePFs()  Method in class burlap.domain.singleagent.frostbite.FrostbiteDomain

 generatePfs()  Method in class burlap.domain.singleagent.gridworld.GridWorldDomain

 generatePfs()  Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain

 generateRandomPolicy(SADomain)  Static method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearning.StationaryRandomDistributionPolicy

 generateRandomStates(SADomain, State, HashableStateFactory, int, int, boolean)  Method in class burlap.testing.TestHashing

 generateReserved()  Method in class burlap.shell.BurlapShell

 generateRewardFunction(DenseStateFeatures, ApprenticeshipLearning.FeatureWeights)  Static method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearning

Generates an anonymous instance of a reward function derived from a FeatureMapping and associated feature weights. Computes (w^(i))^T phi from step 4 in section 3.
 generateStandard()  Method in class burlap.shell.BurlapShell

 generateStandard()  Method in class burlap.shell.EnvironmentShell

 generateStandard()  Method in class burlap.shell.SGWorldShell

 generateState()  Method in class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator

 generateState()  Method in class burlap.domain.stochasticgames.normalform.NFGameState

 generateState()  Method in class burlap.mdp.auxiliary.common.ConstantStateGenerator

 generateState()  Method in class burlap.mdp.auxiliary.common.RandomStartStateGenerator

 generateState()  Method in interface burlap.mdp.auxiliary.StateGenerator

Returns a new state object.
 generateState()  Method in class burlap.testing.TestBlockDude

 generateState()  Method in class burlap.testing.TestGridWorld

 GenerateStateCommand  Class in burlap.shell.command.world

 GenerateStateCommand()  Constructor for class burlap.shell.command.world.GenerateStateCommand

 generateStates(SADomain, State, HashableStateFactory, int)  Method in class burlap.testing.TestHashing

 generateVFA(double)  Method in class burlap.behavior.functionapproximation.dense.fourier.FourierBasis

Creates and returns a linear VFA object over this Fourier basis feature database.
 generateVFA(double)  Method in class burlap.behavior.functionapproximation.dense.rbf.RBFFeatures

Creates and returns a linear VFA object over this RBF feature database.
 generateVFA(double)  Method in class burlap.behavior.functionapproximation.sparse.tilecoding.TileCodingFeatures

After all the tiling specifications have been set, this method can be called to produce a linear VFA object.
 generateWorld()  Method in class burlap.mdp.stochasticgames.tournament.common.ConstantWorldGenerator

 generateWorld()  Method in interface burlap.mdp.stochasticgames.world.WorldGenerator

Generates a new World instance.
 generatingAction  Variable in class burlap.behavior.singleagent.planning.deterministic.SearchNode

The action that generated this state from the previous state.
 GenericOOState  Class in burlap.mdp.core.oo.state.generic

A generic implementation of an OOState.
 GenericOOState()  Constructor for class burlap.mdp.core.oo.state.generic.GenericOOState

 GenericOOState(OOState)  Constructor for class burlap.mdp.core.oo.state.generic.GenericOOState

 GenericOOState(ObjectInstance...)  Constructor for class burlap.mdp.core.oo.state.generic.GenericOOState

 genInd(Object)  Method in class burlap.domain.stochasticgames.normalform.NFGameState

 get(Object)  Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueState

 get(int)  Method in class burlap.behavior.singleagent.learning.lspi.SARSData

 get(Object)  Method in class burlap.behavior.stochasticgames.agents.naiveq.history.HistoryState

 get(Object)  Method in class burlap.domain.singleagent.blockdude.state.BlockDudeAgent

 get(Object)  Method in class burlap.domain.singleagent.blockdude.state.BlockDudeCell

 get(Object)  Method in class burlap.domain.singleagent.blockdude.state.BlockDudeMap

 get(Object)  Method in class burlap.domain.singleagent.blockdude.state.BlockDudeState

 get(Object)  Method in class burlap.domain.singleagent.blocksworld.BlocksWorldBlock

 get(Object)  Method in class burlap.domain.singleagent.blocksworld.BlocksWorldState

 get(Object)  Method in class burlap.domain.singleagent.cartpole.states.CartPoleFullState

 get(Object)  Method in class burlap.domain.singleagent.cartpole.states.CartPoleState

 get(Object)  Method in class burlap.domain.singleagent.cartpole.states.InvertedPendulumState

 get(Object)  Method in class burlap.domain.singleagent.frostbite.state.FrostbiteAgent

 get(Object)  Method in class burlap.domain.singleagent.frostbite.state.FrostbiteIgloo

 get(Object)  Method in class burlap.domain.singleagent.frostbite.state.FrostbitePlatform

 get(Object)  Method in class burlap.domain.singleagent.frostbite.state.FrostbiteState

 get(Object)  Method in class burlap.domain.singleagent.graphdefined.GraphStateNode

 get(Object)  Method in class burlap.domain.singleagent.gridworld.state.GridAgent

 get(Object)  Method in class burlap.domain.singleagent.gridworld.state.GridLocation

 get(Object)  Method in class burlap.domain.singleagent.gridworld.state.GridWorldState

 get(Object)  Method in class burlap.domain.singleagent.lunarlander.state.LLAgent

 get(Object)  Method in class burlap.domain.singleagent.lunarlander.state.LLBlock

 get(Object)  Method in class burlap.domain.singleagent.lunarlander.state.LLState

 get(Object)  Method in class burlap.domain.singleagent.mountaincar.MCState

 get(Object)  Method in class burlap.domain.singleagent.pomdp.tiger.TigerObservation

 get(Object)  Method in class burlap.domain.singleagent.pomdp.tiger.TigerState

 get(Object)  Method in class burlap.domain.stochasticgames.gridgame.state.GGAgent

 get(Object)  Method in class burlap.domain.stochasticgames.gridgame.state.GGGoal

 get(Object)  Method in class burlap.domain.stochasticgames.gridgame.state.GGWall

 get(Object)  Method in class burlap.domain.stochasticgames.normalform.NFGameState

 get(String)  Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.ActionNameMap

 get(Object)  Method in class burlap.mdp.core.oo.state.generic.GenericOOState

 get(OOState, Object)  Static method in class burlap.mdp.core.oo.state.OOStateUtilities

 get(Object)  Method in class burlap.mdp.core.state.NullState

 get(Object)  Method in interface burlap.mdp.core.state.State

Returns the value for the given variable key.
 get(Object)  Method in class burlap.mdp.singleagent.pomdp.beliefstate.TabularBeliefState

 getA()  Method in class burlap.behavior.singleagent.learning.actorcritic.CritiqueResult

Returns the action of this behavior.
 getActingAgent()  Method in class burlap.behavior.stochasticgames.PolicyFromJointPolicy

Returns the acting agent.
 getAction(BeliefState)  Method in class burlap.behavior.singleagent.pomdp.BeliefPolicyAgent

 getAction(BeliefState)  Method in class burlap.mdp.singleagent.pomdp.BeliefAgent

Returns the action the agent should take for the input BeliefState.
 getAction(String)  Method in class burlap.mdp.singleagent.SADomain

Returns the ActionType in this domain with the given type name, or null if one does not exist.
 getActionLearningRateEntry(Action)  Method in class burlap.behavior.learningrate.ExponentialDecayLR.StateWiseLearningRate

Returns the mutable double entry for the learning rate for the action for the state with which this object is associated.
 getActionModel()  Method in class burlap.mdp.stochasticgames.world.World

 getActionOffset(Action)  Method in class burlap.behavior.functionapproximation.dense.DenseCrossProductFeatures

 getActionOffset(Action)  Method in class burlap.behavior.functionapproximation.dense.DenseLinearVFA

 getActionOffset()  Method in class burlap.behavior.functionapproximation.dense.DenseLinearVFA

Returns the Map of feature index offsets into the full feature vector for each action.
 getActions()  Method in class burlap.mdp.stochasticgames.JointAction

Returns a list of the actions in this joint action.
 getActionSequence()  Method in class burlap.behavior.singleagent.options.MacroAction

 getActionsTypes()  Method in class burlap.behavior.singleagent.learning.modellearning.rmax.RMaxModel

 getActionTimeIndexEntry(Action)  Method in class burlap.behavior.learningrate.SoftTimeInverseDecayLR.StateWiseTimeIndex

Returns the mutable int entry for the time index for the action for the state with which this object is associated.
 getActionType(String)  Method in class burlap.mdp.stochasticgames.SGDomain

 getActionTypes()  Method in class burlap.behavior.singleagent.MDPSolver

 getActionTypes()  Method in interface burlap.behavior.singleagent.MDPSolverInterface

Returns a copy of all actions this solver uses for reasoning, including added actions that are not part of the domain specification (e.g., Options).
 getActionTypes()  Method in class burlap.mdp.singleagent.SADomain

Returns the ActionTypes associated with this domain.
 getActionTypes()  Method in class burlap.mdp.stochasticgames.SGDomain

 getAgentDefinitions()  Method in class burlap.mdp.stochasticgames.world.World

Returns the agent definitions for the agents registered in this world.
 getAgentName()  Method in interface burlap.behavior.singleagent.learning.LearningAgentFactory

Will return a name to identify the kind of agent that will be generated by this factory.
 getAgentsInJointPolicy()  Method in class burlap.behavior.stochasticgames.JointPolicy

Returns a map specifying the agents who contribute actions to this joint policy.
 getAgentSynchronizedActionSelection(int, State)  Method in class burlap.behavior.stochasticgames.JointPolicy

This method returns the action for a single agent by a synchronized sampling of this joint policy,
which enables multiple agents to query this policy object and act according to the same selected joint
actions from it.
 getAliases()  Method in class burlap.shell.BurlapShell

 getAllJointActions(State)  Method in class burlap.behavior.stochasticgames.JointPolicy

Returns all possible joint actions that can be taken in state s for the set of agents defined to be used in this joint policy.
 getAllJointActions(State, List<SGAgent>)  Static method in class burlap.mdp.stochasticgames.JointAction

 getAllJointActionsFromTypes(State, List<SGAgentType>)  Static method in class burlap.mdp.stochasticgames.JointAction

 getAllLocationSets(List<List<GridGameStandardMechanics.Location2Prob>>)  Method in class burlap.domain.stochasticgames.gridgame.GridGameStandardMechanics

Takes a list of possible location outcomes for each agent and generates all joint location outcomes.
 getAllPossibleCollisionWinnerAssignment(Map<Integer, List<Integer>>)  Method in class burlap.domain.stochasticgames.gridgame.GridGameStandardMechanics

 getAllStates()  Method in class burlap.behavior.singleagent.planning.stochastic.DynamicProgramming

This method will return all states that are stored in this planners value function.
 getAllStoredLearningEpisodes()  Method in class burlap.behavior.singleagent.learning.actorcritic.ActorCritic

 getAllStoredLearningEpisodes()  Method in class burlap.behavior.singleagent.learning.lspi.LSPI

 getAllStoredLearningEpisodes()  Method in class burlap.behavior.singleagent.learning.modellearning.artdp.ARTDP

 getAllStoredLearningEpisodes()  Method in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax

 getAllSuccessors()  Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTActionNode

Returns a list of all successor nodes observed
 getAnginc()  Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain

Returns how many radians the agent will rotate from its current orientation when a turn/rotate action is applied
 getAnginc()  Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams

 getAngle2ndDeriv(double, double, double, double, double)  Method in class burlap.domain.singleagent.cartpole.model.CPCorrectModel

Computes the 2nd order derivative of the angle for a given normal force sign using the corrected model.
 getAngmax()  Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain

Returns the maximum angle (in radians) that the lander can be rotated away from the vertical orientation, in either the clockwise or counterclockwise direction.
 getAngmax()  Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams

 getAvgTime()  Method in class burlap.debugtools.MyTimer

Returns the average time in seconds recorded over all start-stop calls.
 getBattleOfTheSexes1()  Static method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame

Returns an instance of Battle of the Sexes 1, which is defined by:
 getBattleOfTheSexes2()  Static method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame

Returns an instance of Battle of the Sexes 2, which is defined by:
 getBeliefMDP()  Method in class burlap.behavior.singleagent.pomdp.wrappedmdpalgs.BeliefSparseSampling

Returns the generated Belief MDP that will be solved.
 getBeliefValues()  Method in class burlap.mdp.singleagent.pomdp.beliefstate.TabularBeliefState

 getBeta()  Method in class burlap.behavior.singleagent.planning.stochastic.dpoperator.SoftmaxOperator

 getBgColor()  Method in class burlap.visualizer.MultiLayerRenderer

Returns the background color of the renderer
 getBlockAt(BlockDudeState, int, int)  Method in class burlap.domain.singleagent.blockdude.BlockDudeModel

Finds a block object in the State located at the provided position and returns it.
 getBoltzmannBeta()  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRLRequest

 getBrowser()  Method in class burlap.shell.command.env.EpisodeRecordingCommands

 getC()  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling

Returns the number of state transition samples
 getC()  Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling

Returns the number of state transition samples
 getCartPoleStateRenderLayer()  Static method in class burlap.domain.singleagent.cartpole.CartPoleVisualizer

Returns a StateRenderLayer for cart pole.
 getCartPoleVisualizer()  Static method in class burlap.domain.singleagent.cartpole.CartPoleVisualizer

Returns a visualizer for cart pole.
 getCAtHeight(int)  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling

Returns the value of C for a node at the given height (height from a leaf node).
 getCAtHeight(int)  Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling

Returns the value of C for a node at the given height (height from a leaf node).
 getChicken()  Static method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame

Returns an instance of Chicken, which is defined by:
 getCI(DescriptiveStatistics, double)  Static method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter

Returns the confidence interval for the specified significance level
 getCI(DescriptiveStatistics, double)  Static method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter

Returns the confidence interval for the specified significance level
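A symmetric normal-approximation confidence interval of the kind these plotters report can be sketched as mean ± z·(σ/√n). In this illustrative version the z value is supplied by the caller rather than derived from the significance level, so it is not the plotters' actual computation.

```java
// Sketch of a symmetric normal-approximation confidence interval:
// mean +/- z * (stdDev / sqrt(n)). Illustrative only; the BURLAP plotters
// derive the critical value from the significance level internally.
class ConfidenceInterval {
    // Returns {lowerBound, upperBound}.
    static double[] of(double mean, double stdDev, int n, double z) {
        double halfWidth = z * stdDev / Math.sqrt(n);
        return new double[]{mean - halfWidth, mean + halfWidth};
    }
}
```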
 getClassName()  Method in class burlap.domain.singleagent.blockdude.state.BlockDudeCell

 getClusterPriors()  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MultipleIntentionsMLIRL

Returns the behavior cluster prior probabilities.
 getClusterRFs()  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MultipleIntentionsMLIRL

 getCoefficientVector(int)  Method in class burlap.behavior.functionapproximation.dense.fourier.FourierBasis

Returns the coefficient vector for the given basis function index.
 getColissionSets(List<GridGameStandardMechanics.Location2>)  Method in class burlap.domain.stochasticgames.gridgame.GridGameStandardMechanics

Returns with whom each agent is in a collision competition for a cell.
 getCollisionReward()  Method in class burlap.domain.singleagent.lunarlander.LunarLanderRF

 getColPlayersStrategy(double[][])  Static method in class burlap.behavior.stochasticgames.solvers.MinMaxSolver

Computes the minmax strategy for the column player of the given payoff matrix.
 getCommands()  Method in class burlap.shell.BurlapShell

 getComputedPolicy()  Method in class burlap.behavior.singleagent.planning.stochastic.policyiteration.PolicyIteration

Returns the policy that was last computed (or the initial policy if no planning has been performed).
 getConfig()  Method in class burlap.statehashing.masked.MaskedHashableStateFactory

 getControlDepth()  Method in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI

 getCopyOfValueFunction()  Method in class burlap.behavior.singleagent.planning.stochastic.DynamicProgramming

 getCorrdinationGameInitialState()  Static method in class burlap.domain.stochasticgames.gridgame.GridGame

Returns the initial state for a classic coordination game, where the agents' personal goals are on opposite sides.
 getCorrectDoorReward()  Method in class burlap.domain.singleagent.pomdp.tiger.TigerDomain

 getCorrelatedEQJointStrategy(CorrelatedEquilibriumSolver.CorrelatedEquilibriumObjective, double[][], double[][])  Static method in class burlap.behavior.stochasticgames.solvers.CorrelatedEquilibriumSolver

Returns the correlated equilibrium joint strategy in a 2D double matrix, which represents the probability of each joint action (where rows are player 1's actions and columns are player 2's actions).
 getCorrelatedEQJointStrategyEgalitarian(double[][], double[][])  Static method in class burlap.behavior.stochasticgames.solvers.CorrelatedEquilibriumSolver

Returns the correlated equilibrium joint strategy in a 2D double matrix for the Egalitarian objective.
 getCorrelatedEQJointStrategyLibertarianForCol(double[][], double[][])  Static method in class burlap.behavior.stochasticgames.solvers.CorrelatedEquilibriumSolver

Returns the correlated equilibrium joint strategy in a 2D double matrix for the Libertarian objective.
 getCorrelatedEQJointStrategyLibertarianForRow(double[][], double[][])  Static method in class burlap.behavior.stochasticgames.solvers.CorrelatedEquilibriumSolver

Returns the correlated equilibrium joint strategy in a 2D double matrix for the Libertarian objective.
 getCorrelatedEQJointStrategyRepublican(double[][], double[][])  Static method in class burlap.behavior.stochasticgames.solvers.CorrelatedEquilibriumSolver

Returns the correlated equilibrium joint strategy in a 2D double matrix for the Republican objective.
 getCorrelatedEQJointStrategyUtilitarian(double[][], double[][])  Static method in class burlap.behavior.stochasticgames.solvers.CorrelatedEquilibriumSolver

Returns the correlated equilibrium joint strategy in a 2D double matrix for the Utilitarian objective.
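The joint strategy these solvers return is a probability matrix over joint actions (rows for player 1's actions, columns for player 2's). As a minimal, self-contained illustration of how such a matrix is consumed, the sketch below computes each player's expected payoff under a joint distribution; the class, method, and matrices here are hypothetical and are not part of the BURLAP API.

```java
/** Illustrative only: expected payoffs under a joint strategy matrix. */
public class JointStrategyPayoff {

    /** Expected payoff for one player given their payoff matrix and a joint distribution. */
    public static double expectedPayoff(double[][] payoff, double[][] jointStrategy) {
        double v = 0.;
        for (int i = 0; i < payoff.length; i++) {
            for (int j = 0; j < payoff[i].length; j++) {
                v += jointStrategy[i][j] * payoff[i][j];
            }
        }
        return v;
    }

    public static void main(String[] args) {
        // a 2x2 coordination game: each player prefers matching actions
        double[][] rowPayoff = {{2., 0.}, {0., 1.}};
        double[][] colPayoff = {{1., 0.}, {0., 2.}};
        // a hypothetical correlated joint strategy: half the time (0,0), half (1,1)
        double[][] joint = {{0.5, 0.}, {0., 0.5}};
        System.out.println(expectedPayoff(rowPayoff, joint)); // 1.5
        System.out.println(expectedPayoff(colPayoff, joint)); // 1.5
    }
}
```

The Utilitarian objective, for instance, would prefer the joint distribution maximizing the sum of these two expected payoffs.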
 getCritique()  Method in class burlap.behavior.singleagent.learning.actorcritic.CritiqueResult

Returns the critique of this behavior.
 getCumulativeRewardFor(int)  Method in class burlap.mdp.stochasticgames.tournament.Tournament

Returns the cumulative reward received by the agent who is indexed at position i
 getCumulativeRewardForAgent(String)  Method in class burlap.mdp.stochasticgames.world.World

Returns the cumulative reward that the agent with name aname has received across all interactions in this world.
 getCurrentHiddenState()  Method in class burlap.mdp.singleagent.pomdp.SimulatedPOEnvironment

 getCurrentWorldState()  Method in class burlap.mdp.stochasticgames.world.World

Returns the current world state
 getCurState()  Method in class burlap.visualizer.StateRenderLayer

 getCurTime()  Method in class burlap.behavior.singleagent.learning.actorcritic.critics.TimeIndexedTDLambda

Returns the current time/depth of the current episode.
 getDataset()  Method in class burlap.behavior.singleagent.learning.lspi.LSPI

Returns the dataset this object uses for LSPI
 getDebugCode()  Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgent

 getDebugCode()  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling

Returns the debug code used for logging plan results with
DPrint
.
 getDebugCode()  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRL

Returns the debug code used for printing to the terminal
 getDebugCode()  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MultipleIntentionsMLIRL

Returns the debug code used for printing to the terminal
 getDebugCode()  Method in class burlap.behavior.singleagent.MDPSolver

 getDebugCode()  Method in interface burlap.behavior.singleagent.MDPSolverInterface

Returns the debug code used by this solver for calls to
DPrint
 getDebugCode()  Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling

Returns the debug code used for logging plan results with
DPrint
.
 getDebugId()  Method in class burlap.mdp.stochasticgames.world.World

This class will report execution information as games are played using the
DPrint
class.
 getDeepCopyOfPayoutArray(SingleStageNormalFormGame.AgentPayoutFunction[])  Static method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.AgentPayoutFunction

 getDefault()  Static method in class burlap.debugtools.RandomFactory

Returns the default random number generator.
 getDefaultReward()  Method in class burlap.domain.singleagent.lunarlander.LunarLanderRF

 getDefaultReward()  Method in class burlap.mdp.singleagent.common.GoalBasedRF

 getDefaultValue(State)  Method in class burlap.behavior.singleagent.planning.stochastic.DynamicProgramming

Returns the default V-value to use for the state.
 getDefaultWeight()  Method in class burlap.behavior.functionapproximation.dense.DenseLinearVFA

 getDelegateImplementing(Environment, Class<?>)  Static method in class burlap.mdp.singleagent.environment.extensions.EnvironmentDelegation.Helper

 getDiscountFactor()  Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgent

Returns the discount factor for this environment.
 getDomain()  Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgent

Returns the domain for this environment.
 getDomain()  Method in class burlap.behavior.singleagent.learnfromdemo.IRLRequest

 getDomain()  Method in class burlap.behavior.singleagent.MDPSolver

 getDomain()  Method in interface burlap.behavior.singleagent.MDPSolverInterface

Returns the
Domain
this solver solves.
 getDomain()  Method in class burlap.mdp.singleagent.pomdp.beliefstate.TabularBeliefState

 getDomain()  Method in class burlap.mdp.singleagent.pomdp.beliefstate.TabularBeliefUpdate

 getDomain()  Method in class burlap.mdp.stochasticgames.world.World

 getDomain()  Method in class burlap.shell.BurlapShell

 getDomain()  Method in class burlap.testing.TestGridWorld

 getEgalitarianObjective(double[][], double[][])  Static method in class burlap.behavior.stochasticgames.solvers.CorrelatedEquilibriumSolver

Returns the egalitarian objective for the given payoffs for the row and column player.
 getEnumeratedID(State)  Method in class burlap.behavior.singleagent.auxiliary.StateEnumerator

Gets the enumeration id for a state, creating one if it does not yet exist.
 getEnumeratedID(HashableState)  Method in class burlap.behavior.singleagent.auxiliary.StateEnumerator

Gets the enumeration id for a hashed state, creating one if it does not yet exist.
 getEnv()  Method in class burlap.shell.EnvironmentShell

 getEnvironmentDelegate()  Method in interface burlap.mdp.singleagent.environment.extensions.EnvironmentDelegation

 getEnvironmentDelegate()  Method in class burlap.mdp.singleagent.environment.extensions.EnvironmentServer

 getEpisodeWeights()  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRLRequest

Returns the expert episode weights.
 getEpsilon()  Method in class burlap.behavior.policy.EpsilonGreedy

Returns the epsilon value, where epsilon is the probability of taking a random action.
 getEpsilon()  Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest

 getExpertEpisodes()  Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest

 getExpertEpisodes()  Method in class burlap.behavior.singleagent.learnfromdemo.IRLRequest

 getFeatureGenerator()  Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest

 getFeatureWiseLearningRate(int)  Method in class burlap.behavior.learningrate.ExponentialDecayLR

Returns the learning rate data structure for the given state feature.
 getFeatureWiseTimeIndex(int)  Method in class burlap.behavior.learningrate.SoftTimeInverseDecayLR

Returns the learning rate data structure for the given state feature.
 getFlag(int)  Static method in class burlap.debugtools.DebugFlags

Returns the value for a given flag; 0 if the flag has never been created/set
 getFriendFoeInitialState()  Static method in class burlap.domain.stochasticgames.gridgame.GridGame

Returns the initial state for the Friend Foe game.
 getFVTile(double[])  Method in class burlap.behavior.functionapproximation.sparse.tilecoding.Tiling

Returns a tile for the given input vector.
 getGamma()  Method in class burlap.behavior.singleagent.learnfromdemo.IRLRequest

 getGamma()  Method in class burlap.behavior.singleagent.learning.modellearning.rmax.RMaxModel

 getGamma()  Method in class burlap.behavior.singleagent.MDPSolver

 getGamma()  Method in interface burlap.behavior.singleagent.MDPSolverInterface

Returns gamma, the discount factor used by this solver
 getGap(HashableState)  Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP

Returns the lower bound and upper bound value function margin/gap for the given state
 getGoalCondition()  Method in class burlap.mdp.singleagent.common.GoalBasedRF

Returns the goal condition for this reward function.
 getGoalReward()  Method in class burlap.domain.singleagent.lunarlander.LunarLanderRF

 getGoalReward()  Method in class burlap.mdp.singleagent.common.GoalBasedRF

 getGravity()  Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams

 getGridWorldValueFunctionVisualization(List<State>, int, int, ValueFunction, Policy)  Static method in class burlap.domain.singleagent.gridworld.GridWorldDomain

 getH()  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling

Returns the height of the tree
 getH()  Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling

Returns the height of the tree
 getHalfTrackLength()  Method in class burlap.domain.singleagent.cartpole.CartPoleDomain.CartPoleRewardFunction

 getHalfTrackLength()  Method in class burlap.domain.singleagent.cartpole.CartPoleDomain.CartPoleTerminalFunction

 getHash1(int, int)  Method in class burlap.testing.TestHashing

 getHash2(int, int)  Method in class burlap.testing.TestHashing

 getHash3(int, int)  Method in class burlap.testing.TestHashing

 getHashingFactory()  Method in class burlap.behavior.singleagent.MDPSolver

 getHashingFactory()  Method in interface burlap.behavior.singleagent.MDPSolverInterface

 getHashMap()  Method in class burlap.datastructures.HashedAggregator

Returns the HashMap that backs this object.
 getHawkDove()  Static method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame

Returns an instance of Hawk Dove, which is defined by:
 getHeight(OOState, BlocksWorldBlock)  Method in class burlap.domain.singleagent.blocksworld.BlocksWorldVisualizer.BlockPainter

 getHeight()  Method in class burlap.domain.singleagent.gridworld.GridWorldDomain

Returns this grid world's height
 getHelpText()  Method in class burlap.shell.BurlapShell

 getId()  Method in class burlap.domain.singleagent.graphdefined.GraphStateNode

 getIdentityScalar()  Method in class burlap.behavior.singleagent.learning.lspi.LSPI

Returns the initial LSPI identity matrix scalar used
 getIncredibleInitialState()  Static method in class burlap.domain.stochasticgames.gridgame.GridGame

Returns the initial state for the Incredible game (a game in which player 0 can give an incredible threat).
 getInd()  Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueDomain.RLGlueActionType

Returns the RLGlue int identifier of this action
 getInitialBeliefState(PODomain)  Static method in class burlap.domain.singleagent.pomdp.tiger.TigerDomain

Generates an initial
TabularBeliefState
in which it is equally uncertain where the tiger is (50/50).
 getInitialState(List<Episode>)  Static method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearning

Returns the initial state of a randomly chosen episode.
 getInitiationTest()  Method in class burlap.behavior.singleagent.options.SubgoalOption

Returns the object defining the initiation states.
 getInternalRewardFunction()  Method in class burlap.mdp.stochasticgames.agent.SGAgentBase

Returns the internal reward function used by the agent.
 getIs()  Method in class burlap.shell.BurlapShell

 getJAMap(State)  Method in class burlap.behavior.stochasticgames.madynamicprogramming.QSourceForSingleAgent.HashBackedQSource

Returns the Map from joint actions to Q-values for a given state.
 getJointActionModel()  Method in class burlap.mdp.stochasticgames.SGDomain

Returns the joint action model associated with this domain.
 getJointActions()  Method in class burlap.behavior.stochasticgames.GameEpisode

Returns the joint action sequence list object
 getJointPolicy()  Method in class burlap.behavior.stochasticgames.PolicyFromJointPolicy

Returns the underlying joint policy
 getJointRewardFunction()  Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame

 getJointRewards()  Method in class burlap.behavior.stochasticgames.GameEpisode

Returns the joint reward sequence list object
 getK()  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MultipleIntentionsMLIRLRequest

Returns the number of clusters.
 getLastComputedColStrategy()  Method in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.BimatrixEquilibriumSolver

 getLastComputedRowStrategy()  Method in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.BimatrixEquilibriumSolver

 getLastJointAction()  Method in class burlap.mdp.stochasticgames.world.World

Returns the last joint action taken in this world; null if none have been taken yet.
 getLastLearningEpisode()  Method in class burlap.behavior.singleagent.learning.actorcritic.ActorCritic

 getLastLearningEpisode()  Method in class burlap.behavior.singleagent.learning.lspi.LSPI

 getLastLearningEpisode()  Method in class burlap.behavior.singleagent.learning.modellearning.artdp.ARTDP

 getLastLearningEpisode()  Method in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax

 getLastNumSteps()  Method in class burlap.behavior.singleagent.learning.tdmethods.QLearning

Returns the number of steps taken in the last episode.
 getLastNumSteps()  Method in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam

Returns the number of steps taken in the last episode.
 getLastRewards()  Method in class burlap.mdp.stochasticgames.world.World

Returns the last rewards received.
 getLatestTrial()  Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.DatasetsAndTrials

 getLearnedPolicy(ApprenticeshipLearningRequest)  Static method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearning

Computes a policy that models the expert trajectories included in the request object.
 getLearningPolicy()  Method in class burlap.behavior.singleagent.learning.lspi.LSPI

 getLevel1(Domain)  Static method in class burlap.domain.singleagent.blockdude.BlockDudeLevelConstructor

Returns the initial
State
of the first level.
 getLevel2(Domain)  Static method in class burlap.domain.singleagent.blockdude.BlockDudeLevelConstructor

Returns the initial
State
of the second level.
 getLevel3(Domain)  Static method in class burlap.domain.singleagent.blockdude.BlockDudeLevelConstructor

Returns the initial
State
of the third level.
 getListenAccuracy()  Method in class burlap.domain.singleagent.pomdp.tiger.TigerDomain

 getListenReward()  Method in class burlap.domain.singleagent.pomdp.tiger.TigerDomain

 getLocation(OOState, String)  Method in class burlap.domain.stochasticgames.gridgame.GridGameStandardMechanics

Returns the xy position of an agent stored in a Location2 object.
 getLsActions()  Method in class burlap.shell.command.world.ManualAgentsCommands

 getLsAgents()  Method in class burlap.shell.command.world.ManualAgentsCommands

 getManualAgent(String)  Method in class burlap.shell.command.world.ManualAgentsCommands

 getManualAgents()  Method in class burlap.shell.command.world.ManualAgentsCommands

 getMap()  Method in class burlap.domain.singleagent.gridworld.GridWorldDomain

Returns a deep copy of the map being used for the domain
 getMapped(int)  Static method in class burlap.debugtools.RandomFactory

Returns the random generator with the associated id or creates it if it does not yet exist
 getMapped(String)  Static method in class burlap.debugtools.RandomFactory

Returns the random generator with the associated String id or creates it if it does not yet exist
 getMaskedObjectClasses()  Method in class burlap.statehashing.masked.MaskedConfig

 getMaskedVariables()  Method in class burlap.statehashing.masked.MaskedConfig

 getMatchingPennies()  Static method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame

Returns an instance of Matching Pennies, which is defined by:
 getMatchingPreference(HashableState, Action, BoltzmannActor.PolicyNode)  Method in class burlap.behavior.singleagent.learning.actorcritic.actor.BoltzmannActor

Returns the stored BoltzmannActor.ActionPreference
that is stored in a policy node.
 getMaxAbsoluteAngle()  Method in class burlap.domain.singleagent.cartpole.CartPoleDomain.CartPoleRewardFunction

 getMaxAbsoluteAngle()  Method in class burlap.domain.singleagent.cartpole.CartPoleDomain.CartPoleTerminalFunction

 getMaxChange()  Method in class burlap.behavior.singleagent.learning.lspi.LSPI

 getMaxDim()  Method in class burlap.domain.stochasticgames.gridgame.GridGame

Returns the maximum dimension of the world (its width and height).
 getMaxGT()  Method in class burlap.domain.stochasticgames.gridgame.GridGame

Returns the maximum number of goal types.
 getMaxIterations()  Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest

 getMaxLearningSteps()  Method in class burlap.behavior.singleagent.learning.lspi.LSPI

 getMaxNumPlanningIterations()  Method in class burlap.behavior.singleagent.learning.lspi.LSPI

 getMaxPlyrs()  Method in class burlap.domain.stochasticgames.gridgame.GridGame

Returns the max number of players
 getMaxQ(HashableState)  Method in class burlap.behavior.singleagent.learning.tdmethods.QLearning

Returns the maximum Q-value in the hashed state.
 getMaxQValue(State)  Method in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent

Returns the maximum numeric Q-value for a given state.
 getMaxWT()  Method in class burlap.domain.stochasticgames.gridgame.GridGame

Returns the maximum number of wall types
 getMaxx()  Method in class burlap.domain.singleagent.blockdude.BlockDude

 getMaxy()  Method in class burlap.domain.singleagent.blockdude.BlockDude

 getMinNewStepsForLearningPI()  Method in class burlap.behavior.singleagent.learning.lspi.LSPI

The minimum number of new learning observations before policy iteration is run again.
 getModel()  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling

 getModel()  Method in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax

Returns the model learning algorithm being used.
 getModel()  Method in class burlap.behavior.singleagent.MDPSolver

 getModel()  Method in interface burlap.behavior.singleagent.MDPSolverInterface

Returns the model being used by this solver
 getModel()  Method in class burlap.behavior.singleagent.planning.stochastic.DynamicProgramming

 getModel()  Method in class burlap.mdp.singleagent.SADomain

Returns the
SampleModel
associated with this domain, or null if one is not defined.
 getModeledRewardFunction()  Method in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax

Returns the model reward function.
 getModeledTerminalFunction()  Method in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax

Returns the model terminal function.
 getModelPlanner()  Method in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax

Returns the planning algorithm used on the model that can be iteratively updated as the model changes.
 getMultiLayerRenderer()  Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI

 getMyQSource()  Method in class burlap.behavior.stochasticgames.agents.maql.MultiAgentQLearning

Returns this agent's individual Q-value source.
 getName()  Method in class burlap.behavior.singleagent.options.MacroAction

 getName()  Method in class burlap.behavior.singleagent.options.SubgoalOption

 getName()  Method in class burlap.domain.singleagent.blockdude.state.BlockDudeCell

 getName()  Method in class burlap.domain.singleagent.blocksworld.BlocksWorldBlock

 getName()  Method in class burlap.domain.singleagent.frostbite.state.FrostbitePlatform

 getName()  Method in class burlap.domain.singleagent.gridworld.state.GridAgent

 getName()  Method in class burlap.domain.singleagent.gridworld.state.GridLocation

 getName()  Method in class burlap.domain.singleagent.lunarlander.state.LLBlock

 getName()  Method in class burlap.domain.stochasticgames.gridgame.state.GGAgent

 getName()  Method in class burlap.domain.stochasticgames.gridgame.state.GGGoal

 getName()  Method in class burlap.domain.stochasticgames.gridgame.state.GGWall

 getName()  Method in class burlap.mdp.core.action.SimpleAction

 getName()  Method in class burlap.mdp.core.oo.propositional.PropositionalFunction

Returns the name of this propositional function.
 getNegatedArray(double[])  Static method in class burlap.behavior.stochasticgames.solvers.GeneralBimatrixSolverTools

Returns a negated version of the input array in a new array object.
 getNegatedMatrix(double[][])  Static method in class burlap.behavior.stochasticgames.solvers.GeneralBimatrixSolverTools

Returns a negated version of the input matrix in a new matrix object.
 getNewState(int)  Static method in class burlap.domain.singleagent.blocksworld.BlocksWorld

Creates a new state with nBlocks block objects in it.
 getNextAction()  Method in class burlap.shell.command.world.ManualAgentsCommands.ManualSGAgent

 getNextMatch()  Method in class burlap.mdp.stochasticgames.tournament.common.AllPairWiseSameTypeMS

 getNextMatch()  Method in interface burlap.mdp.stochasticgames.tournament.MatchSelector

Returns the next match information, which is a list of
MatchEntry
objects
 getNextState(State, Action)  Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP

Selects a next state for expansion when action a is applied in state s.
 getNextStateByMaxMargin(State, Action)  Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP

Selects a next state for expansion when action a is applied in state s according to the next possible state that has the largest lower and upper bound margin.
 getNextStateBySampling(State, Action)  Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP

Selects a next state for expansion when action a is applied in state s by randomly sampling from the transition dynamics weighted by the margin of the lower and
upper bound value functions.
 getNode(HashableState)  Method in class burlap.behavior.singleagent.learning.actorcritic.actor.BoltzmannActor

Returns the policy node that stores the action preferences for state.
 getNodeFor(HashableState)  Method in class burlap.behavior.singleagent.planning.stochastic.valueiteration.PrioritizedSweeping

Returns, or creates and stores, a priority backpointer node for the given hashed state.
 getNodeTransitionTo(Set<GraphDefinedDomain.NodeTransitionProbability>, int)  Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain

 getNonZeroPartialDerivatives()  Method in interface burlap.behavior.functionapproximation.FunctionGradient

Returns all nonzero partial derivatives.
 getNonZeroPartialDerivatives()  Method in class burlap.behavior.functionapproximation.FunctionGradient.SparseGradient
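A sparse gradient stores only the nonzero partial derivatives, so iterating over them skips every weight whose partial is zero. The JDK-only sketch below shows the underlying idea as a map from weight id to partial derivative; the class and method names are hypothetical, not BURLAP's `FunctionGradient.SparseGradient`.

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical sketch of a sparse gradient: only nonzero partials are stored. */
public class SparseGradientSketch {

    private final Map<Integer, Double> partials = new HashMap<>();

    /** Sets the partial derivative for a weight; setting zero removes the entry. */
    public void put(int weightId, double partial) {
        if (partial == 0.) {
            partials.remove(weightId);
        } else {
            partials.put(weightId, partial);
        }
    }

    /** Returns the partial derivative for a weight; absent entries are zero. */
    public double partialDerivative(int weightId) {
        return partials.getOrDefault(weightId, 0.);
    }

    /** Number of stored (nonzero) partial derivatives. */
    public int numNonZero() {
        return partials.size();
    }
}
```

This representation keeps gradient updates proportional to the number of active features rather than the total weight count, which is the point of sparse function approximation such as tile coding.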

 getNormForce(double, double, double)  Method in class burlap.domain.singleagent.cartpole.model.CPCorrectModel

Computes the normal force for the corrected model
 getNothingReward()  Method in class burlap.domain.singleagent.pomdp.tiger.TigerDomain

 getNSEWPolicyGlyphPainter(Object, Object, VariableDomain, VariableDomain, double, double, String, String, String, String)  Static method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.ArrowActionGlyph

Creates and returns a
PolicyGlyphPainter2D
where the x and y position attributes are named xAtt and yAtt, respectively, and belong to the class
classWithPositionAtts; the north, south, east, and west actions have the corresponding names and
will be rendered using
ArrowActionGlyph
objects.
 getNumActions()  Method in class burlap.behavior.functionapproximation.dense.DenseCrossProductFeatures

 getNumAgents()  Method in class burlap.mdp.stochasticgames.tournament.Tournament

Returns the number of agents who are playing in this tournament
 getNumberOfBellmanUpdates()  Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP

Returns the total number of Bellman updates across all planning
 getNumberOfBellmanUpdates()  Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.RTDP

Returns the total number of Bellman updates across all planning
 getNumberOfStateNodesCreated()  Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling

Returns the total number of state nodes that have been created.
 getNumberOfSteps()  Method in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP

Returns the total number of planning steps that have been performed.
 getNumberOfValueEsitmates()  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling

 getNumberOfValueEsitmates()  Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling

 getNumberPlatformCol()  Method in class burlap.domain.singleagent.frostbite.FrostbiteModel

 getNumNodes()  Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain

Returns the number of state nodes specified in this domain
 getNumPlayers()  Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame

Returns the number of players in the domain to be generated
 getNumSamplesForPlanning()  Method in class burlap.behavior.singleagent.learning.lspi.LSPI

 getNumVisited()  Method in class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.DFS

Returns the number of search nodes visited/expanded.
 getObjectParameters()  Method in interface burlap.mdp.core.oo.ObjectParameterizedAction

Returns the parameters of this
Action
that correspond to OOMDP objects.
 getObjectParameters()  Method in class burlap.mdp.singleagent.oo.ObjectParameterizedActionType.SAObjectParameterizedAction

 getObjectsByClass()  Method in class burlap.mdp.core.oo.state.generic.GenericOOState

Getter method for underlying data to support serialization.
 getObjectsMap()  Method in class burlap.mdp.core.oo.state.generic.GenericOOState

Getter method for underlying data to support serialization.
 getObs()  Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueState

 getObservationFunction()  Method in class burlap.mdp.singleagent.pomdp.PODomain

 getOperator()  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableDP

 getOperator()  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling

 getOperator()  Method in class burlap.behavior.singleagent.planning.stochastic.DynamicProgramming

Returns the dynamic programming operator used
 getOperator()  Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling

 getOpponent()  Method in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.EquilibriumPlayingSGAgent

Returns the
SGAgent
object in the world for the opponent.
 getOrCreate(int)  Method in class burlap.behavior.functionapproximation.sparse.SparseCrossProductFeatures.FeaturesMap

 getOrCreateActionNode(HashableState, Action)  Method in class burlap.behavior.singleagent.learning.modellearning.models.TabularModel

Returns the TabularModel.StateActionNode
object associated with the given hashed state and action.
 getOrCreateModel(Option)  Method in class burlap.behavior.singleagent.options.model.BFSMarkovOptionModel

 getOrGenerateActionFeatureList(Map<Tiling.FVTile, List<TileCodingFeatures.ActionFeatureID>>, Tiling.FVTile)  Method in class burlap.behavior.functionapproximation.sparse.tilecoding.TileCodingFeatures

Returns, or creates and stores, the list of action feature ids in the given map for the given tile.
 getOrGenerateFeature(Map<Tiling.FVTile, Integer>, Tiling.FVTile)  Method in class burlap.behavior.functionapproximation.sparse.tilecoding.TileCodingFeatures

Returns the stored feature id or creates, stores and returns one.
 getOrGenerateObjectClassList(String)  Method in class burlap.mdp.core.oo.state.generic.GenericOOState

 getOrSeedDefault(long)  Static method in class burlap.debugtools.RandomFactory

Either returns the default random generator if it has already been created, or creates it with the given seed if it has not.
 getOrSeedMapped(int, long)  Static method in class burlap.debugtools.RandomFactory

Either returns the random generator for the given id, or creates it with the given seed if it does not yet exist.
 getOrSeedMapped(String, long)  Static method in class burlap.debugtools.RandomFactory

Either returns the random generator for the given String id, or creates it with the given seed if it does not yet exist.
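RandomFactory hands out shared `java.util.Random` instances keyed by id, so separate components can draw from common, reproducibly seeded streams. A minimal JDK-only sketch of that pattern (the class below is hypothetical, not BURLAP's implementation):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

/** Hypothetical sketch of an id-mapped random-generator factory. */
public class MappedRandoms {

    private static final Map<String, Random> generators = new HashMap<>();

    /** Returns the generator for id, creating an unseeded one if it does not yet exist. */
    public static synchronized Random getMapped(String id) {
        return generators.computeIfAbsent(id, k -> new Random());
    }

    /** Returns the generator for id, seeding it only on first creation. */
    public static synchronized Random getOrSeedMapped(String id, long seed) {
        return generators.computeIfAbsent(id, k -> new Random(seed));
    }
}
```

Because every caller asking for the same id receives the same `Random` instance, seeding each id once at startup makes an entire experiment reproducible without threading generator references through the code.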
 getOs()  Method in class burlap.shell.BurlapShell

 getPainter()  Method in class burlap.mdp.singleagent.common.VisualActionObserver

 getParameter(int)  Method in class burlap.behavior.functionapproximation.dense.DenseLinearVFA

 getParameter(int)  Method in class burlap.behavior.functionapproximation.dense.DenseStateActionLinearVFA

 getParameter(int)  Method in interface burlap.behavior.functionapproximation.ParametricFunction

Returns the value of the ith parameter.
 getParameter(int)  Method in class burlap.behavior.functionapproximation.sparse.LinearVFA

 getParameter(int)  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs.LinearStateActionDifferentiableRF

 getParameter(int)  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs.LinearStateDifferentiableRF

 getParameter(int)  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.DiffVFRF

 getParameter(int)  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit

 getParameter(int)  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearStateDiffVF

 getParameter(int)  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.VanillaDiffVinit

 getParameterClasses()  Method in class burlap.mdp.core.oo.propositional.PropositionalFunction

Returns the object classes of the parameters for this propositional function
 getParameterClasses()  Method in class burlap.mdp.singleagent.oo.ObjectParameterizedActionType

Returns a String array of the names of the object classes to which bound parameters must belong.
 getParameterOrderGroups()  Method in class burlap.mdp.core.oo.propositional.PropositionalFunction

Returns the parameter order group names for the parameters of this propositional function.
 getParameterOrderGroups()  Method in class burlap.mdp.singleagent.oo.ObjectParameterizedActionType

Returns a String array specifying the parameter order group of each parameter.
 getPartialDerivative(int)  Method in interface burlap.behavior.functionapproximation.FunctionGradient

Returns the partial derivative for the given weight
 getPartialDerivative(int)  Method in class burlap.behavior.functionapproximation.FunctionGradient.SparseGradient

 getPayout(SingleStageNormalFormGame.StrategyProfile)  Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.AgentPayoutFunction

Returns the payout for a given strategy profile
 getPayout(int, String...)  Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame

Returns the payout that player pn
receives for the given strategy profile.
 getPayout(int, int...)  Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame

Returns the payout that player pn
receives for the given strategy profile.
 getPersonalGoalReward(OOState, String)  Method in class burlap.domain.stochasticgames.gridgame.GridGame.GGJointRewardFunction

Returns the personal goal rewards.
 getPhysParams()  Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain

 getPlanner()  Method in class burlap.behavior.singleagent.learnfromdemo.IRLRequest

 getPlannerFactory()  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MultipleIntentionsMLIRLRequest

 getPlannerInstance()  Method in class burlap.behavior.stochasticgames.agents.madp.MADPPlannerFactory.ConstantMADPPlannerFactory

 getPlannerInstance()  Method in interface burlap.behavior.stochasticgames.agents.madp.MADPPlannerFactory

 getPlannerInstance()  Method in class burlap.behavior.stochasticgames.agents.madp.MADPPlannerFactory.MAVIPlannerFactory

 getPlanningCollector()  Method in class burlap.behavior.singleagent.learning.lspi.LSPI

 getPlanningDepth()  Method in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI

Returns the Bellman operator depth used during planning.
 getPlayerNumberForAgent(String)  Method in class burlap.mdp.stochasticgames.world.World

Returns the player index for the agent with the given name.
 getPolicy()  Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.PolicyRenderLayer

Returns the policy that will be rendered.
 getPolicy()  Method in class burlap.behavior.singleagent.learning.actorcritic.ActorCritic

Returns the policy/actor of this learning algorithm.
 getPolicy()  Method in class burlap.behavior.singleagent.options.SubgoalOption

 getPolicyCount()  Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest

 getPolicyReachableHashedStates(SADomain, EnumerablePolicy, State, HashableStateFactory)  Static method in class burlap.behavior.singleagent.auxiliary.StateReachability

Finds the set of states (
HashableState
) that are reachable under a policy from a source state.
 getPolicyReachableStates(SADomain, EnumerablePolicy, State, HashableStateFactory)  Static method in class burlap.behavior.singleagent.auxiliary.StateReachability

Finds the set of states that are reachable under a policy from a source state.
 getPolynomialDegree()  Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.LandmarkColorBlendInterpolation

Returns the power to raise the normalized distance
 getPositiveMatrix(double[][])  Static method in class burlap.behavior.stochasticgames.solvers.GeneralBimatrixSolverTools

Creates a new matrix (m2) whose values are the values of m shifted by a constant amount c such that all the values
in m2 are positive.
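The getPositiveMatrix transformation described above can be sketched in standalone form (a hypothetical illustration, not the BURLAP implementation):

```java
// Sketch of shifting a matrix by a constant c so that every entry is
// positive, while preserving all pairwise differences between entries.
class MatrixShiftSketch {
    static double[][] positiveShift(double[][] m) {
        double min = Double.POSITIVE_INFINITY;
        for (double[] row : m) for (double v : row) min = Math.min(min, v);
        double c = min <= 0 ? 1.0 - min : 0.0; // shift only when needed
        double[][] out = new double[m.length][];
        for (int i = 0; i < m.length; i++) {
            out[i] = new double[m[i].length];
            for (int j = 0; j < m[i].length; j++) out[i][j] = m[i][j] + c;
        }
        return out;
    }
}
```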
 getPossibleBindingsGivenParamOrderGroups(OOState, String[], String[])  Static method in class burlap.mdp.core.oo.state.OOStateUtilities

Returns all possible object assignments from a
OOState
for a given set of parameters that are typed
to OOMDP object classes.
 getPossibleCollisionOutcomes(List<GridGameStandardMechanics.Location2>, List<GridGameStandardMechanics.Location2>)  Method in class burlap.domain.stochasticgames.gridgame.GridGameStandardMechanics

 getPossibleLocationsFromWallCollisions(OOState, GridGameStandardMechanics.Location2, GridGameStandardMechanics.Location2, List<GridGameStandardMechanics.Location2>)  Method in class burlap.domain.stochasticgames.gridgame.GridGameStandardMechanics

Returns the list of possible outcome locations for a given start point and desired position change.
 getPotentialFunction()  Method in class burlap.behavior.singleagent.learning.modellearning.rmax.RMaxModel

 getPreferences()  Method in class burlap.datastructures.BoltzmannDistribution

Returns the input preferences
 getPrisonersDilemma()  Static method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame

Returns an instance of Prisoner's Dilemma, which is defined by:
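The payoff definition referenced above is not reproduced in this index; as a hypothetical sketch, a two-player payout lookup using conventional Prisoner's Dilemma values (BURLAP's actual defaults may differ):

```java
// Sketch of a normal-form game payout table. Action 0 = cooperate,
// action 1 = defect; values are the conventional PD payoffs, chosen
// here only for illustration.
class PDSketch {
    // PAYOFFS[rowAction][colAction] = {rowPlayerPayout, colPlayerPayout}
    private static final double[][][] PAYOFFS = {
        { {3, 3}, {0, 5} },
        { {5, 0}, {1, 1} }
    };

    static double payout(int player, int rowAction, int colAction) {
        return PAYOFFS[rowAction][colAction][player];
    }
}
```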
 getPrisonersDilemmaInitialState()  Static method in class burlap.domain.stochasticgames.gridgame.GridGame

Returns the initial state for a classic prisoner's dilemma formulated in a Grid Game.
 getProbabilities()  Method in class burlap.datastructures.BoltzmannDistribution

Returns the output probability distribution.
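The BoltzmannDistribution entries above convert input preferences into a softmax distribution controlled by a temperature parameter. A minimal self-contained sketch of that computation (independent of the BURLAP class itself):

```java
// Sketch of a Boltzmann (softmax) distribution over preferences.
// Lower temperatures concentrate probability mass on the highest preference.
class BoltzmannSketch {
    static double[] probabilities(double[] preferences, double temperature) {
        double max = Double.NEGATIVE_INFINITY;
        for (double v : preferences) max = Math.max(max, v);
        double[] p = new double[preferences.length];
        double sum = 0.0;
        for (int i = 0; i < preferences.length; i++) {
            // subtract the max preference for numerical stability
            p[i] = Math.exp((preferences[i] - max) / temperature);
            sum += p[i];
        }
        for (int i = 0; i < p.length; i++) p[i] /= sum;
        return p;
    }
}
```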
 getQ(HashableState, Action)  Method in class burlap.behavior.singleagent.learning.tdmethods.QLearning

Returns the Q-value for a given hashed state and action.
 getQGreedyNode(UCTStateNode)  Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTTreeWalkPolicy

Returns the
UCTActionNode
with the highest average sample return.
 getQs(HashableState)  Method in class burlap.behavior.singleagent.learning.tdmethods.QLearning

Returns the possible Q-values for a given hashed state.
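The QLearning accessors above expose Q-values stored per hashed state. The tabular backup that produces those values can be sketched in standalone form (hypothetical code, not the BURLAP implementation):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the tabular Q-learning update behind accessors like getQ/getQs.
class QTableSketch {
    private final Map<String, Double> q = new HashMap<>(); // key: "state|action"

    double get(String state, String action) {
        return q.getOrDefault(state + "|" + action, 0.0);
    }

    // Standard Q-learning backup: Q(s,a) += alpha * (r + gamma * maxQ(s') - Q(s,a))
    void update(String s, String a, double r, double maxQNext,
                double alpha, double gamma) {
        double old = get(s, a);
        q.put(s + "|" + a, old + alpha * (r + gamma * maxQNext - old));
    }
}
```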
 getQSources()  Method in class burlap.behavior.stochasticgames.agents.maql.MultiAgentQLearning

 getQSources()  Method in class burlap.behavior.stochasticgames.madynamicprogramming.MADynamicProgramming

 getQSources()  Method in interface burlap.behavior.stochasticgames.madynamicprogramming.MultiAgentQSourceProvider

Returns an object that can provide Q-value sources for each agent.
 getQValueFor(State, JointAction)  Method in class burlap.behavior.stochasticgames.madynamicprogramming.MADynamicProgramming.BackupBasedQSource

 getQValueFor(State, JointAction)  Method in interface burlap.behavior.stochasticgames.madynamicprogramming.QSourceForSingleAgent

Returns a Q-value (represented with a
JAQValue
object) stored for the given state and joint action.
 getQValueFor(State, JointAction)  Method in class burlap.behavior.stochasticgames.madynamicprogramming.QSourceForSingleAgent.HashBackedQSource

 getRandomGenerator()  Method in class burlap.behavior.policy.RandomPolicy

Returns the random generator used for action selection.
 getRandomObject()  Method in class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator

Returns the random object used for generating states
 getReachableHashedStates(State, SADomain, HashableStateFactory)  Static method in class burlap.behavior.singleagent.auxiliary.StateReachability

Returns the set of
State
objects that are reachable from a source state.
 getReachableHashedStates(State, SADomain, HashableStateFactory)  Method in class burlap.testing.TestHashing

 getReachableStates(State, SADomain, HashableStateFactory)  Static method in class burlap.behavior.singleagent.auxiliary.StateReachability

Returns the list of
State
objects that are reachable from a source state.
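The StateReachability methods above enumerate the states reachable from a source state. The underlying idea is a breadth-first expansion over transitions; a hypothetical graph-based sketch:

```java
import java.util.*;

// Sketch of reachable-state enumeration via BFS over a transition map,
// mirroring the idea behind StateReachability (not the BURLAP code itself).
class ReachabilitySketch {
    static Set<Integer> reachable(Map<Integer, List<Integer>> transitions, int source) {
        Set<Integer> seen = new HashSet<>();
        Deque<Integer> open = new ArrayDeque<>();
        seen.add(source);
        open.add(source);
        while (!open.isEmpty()) {
            int s = open.poll();
            for (int next : transitions.getOrDefault(s, Collections.emptyList())) {
                if (seen.add(next)) open.add(next); // add() is true only for new states
            }
        }
        return seen;
    }
}
```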
 getRecCommand()  Method in class burlap.shell.command.env.EpisodeRecordingCommands

 getRegCommand()  Method in class burlap.shell.command.world.ManualAgentsCommands

 getRegisteredAgents()  Method in class burlap.mdp.stochasticgames.world.World

Returns the list of agents participating in this world.
 getRenderAction()  Method in class burlap.visualizer.StateActionRenderLayer

Returns the
Action
that is/will be rendered
 getRenderLayer(Domain, int[][])  Static method in class burlap.domain.singleagent.gridworld.GridWorldVisualizer

Deprecated.
 getRenderLayer(int[][])  Static method in class burlap.domain.singleagent.gridworld.GridWorldVisualizer

Returns a state render layer for a grid world domain with the provided wall map.
 getRenderState()  Method in class burlap.visualizer.StateActionRenderLayer

Returns the
State
that is/will be rendered
 getRenderStyle()  Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.PolicyGlyphPainter2D

Returns the rendering style
 getRepatedGameActionModel()  Static method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame

Returns a repeated game joint action model.
 getRepublicanObjective(double[][])  Static method in class burlap.behavior.stochasticgames.solvers.CorrelatedEquilibriumSolver

Returns the republican/libertarian objective for the given player's payoffs that are to be maximized.
 getRewardForTransitionsTo(int, int)  Method in class burlap.domain.singleagent.gridworld.GridWorldRewardFunction

Returns the reward this reward function will return when the agent transitions to position x, y.
 getRewardFunction()  Method in class burlap.mdp.stochasticgames.world.World

 getRewardMatrix()  Method in class burlap.domain.singleagent.gridworld.GridWorldRewardFunction

Returns the reward matrix this reward function uses.
 getRf()  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRLRequest

 getRf()  Method in class burlap.domain.singleagent.blockdude.BlockDude

 getRf()  Method in class burlap.domain.singleagent.blocksworld.BlocksWorld

 getRf()  Method in class burlap.domain.singleagent.cartpole.CartPoleDomain

 getRf()  Method in class burlap.domain.singleagent.cartpole.InvertedPendulum

 getRf()  Method in class burlap.domain.singleagent.frostbite.FrostbiteDomain

 getRf()  Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain

 getRf()  Method in class burlap.domain.singleagent.gridworld.GridWorldDomain

 getRf()  Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain

 getRf()  Method in class burlap.domain.singleagent.mountaincar.MountainCar

 getRf()  Method in class burlap.mdp.singleagent.model.FactoredModel

 getRfDim()  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit

 getRfFvGen()  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit

 getRLGlueAction(int)  Static method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgent

Returns the corresponding RLGlue action for the given action id.
 getRoot()  Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT

Returns the root node of the UCT tree.
 getRootEnvironmentDelegate(EnvironmentDelegation)  Static method in class burlap.mdp.singleagent.environment.extensions.EnvironmentDelegation.Helper

 getRowPlayersStrategy(double[][])  Static method in class burlap.behavior.stochasticgames.solvers.MinMaxSolver

Computes the minmax strategy for the row player of the given payoff matrix.
 getS()  Method in class burlap.behavior.singleagent.learning.actorcritic.CritiqueResult

Returns the source state of this behavior.
 getS()  Method in class burlap.statehashing.WrappedHashableState

Getter for Java Bean serialization purposes.
 getSaFeatures()  Method in class burlap.behavior.singleagent.learning.lspi.LSPI

Returns the state-action features used.
 getSamples()  Method in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI

Returns the state samples to which the value function will be fit.
 getScale()  Method in class burlap.domain.singleagent.frostbite.FrostbiteDomain

 getScale()  Method in class burlap.domain.singleagent.frostbite.FrostbiteModel

 getSelectionActions()  Method in class burlap.behavior.policy.RandomPolicy

Returns the list of actions that can be randomly selected.
 getSemiWallProb()  Method in class burlap.domain.stochasticgames.gridgame.GridGame

Returns the probability that an agent can pass through a semi-wall.
 getSetAction()  Method in class burlap.shell.command.world.ManualAgentsCommands

 getShell()  Method in class burlap.shell.visual.SGVisualExplorer

Returns the
SGWorldShell
associated with this visual explorer.
 getShell()  Method in class burlap.shell.visual.VisualExplorer

 getSimpleGameInitialState()  Static method in class burlap.domain.stochasticgames.gridgame.GridGame

Returns the initial state for a simple game in which both players can win without interfering with one another.
 getSoftTieRenderStyleDelta()  Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.PolicyGlyphPainter2D

Returns the soft difference between max actions used to determine ties when the MAXACTIONSOFTTIE render style is used.
 getSourceModel()  Method in class burlap.behavior.singleagent.learning.modellearning.rmax.RMaxModel

 getSparseSamplingPlanner()  Method in class burlap.behavior.singleagent.pomdp.wrappedmdpalgs.BeliefSparseSampling

 getSparseStateFeatures()  Method in class burlap.behavior.functionapproximation.dense.SparseToDenseFeatures

 getSpp()  Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.PolicyRenderLayer

Returns the statewise policy painter
 getSpp()  Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI

Returns the statewise policy painter
 getSprime()  Method in class burlap.behavior.singleagent.learning.actorcritic.CritiqueResult

Returns the resulting state of this behavior.
 getStackBottom(OOState, BlocksWorldBlock)  Method in class burlap.domain.singleagent.blocksworld.BlocksWorldVisualizer.BlockPainter

 getStagHunt()  Static method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame

Returns an instance of Stag Hunt, which is defined by:
 getStandardGridGameAgentType(SGDomain)  Static method in class burlap.domain.stochasticgames.gridgame.GridGame

Creates and returns a standard
SGAgentType
for grid games.
 getStartStateGenerator()  Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest

 getStateActionNode(HashableState, Action)  Method in class burlap.behavior.singleagent.learning.modellearning.models.TabularModel

Returns the TabularModel.StateActionNode
object associated with the given hashed state and action.
 getStateEnumerator()  Method in class burlap.mdp.singleagent.pomdp.beliefstate.TabularBeliefState

 getStateEnumerator()  Method in class burlap.mdp.singleagent.pomdp.beliefstate.TabularBeliefUpdate

 getStateEnumerator()  Method in class burlap.mdp.singleagent.pomdp.PODomain

Gets the
StateEnumerator
used by this domain to enumerate all underlying MDP states.
 getStateFeatures()  Method in class burlap.behavior.functionapproximation.dense.DenseCrossProductFeatures

 getStateFeatures()  Method in class burlap.behavior.functionapproximation.dense.DenseLinearVFA

 getStateForEnumerationId(int)  Method in class burlap.behavior.singleagent.auxiliary.StateEnumerator

Returns the state associated with the given enumeration id.
 getStateGenerator()  Method in class burlap.mdp.singleagent.environment.SimulatedEnvironment

 getStateModel()  Method in class burlap.mdp.singleagent.model.FactoredModel

 getStateNode(State, int)  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling

Returns the state node for the given state at the given height in the tree, creating and indexing it first if it does not already exist.
 getStateNode(HashableState)  Method in class burlap.behavior.singleagent.learning.tdmethods.QLearning

 getStateNode(State, int)  Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling

Returns the state node for the given state at the given height in the tree, creating and indexing it first if it does not already exist.
 getStatePainters()  Method in class burlap.visualizer.StateRenderLayer

 getStateRenderLayer(int, int)  Static method in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer

 getStateRenderLayer(int)  Static method in class burlap.domain.singleagent.blocksworld.BlocksWorldVisualizer

 getStateRenderLayer(LunarLanderDomain.LLPhysicsParams)  Static method in class burlap.domain.singleagent.lunarlander.LLVisualizer

 getStateRenderLayer(MountainCar.MCPhysicsParams)  Static method in class burlap.domain.singleagent.mountaincar.MountainCarVisualizer

 getStateRenderLayer()  Method in class burlap.visualizer.Visualizer

 getStates()  Method in class burlap.behavior.stochasticgames.GameEpisode

Returns the state sequence list object
 getStateSpace()  Method in class burlap.mdp.singleagent.pomdp.beliefstate.TabularBeliefState

Returns the set of underlying MDP states this belief vector spans.
 getStatesToVisualize()  Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.PolicyRenderLayer

Returns the states that will be visualized
 getStatesToVisualize()  Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionRenderLayer

Returns the states that will be visualized
 getStateWiseLearningRate(State)  Method in class burlap.behavior.learningrate.ExponentialDecayLR

Returns the learning rate data structure for the given state.
 getStateWiseTimeIndex(State)  Method in class burlap.behavior.learningrate.SoftTimeInverseDecayLR

Returns the learning rate data structure for the given state.
 getStoredEntry(T)  Method in class burlap.datastructures.StochasticTree

Returns the pointer to the stored entry in this tree for the given query element.
 getStrategyProfile(SingleStageNormalFormGame.ActionNameMap[], String...)  Static method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame

Returns a hashable strategy profile object for a strategy profile specified by action names
 getSvp()  Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionRenderLayer

Returns the Statewise value function painter
 getSvp()  Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.ValueFunctionVisualizerGUI

Returns the Statewise value function painter
 getTemperature()  Method in class burlap.datastructures.BoltzmannDistribution

Returns the temperature parameter
 getTerminalStates()  Method in class burlap.domain.singleagent.graphdefined.GraphTF

 getTerminationStates()  Method in class burlap.behavior.singleagent.options.SubgoalOption

 getTf()  Method in class burlap.domain.singleagent.blockdude.BlockDude

 getTf()  Method in class burlap.domain.singleagent.blocksworld.BlocksWorld

 getTf()  Method in class burlap.domain.singleagent.cartpole.CartPoleDomain

 getTf()  Method in class burlap.domain.singleagent.cartpole.InvertedPendulum

 getTf()  Method in class burlap.domain.singleagent.frostbite.FrostbiteDomain

 getTf()  Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain

 getTf()  Method in class burlap.domain.singleagent.gridworld.GridWorldDomain

 getTf()  Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain

 getTf()  Method in class burlap.domain.singleagent.mountaincar.MountainCar

 getTf()  Method in class burlap.mdp.auxiliary.stateconditiontest.TFGoalCondition

 getTf()  Method in class burlap.mdp.singleagent.model.FactoredModel

 getTF()  Method in class burlap.mdp.stochasticgames.world.World

 getTheTaskSpec()  Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueDomain

 getTHistory()  Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest

 getTime()  Method in class burlap.debugtools.MyTimer

Returns the elapsed time in seconds since the last start-stop calls.
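A MyTimer-style stopwatch can be sketched with System.nanoTime() (a hypothetical sketch assuming the start/stop semantics described in these entries):

```java
// Sketch of a start/stop timer like MyTimer: getTime() reports the last
// start-stop interval; getTotalTime() accumulates across all intervals.
class TimerSketch {
    private long startNanos;
    private long lastNanos;
    private long totalNanos;

    void start() { startNanos = System.nanoTime(); }

    void stop() {
        lastNanos = System.nanoTime() - startNanos;
        totalNanos += lastNanos;
    }

    double getTime()      { return lastNanos  / 1e9; } // seconds, last interval
    double getTotalTime() { return totalNanos / 1e9; } // seconds, all intervals
}
```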
 getTin()  Method in class burlap.shell.visual.TextAreaStreams

Returns the InputStream
for the JTextArea
.
 getTotalPolicyIterations()  Method in class burlap.behavior.singleagent.planning.stochastic.policyiteration.PolicyIteration

Returns the total number of policy iterations that have been performed.
 getTotalTime()  Method in class burlap.debugtools.MyTimer

Returns the total time in seconds recorded over all start-stop calls.
 getTotalValueIterations()  Method in class burlap.behavior.singleagent.planning.stochastic.policyiteration.PolicyIteration

Returns the total number of value iterations used to evaluate policies.
 getTout()  Method in class burlap.shell.visual.TextAreaStreams

Returns the OutputStream
for the JTextArea
 getTransitionDynamics()  Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain.GraphStateModel

 getTransitionDynamics()  Method in class burlap.domain.singleagent.gridworld.GridWorldDomain

 getTurkeyInitialState()  Static method in class burlap.domain.stochasticgames.gridgame.GridGame

 getUpdater()  Method in class burlap.mdp.singleagent.pomdp.BeliefAgent

 getUsingMaxMargin()  Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest

 getUtilitarianObjective(double[][], double[][])  Static method in class burlap.behavior.stochasticgames.solvers.CorrelatedEquilibriumSolver

Returns the utilitarian objective for the given payoffs for the row and column player.
 getV(HashableState)  Method in class burlap.behavior.singleagent.learning.actorcritic.critics.TDLambda

Returns the TDLambda.VValue
object (storing the value) for a given hashed state.
 getV(HashableState, int)  Method in class burlap.behavior.singleagent.learning.actorcritic.critics.TimeIndexedTDLambda

Returns the TDLambda.VValue
object (storing the value) for a given hashed state at the specified time/depth.
 getValue(HashableState)  Method in class burlap.behavior.stochasticgames.madynamicprogramming.MADynamicProgramming.BackupBasedQSource

Returns the stored state value for hashed state sh, or, if the queried state has never previously been indexed, creates an entry with an initial value given by the
MADynamicProgramming
instance's value function initialization object and returns that value.
 getValueFunctionInitialization()  Method in class burlap.behavior.singleagent.planning.stochastic.DynamicProgramming

Returns the value initialization function used.
 getVInit()  Method in class burlap.behavior.singleagent.planning.vfa.fittedvi.FittedVI

Returns the value function initialization used at the start of planning.
 getVinitDim()  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit

 getVinitFvGen()  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit

 getVisualizer(int, int)  Static method in class burlap.domain.singleagent.blockdude.BlockDudeVisualizer

 getVisualizer()  Static method in class burlap.domain.singleagent.blocksworld.BlocksWorldVisualizer

Returns a 2D Visualizer canvas object to visualize
BlocksWorld
states.
 getVisualizer(int)  Static method in class burlap.domain.singleagent.blocksworld.BlocksWorldVisualizer

Returns a 2D Visualizer canvas object to visualize
BlocksWorld
states where the name of the block is rendered at the provided font
point size.
 getVisualizer()  Static method in class burlap.domain.singleagent.frostbite.FrostbiteVisualizer

 getVisualizer(int)  Static method in class burlap.domain.singleagent.frostbite.FrostbiteVisualizer

Returns a visualizer for a frostbite domain.
 getVisualizer(Domain, int[][])  Static method in class burlap.domain.singleagent.gridworld.GridWorldVisualizer

Deprecated.
 getVisualizer(int[][])  Static method in class burlap.domain.singleagent.gridworld.GridWorldVisualizer

Returns visualizer for a grid world domain with the provided wall map.
 getVisualizer(LunarLanderDomain)  Static method in class burlap.domain.singleagent.lunarlander.LLVisualizer

Returns a
Visualizer
for a
LunarLanderDomain
using
the generator's current version of the physics parameters for defining
the visualized movement space size and rotation degrees.
 getVisualizer(LunarLanderDomain.LLPhysicsParams)  Static method in class burlap.domain.singleagent.lunarlander.LLVisualizer

 getVisualizer(MountainCar)  Static method in class burlap.domain.singleagent.mountaincar.MountainCarVisualizer

 getVisualizer(MountainCar.MCPhysicsParams)  Static method in class burlap.domain.singleagent.mountaincar.MountainCarVisualizer

 getVisualizer(int, int)  Static method in class burlap.domain.stochasticgames.gridgame.GGVisualizer

Generates a visualizer for a grid game
 getVisualizer()  Method in class burlap.shell.BurlapShell

 getVisualizer()  Method in class burlap.shell.visual.VisualExplorer

 getVmax()  Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain

Returns the maximum velocity of the agent (the agent cannot move faster than this value).
 getVmax()  Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams

 getVmax()  Method in class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator

Returns the maximum velocity that a generated state can have.
 getVmin()  Method in class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator

Returns the minimum velocity that a generated state can have.
 getW()  Method in class burlap.shell.visual.SGVisualExplorer

Returns the
World
associated with this explorer.
 getWeight(int)  Method in class burlap.behavior.functionapproximation.sparse.LinearVFA

 getWelcomeMessage()  Method in class burlap.shell.BurlapShell

 getWidth()  Method in class burlap.domain.singleagent.gridworld.GridWorldDomain

Returns this grid world's width
 getWinningAgentMovements(Map<Integer, List<Integer>>)  Method in class burlap.domain.stochasticgames.gridgame.GridGameStandardMechanics

Takes as input the set of collisions and randomly selects a winner
 getWorld()  Method in class burlap.shell.SGWorldShell

 getWrongDoorReward()  Method in class burlap.domain.singleagent.pomdp.tiger.TigerDomain

 getX2ndDeriv(double, double, double, double, double, double)  Method in class burlap.domain.singleagent.cartpole.model.CPCorrectModel

Returns the second order x position derivative for the corrected model.
 getXmax()  Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain

Returns the maximum x position of the lander (the agent cannot cross this boundary)
 getXmax()  Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams

 getXmax()  Method in class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator

Returns the maximum x-value that a generated state can have.
 getXmin()  Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain

Returns the minimum x position of the lander (the agent cannot cross this boundary)
 getXmin()  Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams

 getXmin()  Method in class burlap.domain.singleagent.mountaincar.MCRandomStateGenerator

Returns the minimum x-value that a generated state can have.
 getYmax()  Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain

Returns the maximum y position of the lander (the agent cannot cross this boundary)
 getYmax()  Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams

 getYmin()  Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain

Returns the minimum y position of the lander (the agent cannot cross this boundary)
 getYmin()  Method in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams

 GGAgent  Class in burlap.domain.stochasticgames.gridgame.state

 GGAgent()  Constructor for class burlap.domain.stochasticgames.gridgame.state.GGAgent

 GGAgent(int, int, int, String)  Constructor for class burlap.domain.stochasticgames.gridgame.state.GGAgent

 GGGoal  Class in burlap.domain.stochasticgames.gridgame.state

 GGGoal()  Constructor for class burlap.domain.stochasticgames.gridgame.state.GGGoal

 GGGoal(int, int, int, String)  Constructor for class burlap.domain.stochasticgames.gridgame.state.GGGoal

 GGHorizontalWall()  Constructor for class burlap.domain.stochasticgames.gridgame.state.GGWall.GGHorizontalWall

 GGHorizontalWall(int, int, int, int, String)  Constructor for class burlap.domain.stochasticgames.gridgame.state.GGWall.GGHorizontalWall

 GGJointRewardFunction(OODomain)  Constructor for class burlap.domain.stochasticgames.gridgame.GridGame.GGJointRewardFunction

Initializes for a given domain.
 GGJointRewardFunction(OODomain, double, double, boolean)  Constructor for class burlap.domain.stochasticgames.gridgame.GridGame.GGJointRewardFunction

Initializes for a given domain, step cost reward and goal reward.
 GGJointRewardFunction(OODomain, double, double, double, boolean)  Constructor for class burlap.domain.stochasticgames.gridgame.GridGame.GGJointRewardFunction

Initializes for a given domain, step cost reward, personal goal reward, and universal goal reward.
 GGJointRewardFunction(OODomain, double, double, boolean, Map<Integer, Double>)  Constructor for class burlap.domain.stochasticgames.gridgame.GridGame.GGJointRewardFunction

Initializes for a given domain, step cost reward, universal goal reward, and unique personal goal reward for each player.
 GGTerminalFunction(OODomain)  Constructor for class burlap.domain.stochasticgames.gridgame.GridGame.GGTerminalFunction

Initializes for the given domain
 GGVerticalWall()  Constructor for class burlap.domain.stochasticgames.gridgame.state.GGWall.GGVerticalWall

 GGVerticalWall(int, int, int, int, String)  Constructor for class burlap.domain.stochasticgames.gridgame.state.GGWall.GGVerticalWall

 GGVisualizer  Class in burlap.domain.stochasticgames.gridgame

A class for visualizing the grid games.
 GGWall  Class in burlap.domain.stochasticgames.gridgame.state

 GGWall()  Constructor for class burlap.domain.stochasticgames.gridgame.state.GGWall

 GGWall(int, int, int, int, String)  Constructor for class burlap.domain.stochasticgames.gridgame.state.GGWall

 GGWall.GGHorizontalWall  Class in burlap.domain.stochasticgames.gridgame.state

 GGWall.GGVerticalWall  Class in burlap.domain.stochasticgames.gridgame.state

 GoalBasedRF  Class in burlap.mdp.singleagent.common

 GoalBasedRF(StateConditionTest)  Constructor for class burlap.mdp.singleagent.common.GoalBasedRF

Initializes with transitions to goal states returning a reward of 1 and all others returning 0
 GoalBasedRF(StateConditionTest, double)  Constructor for class burlap.mdp.singleagent.common.GoalBasedRF

Initializes with transitions to goal states returning the given reward and all others returning 0.
 GoalBasedRF(StateConditionTest, double, double)  Constructor for class burlap.mdp.singleagent.common.GoalBasedRF

Initializes with transitions to goal states returning the given reward and all others returning 0.
 GoalBasedRF(TerminalFunction)  Constructor for class burlap.mdp.singleagent.common.GoalBasedRF

Initializes with transitions to goal states, indicated by the terminal function, returning a reward of 1 and all others returning 0
 GoalBasedRF(TerminalFunction, double)  Constructor for class burlap.mdp.singleagent.common.GoalBasedRF

Initializes with transitions to goal states, indicated by the terminal function, returning the given reward and all others returning 0.
 GoalBasedRF(TerminalFunction, double, double)  Constructor for class burlap.mdp.singleagent.common.GoalBasedRF

Initializes with transitions to goal states, indicated by the terminal function, returning the given reward and all others returning 0.
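The GoalBasedRF constructors above all follow one pattern: a goal test plus a goal reward and a default reward for all other transitions. A hypothetical standalone sketch of that pattern:

```java
import java.util.function.Predicate;

// Sketch of a goal-based reward function: transitions into a goal state
// earn goalReward; every other transition earns defaultReward.
class GoalRewardSketch<S> {
    private final Predicate<S> goalCondition;
    private final double goalReward;
    private final double defaultReward;

    GoalRewardSketch(Predicate<S> goalCondition, double goalReward, double defaultReward) {
        this.goalCondition = goalCondition;
        this.goalReward = goalReward;
        this.defaultReward = defaultReward;
    }

    double reward(S nextState) {
        return goalCondition.test(nextState) ? goalReward : defaultReward;
    }
}
```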
 goalCondition  Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT

 GoalConditionTF  Class in burlap.mdp.auxiliary.common

Creates a terminal function that indicates terminal states are any states that satisfy a goal condition
where the goal condition is specified by a
StateConditionTest
object.
 GoalConditionTF(StateConditionTest)  Constructor for class burlap.mdp.auxiliary.common.GoalConditionTF

 goalReward  Variable in class burlap.domain.singleagent.frostbite.FrostbiteRF

 goalReward  Variable in class burlap.domain.singleagent.lunarlander.LunarLanderRF

The reward for landing on the landing pad
 goalReward  Variable in class burlap.mdp.singleagent.common.GoalBasedRF

 gradient(State)  Method in class burlap.behavior.functionapproximation.dense.DenseLinearVFA

 gradient(State, Action)  Method in class burlap.behavior.functionapproximation.dense.DenseLinearVFA

 gradient(State, Action)  Method in class burlap.behavior.functionapproximation.dense.DenseStateActionLinearVFA

 gradient(State, Action)  Method in interface burlap.behavior.functionapproximation.DifferentiableStateActionValue

Returns the gradient of this function.
 gradient(State)  Method in interface burlap.behavior.functionapproximation.DifferentiableStateValue

Returns the gradient of this function
 gradient  Variable in class burlap.behavior.functionapproximation.FunctionGradient.SparseGradient

A map from weight identifiers to their partial derivative
 gradient(State)  Method in class burlap.behavior.functionapproximation.sparse.LinearVFA

 gradient(State, Action)  Method in class burlap.behavior.functionapproximation.sparse.LinearVFA

 gradient(State, Action, State)  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs.LinearStateActionDifferentiableRF

 gradient(State, Action, State)  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs.LinearStateDifferentiableRF

 gradient(State, Action, State)  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.DiffVFRF

 gradient(State, Action, State)  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit

 gradient(double[], FunctionGradient[])  Method in interface burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.dpoperator.DifferentiableDPOperator

Returns the gradient of this DP operator, given the Q-values on which it operates and their gradients.
 gradient(double[], FunctionGradient[])  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.dpoperator.DifferentiableSoftmaxOperator

 gradient(double[], FunctionGradient[])  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.dpoperator.SubDifferentiableMaxOperator

 gradient(State, Action, State)  Method in interface burlap.behavior.singleagent.learnfromdemo.mlirl.support.DifferentiableRF

 gradient  Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.support.QGradientTuple

The gradient for the state and action.
 GradientDescentSarsaLam  Class in burlap.behavior.singleagent.learning.tdmethods.vfa

Gradient Descent SARSA(λ) implementation [1].
 GradientDescentSarsaLam(SADomain, double, DifferentiableStateActionValue, double, double)  Constructor for class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam

Initializes SARSA(λ) with a 0.1 epsilon greedy policy and places no limit on the number of steps the
agent can take in an episode.
 GradientDescentSarsaLam(SADomain, double, DifferentiableStateActionValue, double, int, double)  Constructor for class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam

Initializes SARSA(λ) with a 0.1 epsilon greedy policy.
 GradientDescentSarsaLam(SADomain, double, DifferentiableStateActionValue, double, Policy, int, double)  Constructor for class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam

Initializes SARSA(λ). By default the agent will only save the last learning episode, and a call to the
GradientDescentSarsaLam.planFromState(State)
method
will cause the valueFunction to use only one episode for planning; this should probably be changed to a much larger value if you plan on using this
algorithm as a planning algorithm.
 GradientDescentSarsaLam.EligibilityTraceVector  Class in burlap.behavior.singleagent.learning.tdmethods.vfa

An object for keeping track of the eligibility traces within an episode for each VFA weight
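The EligibilityTraceVector entry above tracks, for each VFA weight, how eligible that weight is for the current TD update. The idea can be illustrated with a minimal standalone sketch (this is not BURLAP code; the class and method names here are invented for illustration): each weight moves in proportion to the TD error times its trace, and all traces then decay by γλ.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch (not BURLAP code): decaying eligibility traces
// for linear function-approximation weights, as in SARSA(lambda).
public class TraceSketch {

    // Apply one TD update: each weight moves by alpha * delta * its trace,
    // then every trace decays by gamma * lambda.
    static void tdUpdate(double[] weights, Map<Integer, Double> traces,
                         double alpha, double delta, double gamma, double lambda) {
        for (Map.Entry<Integer, Double> e : traces.entrySet()) {
            weights[e.getKey()] += alpha * delta * e.getValue();
        }
        traces.replaceAll((k, v) -> v * gamma * lambda);
    }

    public static void main(String[] args) {
        double[] w = {0.0, 0.0};
        Map<Integer, Double> traces = new HashMap<>();
        traces.put(0, 1.0); // the currently active feature's trace
        tdUpdate(w, traces, 0.1, 2.0, 0.9, 0.8);
        System.out.println(w[0]); // weight 0 moved by 0.1 * 2.0 * 1.0
    }
}
```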
 GraphAction()  Constructor for class burlap.domain.singleagent.graphdefined.GraphDefinedDomain.GraphActionType.GraphAction

 GraphAction(int)  Constructor for class burlap.domain.singleagent.graphdefined.GraphDefinedDomain.GraphActionType.GraphAction

 GraphActionType(int, Map<Integer, Map<Integer, Set<GraphDefinedDomain.NodeTransitionProbability>>>)  Constructor for class burlap.domain.singleagent.graphdefined.GraphDefinedDomain.GraphActionType

Initializes a graph action object for the given domain and for the action of the given number.
 GraphDefinedDomain  Class in burlap.domain.singleagent.graphdefined

A domain generator for generating domains that are represented as graphs.
 GraphDefinedDomain()  Constructor for class burlap.domain.singleagent.graphdefined.GraphDefinedDomain

Initializes the generator.
 GraphDefinedDomain(int)  Constructor for class burlap.domain.singleagent.graphdefined.GraphDefinedDomain

Initializes the generator to create a domain with the given number of state nodes in it.
 GraphDefinedDomain.GraphActionType  Class in burlap.domain.singleagent.graphdefined

An action class for defining actions that can be taken from state nodes.
 GraphDefinedDomain.GraphActionType.GraphAction  Class in burlap.domain.singleagent.graphdefined

 GraphDefinedDomain.GraphStateModel  Class in burlap.domain.singleagent.graphdefined

 GraphDefinedDomain.NodeTransitionProbability  Class in burlap.domain.singleagent.graphdefined

A class for specifying transition probabilities to result node states.
 GraphRF  Class in burlap.domain.singleagent.graphdefined

 GraphRF()  Constructor for class burlap.domain.singleagent.graphdefined.GraphRF

 GraphStateModel(Map<Integer, Map<Integer, Set<GraphDefinedDomain.NodeTransitionProbability>>>)  Constructor for class burlap.domain.singleagent.graphdefined.GraphDefinedDomain.GraphStateModel

 GraphStateNode  Class in burlap.domain.singleagent.graphdefined

 GraphStateNode()  Constructor for class burlap.domain.singleagent.graphdefined.GraphStateNode

 GraphStateNode(int)  Constructor for class burlap.domain.singleagent.graphdefined.GraphStateNode

 GraphTF  Class in burlap.domain.singleagent.graphdefined

 GraphTF(int...)  Constructor for class burlap.domain.singleagent.graphdefined.GraphTF

Initializes setting all states with the provided integer node ids to be terminal states.
 gravity  Variable in class burlap.domain.singleagent.cartpole.CartPoleDomain.CPPhysicsParams

the force of gravity; should be *positive* for the correct mechanics.
 gravity  Variable in class burlap.domain.singleagent.cartpole.InvertedPendulum.IPPhysicsParams

the force of gravity; should be *positive* for the correct mechanics.
 gravity  Variable in class burlap.domain.singleagent.lunarlander.LunarLanderDomain.LLPhysicsParams

The force of gravity
 gravity  Variable in class burlap.domain.singleagent.mountaincar.MountainCar.MCPhysicsParams

The force of gravity
 greatestHeightBelow(BlockDudeState, int[][], int, int, int)  Method in class burlap.domain.singleagent.blockdude.BlockDudeModel

Returns the maximum height of the world at the provided x coordinate that is <= the value maxY.
 GreedyDeterministicQPolicy  Class in burlap.behavior.policy

A greedy policy that breaks ties by choosing the first action with the maximum value.
 GreedyDeterministicQPolicy()  Constructor for class burlap.behavior.policy.GreedyDeterministicQPolicy

 GreedyDeterministicQPolicy(QProvider)  Constructor for class burlap.behavior.policy.GreedyDeterministicQPolicy

Initializes with a QProvider.
 GreedyQPolicy  Class in burlap.behavior.policy

A greedy policy that breaks ties by randomly choosing an action amongst the tied actions.
 GreedyQPolicy()  Constructor for class burlap.behavior.policy.GreedyQPolicy

 GreedyQPolicy(QProvider)  Constructor for class burlap.behavior.policy.GreedyQPolicy

Initializes with a QProvider.
 GridAgent  Class in burlap.domain.singleagent.gridworld.state

 GridAgent()  Constructor for class burlap.domain.singleagent.gridworld.state.GridAgent

 GridAgent(int, int)  Constructor for class burlap.domain.singleagent.gridworld.state.GridAgent

 GridAgent(int, int, String)  Constructor for class burlap.domain.singleagent.gridworld.state.GridAgent

 gridDimension(Object, double, double, int)  Method in class burlap.behavior.singleagent.auxiliary.gridset.FlatStateGridder

Specify a state variable as a dimension of the grid
 gridDimension(Object, VariableGridSpec)  Method in class burlap.behavior.singleagent.auxiliary.gridset.FlatStateGridder

Specify a state variable as a dimension of the grid.
 GridGame  Class in burlap.domain.stochasticgames.gridgame

The GridGame domain is much like the GridWorld domain, except for arbitrarily many agents in
a stochastic game.
 GridGame()  Constructor for class burlap.domain.stochasticgames.gridgame.GridGame

 GridGame.GGJointRewardFunction  Class in burlap.domain.stochasticgames.gridgame

Specifies goal rewards and default rewards for agents.
 GridGame.GGTerminalFunction  Class in burlap.domain.stochasticgames.gridgame

Causes termination when any agent reaches a personal or universal goal location.
 GridGameStandardMechanics  Class in burlap.domain.stochasticgames.gridgame

This class defines the standard transition dynamics for a grid game.
 GridGameStandardMechanics(Domain)  Constructor for class burlap.domain.stochasticgames.gridgame.GridGameStandardMechanics

Initializes the mechanics for the given domain and sets the semiwall pass through probability to 0.5.
 GridGameStandardMechanics(Domain, double)  Constructor for class burlap.domain.stochasticgames.gridgame.GridGameStandardMechanics

Initializes the mechanics for the given domain and sets the semiwall pass through probability to semiWallPassThroughProb.
 GridLocation  Class in burlap.domain.singleagent.gridworld.state

 GridLocation()  Constructor for class burlap.domain.singleagent.gridworld.state.GridLocation

 GridLocation(int, int, String)  Constructor for class burlap.domain.singleagent.gridworld.state.GridLocation

 GridLocation(int, int, int, String)  Constructor for class burlap.domain.singleagent.gridworld.state.GridLocation

 gridObjectClass(String, FlatStateGridder)  Method in class burlap.behavior.singleagent.auxiliary.gridset.OOStateGridder

Specifies the gridding for a given object class.
 gridSpec(Object)  Method in class burlap.behavior.singleagent.auxiliary.gridset.FlatStateGridder

Returns the grid spec defined for the variable key
 gridSpecs  Variable in class burlap.behavior.singleagent.auxiliary.gridset.FlatStateGridder

 gridState(MutableState)  Method in class burlap.behavior.singleagent.auxiliary.gridset.FlatStateGridder

Grids the input state.
 gridState(MutableOOState)  Method in class burlap.behavior.singleagent.auxiliary.gridset.OOStateGridder

Generates a set of states spaced along a grid.
 gridStateHelper(MutableState, List<Map.Entry<Object, VariableGridSpec>>, int, List<State>)  Method in class burlap.behavior.singleagent.auxiliary.gridset.FlatStateGridder

 GridWorldDomain  Class in burlap.domain.singleagent.gridworld

A domain generator for basic grid worlds.
 GridWorldDomain(int, int)  Constructor for class burlap.domain.singleagent.gridworld.GridWorldDomain

Constructs an empty map with deterministic transitions
 GridWorldDomain(int[][])  Constructor for class burlap.domain.singleagent.gridworld.GridWorldDomain

Constructs a deterministic world based on the provided map.
 GridWorldDomain.AtLocationPF  Class in burlap.domain.singleagent.gridworld

Propositional function for determining if the agent is at the same position as a given location object
 GridWorldDomain.GridWorldModel  Class in burlap.domain.singleagent.gridworld

 GridWorldDomain.WallToPF  Class in burlap.domain.singleagent.gridworld

Propositional function for indicating if a wall is in a given position relative to the agent position
 GridWorldModel(int[][], double[][])  Constructor for class burlap.domain.singleagent.gridworld.GridWorldDomain.GridWorldModel

 GridWorldRewardFunction  Class in burlap.domain.singleagent.gridworld

This class is used for defining reward functions in grid worlds that are a function of cell of the world to which
the agent transitions.
 GridWorldRewardFunction(int, int, double)  Constructor for class burlap.domain.singleagent.gridworld.GridWorldRewardFunction

Initializes the reward function for a grid world of size width and height and initializes the reward values everywhere to initializingReward.
 GridWorldRewardFunction(int, int)  Constructor for class burlap.domain.singleagent.gridworld.GridWorldRewardFunction

Initializes the reward function for a grid world of size width and height and initializes the reward values everywhere to 0.
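Per the entries above, GridWorldRewardFunction makes the reward a function of the cell the agent transitions into, initialized everywhere to a default value. The underlying bookkeeping is just a 2D lookup table, sketched here as standalone code (not BURLAP code; names invented for illustration):

```java
import java.util.Arrays;

// Illustrative sketch (not BURLAP code): a cell-indexed reward table,
// where reward depends on the cell the agent transitions into.
public class CellRewardSketch {
    private final double[][] reward;

    CellRewardSketch(int width, int height, double initializingReward) {
        reward = new double[width][height];
        for (double[] col : reward) Arrays.fill(col, initializingReward);
    }

    void setReward(int x, int y, double r) { reward[x][y] = r; }

    // Reward for transitioning into cell (x, y).
    double rewardFor(int x, int y) { return reward[x][y]; }

    public static void main(String[] args) {
        CellRewardSketch rf = new CellRewardSketch(11, 11, -1.0);
        rf.setReward(10, 10, 100.0); // a goal cell worth +100
        System.out.println(rf.rewardFor(10, 10));
    }
}
```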
 GridWorldState  Class in burlap.domain.singleagent.gridworld.state

 GridWorldState()  Constructor for class burlap.domain.singleagent.gridworld.state.GridWorldState

 GridWorldState(int, int, GridLocation...)  Constructor for class burlap.domain.singleagent.gridworld.state.GridWorldState

 GridWorldState(GridAgent, GridLocation...)  Constructor for class burlap.domain.singleagent.gridworld.state.GridWorldState

 GridWorldState(GridAgent, List<GridLocation>)  Constructor for class burlap.domain.singleagent.gridworld.state.GridWorldState

 GridWorldTerminalFunction  Class in burlap.domain.singleagent.gridworld

This class is used for setting a terminal function for GridWorlds that is based on the location of the agent in the world.
 GridWorldTerminalFunction()  Constructor for class burlap.domain.singleagent.gridworld.GridWorldTerminalFunction

Initializes without any terminal positions specified.
 GridWorldTerminalFunction(int, int)  Constructor for class burlap.domain.singleagent.gridworld.GridWorldTerminalFunction

Initializes with a terminal position at the specified agent x and y location.
 GridWorldTerminalFunction(GridWorldTerminalFunction.IntPair...)  Constructor for class burlap.domain.singleagent.gridworld.GridWorldTerminalFunction

 GridWorldTerminalFunction.IntPair  Class in burlap.domain.singleagent.gridworld

A pair class for two ints.
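The GridWorldTerminalFunction entries above amount to membership testing against a set of terminal (x, y) agent positions, with IntPair serving as the set element. A minimal standalone sketch of that idea (not BURLAP code; names invented for illustration), packing each pair into one long key instead of a pair class:

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch (not BURLAP code): terminal positions for a grid
// world, stored as a set of (x, y) pairs packed into long keys.
public class TerminalPositionsSketch {
    private final Set<Long> terminals = new HashSet<>();

    // Pack (x, y) into a single long: x in the high 32 bits, y in the low.
    private static long key(int x, int y) {
        return ((long) x << 32) | (y & 0xffffffffL);
    }

    void markTerminal(int x, int y) { terminals.add(key(x, y)); }

    boolean isTerminal(int x, int y) { return terminals.contains(key(x, y)); }

    public static void main(String[] args) {
        TerminalPositionsSketch tf = new TerminalPositionsSketch();
        tf.markTerminal(10, 10);
        System.out.println(tf.isTerminal(10, 10)); // prints true
    }
}
```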
 GridWorldVisualizer  Class in burlap.domain.singleagent.gridworld

Returns a visualizer for grid worlds in which walls are rendered as black squares or black lines, the agent is a gray circle and the location objects are colored squares.
 GridWorldVisualizer.CellPainter  Class in burlap.domain.singleagent.gridworld

A painter for a grid world cell which will fill the cell with a given color and where the cell position
is indicated by the x and y attribute for the mapped object instance
 GridWorldVisualizer.LocationPainter  Class in burlap.domain.singleagent.gridworld

A painter for location objects which will fill the cell with a given color and where the cell position
is indicated by the x and y attribute for the mapped object instance
 GridWorldVisualizer.MapPainter  Class in burlap.domain.singleagent.gridworld

A static painter class for rendering the walls of the grid world as black squares or black lines for 1D walls.
 GrimTrigger  Class in burlap.behavior.stochasticgames.agents.twoplayer.repeatedsinglestage

A class for an agent that plays grim trigger.
 GrimTrigger(SGDomain, Action, Action)  Constructor for class burlap.behavior.stochasticgames.agents.twoplayer.repeatedsinglestage.GrimTrigger

Initializes with the specified cooperate and defect actions for both players.
 GrimTrigger(SGDomain, Action, Action, Action)  Constructor for class burlap.behavior.stochasticgames.agents.twoplayer.repeatedsinglestage.GrimTrigger

Initializes with differently specified cooperate and defect actions for both players.
 grimTrigger  Variable in class burlap.behavior.stochasticgames.agents.twoplayer.repeatedsinglestage.GrimTrigger

Whether this agent will play its defect action or not.
 GrimTrigger.GrimTriggerAgentFactory  Class in burlap.behavior.stochasticgames.agents.twoplayer.repeatedsinglestage

An agent factory for GrimTrigger
 GrimTriggerAgentFactory(SGDomain, Action, Action)  Constructor for class burlap.behavior.stochasticgames.agents.twoplayer.repeatedsinglestage.GrimTrigger.GrimTriggerAgentFactory

Initializes with the specified cooperate and defect actions for both players.
 GrimTriggerAgentFactory(SGDomain, Action, Action, Action)  Constructor for class burlap.behavior.stochasticgames.agents.twoplayer.repeatedsinglestage.GrimTrigger.GrimTriggerAgentFactory

Initializes with differently specified cooperate and defect actions for both players.
 GroundedProp  Class in burlap.mdp.core.oo.propositional

Propositional functions are defined to be evaluated on object parameters and this class provides a
definition for a grounded propositional function; that is, it specifies specific object parameters
on which the propositional function should be evaluated.
 GroundedProp(PropositionalFunction, String[])  Constructor for class burlap.mdp.core.oo.propositional.GroundedProp

Initializes a grounded propositional function