 dataset  Variable in class burlap.behavior.singleagent.learning.lspi.LSPI

The SARS dataset on which LSPI is performed
 dataset  Variable in class burlap.behavior.singleagent.learning.lspi.SARSData

 datasets  Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.DatasetsAndTrials

The series datasets for this agent
 DatasetsAndTrials(String)  Constructor for class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter.DatasetsAndTrials

Initializes for an agent with the given name.
 DDPlannerPolicy  Class in burlap.behavior.singleagent.planning.deterministic

This is a dynamic deterministic valueFunction policy: if the source deterministic valueFunction has not already computed and cached the plan for a query state, then this policy will first compute a plan using the valueFunction and then return the answer.
 DDPlannerPolicy()  Constructor for class burlap.behavior.singleagent.planning.deterministic.DDPlannerPolicy

 DDPlannerPolicy(DeterministicPlanner)  Constructor for class burlap.behavior.singleagent.planning.deterministic.DDPlannerPolicy

Initializes with the deterministic valueFunction
 DEBUG_CODE_RF_WEIGHTS  Static variable in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearning

 DEBUG_CODE_SCORE  Static variable in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearning

 debugCode  Variable in class burlap.behavior.singleagent.auxiliary.performance.LearningAlgorithmExperimenter

The debug code used for debug printing.
 debugCode  Variable in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgent

Debug code used for printing debug information.
 debugCode  Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRL

The debug code used for printing information to the terminal.
 debugCode  Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.MultipleIntentionsMLIRL

The debug code used for printing information to the terminal.
 debugCode  Variable in class burlap.behavior.singleagent.MDPSolver

The debug code used for calls to DPrint.
 debugCode  Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentExperimenter

The debug code used for debug printing.
 debugCode  Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter

The debug code used for debug printing.
 debugCode  Variable in class burlap.behavior.stochasticgames.madynamicprogramming.dpplanners.MAValueIteration

The debug code used for printing VI progress.
 DebugFlags  Class in burlap.debugtools

A data structure for specifying debug flags that can be accessed and modified from any class
 debugID  Static variable in class burlap.behavior.singleagent.auxiliary.StateReachability

The debugID used for making calls to DPrint.
 debugId  Variable in class burlap.mdp.stochasticgames.tournament.Tournament

 debugId  Variable in class burlap.mdp.stochasticgames.world.World

 decayConstantShift  Variable in class burlap.behavior.learningrate.SoftTimeInverseDecayLR

The division scale offset
 decayRate  Variable in class burlap.behavior.learningrate.ExponentialDecayLR

The exponential base by which the learning rate is decayed
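An exponentially decayed learning rate of this kind can be sketched as a standalone computation. This is illustrative only; the names initialRate and decayRate are assumptions, and BURLAP's ExponentialDecayLR may add details such as a minimum-rate floor or per-state/per-action rates.

```java
// Sketch of exponential learning-rate decay: rate at time t = initialRate * decayRate^t.
// Illustrative standalone version, not BURLAP's ExponentialDecayLR implementation.
public class ExpDecaySketch {
    public static double rate(double initialRate, double decayRate, int t) {
        return initialRate * Math.pow(decayRate, t);
    }
}
```

With initialRate 0.1 and decayRate 0.9, the rate after two steps is 0.1 * 0.81.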
 deepCopyActionNameMapArray(SingleStageNormalFormGame.ActionNameMap[])  Static method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.ActionNameMap

 DeepCopyState  Annotation Type in burlap.mdp.core.state.annotations

A marker for State implementations that indicates that their copy operation is deep.
 DeepOOState  Class in burlap.mdp.core.oo.state.generic

 DeepOOState()  Constructor for class burlap.mdp.core.oo.state.generic.DeepOOState

 DeepOOState(OOState)  Constructor for class burlap.mdp.core.oo.state.generic.DeepOOState

 DeepOOState(ObjectInstance...)  Constructor for class burlap.mdp.core.oo.state.generic.DeepOOState

 deepTouchLocations()  Method in class burlap.domain.singleagent.gridworld.state.GridWorldState

 deepTouchObstacles()  Method in class burlap.domain.singleagent.lunarlander.state.LLState

 deepTouchPlatforms()  Method in class burlap.domain.singleagent.frostbite.state.FrostbiteState

 DEFAULT_EPSILON  Static variable in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest

 DEFAULT_MAXITERATIONS  Static variable in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest

 DEFAULT_POLICYCOUNT  Static variable in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest

 DEFAULT_USEMAXMARGIN  Static variable in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest

 defaultCost(int, JointAction)  Method in class burlap.domain.stochasticgames.gridgame.GridGame.GGJointRewardFunction

Returns a default cost for an agent assuming the agent didn't transition to a goal state.
 defaultMultiple  Variable in class burlap.statehashing.discretized.DiscConfig

The default multiple to use for any continuous attributes that have not been specifically set.
 defaultMultiple  Variable in class burlap.statehashing.maskeddiscretized.DiscMaskedConfig

The default multiple to use for any continuous attributes that have not been specifically set.
 defaultQ  Variable in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQFactory

The default Q-value to which Q-values will be initialized
 defaultReward  Variable in class burlap.domain.singleagent.frostbite.FrostbiteRF

 defaultReward  Variable in class burlap.domain.singleagent.lunarlander.LunarLanderRF

The default reward received for moving through the air
 defaultReward  Variable in class burlap.mdp.singleagent.common.GoalBasedRF

 defaultToLowerValueAfterPlanning  Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP

Sets what the DynamicProgramming valueFunction reference points to (the lower bound or upper bound) once a planning rollout is complete.
 defaultWeight  Variable in class burlap.behavior.functionapproximation.dense.DenseLinearVFA

A default weight value for the function's weights.
 defaultWeight  Variable in class burlap.behavior.functionapproximation.dense.DenseStateActionLinearVFA

A default weight value for the function's weights.
 defaultWeight  Variable in class burlap.behavior.functionapproximation.sparse.LinearVFA

A default weight for the function's weights
 definedFor(State)  Method in class burlap.behavior.policy.BoltzmannQPolicy

 definedFor(State)  Method in class burlap.behavior.policy.CachedPolicy

 definedFor(State)  Method in class burlap.behavior.policy.EpsilonGreedy

 definedFor(State)  Method in class burlap.behavior.policy.GreedyDeterministicQPolicy

 definedFor(State)  Method in class burlap.behavior.policy.GreedyQPolicy

 definedFor(State)  Method in interface burlap.behavior.policy.Policy

Specifies whether this policy is defined for the input state.
 definedFor(State)  Method in class burlap.behavior.policy.RandomPolicy

 definedFor(State)  Method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearning.StationaryRandomDistributionPolicy

 definedFor(State)  Method in class burlap.behavior.singleagent.learning.actorcritic.actor.BoltzmannActor

 definedFor(State)  Method in class burlap.behavior.singleagent.learning.modellearning.rmax.UnmodeledFavoredPolicy

 definedFor(State)  Method in class burlap.behavior.singleagent.planning.deterministic.DDPlannerPolicy

 definedFor(State)  Method in class burlap.behavior.singleagent.planning.deterministic.SDPlannerPolicy

 definedFor(State)  Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTTreeWalkPolicy

 definedFor(State)  Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.ECorrelatedQJointPolicy

 definedFor(State)  Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EGreedyJointPolicy

 definedFor(State)  Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EGreedyMaxWellfare

 definedFor(State)  Method in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EMinMaxPolicy

 definedFor(State)  Method in class burlap.behavior.stochasticgames.PolicyFromJointPolicy

 delay  Variable in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter

The delay, in milliseconds, between automatic chart updates
 delay  Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter

The delay, in milliseconds, between automatic chart updates
 delegate  Variable in class burlap.mdp.singleagent.environment.extensions.EnvironmentServer

 DelegatedModel  Class in burlap.mdp.singleagent.model

An implementation of FullModel that will delegate transition estimates for different actions to different SampleModel or FullModel implementations.
 DelegatedModel(SampleModel)  Constructor for class burlap.mdp.singleagent.model.DelegatedModel

 delta  Static variable in class burlap.testing.TestPlanning

 DenseBeliefVector  Interface in burlap.mdp.singleagent.pomdp.beliefstate

An interface to be used in conjunction with BeliefState instances for belief states that can generate a dense belief vector representation.
 DenseCrossProductFeatures  Class in burlap.behavior.functionapproximation.dense

Class that generates state-action features as the cross product of underlying state features with the action set.
 DenseCrossProductFeatures(DenseStateFeatures, int)  Constructor for class burlap.behavior.functionapproximation.dense.DenseCrossProductFeatures

 DenseCrossProductFeatures(DenseStateFeatures, int, Map<Action, Integer>)  Constructor for class burlap.behavior.functionapproximation.dense.DenseCrossProductFeatures

 DenseLinearVFA  Class in burlap.behavior.functionapproximation.dense

This class can be used to perform linear value function approximation, either for states or for state-actions (Q-values).
 DenseLinearVFA(DenseStateFeatures, double)  Constructor for class burlap.behavior.functionapproximation.dense.DenseLinearVFA

Initializes.
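The core of linear value function approximation is a dot product between a weight vector and a feature vector, with all weights starting at a default value (the defaultWeight fields above). A minimal sketch, not BURLAP's DenseLinearVFA implementation:

```java
// Sketch of linear VFA evaluation: V(s) = w . phi(s), with weights initialized
// to a shared default value. Illustrative only.
public class LinearVfaSketch {
    private final double[] weights;

    public LinearVfaSketch(int numFeatures, double defaultWeight) {
        weights = new double[numFeatures];
        java.util.Arrays.fill(weights, defaultWeight);
    }

    // Evaluate the approximated value for a dense feature vector.
    public double evaluate(double[] features) {
        double v = 0.0;
        for (int i = 0; i < features.length; i++) {
            v += weights[i] * features[i];
        }
        return v;
    }
}
```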
 DenseStateActionFeatures  Interface in burlap.behavior.functionapproximation.dense

 DenseStateActionLinearVFA  Class in burlap.behavior.functionapproximation.dense

 DenseStateActionLinearVFA(DenseStateActionFeatures, double)  Constructor for class burlap.behavior.functionapproximation.dense.DenseStateActionLinearVFA

 DenseStateActionLinearVFA(DenseStateActionFeatures, double[], double)  Constructor for class burlap.behavior.functionapproximation.dense.DenseStateActionLinearVFA

 DenseStateFeatures  Interface in burlap.behavior.functionapproximation.dense

Many function approximation techniques require a fixed feature vector to work, and in many cases using abstract features derived from the state attributes is useful.
 depth  Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTStateNode

The depth in the UCT tree
 depthMap  Variable in class burlap.behavior.singleagent.planning.deterministic.informed.astar.DynamicWeightedAStar

Data structure for storing the depth of explored states
 DeterministicPlanner  Class in burlap.behavior.singleagent.planning.deterministic

This class extends the OOMDPlanner to provide the interface and common mechanisms for classic deterministic forward search planners.
 DeterministicPlanner()  Constructor for class burlap.behavior.singleagent.planning.deterministic.DeterministicPlanner

 DeterministicPlanner.PlanningFailedException  Exception in burlap.behavior.singleagent.planning.deterministic

Exception class for indicating that a solution failed to be found by the planning algorithm.
 deterministicPlannerInit(SADomain, StateConditionTest, HashableStateFactory)  Method in class burlap.behavior.singleagent.planning.deterministic.DeterministicPlanner

Initializes the valueFunction.
 deterministicPolicyDistribution(Policy, State)  Static method in class burlap.behavior.policy.PolicyUtils

A helper method for defining deterministic policies.
 deterministicTransition(SampleModel, State, Action)  Static method in class burlap.mdp.singleagent.model.FullModel.Helper

 deterministicTransition(SampleStateModel, State, Action)  Static method in class burlap.mdp.singleagent.model.statemodel.FullStateModel.Helper

 deterministicTransition(JointModel, State, JointAction)  Static method in class burlap.mdp.stochasticgames.model.FullJointModel.Helper

A helper method for deterministic transition dynamics.
 DFS  Class in burlap.behavior.singleagent.planning.deterministic.uninformed.dfs

Implements depth-first search.
 DFS(SADomain, StateConditionTest, HashableStateFactory)  Constructor for class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.DFS

Basic constructor for standard DFS without a depth limit
 DFS(SADomain, StateConditionTest, HashableStateFactory, int)  Constructor for class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.DFS

Basic constructor for standard DFS with a depth limit
 DFS(SADomain, StateConditionTest, HashableStateFactory, int, boolean)  Constructor for class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.DFS

Constructor of DFS with specification of depth limit and whether to maintain a closed list that affects exploration.
 DFS(SADomain, StateConditionTest, HashableStateFactory, int, boolean, boolean)  Constructor for class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.DFS

Constructor of DFS with specification of depth limit, whether to maintain a closed list that affects exploration, and whether paths
generated by options should be explored first.
 dfs(SearchNode, int, Set<HashableState>)  Method in class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.DFS

Runs DFS from a given search node, keeping track of its current depth.
 dfs(SearchNode, int, Set<HashableState>)  Method in class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.LimitedMemoryDFS

Runs DFS from a given search node, keeping track of its current depth.
 DFSInit(SADomain, StateConditionTest, HashableStateFactory, int, boolean, boolean)  Method in class burlap.behavior.singleagent.planning.deterministic.uninformed.dfs.DFS

Constructor of DFS with specification of depth limit, whether to maintain a closed list that affects exploration, and whether paths
generated by options should be explored first.
 dheight  Variable in class burlap.domain.singleagent.gridworld.GridWorldVisualizer.CellPainter

 dheight  Variable in class burlap.domain.singleagent.gridworld.GridWorldVisualizer.LocationPainter

 dheight  Variable in class burlap.domain.singleagent.gridworld.GridWorldVisualizer.MapPainter

 DifferentiableDP  Class in burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners

A class for performing dynamic programming with a differentiable value backup operator.
 DifferentiableDP()  Constructor for class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableDP

 DifferentiableDPOperator  Interface in burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.dpoperator

 DifferentiableQFunction  Interface in burlap.behavior.singleagent.learnfromdemo.mlirl.support

An interface for a valueFunction that can produce Q-value gradients.
 DifferentiableRF  Interface in burlap.behavior.singleagent.learnfromdemo.mlirl.support

An interface for defining differentiable reward functions.
 DifferentiableSoftmaxOperator  Class in burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.dpoperator

 DifferentiableSoftmaxOperator()  Constructor for class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.dpoperator.DifferentiableSoftmaxOperator

 DifferentiableSoftmaxOperator(double)  Constructor for class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.dpoperator.DifferentiableSoftmaxOperator

 DifferentiableSparseSampling  Class in burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners

A differentiable finite-horizon valueFunction that can also use sparse sampling over the transition dynamics when the transition function is very large or infinite.
 DifferentiableSparseSampling(SADomain, DifferentiableRF, double, HashableStateFactory, int, int, double)  Constructor for class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling

Initializes.
 DifferentiableSparseSampling.DiffStateNode  Class in burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners

A class for value differentiable state nodes.
 DifferentiableSparseSampling.QAndQGradient  Class in burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners

A tuple for storing Q-values and their gradients.
 DifferentiableSparseSampling.VAndVGradient  Class in burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners

A tuple for storing a state value and its gradient.
 DifferentiableStateActionValue  Interface in burlap.behavior.functionapproximation

 DifferentiableStateValue  Interface in burlap.behavior.functionapproximation

 DifferentiableValueFunction  Interface in burlap.behavior.singleagent.learnfromdemo.mlirl.support

 DifferentiableVI  Class in burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners

Performs Differentiable Value Iteration using the Boltzmann backup operator and a DifferentiableRF.
 DifferentiableVI(SADomain, DifferentiableRF, double, double, HashableStateFactory, double, int)  Constructor for class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableVI

Initializes the valueFunction.
 DifferentiableVIFactory(HashableStateFactory)  Constructor for class burlap.behavior.singleagent.learnfromdemo.mlirl.support.QGradientPlannerFactory.DifferentiableVIFactory

 DifferentiableVIFactory(HashableStateFactory, TerminalFunction, double, int)  Constructor for class burlap.behavior.singleagent.learnfromdemo.mlirl.support.QGradientPlannerFactory.DifferentiableVIFactory

Initializes.
 DifferentiableVInit  Interface in burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit

An interface for value function initialization that is differentiable with respect to some parameters.
 DiffStateNode(HashableState, int)  Constructor for class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling.DiffStateNode

 DiffVFRF  Class in burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit

A differentiable reward function wrapper for use with MLIRL when the reward function is known, but the value function initialization for leaf nodes is to be learned.
 DiffVFRF(RewardFunction, DifferentiableVInit)  Constructor for class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.DiffVFRF

 diffVInit  Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.DiffVFRF

 dim  Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs.LinearStateActionDifferentiableRF

The dimension of this reward function
 dim  Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.commonrfs.LinearStateDifferentiableRF

The dimension of this reward function
 dim  Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.DiffVFRF

 dim  Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearDiffRFVInit

 dim  Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.diffvinit.LinearStateDiffVF

 dimensionMask  Variable in class burlap.behavior.functionapproximation.sparse.tilecoding.Tiling

The dimensions on which this tiling is dependent
 dir  Variable in class burlap.domain.singleagent.blockdude.state.BlockDudeAgent

 dir(String)  Method in class burlap.domain.singleagent.mountaincar.MountainCar.MCModel

 directEpisodes  Variable in class burlap.behavior.singleagent.auxiliary.EpisodeSequenceVisualizer

 directGameEpisodes  Variable in class burlap.behavior.stochasticgames.auxiliary.GameSequenceVisualizer

 direction  Variable in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.common.ArrowActionGlyph

The direction of the arrow. 0: north; 1: south; 2: east; 3: west
 DiscConfig  Class in burlap.statehashing.discretized

 DiscConfig()  Constructor for class burlap.statehashing.discretized.DiscConfig

 DiscConfig(double)  Constructor for class burlap.statehashing.discretized.DiscConfig

 DiscConfig(Map<Object, Double>, double)  Constructor for class burlap.statehashing.discretized.DiscConfig

 DiscMaskedConfig  Class in burlap.statehashing.maskeddiscretized

 DiscMaskedConfig()  Constructor for class burlap.statehashing.maskeddiscretized.DiscMaskedConfig

 DiscMaskedConfig(double)  Constructor for class burlap.statehashing.maskeddiscretized.DiscMaskedConfig

 DiscMaskedConfig(Set<Object>, Set<String>, Map<Object, Double>, double)  Constructor for class burlap.statehashing.maskeddiscretized.DiscMaskedConfig

 discount  Variable in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgent

The RLGlue specified discount factor
 discount  Variable in class burlap.behavior.singleagent.options.EnvironmentOptionOutcome

The discount factor to apply to the value of time steps immediately following the application of an Option.
 discount  Variable in class burlap.behavior.singleagent.options.model.BFSMarkovOptionModel

 discount  Variable in class burlap.behavior.singleagent.shaping.potential.PotentialShapedRF

The discount factor of the MDP (required for the shaping to preserve policy optimality)
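Potential-based shaping adds the discounted change in a potential function to the base reward, R'(s,a,s') = R(s,a,s') + gamma * Phi(s') - Phi(s); using the MDP's own discount factor gamma is what preserves policy optimality. A minimal sketch of that arithmetic (not BURLAP's PotentialShapedRF class):

```java
// Sketch of potential-based reward shaping (Ng, Harada & Russell, 1999):
// shaped reward = base reward + gamma * Phi(s') - Phi(s).
public class PotentialShaping {
    public static double shaped(double reward, double gamma, double phiS, double phiSPrime) {
        return reward + gamma * phiSPrime - phiS;
    }
}
```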
 discount  Variable in class burlap.behavior.stochasticgames.agents.madp.MADPPlannerFactory.MAVIPlannerFactory

The discount factor in [0, 1]
 discount  Variable in class burlap.behavior.stochasticgames.agents.maql.MAQLFactory

 discount  Variable in class burlap.behavior.stochasticgames.agents.maql.MultiAgentQLearning

The discount factor
 discount  Variable in class burlap.behavior.stochasticgames.agents.naiveq.history.SGQWActionHistoryFactory

The discount rate the Q-learning algorithm will use
 discount  Variable in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQFactory

The discount rate the Q-learning algorithm will use
 discount  Variable in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQLAgent

The discount factor
 discount  Variable in class burlap.behavior.stochasticgames.madynamicprogramming.MADynamicProgramming

The discount factor in [0, 1]
 discount  Variable in class burlap.domain.singleagent.rlglue.RLGlueEnvironment

The discount factor of the task
 discountedReturn(double)  Method in class burlap.behavior.singleagent.Episode

Will return the discounted return received from the first state in the episode to the last state in the episode.
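The discounted return is the sum of rewards weighted by increasing powers of the discount factor, G = r_1 + gamma*r_2 + gamma^2*r_3 + ... A standalone sketch of what Episode.discountedReturn(double) computes over the episode's stored reward sequence:

```java
// Sketch of a discounted return over a reward sequence: each successive reward
// is weighted by one more factor of gamma. Illustrative standalone version.
public class DiscountedReturn {
    public static double discountedReturn(double[] rewards, double gamma) {
        double g = 0.0;
        double d = 1.0; // current power of gamma
        for (double r : rewards) {
            g += d * r;
            d *= gamma;
        }
        return g;
    }
}
```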
 DiscreteObservationFunction  Interface in burlap.mdp.singleagent.pomdp.observations

Defines additional methods for an ObservationFunction for the case when the set of observations is discrete and able to be enumerated.
 DiscretizingHashableStateFactory  Class in burlap.statehashing.discretized

A factory for producing HashableState objects that computes hash codes and tests for state equality after discretizing any real values (Float or Double).
 DiscretizingHashableStateFactory(double)  Constructor for class burlap.statehashing.discretized.DiscretizingHashableStateFactory

Initializes with object identifier independence and no hash code caching.
 DiscretizingHashableStateFactory(boolean, double)  Constructor for class burlap.statehashing.discretized.DiscretizingHashableStateFactory

Initializes with no hash code caching
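The discretization these factories perform snaps each real value to a multiple before hashing and equality testing, so nearby states collide intentionally. Floor-based snapping is an assumption here; BURLAP's factories may round differently:

```java
// Sketch of snapping a real value to a discretization multiple before it is
// hashed or compared: values in the same bucket become equal.
public class DiscretizeSketch {
    public static double snap(double value, double multiple) {
        return Math.floor(value / multiple) * multiple;
    }
}
```

For example, with multiple 1.0, the values 3.2 and 3.9 both snap to 3.0 and thus hash and compare as equal.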
 DiscretizingMaskedHashableStateFactory  Class in burlap.statehashing.maskeddiscretized

A class for producing HashableState objects that computes hash codes and tests for State equality by discretizing real-valued attributes and by masking (ignoring) either state variables and/or OOState classes.
 DiscretizingMaskedHashableStateFactory(double)  Constructor for class burlap.statehashing.maskeddiscretized.DiscretizingMaskedHashableStateFactory

Initializes with object identifier independence, no hash code caching, and no object class or attribute masks.
 DiscretizingMaskedHashableStateFactory(boolean, double)  Constructor for class burlap.statehashing.maskeddiscretized.DiscretizingMaskedHashableStateFactory

Initializes with no hash code caching and no object class or attribute masks
 DiscretizingMaskedHashableStateFactory(boolean, double, boolean, String...)  Constructor for class burlap.statehashing.maskeddiscretized.DiscretizingMaskedHashableStateFactory

Initializes with a specified attribute or object class mask.
 displayPlots  Variable in class burlap.behavior.singleagent.auxiliary.performance.LearningAlgorithmExperimenter

Whether the performance should be visually plotted (by default it will be)
 displayPlots  Variable in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentExperimenter

Whether the performance should be visually plotted (by default it will be)
 distance(double[], double[])  Method in interface burlap.behavior.functionapproximation.dense.rbf.DistanceMetric

Returns the distance between state s0 and state s1.
 distance(double[], double[])  Method in class burlap.behavior.functionapproximation.dense.rbf.metrics.EuclideanDistance

 DistanceMetric  Interface in burlap.behavior.functionapproximation.dense.rbf

An interface for defining the distance between two states that are represented with double arrays.
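The EuclideanDistance implementation of this interface reduces to the standard L2 distance between two equal-length double arrays. A minimal standalone sketch:

```java
// Sketch of Euclidean (L2) distance between two state vectors:
// sqrt of the sum of squared component differences.
public class EuclideanSketch {
    public static double distance(double[] s0, double[] s1) {
        double sum = 0.0;
        for (int i = 0; i < s0.length; i++) {
            double d = s0[i] - s1[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }
}
```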
 domain  Variable in class burlap.behavior.singleagent.auxiliary.EpisodeSequenceVisualizer

 domain  Variable in class burlap.behavior.singleagent.auxiliary.StateEnumerator

The domain whose states will be enumerated
 domain  Variable in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgent

The BURLAP Domain specifying the RLGlue problem representation and action space.
 domain  Variable in class burlap.behavior.singleagent.learnfromdemo.IRLRequest

The domain in which IRL is to be performed
 domain  Variable in class burlap.behavior.singleagent.learnfromdemo.RewardValueProjection

 domain  Variable in class burlap.behavior.singleagent.learning.actorcritic.actor.BoltzmannActor

The domain in which this agent will act
 domain  Variable in class burlap.behavior.singleagent.MDPSolver

The domain to solve
 domain  Variable in class burlap.behavior.stochasticgames.agents.madp.MADPPlanAgentFactory

 domain  Variable in class burlap.behavior.stochasticgames.agents.madp.MADPPlannerFactory.MAVIPlannerFactory

The domain in which planning is to be performed
 domain  Variable in class burlap.behavior.stochasticgames.agents.maql.MAQLFactory

 domain  Variable in class burlap.behavior.stochasticgames.agents.naiveq.history.SGQWActionHistoryFactory

The stochastic games domain in which the agent will act
 domain  Variable in class burlap.behavior.stochasticgames.agents.naiveq.SGNaiveQFactory

The stochastic games domain in which the agent will act
 domain  Variable in class burlap.behavior.stochasticgames.agents.SetStrategySGAgent.SetStrategyAgentFactory

The domain in which the agent will play
 domain  Variable in class burlap.behavior.stochasticgames.agents.twoplayer.repeatedsinglestage.GrimTrigger.GrimTriggerAgentFactory

The domain in which the agent will play
 domain  Variable in class burlap.behavior.stochasticgames.agents.twoplayer.repeatedsinglestage.TitForTat.TitForTatAgentFactory

The domain in which the agent will play
 domain  Variable in class burlap.behavior.stochasticgames.auxiliary.GameSequenceVisualizer

 domain  Variable in class burlap.behavior.stochasticgames.madynamicprogramming.MADynamicProgramming

The domain in which planning is to be performed
 domain  Variable in class burlap.domain.singleagent.rlglue.RLGlueEnvironment

The BURLAP domain
 Domain  Interface in burlap.mdp.core

This is a marker interface for a problem domain.
 domain(Object)  Method in interface burlap.mdp.core.state.vardomain.StateDomain

Returns the numeric domain of the variable for the given key.
 domain  Variable in class burlap.mdp.singleagent.common.VisualActionObserver

The domain this visualizer is rendering
 domain  Variable in class burlap.mdp.singleagent.pomdp.beliefstate.TabularBeliefState

The POMDP domain with which this belief state is associated.
 domain  Variable in class burlap.mdp.singleagent.pomdp.beliefstate.TabularBeliefUpdate

 domain  Variable in class burlap.mdp.stochasticgames.agent.SGAgentBase

 domain  Variable in class burlap.mdp.stochasticgames.common.VisualWorldObserver

The domain this visualizer is rendering
 domain  Variable in class burlap.mdp.stochasticgames.tournament.common.ConstantWorldGenerator

 domain  Variable in class burlap.mdp.stochasticgames.world.World

 domain  Variable in class burlap.shell.BurlapShell

 domain  Variable in class burlap.shell.command.env.AddStateObjectCommand

 domain  Variable in class burlap.shell.command.env.ExecuteActionCommand

 domain  Variable in class burlap.shell.command.world.AddStateObjectSGCommand

 domain  Variable in class burlap.shell.visual.SGVisualExplorer

 domain  Variable in class burlap.shell.visual.VisualExplorer

 DomainGenerator  Interface in burlap.mdp.auxiliary

This class provides a simple interface for constructing domains, but it is not required to create domains.
 domains  Variable in class burlap.behavior.functionapproximation.dense.NormalizedVariableFeatures

 domainSet  Variable in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueAgent

A variable for synchronized checking if the domain has been set.
 door  Variable in class burlap.domain.singleagent.pomdp.tiger.TigerState

 DOOR_RESET  Static variable in class burlap.domain.singleagent.pomdp.tiger.TigerDomain

The observation value for when reaching a new pair of doors (occurs after opening a door)
 dot(double[], double[])  Static method in class burlap.behavior.stochasticgames.solvers.GeneralBimatrixSolverTools

Returns the dot product of two vectors
 doubleEpislon  Static variable in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.BimatrixEquilibriumSolver

The epsilon difference used to test for double equality.
 doubleEquality(double, double)  Static method in class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.BimatrixEquilibriumSolver
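Epsilon-based equality of this kind treats two doubles as equal when they differ by less than a small threshold, which avoids spurious inequality from floating-point rounding. A sketch of the idea (signature mirrors the entry above but with the epsilon passed explicitly for illustration):

```java
// Sketch of epsilon-tolerant double comparison: exact == on doubles is fragile
// after arithmetic, so compare within a small tolerance instead.
public class DoubleEq {
    public static boolean doubleEquality(double a, double b, double eps) {
        return Math.abs(a - b) < eps;
    }
}
```

For example, 0.1 + 0.2 is not exactly 0.3 in double arithmetic, but the two compare equal under a 1e-9 tolerance.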

 dp  Variable in class burlap.behavior.singleagent.planning.deterministic.DDPlannerPolicy

 dp  Variable in class burlap.behavior.singleagent.planning.deterministic.SDPlannerPolicy

 DPOperator  Interface in burlap.behavior.singleagent.planning.stochastic.dpoperator

Defines a function for applying a dynamic programming operator (e.g., reducing the Q-values into a state value).
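The most common such operator is the Bellman max, which reduces a state's Q-values to the value of the best action; alternatives like the softmax fit the same shape. A sketch of the max case:

```java
// Sketch of the Bellman max as a DP operator: reduce a state's Q-values
// to a single state value by taking the maximum.
public class MaxOperator {
    public static double apply(double[] qValues) {
        double max = Double.NEGATIVE_INFINITY;
        for (double q : qValues) max = Math.max(max, q);
        return max;
    }
}
```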
 DPPInit(SADomain, double, HashableStateFactory)  Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableDP

 DPPInit(SADomain, double, HashableStateFactory)  Method in class burlap.behavior.singleagent.planning.stochastic.DynamicProgramming

 DPrint  Class in burlap.debugtools

A class for managing debug print statements.
 dwidth  Variable in class burlap.domain.singleagent.gridworld.GridWorldVisualizer.CellPainter

 dwidth  Variable in class burlap.domain.singleagent.gridworld.GridWorldVisualizer.LocationPainter

 dwidth  Variable in class burlap.domain.singleagent.gridworld.GridWorldVisualizer.MapPainter

 DynamicProgramming  Class in burlap.behavior.singleagent.planning.stochastic

A class for performing dynamic programming operations: updating the value function using a Bellman backup.
 DynamicProgramming()  Constructor for class burlap.behavior.singleagent.planning.stochastic.DynamicProgramming

 DynamicWeightedAStar  Class in burlap.behavior.singleagent.planning.deterministic.informed.astar

Dynamic Weighted A* [1] uses a dynamic heuristic weight based on the depth of the current search tree and an expected depth of the search.
 DynamicWeightedAStar(SADomain, StateConditionTest, HashableStateFactory, Heuristic, double, int)  Constructor for class burlap.behavior.singleagent.planning.deterministic.informed.astar.DynamicWeightedAStar

Initializes
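In Pohl's dynamic weighting scheme, the heuristic weight shrinks from (1 + epsilon) toward 1 as the node's depth approaches the expected search depth, so the search is greedy early and conservative near the goal. A sketch of the f-value under that assumption (details may differ in BURLAP's implementation):

```java
// Sketch of a Dynamic Weighted A* f-value: f(n) = g(n) + w(d) * h(n), where
// w(d) = 1 + epsilon * (1 - d / N) for node depth d and expected depth N.
public class DwaSketch {
    public static double fValue(double g, double h, int depth, int expectedDepth, double epsilon) {
        double frac = Math.min(1.0, (double) depth / expectedDepth); // clamp beyond expected depth
        double w = 1.0 + epsilon * (1.0 - frac);
        return g + w * h;
    }
}
```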