A B C D E F G H I J K L M N O P Q R S T U V W X Y Z 

E

ECorrelatedQJointPolicy - Class in burlap.behavior.stochasticgames.madynamicprogramming.policies
A joint policy that computes the correlated equilibrium using the Q-values of the agents as input and then either follows that policy or returns a random action with probability epsilon.
ECorrelatedQJointPolicy(double) - Constructor for class burlap.behavior.stochasticgames.madynamicprogramming.policies.ECorrelatedQJointPolicy
Initializes with the epsilon probability of a random joint action.
ECorrelatedQJointPolicy(CorrelatedEquilibriumSolver.CorrelatedEquilibriumObjective, double) - Constructor for class burlap.behavior.stochasticgames.madynamicprogramming.policies.ECorrelatedQJointPolicy
Initializes with the correlated equilibrium objective and the epsilon probability of a random joint action.
EGreedyJointPolicy - Class in burlap.behavior.stochasticgames.madynamicprogramming.policies
An epsilon greedy joint policy, in which the joint action with the highest Q-value for a given target agent is returned a 1-epsilon fraction of the time, and a random joint action an epsilon fraction of the time.
EGreedyJointPolicy(double) - Constructor for class burlap.behavior.stochasticgames.madynamicprogramming.policies.EGreedyJointPolicy
Initializes for a given epsilon value.
EGreedyJointPolicy(MultiAgentQLearning, double) - Constructor for class burlap.behavior.stochasticgames.madynamicprogramming.policies.EGreedyJointPolicy
Initializes for a multi-agent Q-learning object and epsilon value.
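The epsilon-greedy selection rule these joint policies implement can be sketched as follows. This is a minimal illustration with hypothetical names (the class, the `select` method, and the raw Q-value array are stand-ins), not BURLAP's actual EGreedyJointPolicy implementation:

```java
import java.util.Random;

// A minimal sketch of epsilon-greedy selection over Q-values.
public class EpsilonGreedySketch {
    private final Random rng = new Random();
    private final double epsilon;

    public EpsilonGreedySketch(double epsilon) { this.epsilon = epsilon; }

    // Returns the index of the highest-valued action with probability
    // 1 - epsilon, and a uniformly random index with probability epsilon.
    public int select(double[] qValues) {
        if (rng.nextDouble() < this.epsilon) {
            return rng.nextInt(qValues.length); // random action
        }
        int best = 0;
        for (int i = 1; i < qValues.length; i++) {
            if (qValues[i] > qValues[best]) best = i;
        }
        return best; // greedy action
    }
}
```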
EGreedyMaxWellfare - Class in burlap.behavior.stochasticgames.madynamicprogramming.policies
An epsilon greedy joint policy, in which the joint action with the highest aggregate Q-values for each agent is returned a 1-epsilon fraction of the time and a random joint action an epsilon fraction of the time.
EGreedyMaxWellfare(double) - Constructor for class burlap.behavior.stochasticgames.madynamicprogramming.policies.EGreedyMaxWellfare
Initializes for a given epsilon value.
EGreedyMaxWellfare(double, boolean) - Constructor for class burlap.behavior.stochasticgames.madynamicprogramming.policies.EGreedyMaxWellfare
Initializes for a given epsilon value and whether to break ties randomly.
EGreedyMaxWellfare(MultiAgentQLearning, double) - Constructor for class burlap.behavior.stochasticgames.madynamicprogramming.policies.EGreedyMaxWellfare
Initializes for a multi-agent Q-learning object and epsilon value.
EGreedyMaxWellfare(MultiAgentQLearning, double, boolean) - Constructor for class burlap.behavior.stochasticgames.madynamicprogramming.policies.EGreedyMaxWellfare
Initializes for a multi-agent Q-learning object and epsilon value.
eligibility - Variable in class burlap.behavior.singleagent.learning.actorcritic.critics.TDLambda.StateEligibilityTrace
The eligibility value
eligibility - Variable in class burlap.behavior.singleagent.learning.tdmethods.SarsaLam.EligibilityTrace
The eligibility value
eligibilityValue - Variable in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam.EligibilityTraceVector
The eligibility value
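The eligibility values stored by these trace classes are typically maintained with a decay-and-bump update: every trace decays by gamma*lambda each step, and the visited state's trace is incremented. A minimal sketch of an accumulating trace, with hypothetical names rather than BURLAP's actual trace classes:

```java
import java.util.HashMap;
import java.util.Map;

// A minimal sketch of accumulating eligibility traces as used in
// TD(lambda)-style learning.
public class EligibilitySketch {
    private final Map<String, Double> traces = new HashMap<>();
    private final double gamma;   // discount factor
    private final double lambda;  // trace decay parameter

    public EligibilitySketch(double gamma, double lambda) {
        this.gamma = gamma;
        this.lambda = lambda;
    }

    // Decays all traces, then bumps the trace of the visited state.
    public void visit(String state) {
        traces.replaceAll((st, tr) -> tr * gamma * lambda);
        traces.merge(state, 1.0, Double::sum);
    }

    public double eligibility(String state) {
        return traces.getOrDefault(state, 0.0);
    }
}
```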
EMinMaxPolicy - Class in burlap.behavior.stochasticgames.madynamicprogramming.policies
Class for following a minmax joint policy.
EMinMaxPolicy(double) - Constructor for class burlap.behavior.stochasticgames.madynamicprogramming.policies.EMinMaxPolicy
Initializes for a given epsilon value: the fraction of the time a random joint action is selected.
EMinMaxPolicy(MultiAgentQLearning, double) - Constructor for class burlap.behavior.stochasticgames.madynamicprogramming.policies.EMinMaxPolicy
Initializes for a given Q-learning agent and epsilon value.
encodePlanIntoPolicy(SearchNode) - Method in class burlap.behavior.singleagent.planning.deterministic.DeterministicPlanner
Encodes a solution path found by the valueFunction into this class's internal policy structure.
endAllAgents() - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
Informs the plotter that all data for all agents has been collected.
endAllTrials() - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
Specifies that all trials are complete and that the average trial results and error bars should be plotted.
endAllTrialsForAgent(String) - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
Ends all the trials, plotting the average trial data for the agent with the given name
endAllTrialsHelper() - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
The endAllTrials helper method, called at the end of a Swing update.
endEpisode() - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
Informs the plotter that all data for the last episode has been collected.
endEpisode() - Method in interface burlap.behavior.singleagent.learning.actorcritic.Critic
This method is called whenever a learning episode terminates
endEpisode() - Method in class burlap.behavior.singleagent.learning.actorcritic.critics.TDLambda
 
endEpisode() - Method in class burlap.behavior.singleagent.learning.actorcritic.critics.TimeIndexedTDLambda
 
endTrial() - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
Informs the plotter that all data for the current trial has been collected.
endTrial() - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
Ends the current trial data and updates the plots accordingly.
endTrialsForCurrentAgent() - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
Informs the plotter that all trials for the current agent have been collected and causes the average plots to be set and displayed.
entrySet() - Method in class burlap.datastructures.HashedAggregator
The entry set for stored keys and values.
enumerable() - Method in class burlap.behavior.singleagent.options.DeterministicTerminationOption
Returns true if the initiation states and termination states of this option are iterable; false if either of them are not.
EnumerableBeliefState - Interface in burlap.oomdp.singleagent.pomdp.beliefstate
An interface to be used by BeliefState implementations that also can enumerate the set of states that have probability mass.
EnumerableBeliefState.StateBelief - Class in burlap.oomdp.singleagent.pomdp.beliefstate
A class for specifying the probability mass of an MDP state in a BeliefState.
EnumerableBeliefState.StateBelief(State, double) - Constructor for class burlap.oomdp.singleagent.pomdp.beliefstate.EnumerableBeliefState.StateBelief
Initializes
enumeration - Variable in class burlap.behavior.singleagent.auxiliary.StateEnumerator
The forward state enumeration map
enumerator - Variable in class burlap.domain.singleagent.tabularized.TabulatedDomainWrapper
The state enumerator used for enumerating (or tabulating) all states
env - Variable in class burlap.oomdp.singleagent.explorer.VisualExplorer
 
env - Variable in class burlap.shell.EnvironmentShell
 
env_cleanup() - Method in class burlap.oomdp.singleagent.interfaces.rlglue.RLGlueEnvironment
 
env_init() - Method in class burlap.oomdp.singleagent.interfaces.rlglue.RLGlueEnvironment
 
env_message(String) - Method in class burlap.oomdp.singleagent.interfaces.rlglue.RLGlueEnvironment
 
env_start() - Method in class burlap.oomdp.singleagent.interfaces.rlglue.RLGlueEnvironment
 
env_step(Action) - Method in class burlap.oomdp.singleagent.interfaces.rlglue.RLGlueEnvironment
 
Environment - Interface in burlap.oomdp.singleagent.environment
Environments define a current observation represented with a State and manage state and reward transitions when an action is executed in the environment through the Environment.executeAction(burlap.oomdp.singleagent.GroundedAction) method.
environment - Variable in class burlap.oomdp.singleagent.pomdp.BeliefAgent
The POMDP environment.
EnvironmentDelegation - Interface in burlap.oomdp.singleagent.environment
 
EnvironmentDelegation.EnvDelegationTools - Class in burlap.oomdp.singleagent.environment
A class that provides tools for working with Environment delegates
EnvironmentDelegation.EnvDelegationTools() - Constructor for class burlap.oomdp.singleagent.environment.EnvironmentDelegation.EnvDelegationTools
 
EnvironmentObserver - Interface in burlap.oomdp.singleagent.environment
An interface for objects that are notified of interactions in an environment.
EnvironmentOptionOutcome - Class in burlap.behavior.singleagent.options.support
An EnvironmentOutcome class for reporting the effects of applying an Option in a given Environment.
EnvironmentOptionOutcome(State, GroundedAction, State, double, boolean, double, int) - Constructor for class burlap.behavior.singleagent.options.support.EnvironmentOptionOutcome
Initializes.
EnvironmentOutcome - Class in burlap.oomdp.singleagent.environment
A class for specifying the outcome of executing an action in an Environment.
EnvironmentOutcome(State, GroundedAction, State, double, boolean) - Constructor for class burlap.oomdp.singleagent.environment.EnvironmentOutcome
Initializes.
EnvironmentServer - Class in burlap.oomdp.singleagent.environment
An EnvironmentServerInterface implementation that delegates all Environment interactions and requests to a provided Environment delegate.
EnvironmentServer(Environment, EnvironmentObserver...) - Constructor for class burlap.oomdp.singleagent.environment.EnvironmentServer
 
EnvironmentServer.StateSettableEnvironmentServer - Class in burlap.oomdp.singleagent.environment
 
EnvironmentServer.StateSettableEnvironmentServer(StateSettableEnvironment, EnvironmentObserver...) - Constructor for class burlap.oomdp.singleagent.environment.EnvironmentServer.StateSettableEnvironmentServer
 
EnvironmentServerInterface - Interface in burlap.oomdp.singleagent.environment
 
environmentSever - Variable in class burlap.behavior.singleagent.auxiliary.performance.LearningAlgorithmExperimenter
The EnvironmentServer that wraps the test Environment and tells a PerformancePlotter about the individual interactions.
EnvironmentShell - Class in burlap.shell
This is a subclass of BurlapShell for a shell with shell commands that manipulate or read an Environment.
EnvironmentShell(Domain, Environment, InputStream, PrintStream) - Constructor for class burlap.shell.EnvironmentShell
 
EpisodeAnalysis - Class in burlap.behavior.singleagent
This class is used to keep track of all events that occur in an episode.
EpisodeAnalysis() - Constructor for class burlap.behavior.singleagent.EpisodeAnalysis
Creates a new EpisodeAnalysis object.
EpisodeAnalysis(State) - Constructor for class burlap.behavior.singleagent.EpisodeAnalysis
Initializes a new EpisodeAnalysis object with the initial state in which the episode started.
episodeFiles - Variable in class burlap.behavior.singleagent.auxiliary.EpisodeSequenceVisualizer
 
episodeFiles - Variable in class burlap.behavior.stochasticgames.auxiliary.GameSequenceVisualizer
 
episodeHistory - Variable in class burlap.behavior.singleagent.learning.actorcritic.ActorCritic
The saved and most recent learning episodes this agent has performed.
episodeHistory - Variable in class burlap.behavior.singleagent.learning.lspi.LSPI
The saved previous learning episodes.
episodeHistory - Variable in class burlap.behavior.singleagent.learning.modellearning.artdp.ARTDP
The saved previous learning episodes.
episodeHistory - Variable in class burlap.behavior.singleagent.learning.modellearning.rmax.PotentialShapedRMax
The saved previous learning episodes.
episodeHistory - Variable in class burlap.behavior.singleagent.learning.tdmethods.QLearning
The saved previous learning episodes.
episodeHistory - Variable in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
The saved previous learning episodes.
episodeList - Variable in class burlap.behavior.singleagent.auxiliary.EpisodeSequenceVisualizer
 
episodeList - Variable in class burlap.behavior.stochasticgames.auxiliary.GameSequenceVisualizer
 
EpisodeRecordingCommands - Class in burlap.shell.command.env
Two ShellCommands, rec and episode, for recording and browsing episodes of behavior that take place in the Environment.
EpisodeRecordingCommands() - Constructor for class burlap.shell.command.env.EpisodeRecordingCommands
 
EpisodeRecordingCommands.EpisodeBrowserCommand - Class in burlap.shell.command.env
 
EpisodeRecordingCommands.EpisodeBrowserCommand() - Constructor for class burlap.shell.command.env.EpisodeRecordingCommands.EpisodeBrowserCommand
 
EpisodeRecordingCommands.RecordCommand - Class in burlap.shell.command.env
 
EpisodeRecordingCommands.RecordCommand() - Constructor for class burlap.shell.command.env.EpisodeRecordingCommands.RecordCommand
 
episodes - Variable in class burlap.shell.command.env.EpisodeRecordingCommands
 
episodeScroller - Variable in class burlap.behavior.singleagent.auxiliary.EpisodeSequenceVisualizer
 
episodeScroller - Variable in class burlap.behavior.stochasticgames.auxiliary.GameSequenceVisualizer
 
EpisodeSequenceVisualizer - Class in burlap.behavior.singleagent.auxiliary
This class is used to visualize a set of episodes that have been saved to files in a common directory or which are provided to the object as a list of EpisodeAnalysis objects.
EpisodeSequenceVisualizer(Visualizer, Domain, String) - Constructor for class burlap.behavior.singleagent.auxiliary.EpisodeSequenceVisualizer
Initializes the EpisodeSequenceVisualizer.
EpisodeSequenceVisualizer(Visualizer, Domain, String, int, int) - Constructor for class burlap.behavior.singleagent.auxiliary.EpisodeSequenceVisualizer
Initializes the EpisodeSequenceVisualizer.
EpisodeSequenceVisualizer(Visualizer, Domain, List<EpisodeAnalysis>) - Constructor for class burlap.behavior.singleagent.auxiliary.EpisodeSequenceVisualizer
Initializes the EpisodeSequenceVisualizer with a programmatically supplied list of EpisodeAnalysis objects to view.
EpisodeSequenceVisualizer(Visualizer, Domain, List<EpisodeAnalysis>, int, int) - Constructor for class burlap.behavior.singleagent.auxiliary.EpisodeSequenceVisualizer
Initializes the EpisodeSequenceVisualizer with a programmatically supplied list of EpisodeAnalysis objects to view.
episodesListModel - Variable in class burlap.behavior.singleagent.auxiliary.EpisodeSequenceVisualizer
 
episodesListModel - Variable in class burlap.behavior.stochasticgames.auxiliary.GameSequenceVisualizer
 
episodeWeights - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.MLIRLRequest
The weight assigned to each episode.
epsilon - Variable in class burlap.behavior.policy.EpsilonGreedy
 
epsilon - Variable in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest
The maximum feature score to cause termination of Apprenticeship learning
epsilon - Variable in class burlap.behavior.singleagent.planning.deterministic.informed.astar.DynamicWeightedAStar
A parameter > 1 indicating the maximum amount of greediness; the larger the value, the greedier the search.
epsilon - Variable in class burlap.behavior.singleagent.vfa.rbf.functions.FVGaussianRBF
The bandwidth parameter.
epsilon - Variable in class burlap.behavior.singleagent.vfa.rbf.functions.GaussianRBF
The bandwidth parameter.
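A common form of the Gaussian RBF controlled by this bandwidth parameter is exp(-d^2 / (2*epsilon^2)) for a Euclidean distance d; this sketch uses hypothetical names and that common form, which may differ in detail from BURLAP's GaussianRBF:

```java
// A sketch of a Gaussian radial basis function over a Euclidean distance,
// with bandwidth parameter epsilon.
public class GaussianRBFSketch {
    public static double response(double[] x, double[] center, double epsilon) {
        double d2 = 0.0;
        for (int i = 0; i < x.length; i++) {
            double diff = x[i] - center[i];
            d2 += diff * diff; // squared Euclidean distance
        }
        return Math.exp(-d2 / (2.0 * epsilon * epsilon));
    }
}
```

The response is 1 at the center and falls off smoothly with distance; smaller epsilon gives a narrower bump.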
epsilon - Variable in class burlap.behavior.stochasticgames.agents.naiveq.history.SGQWActionHistoryFactory
The epsilon value for the epsilon-greedy policy.
epsilon - Variable in class burlap.behavior.stochasticgames.madynamicprogramming.policies.ECorrelatedQJointPolicy
The epsilon parameter specifying how often random joint actions are returned
epsilon - Variable in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EGreedyJointPolicy
The epsilon parameter specifying how often random joint actions are returned
epsilon - Variable in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EGreedyMaxWellfare
The epsilon parameter specifying how often random joint actions are returned
epsilon - Variable in class burlap.behavior.stochasticgames.madynamicprogramming.policies.EMinMaxPolicy
The epsilon parameter specifying how often random joint actions are returned
EpsilonGreedy - Class in burlap.behavior.policy
This class defines an epsilon-greedy policy over Q-values and requires a QComputable valueFunction to be specified.
EpsilonGreedy(double) - Constructor for class burlap.behavior.policy.EpsilonGreedy
Initializes with the value of epsilon, where epsilon is the probability of taking a random action.
EpsilonGreedy(QFunction, double) - Constructor for class burlap.behavior.policy.EpsilonGreedy
Initializes with the QComputablePlanner to use and the value of epsilon to use, where epsilon is the probability of taking a random action.
epsilonP1 - Variable in class burlap.behavior.singleagent.planning.deterministic.informed.astar.StaticWeightedAStar
The epsilon parameter (> 1).
epsilonWeight(int) - Method in class burlap.behavior.singleagent.planning.deterministic.informed.astar.DynamicWeightedAStar
Returns the weighted epsilon value at the given search depth
equals(Object) - Method in class burlap.behavior.policy.Policy.GroundedAnnotatedAction
 
equals(Object) - Method in class burlap.behavior.singleagent.planning.deterministic.informed.PrioritizedSearchNode
 
equals(Object) - Method in class burlap.behavior.singleagent.planning.deterministic.SearchNode
 
equals(Object) - Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTStateNode
 
equals(Object) - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling.HashedHeightState
 
equals(Object) - Method in class burlap.behavior.singleagent.planning.stochastic.valueiteration.PrioritizedSweeping.BPTRNode
 
equals(Object) - Method in class burlap.behavior.singleagent.vfa.cmac.FVTiling.FVTile
 
equals(Object) - Method in class burlap.behavior.singleagent.vfa.cmac.Tiling.ObjectTile
 
equals(Object) - Method in class burlap.behavior.singleagent.vfa.cmac.Tiling.StateTile
 
equals(Object) - Method in class burlap.behavior.singleagent.vfa.FunctionGradient.PartialDerivative
 
equals(Object) - Method in class burlap.behavior.stochasticgames.agents.interfacing.singleagent.SGToSADomain.GroundedSAAActionWrapper
 
equals(Object) - Method in class burlap.domain.singleagent.graphdefined.GraphDefinedDomain.NodeTransitionProbability
 
equals(Object) - Method in class burlap.domain.singleagent.gridworld.GridWorldTerminalFunction.IntPair
 
equals(Object) - Method in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame.StrategyProfile
 
equals(Object) - Method in class burlap.oomdp.core.Attribute
 
equals(Object) - Method in class burlap.oomdp.core.GroundedProp
 
equals(Object) - Method in class burlap.oomdp.core.objects.ImmutableObjectInstance
 
equals(Object) - Method in class burlap.oomdp.core.objects.MutableObjectInstance
 
equals(Object) - Method in class burlap.oomdp.core.PropositionalFunction
 
equals(Object) - Method in class burlap.oomdp.core.states.FixedSizeImmutableState
 
equals(Object) - Method in class burlap.oomdp.core.states.ImmutableState
 
equals(Object) - Method in class burlap.oomdp.core.states.MutableState
 
equals(Object) - Method in class burlap.oomdp.core.values.DiscreteValue
 
equals(Object) - Method in class burlap.oomdp.core.values.DoubleArrayValue
 
equals(Object) - Method in class burlap.oomdp.core.values.IntArrayValue
 
equals(Object) - Method in class burlap.oomdp.core.values.IntValue
 
equals(Object) - Method in class burlap.oomdp.core.values.MultiTargetRelationalValue
 
equals(Object) - Method in class burlap.oomdp.core.values.RealValue
 
equals(Object) - Method in class burlap.oomdp.core.values.RelationalValue
 
equals(Object) - Method in class burlap.oomdp.core.values.StringValue
 
equals(Object) - Method in class burlap.oomdp.singleagent.Action
 
equals(Object) - Method in class burlap.oomdp.singleagent.GroundedAction
 
equals(Object) - Method in class burlap.oomdp.singleagent.ObjectParameterizedAction.ObjectParameterizedGroundedAction
 
equals(Object) - Method in class burlap.oomdp.singleagent.pomdp.beliefstate.tabular.HashableTabularBeliefStateFactory.HashableTabularBeliefState
 
equals(Object) - Method in class burlap.oomdp.singleagent.pomdp.beliefstate.tabular.TabularBeliefState
 
equals(Object) - Method in class burlap.oomdp.statehashing.HashableState
 
equals(Object) - Method in class burlap.oomdp.statehashing.ImmutableStateHashableStateFactory.ImmutableHashableState
 
equals(Object) - Method in class burlap.oomdp.statehashing.SimpleHashableStateFactory.SimpleCachedHashableState
 
equals(Object) - Method in class burlap.oomdp.statehashing.SimpleHashableStateFactory.SimpleHashableState
 
equals(Object) - Method in class burlap.oomdp.stochasticgames.agentactions.GroundedSGAgentAction
 
equals(Object) - Method in class burlap.oomdp.stochasticgames.agentactions.ObParamSGAgentAction.GroundedObParamSGAgentAction
 
equals(Object) - Method in class burlap.oomdp.stochasticgames.agentactions.SGAgentAction
 
equals(Object) - Method in class burlap.oomdp.stochasticgames.JointAction
 
equals(Object) - Method in class burlap.oomdp.stochasticgames.SGAgentType
 
EquilibriumPlayingSGAgent - Class in burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer
This agent plays an equilibrium solution for two player games based on the immediate joint rewards received for the given state, as if it is a single stage game.
EquilibriumPlayingSGAgent() - Constructor for class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.EquilibriumPlayingSGAgent
Initializes with the MaxMax solution concept.
EquilibriumPlayingSGAgent(BimatrixEquilibriumSolver) - Constructor for class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.EquilibriumPlayingSGAgent
Initializes with strategies formed using the solution concept generated by the given solver.
EquilibriumPlayingSGAgent.BimatrixTuple - Class in burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer
A Bimatrix tuple.
EquilibriumPlayingSGAgent.BimatrixTuple(int, int) - Constructor for class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.EquilibriumPlayingSGAgent.BimatrixTuple
Initializes the payoff matrices for a bimatrix of the given row and column dimensionality
EquilibriumPlayingSGAgent.BimatrixTuple(double[][], double[][]) - Constructor for class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.EquilibriumPlayingSGAgent.BimatrixTuple
Initializes with the given row and column player payoffs.
eStepCounter - Variable in class burlap.behavior.singleagent.learning.tdmethods.QLearning
A counter for counting the number of steps in an episode that have been taken thus far
eStepCounter - Variable in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
A counter for counting the number of steps in an episode that have been taken thus far
estimateFeatureExpectation(EpisodeAnalysis, StateToFeatureVectorGenerator, Double) - Static method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearning
Calculates the feature expectations given one demonstration, a feature mapping, and a discount factor gamma.
estimateFeatureExpectation(List<EpisodeAnalysis>, StateToFeatureVectorGenerator, Double) - Static method in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearning
Calculates the feature expectations given a list of demonstrations, a feature mapping, and a discount factor gamma.
estimateQs() - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling.DiffStateNode
 
estimateQs() - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling.StateNode
Estimates and returns the Q-values for this node.
estimateV() - Method in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling.DiffStateNode
 
estimateV() - Method in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling.StateNode
Returns the estimated Q-value if this node is closed, or estimates it and closes it otherwise.
EuclideanDistance - Class in burlap.behavior.singleagent.vfa.rbf.metrics
 
EuclideanDistance(StateToFeatureVectorGenerator) - Constructor for class burlap.behavior.singleagent.vfa.rbf.metrics.EuclideanDistance
 
evaluate(State, AbstractGroundedAction) - Method in class burlap.behavior.singleagent.vfa.common.LinearFVVFA
 
evaluate(State) - Method in class burlap.behavior.singleagent.vfa.common.LinearFVVFA
 
evaluate(State, AbstractGroundedAction) - Method in class burlap.behavior.singleagent.vfa.common.LinearVFA
 
evaluate(State) - Method in class burlap.behavior.singleagent.vfa.common.LinearVFA
 
evaluate(State, AbstractGroundedAction) - Method in interface burlap.behavior.singleagent.vfa.ParametricFunction.ParametricStateActionFunction
Sets the input of this function to the given State and AbstractGroundedAction and returns the value of it.
evaluate(State) - Method in interface burlap.behavior.singleagent.vfa.ParametricFunction.ParametricStateFunction
Sets the input of this function to the given State and returns the value of it.
evaluateBehavior(State, RewardFunction, TerminalFunction) - Method in class burlap.behavior.policy.Policy
This method will return an episode that results from following this policy from state s.
evaluateBehavior(State, RewardFunction, TerminalFunction, int) - Method in class burlap.behavior.policy.Policy
This method will return an episode that results from following this policy from state s.
evaluateBehavior(State, RewardFunction, int) - Method in class burlap.behavior.policy.Policy
This method will return an episode that results from following this policy from state s.
evaluateBehavior(Environment) - Method in class burlap.behavior.policy.Policy
Evaluates this policy in the provided Environment.
evaluateBehavior(Environment, int) - Method in class burlap.behavior.policy.Policy
Evaluates this policy in the provided Environment.
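An evaluateBehavior-style rollout repeatedly queries the policy and applies the resulting transition until a terminal state or step cap is reached. This sketch uses hypothetical stand-in types (integer states and tiny functional interfaces), not BURLAP's Policy/Environment API:

```java
import java.util.ArrayList;
import java.util.List;

// A minimal sketch of a policy rollout loop.
public class RolloutSketch {
    public interface Policy { int action(int state); }
    public interface Model { int next(int state, int action); }
    public interface Terminal { boolean isTerminal(int state); }

    // Follows pi from start until terminal or maxSteps transitions,
    // returning the visited state sequence.
    public static List<Integer> rollout(Policy pi, Model model, Terminal tf,
                                        int start, int maxSteps) {
        List<Integer> states = new ArrayList<>();
        int s = start;
        states.add(s);
        for (int t = 0; t < maxSteps && !tf.isTerminal(s); t++) {
            s = model.next(s, pi.action(s));
            states.add(s);
        }
        return states;
    }
}
```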
evaluateDecomposesOptions - Variable in class burlap.behavior.policy.Policy
 
evaluateEpisode(EpisodeAnalysis) - Method in class burlap.testing.TestPlanning
 
evaluateEpisode(EpisodeAnalysis, Boolean) - Method in class burlap.testing.TestPlanning
 
evaluateMethodsShouldAnnotateOptionDecomposition(boolean) - Method in class burlap.behavior.policy.Policy
Sets whether options that are decomposed into primitives will have the option that produced them annotated and listed.
evaluateMethodsShouldDecomposeOption(boolean) - Method in class burlap.behavior.policy.Policy
Sets whether the primitive actions taken during an option will be included as steps in produced EpisodeAnalysis objects.
evaluatePolicy(Policy, State) - Method in class burlap.behavior.singleagent.planning.stochastic.policyiteration.PolicyEvaluation
Computes the value function for the given policy after finding all reachable states from seed state s
evaluatePolicy(Policy) - Method in class burlap.behavior.singleagent.planning.stochastic.policyiteration.PolicyEvaluation
Computes the value function for the given policy over the states that have been discovered
evaluatePolicy() - Method in class burlap.behavior.singleagent.planning.stochastic.policyiteration.PolicyIteration
Computes the value function under following the current evaluative policy.
evaluativePolicy - Variable in class burlap.behavior.singleagent.planning.stochastic.policyiteration.PolicyIteration
The current policy to be evaluated
ExampleGridWorld - Class in burlap.tutorials.bd
 
ExampleGridWorld() - Constructor for class burlap.tutorials.bd.ExampleGridWorld
 
ExampleGridWorld.AgentPainter - Class in burlap.tutorials.bd
 
ExampleGridWorld.AgentPainter() - Constructor for class burlap.tutorials.bd.ExampleGridWorld.AgentPainter
 
ExampleGridWorld.AtLocation - Class in burlap.tutorials.bd
 
ExampleGridWorld.AtLocation(Domain) - Constructor for class burlap.tutorials.bd.ExampleGridWorld.AtLocation
 
ExampleGridWorld.ExampleRF - Class in burlap.tutorials.bd
 
ExampleGridWorld.ExampleRF(int, int) - Constructor for class burlap.tutorials.bd.ExampleGridWorld.ExampleRF
 
ExampleGridWorld.ExampleTF - Class in burlap.tutorials.bd
 
ExampleGridWorld.ExampleTF(int, int) - Constructor for class burlap.tutorials.bd.ExampleGridWorld.ExampleTF
 
ExampleGridWorld.LocationPainter - Class in burlap.tutorials.bd
 
ExampleGridWorld.LocationPainter() - Constructor for class burlap.tutorials.bd.ExampleGridWorld.LocationPainter
 
ExampleGridWorld.Movement - Class in burlap.tutorials.bd
 
ExampleGridWorld.Movement(String, Domain, int) - Constructor for class burlap.tutorials.bd.ExampleGridWorld.Movement
 
ExampleGridWorld.WallPainter - Class in burlap.tutorials.bd
 
ExampleGridWorld.WallPainter() - Constructor for class burlap.tutorials.bd.ExampleGridWorld.WallPainter
 
executeAction(GroundedAction) - Method in class burlap.behavior.singleagent.interfaces.rlglue.RLGlueEnvironmentInterface
 
executeAction(GroundedAction) - Method in class burlap.behavior.stochasticgames.agents.interfacing.singleagent.LearningAgentToSGAgentInterface
 
executeAction(GroundedAction) - Method in interface burlap.oomdp.singleagent.environment.Environment
Executes the specified action in this environment
executeAction(GroundedAction) - Method in class burlap.oomdp.singleagent.environment.EnvironmentServer
 
executeAction(GroundedAction) - Method in class burlap.oomdp.singleagent.environment.SimulatedEnvironment
 
executeAction(String[]) - Method in class burlap.oomdp.singleagent.explorer.VisualExplorer
Executes the action defined in string array with the first component being the action name and the rest the parameters.
executeAction(GroundedAction) - Method in class burlap.oomdp.singleagent.explorer.VisualExplorer
Executes the provided GroundedAction in the explorer's environment and records the result if episodes are being recorded.
executeAction(GroundedAction) - Method in class burlap.oomdp.singleagent.pomdp.SimulatedPOEnvironment
 
ExecuteActionCommand - Class in burlap.shell.command.env
A ShellCommand for executing an action in the Environment. Use the -h option for help information.
ExecuteActionCommand(Domain) - Constructor for class burlap.shell.command.env.ExecuteActionCommand
 
executeCommand(String) - Method in class burlap.shell.BurlapShell
 
executeIn(Environment) - Method in class burlap.behavior.policy.Policy.GroundedAnnotatedAction
 
executeIn(State) - Method in class burlap.behavior.policy.Policy.GroundedAnnotatedAction
 
executeIn(Environment) - Method in class burlap.oomdp.singleagent.GroundedAction
Executes this grounded action in the specified Environment.
executeIn(State) - Method in class burlap.oomdp.singleagent.GroundedAction
Executes the grounded action on a given state
executeJointAction(JointAction) - Method in class burlap.oomdp.stochasticgames.World
Manually attempts to execute a joint action in the current world state, if a game is currently not running.
expandStateActionWeights(int) - Method in class burlap.behavior.singleagent.vfa.common.LinearFVVFA
Expands the state-action function weight vector by a fixed size and initializes the new weights to the default weight value set for this object.
expectationSearchCutoffProb - Variable in class burlap.behavior.singleagent.options.Option
The minimum probability of reaching a possible terminal state for it to be included in the computed transition dynamics.
expectationStateHashingFactory - Variable in class burlap.behavior.singleagent.options.Option
State hash factory used to cache the transition probabilities so that they only need to be computed once for each state
expectedDepth - Variable in class burlap.behavior.singleagent.planning.deterministic.informed.astar.DynamicWeightedAStar
The expected depth required for a plan
expectedGap - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP.StateSelectionAndExpectedGap
The expected margin/gap of the value function from the source transition
expectedPayoffs(double[][], double[][], double[], double[]) - Static method in class burlap.behavior.stochasticgames.solvers.GeneralBimatrixSolverTools
Computes the expected payoff for each player in a bimatrix game according to their strategies.
expectedPayoffs(double[][], double[][], double[][]) - Static method in class burlap.behavior.stochasticgames.solvers.GeneralBimatrixSolverTools
 
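The expected payoff of a bimatrix game under mixed strategies is the strategy-weighted sum of each payoff matrix: rowStrategy^T * Payoff * colStrategy for each player. A minimal sketch with a hypothetical helper class, not GeneralBimatrixSolverTools' actual signature:

```java
// A minimal sketch of expected payoffs in a bimatrix game under
// mixed strategies for the row and column players.
public class BimatrixSketch {
    // Returns {rowPlayerPayoff, colPlayerPayoff}.
    public static double[] expectedPayoffs(double[][] rowPayoffs, double[][] colPayoffs,
                                           double[] rowStrategy, double[] colStrategy) {
        double rp = 0.0, cp = 0.0;
        for (int i = 0; i < rowStrategy.length; i++) {
            for (int j = 0; j < colStrategy.length; j++) {
                double p = rowStrategy[i] * colStrategy[j]; // joint action probability
                rp += p * rowPayoffs[i][j];
                cp += p * colPayoffs[i][j];
            }
        }
        return new double[]{rp, cp};
    }
}
```

For example, in matching pennies with uniform strategies both players' expected payoffs are zero.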
ExperimentalEnvironment - Interface in burlap.behavior.singleagent.auxiliary.performance
An interface to be used in conjunction with Environment implementations that can accept a message informing the environment that a new experiment for a LearningAgent has started.
experimentAndPlotter() - Method in class burlap.tutorials.bpl.BasicBehavior
 
expertEpisodes - Variable in class burlap.behavior.singleagent.learnfromdemo.IRLRequest
The input trajectories/episodes that are to be modeled.
explorationBias - Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
 
explorationQBoost(int, int) - Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
Returns the extra value added to the average sample Q-value that is used to produce the upper confidence Q-value.
explore() - Method in class burlap.oomdp.singleagent.explorer.TerminalExplorer
Deprecated.
Starts the shell.
explore() - Method in class burlap.oomdp.stochasticgames.explorers.SGTerminalExplorer
Deprecated.
 
ExponentialDecayLR - Class in burlap.behavior.learningrate
This class provides a learning rate that decays exponentially with time according to r^t, where r is in [0,1] and t is the time step, from an initial learning rate.
ExponentialDecayLR(double, double) - Constructor for class burlap.behavior.learningrate.ExponentialDecayLR
Initializes with an initial learning rate and decay rate for a state independent learning rate.
ExponentialDecayLR(double, double, double) - Constructor for class burlap.behavior.learningrate.ExponentialDecayLR
Initializes with an initial learning rate and decay rate for a state independent learning rate that will decay to a value no smaller than minimumLearningRate
ExponentialDecayLR(double, double, HashableStateFactory, boolean) - Constructor for class burlap.behavior.learningrate.ExponentialDecayLR
Initializes with an initial learning rate and decay rate for a state or state-action (or state feature-action) dependent learning rate.
ExponentialDecayLR(double, double, double, HashableStateFactory, boolean) - Constructor for class burlap.behavior.learningrate.ExponentialDecayLR
Initializes with an initial learning rate and decay rate for a state or state-action (or state feature-action) dependent learning rate that will decay to a value no smaller than minimumLearningRate. If this learning rate function is to be used for state features, rather than states, then the hashing factory can be null.
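The r^t decay scheme with a minimum floor described above can be sketched as follows; the class and method names here are hypothetical stand-ins, not BURLAP's actual ExponentialDecayLR API:

```java
// A minimal sketch of an exponentially decaying learning rate with a floor.
public class ExpDecaySketch {
    private final double initialRate;
    private final double decayRate;   // r in [0, 1]
    private final double minimumRate; // floor the rate never drops below

    public ExpDecaySketch(double initialRate, double decayRate, double minimumRate) {
        this.initialRate = initialRate;
        this.decayRate = decayRate;
        this.minimumRate = minimumRate;
    }

    // Learning rate at time step t: max(initial * r^t, minimum).
    public double rateAt(int t) {
        return Math.max(this.initialRate * Math.pow(this.decayRate, t), this.minimumRate);
    }
}
```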
ExponentialDecayLR.MutableDouble - Class in burlap.behavior.learningrate
A class for storing a mutable double value object
ExponentialDecayLR.MutableDouble(double) - Constructor for class burlap.behavior.learningrate.ExponentialDecayLR.MutableDouble
 
ExponentialDecayLR.StateWiseLearningRate - Class in burlap.behavior.learningrate
A class for storing a learning rate for a state, or a learning rate for each action for a given state
ExponentialDecayLR.StateWiseLearningRate() - Constructor for class burlap.behavior.learningrate.ExponentialDecayLR.StateWiseLearningRate
 
externalTerminalFunction - Variable in class burlap.behavior.singleagent.options.Option
the terminal function of the MDP in which this option is to be executed.