
U

u(String) - Static method in class burlap.debugtools.DPrint
A universal print whose behavior is determined by the universalPrint field
UCT - Class in burlap.behavior.singleagent.planning.stochastic.montecarlo.uct
An implementation of UCT [1].
UCT(SADomain, double, HashableStateFactory, int, int, int) - Constructor for class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
Initializes UCT
UCTActionConstructor() - Constructor for class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTActionNode.UCTActionConstructor
 
UCTActionNode - Class in burlap.behavior.singleagent.planning.stochastic.montecarlo.uct
UCT Action node that stores relevant action statistics necessary for UCT.
UCTActionNode(Action) - Constructor for class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTActionNode
Generates a new action node for a given action.
UCTActionNode.UCTActionConstructor - Class in burlap.behavior.singleagent.planning.stochastic.montecarlo.uct
A factory for generating UCTActionNode objects.
UCTInit(SADomain, double, HashableStateFactory, int, int, int) - Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
 
UCTStateConstructor() - Constructor for class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTStateNode.UCTStateConstructor
 
UCTStateNode - Class in burlap.behavior.singleagent.planning.stochastic.montecarlo.uct
UCT State Node that wraps a hashed state object and provides additional state statistics necessary for UCT.
UCTStateNode(HashableState, int, List<ActionType>, UCTActionNode.UCTActionConstructor) - Constructor for class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTStateNode
Initializes the UCT state node.
UCTStateNode.UCTStateConstructor - Class in burlap.behavior.singleagent.planning.stochastic.montecarlo.uct
A factory for generating UCTStateNode objects
UCTTreeWalkPolicy - Class in burlap.behavior.singleagent.planning.stochastic.montecarlo.uct
This policy is for use with UCT.
UCTTreeWalkPolicy(UCT) - Constructor for class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTTreeWalkPolicy
Initializes the policy with the UCT valueFunction
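The entries above cover the pieces of BURLAP's UCT planner. A minimal planning sketch (not part of the generated index), assuming the three int constructor arguments are the rollout horizon, the number of rollouts, and the exploration bias constant in that order, and using the four-rooms grid world from the BURLAP tutorials:

    import burlap.behavior.policy.Policy;
    import burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT;
    import burlap.domain.singleagent.gridworld.GridWorldDomain;
    import burlap.domain.singleagent.gridworld.state.GridAgent;
    import burlap.domain.singleagent.gridworld.state.GridWorldState;
    import burlap.mdp.core.state.State;
    import burlap.mdp.singleagent.SADomain;
    import burlap.statehashing.simple.SimpleHashableStateFactory;

    public class UCTExample {
        public static void main(String[] args) {
            GridWorldDomain gwd = new GridWorldDomain(11, 11);
            gwd.setMapToFourRooms();
            SADomain domain = gwd.generateDomain();

            State s = new GridWorldState(new GridAgent(0, 0));

            // Assumed argument order: discount factor, hashing factory,
            // rollout horizon, number of rollouts, exploration bias.
            UCT uct = new UCT(domain, 0.99, new SimpleHashableStateFactory(), 50, 2000, 2);

            // planFromState runs the Monte Carlo rollouts and returns a policy;
            // UCTTreeWalkPolicy is the policy class intended for use with UCT.
            Policy p = uct.planFromState(s);
        }
    }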
uf(String, Object...) - Static method in class burlap.debugtools.DPrint
A universal printf whose behavior is determined by the universalPrint field
ul(String) - Static method in class burlap.debugtools.DPrint
A universal print line whose behavior is determined by the universalPrint field
UniformCostRF - Class in burlap.mdp.singleagent.common
Defines a reward function that always returns -1.
UniformCostRF() - Constructor for class burlap.mdp.singleagent.common.UniformCostRF
 
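A minimal sketch of UniformCostRF used as a RewardFunction; the surrounding class and method names are hypothetical:

    import burlap.mdp.core.action.Action;
    import burlap.mdp.core.state.State;
    import burlap.mdp.singleagent.common.UniformCostRF;
    import burlap.mdp.singleagent.model.RewardFunction;

    public class UniformCostExample {
        // UniformCostRF charges -1 for every transition, a common way to encode
        // "reach the goal in as few steps as possible".
        static double stepCost(State s, Action a, State sPrime) {
            RewardFunction rf = new UniformCostRF();
            return rf.reward(s, a, sPrime); // always -1.0
        }
    }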
UniformRandomSARSCollector(SADomain) - Constructor for class burlap.behavior.singleagent.learning.lspi.SARSCollector.UniformRandomSARSCollector
Initializes the collector's action set using the actions that are part of the domain.
UniformRandomSARSCollector(List<ActionType>) - Constructor for class burlap.behavior.singleagent.learning.lspi.SARSCollector.UniformRandomSARSCollector
Initializes this collector with the action set to use for collecting data.
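A construction-only sketch of the two UniformRandomSARSCollector constructors listed above; the action names are hypothetical, and the resulting collector would typically be used to gather a SARSData set for LSPI:

    import java.util.Arrays;
    import java.util.List;
    import burlap.behavior.singleagent.learning.lspi.SARSCollector;
    import burlap.mdp.core.action.ActionType;
    import burlap.mdp.core.action.UniversalActionType;
    import burlap.mdp.singleagent.SADomain;

    public class CollectorSetup {
        // Option 1: read the action set from the domain.
        static SARSCollector fromDomain(SADomain domain) {
            return new SARSCollector.UniformRandomSARSCollector(domain);
        }

        // Option 2: supply an explicit list of action types (hypothetical names).
        static SARSCollector fromActions() {
            List<ActionType> actions = Arrays.<ActionType>asList(
                    new UniversalActionType("north"),
                    new UniversalActionType("south"));
            return new SARSCollector.UniformRandomSARSCollector(actions);
        }
    }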
uniqueActionNames - Variable in class burlap.domain.stochasticgames.normalform.SingleStageNormalFormGame
The unique action names for the domain to be generated.
uniqueStatesInTree - Variable in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
 
UniversalActionType - Class in burlap.mdp.core.action
An ActionType implementation for unparameterized actions (or at least a single action whose parameters are fully specified at construction time of this ActionType) that have no preconditions (can be executed anywhere).
UniversalActionType(String) - Constructor for class burlap.mdp.core.action.UniversalActionType
Initializes with the type name and sets to return a SimpleAction whose action name is the same as the type name.
UniversalActionType(Action) - Constructor for class burlap.mdp.core.action.UniversalActionType
Initializes to return the given action.
UniversalActionType(String, Action) - Constructor for class burlap.mdp.core.action.UniversalActionType
Initializes.
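A minimal sketch of the single-string constructor; the action name "north" is hypothetical:

    import java.util.List;
    import burlap.mdp.core.action.Action;
    import burlap.mdp.core.action.UniversalActionType;
    import burlap.mdp.core.state.State;

    public class ActionTypeExample {
        // An unparameterized, precondition-free action type named "north".
        static List<Action> applicable(State s) {
            UniversalActionType north = new UniversalActionType("north");
            // With no preconditions, the same single action is applicable in every state.
            return north.allApplicableActions(s);
        }
    }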
universalLR - Variable in class burlap.behavior.learningrate.ExponentialDecayLR
The state independent learning rate
universalTime - Variable in class burlap.behavior.learningrate.SoftTimeInverseDecayLR
The universal number of learning rate polls
UnknownClassException - Exception in burlap.mdp.core.oo.state.exceptions
 
UnknownClassException(String) - Constructor for exception burlap.mdp.core.oo.state.exceptions.UnknownClassException
 
UnknownKeyException - Exception in burlap.mdp.core.state
A runtime exception for when a State variable key is unknown.
UnknownKeyException(Object) - Constructor for exception burlap.mdp.core.state.UnknownKeyException
 
UnknownObjectException - Exception in burlap.mdp.core.oo.state.exceptions
An exception for when an OOState is queried for an unknown object.
UnknownObjectException(String) - Constructor for exception burlap.mdp.core.oo.state.exceptions.UnknownObjectException
 
unmarkAllTerminalPositions() - Method in class burlap.domain.singleagent.gridworld.GridWorldTerminalFunction
Unmarks all agent positions as terminal positions.
unmarkTerminalPosition(int, int) - Method in class burlap.domain.singleagent.gridworld.GridWorldTerminalFunction
Unmarks an agent position as a terminal position.
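A minimal sketch of marking and unmarking terminal grid positions, assuming the (int, int) constructor marks that cell as terminal:

    import burlap.domain.singleagent.gridworld.GridWorldTerminalFunction;

    public class TerminalExample {
        public static void main(String[] args) {
            // Assumption: this constructor marks cell (10, 10) as a terminal position.
            GridWorldTerminalFunction tf = new GridWorldTerminalFunction(10, 10);

            // Individual positions, or all of them, can later be cleared again.
            tf.unmarkTerminalPosition(10, 10);
            tf.unmarkAllTerminalPositions();
        }
    }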
unmodeledActions(KWIKModel, List<ActionType>, State) - Static method in class burlap.behavior.singleagent.learning.modellearning.KWIKModel.Helper
 
UnmodeledFavoredPolicy - Class in burlap.behavior.singleagent.learning.modellearning.rmax
 
UnmodeledFavoredPolicy(Policy, KWIKModel, List<ActionType>) - Constructor for class burlap.behavior.singleagent.learning.modellearning.rmax.UnmodeledFavoredPolicy
 
unstack(BlocksWorldState, ObjectParameterizedAction) - Method in class burlap.domain.singleagent.blocksworld.BWModel
 
UnstackActionType(String) - Constructor for class burlap.domain.singleagent.blocksworld.BlocksWorld.UnstackActionType
 
update(double) - Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTActionNode
Updates the node statistics with a sample return
update(BeliefState, State, Action) - Method in interface burlap.mdp.singleagent.pomdp.beliefstate.BeliefUpdate
Computes a new belief distribution from a previous belief, given a new observation received after taking a specific action.
update(BeliefState, State, Action) - Method in class burlap.mdp.singleagent.pomdp.beliefstate.TabularBeliefUpdate
 
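A stub implementation illustrating the BeliefUpdate contract described above, assuming update returns a BeliefState; a real updater such as TabularBeliefUpdate would reweight the belief by the observation probabilities and renormalize:

    import burlap.mdp.core.action.Action;
    import burlap.mdp.core.state.State;
    import burlap.mdp.singleagent.pomdp.beliefstate.BeliefState;
    import burlap.mdp.singleagent.pomdp.beliefstate.BeliefUpdate;

    public class NoOpBeliefUpdate implements BeliefUpdate {
        // Keeps the prior unchanged; only for illustrating the interface shape.
        @Override
        public BeliefState update(BeliefState belief, State observation, Action a) {
            return belief;
        }
    }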
updateAERSeris() - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
Updates the average reward by episode series.
updateAERSeris(MultiAgentPerformancePlotter.DatasetsAndTrials) - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
Updates the average reward by episode series.
updateAndWait(State) - Method in class burlap.mdp.stochasticgames.common.VisualWorldObserver
 
updateCERSeries() - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
Updates the cumulative reward by episode series.
updateCERSeries(MultiAgentPerformancePlotter.DatasetsAndTrials) - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
Updates the cumulative reward by episode series.
updateCSESeries() - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
Updates the cumulative steps by episode series.
updateCSESeries(MultiAgentPerformancePlotter.DatasetsAndTrials) - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
Updates the cumulative steps by episode series.
updateCSRSeries() - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
Updates the cumulative reward by step series.
updateCSRSeries(MultiAgentPerformancePlotter.DatasetsAndTrials) - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
Updates the cumulative reward by step series.
updateDatasetWithLearningEpisode(Episode) - Method in class burlap.behavior.singleagent.learning.lspi.LSPI
Updates this object's SARSData to include the results of a learning episode.
updateFromCritique(CritiqueResult) - Method in class burlap.behavior.singleagent.learning.actorcritic.actor.BoltzmannActor
 
updateFromCritique(CritiqueResult) - Method in class burlap.behavior.singleagent.learning.actorcritic.Actor
Causes this object to update its behavior in response to a critique of its behavior.
updateGBConstraint(GridBagConstraints, int) - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
Increments the x-y position of a constraint to the next position.
updateGBConstraint(GridBagConstraints, int) - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
Increments the x-y position of a constraint to the next position.
updateLatestQValue() - Method in class burlap.behavior.stochasticgames.agents.maql.MultiAgentQLearning
Updates the Q-value for the most recent observation if it has not already been updated
updateMERSeris() - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
Updates the median reward by episode series.
updateMERSeris(MultiAgentPerformancePlotter.DatasetsAndTrials) - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
Updates the median reward by episode series.
updateModel(EnvironmentOutcome) - Method in interface burlap.behavior.singleagent.learning.modellearning.LearnedModel
Updates this model with respect to the observed EnvironmentOutcome.
updateModel(EnvironmentOutcome) - Method in class burlap.behavior.singleagent.learning.modellearning.models.TabularModel
 
updateModel(EnvironmentOutcome) - Method in class burlap.behavior.singleagent.learning.modellearning.rmax.RMaxModel
 
updateMostRecentSeriesHelper() - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
Updates the series data for the most recent trial plots.
updateMotion(LLState, double) - Method in class burlap.domain.singleagent.lunarlander.LunarLanderModel
Updates the position of the agent/lander given the thrust force that has been exerted.
updateOpen(HashIndexedHeap<PrioritizedSearchNode>, PrioritizedSearchNode, PrioritizedSearchNode) - Method in class burlap.behavior.singleagent.planning.deterministic.informed.astar.AStar
 
updateOpen(HashIndexedHeap<PrioritizedSearchNode>, PrioritizedSearchNode, PrioritizedSearchNode) - Method in class burlap.behavior.singleagent.planning.deterministic.informed.astar.DynamicWeightedAStar
 
updateOpen(HashIndexedHeap<PrioritizedSearchNode>, PrioritizedSearchNode, PrioritizedSearchNode) - Method in class burlap.behavior.singleagent.planning.deterministic.informed.BestFirst
This method is called whenever a search node already in the openQueue needs to have its information or priority updated to reflect a new search node.
updatePropTextArea(State) - Method in class burlap.behavior.singleagent.auxiliary.EpisodeSequenceVisualizer
 
updatePropTextArea(State) - Method in class burlap.shell.visual.SGVisualExplorer
 
updatePropTextArea(State) - Method in class burlap.shell.visual.VisualExplorer
Updates the propositional function evaluation text display for the given state.
updater - Variable in class burlap.mdp.singleagent.pomdp.BeliefAgent
The belief update to use
updater - Variable in class burlap.mdp.singleagent.pomdp.BeliefMDPGenerator.BeliefModel
 
updateRenderedStateAction(State, Action) - Method in class burlap.visualizer.StateActionRenderLayer
Updates the State and Action that will be rendered the next time this class draws
updateSESeries() - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
Updates the steps by episode series.
updateSESeries(MultiAgentPerformancePlotter.DatasetsAndTrials) - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
Updates the steps by episode series.
updateState(State, double) - Method in class burlap.domain.singleagent.cartpole.model.IPModel
Updates the given state object according to the applied control force.
updateState(State) - Method in class burlap.shell.visual.SGVisualExplorer
Updates the currently visualized state to the input state.
updateState(State) - Method in class burlap.shell.visual.VisualExplorer
Updates the currently visualized state to the input state.
updateState(State) - Method in class burlap.visualizer.StateRenderLayer
Updates the state that needs to be painted
updateState(State) - Method in class burlap.visualizer.Visualizer
Updates the state that needs to be painted and repaints.
updateStateAction(State, Action) - Method in class burlap.visualizer.Visualizer
Updates the state and action for the StateRenderLayer and StateActionRenderLayer; then repaints.
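A minimal sketch of pushing a new state to a Visualizer, using the grid world visualizer from the BURLAP tutorials; in a real application the Visualizer would be added to a Swing frame first:

    import burlap.domain.singleagent.gridworld.GridWorldDomain;
    import burlap.domain.singleagent.gridworld.GridWorldVisualizer;
    import burlap.domain.singleagent.gridworld.state.GridAgent;
    import burlap.domain.singleagent.gridworld.state.GridWorldState;
    import burlap.mdp.core.state.State;
    import burlap.visualizer.Visualizer;

    public class VisualizerExample {
        public static void main(String[] args) {
            GridWorldDomain gwd = new GridWorldDomain(11, 11);
            gwd.setMapToFourRooms();
            Visualizer v = GridWorldVisualizer.getVisualizer(gwd.getMap());

            State s = new GridWorldState(new GridAgent(0, 0));
            v.updateState(s); // repaints the canvas with the new state
        }
    }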
updateTimeSeries() - Method in class burlap.behavior.singleagent.auxiliary.performance.PerformancePlotter
Updates all the most recent trial time series with the latest data
updateTimeSeries() - Method in class burlap.behavior.stochasticgames.auxiliary.performance.MultiAgentPerformancePlotter
Updates all the most recent trial time series with the latest data
upper - Variable in class burlap.mdp.core.state.vardomain.VariableDomain
The upper value of the domain
upperBoundV - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
The upper-bound value function
upperVal - Variable in class burlap.behavior.singleagent.auxiliary.gridset.VariableGridSpec
The upper value of the variable on the grid
upperVInit - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.BoundedRTDP
The upper-bound value function initialization
useAllDomains(StateDomain) - Method in class burlap.behavior.functionapproximation.dense.NormalizedVariableFeatures
Goes through the state and sets the ranges for all variables that have a VariableDomain set.
useBatch - Variable in class burlap.behavior.singleagent.planning.stochastic.rtdp.RTDP
If set to use batch mode, Bellman updates will be delayed until a rollout is complete and then run in reverse.
useCorrectModel - Variable in class burlap.domain.singleagent.cartpole.CartPoleDomain.CPPhysicsParams
Specifies whether the correct Cart Pole physical model should be used or the classic, but incorrect, Barto, Sutton, and Anderson model [1].
usedConstructorState - Variable in class burlap.domain.singleagent.rlglue.RLGlueEnvironment
Whether the state generated from the state generator to gather auxiliary information (like the number of objects of each class) has yet been used as a starting state for an RLGlue episode.
useFeatureWiseLearningRate - Variable in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
Whether the learning rate polls should be based on the VFA state features or OO-MDP state.
useGoalConditionStopCriteria(StateConditionTest) - Method in class burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT
Tells the valueFunction to stop planning if a goal state is ever found.
useMaxMargin - Variable in class burlap.behavior.singleagent.learnfromdemo.apprenticeship.ApprenticeshipLearningRequest
If true, use the full max margin method (expensive); if false, use the cheaper projection method
useReplacingTraces - Variable in class burlap.behavior.singleagent.learning.tdmethods.vfa.GradientDescentSarsaLam
Whether to use accumulating or replacing eligibility traces.
useRewardFunction(RewardFunction) - Method in class burlap.mdp.singleagent.model.FactoredModel
 
useRewardFunction(RewardFunction) - Method in interface burlap.mdp.singleagent.model.TaskFactoredModel
Tells this model to use the corresponding RewardFunction
useStateActionWise - Variable in class burlap.behavior.learningrate.ExponentialDecayLR
Whether the learning rate is dependent on state-actions
useStateActionWise - Variable in class burlap.behavior.learningrate.SoftTimeInverseDecayLR
Whether the learning rate is dependent on state-actions
useStateWise - Variable in class burlap.behavior.learningrate.ExponentialDecayLR
Whether the learning rate is dependent on the state
useStateWise - Variable in class burlap.behavior.learningrate.SoftTimeInverseDecayLR
Whether the learning rate is dependent on the state
useTerminalFunction(TerminalFunction) - Method in class burlap.mdp.singleagent.model.FactoredModel
 
useTerminalFunction(TerminalFunction) - Method in interface burlap.mdp.singleagent.model.TaskFactoredModel
Tells this model to use the corresponding TerminalFunction
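A minimal sketch of swapping the task definition on a FactoredModel, assuming the domain was built with a FactoredModel (true for most built-in BURLAP domains) so the cast is safe:

    import burlap.mdp.auxiliary.common.NullTermination;
    import burlap.mdp.singleagent.SADomain;
    import burlap.mdp.singleagent.common.UniformCostRF;
    import burlap.mdp.singleagent.model.FactoredModel;

    public class SwapTaskExample {
        // Replaces the reward and terminal functions the model evaluates.
        static void makeUniformCost(SADomain domain) {
            FactoredModel model = (FactoredModel) domain.getModel();
            model.useRewardFunction(new UniformCostRF());
            model.useTerminalFunction(new NullTermination());
        }
    }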
useValueRescaling(boolean) - Method in class burlap.behavior.singleagent.auxiliary.valuefunctionvis.StateValuePainter
Enabling value rescaling allows the painter to adjust to the minimum and maximum values passed to it.
useVariableC - Variable in class burlap.behavior.singleagent.learnfromdemo.mlirl.differentiableplanners.DifferentiableSparseSampling
Whether the number of transition dynamics samples should scale with the depth of the node.
useVariableC - Variable in class burlap.behavior.singleagent.planning.stochastic.sparsesampling.SparseSampling
Whether the number of transition dynamics samples should scale with the depth of the node.
usingOptionModel - Variable in class burlap.behavior.singleagent.MDPSolver
 
Utilitarian - Class in burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers
Finds the maximum utilitarian value joint action and returns a deterministic strategy respecting it.
Utilitarian() - Constructor for class burlap.behavior.stochasticgames.agents.twoplayer.singlestage.equilibriumplayer.equilibriumsolvers.Utilitarian
 