public class UCTTreeWalkPolicy extends Policy implements SolverDerivedPolicy
Nested classes/interfaces inherited from class Policy: Policy.ActionProb, Policy.GroundedAnnotatedAction, Policy.PolicyUndefinedException

Fields inherited from class Policy: annotateOptionDecomposition, evaluateDecomposesOptions

| Constructor and Description |
|---|
| UCTTreeWalkPolicy(UCT planner) Initializes the policy with the given UCT planner. |
| Modifier and Type | Method and Description |
|---|---|
| void | computePolicyFromTree() Computes a hash-backed policy for every state visited along the greedy path of the UCT tree. |
| AbstractGroundedAction | getAction(State s) Returns an action sampled by the policy for the given state. |
| java.util.List<Policy.ActionProb> | getActionDistributionForState(State s) Returns the action probability distribution defined by the policy for the given state. |
| protected UCTActionNode | getQGreedyNode(UCTStateNode snode) Returns the UCTActionNode with the highest average sample return. |
| boolean | isDefinedFor(State s) Specifies whether this policy is defined for the input state. |
| boolean | isStochastic() Indicates whether the policy is stochastic or deterministic. |
| void | setSolver(MDPSolverInterface solver) Sets the solver whose results this policy follows. |
Methods inherited from class Policy: evaluateBehavior, evaluateBehavior, evaluateBehavior, evaluateBehavior, evaluateBehavior, evaluateMethodsShouldAnnotateOptionDecomposition, evaluateMethodsShouldDecomposeOption, followAndRecordPolicy, followAndRecordPolicy, getDeterministicPolicy, getProbOfAction, getProbOfActionGivenDistribution, getProbOfActionGivenDistribution, sampleFromActionDistribution

Constructor Detail

public UCTTreeWalkPolicy(UCT planner)
Initializes the policy with the given UCT planner.
Parameters:
planner - the UCT planner whose tree should be walked.
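A minimal usage sketch (not taken from this page): it assumes a UCT planner that has already been configured, assumes planFromState is the planner's standard entry point for growing the tree, and uses BURLAP 1.x-style package paths in the imports, all of which may differ in your BURLAP version.

```java
// Minimal sketch, assuming a pre-configured UCT planner and BURLAP 1.x-style
// package paths (both assumptions; adjust imports for your BURLAP version).
import burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCT;
import burlap.behavior.singleagent.planning.stochastic.montecarlo.uct.UCTTreeWalkPolicy;
import burlap.oomdp.core.State;
import burlap.oomdp.core.AbstractGroundedAction;

public class UCTTreeWalkPolicyExample {

    public static AbstractGroundedAction greedyAction(UCT planner, State initialState) {
        // Grow the UCT tree from the initial state; planFromState is assumed
        // to be the planner's planning entry point.
        planner.planFromState(initialState);

        // Wrap the planner's tree in a tree-walk policy and build the
        // hash-backed policy along the tree's greedy path.
        UCTTreeWalkPolicy policy = new UCTTreeWalkPolicy(planner);
        policy.computePolicyFromTree();

        // Query the policy for the state we planned from.
        return policy.getAction(initialState);
    }
}
```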
Method Detail

public void setSolver(MDPSolverInterface solver)
Sets the solver whose results this policy follows.
Specified by: setSolver in interface SolverDerivedPolicy
Parameters:
solver - the solver from which this policy is derived
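Because the class implements SolverDerivedPolicy, the solver can also be supplied after construction. A brief sketch, assuming the solver passed in is the UCT planner itself and that UCT implements MDPSolverInterface; since this policy walks a UCT tree, a non-UCT solver would presumably be rejected at runtime (an assumption).

```java
// Sketch only: `planner` is assumed to be a UCT instance, and UCT is assumed
// to implement MDPSolverInterface; a non-UCT solver would presumably fail.
UCTTreeWalkPolicy policy = new UCTTreeWalkPolicy(planner);
MDPSolverInterface solver = planner;
policy.setSolver(solver); // re-points the policy at this solver's UCT tree
```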
public void computePolicyFromTree()
Computes a hash-backed policy for every state visited along the greedy path of the UCT tree.

protected UCTActionNode getQGreedyNode(UCTStateNode snode)
Returns the UCTActionNode with the highest average sample return. Note that this does not use the upper confidence bound, since planning is completed.
Parameters:
snode - the UCTStateNode for which to get the best UCTActionNode
Returns:
the UCTActionNode with the highest average sample return
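For intuition, the selection rule getQGreedyNode implements can be sketched as below. The sumReturn and numVisits fields are hypothetical stand-ins for UCTActionNode's actual statistics; the point is that the score is the plain sample average, with no upper-confidence exploration bonus added once planning is complete.

```java
// Illustrative sketch only: `sumReturn` and `numVisits` are hypothetical
// stand-ins for UCTActionNode's real return/visit statistics.
static UCTActionNode greedyByAverageReturn(java.util.List<UCTActionNode> children) {
    UCTActionNode best = null;
    double bestAvg = Double.NEGATIVE_INFINITY;
    for (UCTActionNode node : children) {
        // Average sample return; no exploration bonus since planning is done.
        double avg = node.sumReturn / (double) node.numVisits;
        if (avg > bestAvg) {
            bestAvg = avg;
            best = node;
        }
    }
    return best;
}
```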
public AbstractGroundedAction getAction(State s)
Returns an action sampled by the policy for the given state.
Specified by: getAction in class Policy
public java.util.List<Policy.ActionProb> getActionDistributionForState(State s)
Returns the action probability distribution defined by the policy for the given state.
Specified by: getActionDistributionForState in class Policy
Parameters:
s - the state for which an action distribution should be returned
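A short sketch of inspecting the returned distribution, continuing the policy and s names from the earlier sketch; the ActionProb field names used here (ga for the action, pSelection for its probability) are assumptions about Policy.ActionProb's layout.

```java
// Assumes Policy.ActionProb exposes the action as `ga` and its selection
// probability as `pSelection` (both field names are assumptions).
java.util.List<Policy.ActionProb> dist = policy.getActionDistributionForState(s);
for (Policy.ActionProb ap : dist) {
    System.out.println(ap.ga + " selected with probability " + ap.pSelection);
}
```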
public boolean isStochastic()
Indicates whether the policy is stochastic or deterministic.
Specified by: isStochastic in class Policy
public boolean isDefinedFor(State s)
Specifies whether this policy is defined for the input state.
Specified by: isDefinedFor in class Policy
Parameters:
s - the input state to test for whether this policy is defined
Returns:
true if this policy is defined for the input State s; false otherwise.
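Because the policy is only defined for states visited along the walked tree, it is worth guarding queries. A small sketch, assuming that querying an undefined state raises the inherited Policy.PolicyUndefinedException:

```java
// Guarded lookup: states outside the greedy path of the UCT tree are not
// covered, and querying them is assumed to raise Policy.PolicyUndefinedException.
if (policy.isDefinedFor(s)) {
    AbstractGroundedAction a = policy.getAction(s);
    // ... execute a
} else {
    // Fall back: replan from s, or use a default action for unvisited states.
}
```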