path: root/tex/thesis/contribution/contribution.tex
Diffstat (limited to 'tex/thesis/contribution/contribution.tex')
-rw-r--r--  tex/thesis/contribution/contribution.tex  221
1 files changed, 154 insertions, 67 deletions
diff --git a/tex/thesis/contribution/contribution.tex b/tex/thesis/contribution/contribution.tex
index 7d12299..66d5b4a 100644
--- a/tex/thesis/contribution/contribution.tex
+++ b/tex/thesis/contribution/contribution.tex
@@ -134,7 +134,7 @@ $O(n^3)$ right-hand-sides, and thus an approach which can evaluate
smaller portions of the system in each iteration would be a
significant improvement.
-\subsection{W-DFS algorithm}
+\subsection{W-DFS algorithm} \label{sec:wdfs}
The W-DFS algorithm presented by Seidl, et al. takes into account some
form of data-dependencies as it solves the system. This gives it the
@@ -255,7 +255,7 @@ problem it is possible
\end{algorithm}
-\subsection{Adapted W-DFS algorithm}
+\subsection{Adapted W-DFS algorithm} \label{sec:adapted-wdfs}
This, then, allows us to use the W-DFS algorithm to re-evaluate only
those parts of the strategy which have changed. Listing
@@ -327,17 +327,26 @@ be evaluated.
The new \emph{Local Demand-driven Strategy Improvement} algorithm,
\emph{LDSI}, seeks to do this. By adding an ``invalidate''
function to both W-DFS instances we provide an interface for the two
-sides of the algorithm to indicate which values have changed. The
-fixpoint-iteration operation can then be provided with a ``stabilise''
-function to solve a partially-unstable system.
+sides of the algorithm to indicate which values have changed and
+``destabilise'' that portion of the system.
This essentially results in a $\max$-strategy iteration which, at each
strategy-improvement step, invalidates a portion of the current
-fixpoint iteration. The fixpoint iteration then re-stabilises itself
-by evaluating what values have been changed before returning control
-to the $\max$-strategy iteration to change the system again. The back
-and forth continues in this way until the $\max$-strategy can not
-improve the strategy any further.
+fixpoint iteration, namely the portion which may depend on the
+changed strategy. The fixpoint iteration then re-stabilises itself
+by re-evaluating those values which may have changed, but only when
+such values are requested by the $\max$-strategy iteration. The
+$\max$-strategy iteration can continue as normal, invalidating the
+fixpoint-iteration as required, and simply assume that the
+fixpoint-iteration will provide it with the correct values from the
+greatest fixpoint.
+
+
+This entire approach is demand-driven, and so any necessary
+evaluation is delayed until it is actually required. Additionally,
+if it is not necessary to evaluate a particular right-hand side in
+order to make a decision, then the algorithm will attempt to avoid
+evaluating it.
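+
+As a small illustrative example (the equations here are invented
+purely for exposition), consider the system $x = \max(1, y)$,
+$y = \max(0, x - 1)$ and $z = \max(0, 5)$. An improvement of the
+strategy for $z$'s right-hand side invalidates only $z$: the values
+of $x$ and $y$ are never touched, and even $z$ itself is only
+re-evaluated once the $\max$-strategy iteration next requests its
+value.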
+
@@ -356,23 +365,23 @@ correctness of this new algorithm is argued in
\begin{algorithmic}
\Globals
\begin{tabularx}{0.9\textwidth}{rX}
- $V$ & $\{X \to \D\}$ - a mapping from variables to values,
+ $D$ & $X \to \D$ - a mapping from variables to values,
starting at $\{ x \mapsto \infty \}$ \\
- $I_{X,X}$ & $\{X \to X\}$ - a mapping from a variable to the
+ $\sigma$ & $E_{\max} \to E$ - a mapping from $\max$-expressions
+ to their sub-expressions (a $\max$-strategy) \\
+
+      $I_{X,X}$ & $X \to 2^X$ - a mapping from a variable to the
variables it influences \\
- $I_{\max,\max}$ & $\{E_{\max} \to E_{\max}\}$ - a mapping from a
+      $I_{\max,\max}$ & $E_{\max} \to 2^{E_{\max}}$ - a mapping from a
$\max$-expression to the $\max$-expressions it influences \\
- $I_{X,\max}$ & $\{X \to E_{\max}\}$ - a mapping from a variable
- to the $\max$-expressions it influences \\
-
$U_{X}$ & The set of all variables whose values have not
stabilised to a final fixpoint value (unstable variables) \\
- $U_{\max}$ & The set of all $\max$ expressions whose strategies
- have not yet stabilised to their final strategy (unstable
+ $S_{\max}$ & The set of all $\max$ expressions whose strategies
+ have stabilised to their final strategy (stable
$\max$-expressions) \\
$\varepsilon$ & The equation system, a mapping from a variable
@@ -384,12 +393,27 @@ correctness of this new algorithm is argued in
\label{algo:ldsi:fixpoint:globals}
\end{algorithm}
+A few things are of particular note among the global variables, in
+particular the difference between $U_X$ being an \emph{unstable} set
+and $S_{\max}$ being a \emph{stable} set. The two representations are
+entirely equivalent, but because the fixpoint-iteration is started as
+being entirely ``stable'' (with values of $-\infty$) it is of more
+practical benefit to store the unstable variables than to do the
+extra work of populating a ``stable'' set up front.
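+
+To illustrate: before any strategy improvement has taken place $U_X$
+is empty, so nothing needs to be recorded at all, whereas a
+``stable'' set would have to be populated with every variable in the
+system before iteration even began. After the first strategy
+improvement only the few invalidated variables appear in $U_X$.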
+
+The other variables carry over the state from each of the previous
+algorithms for intelligently performing $\max$-strategy iteration and
+fixpoint iteration (as presented in Sections \ref{sec:wdfs}
+and \ref{sec:adapted-wdfs}). $D$ and $I_{X,X}$ are taken from the
+W-DFS algorithm, while $\sigma$ and $I_{\max,\max}$ are taken from the
+Adapted W-DFS algorithm.
+
\begin{algorithm}[H]
\begin{algorithmic}
- \Function {evalfixpoint} {$x$, $y$}
+ \Function {evalfixpoint} {$x \in X$, $y \in X$}
\State $\solve \fixpoint(y)$
\State $I_{X,X}[y] = I_{X,X}[y] \cup \{x\}$
\State \Return $D[y]$
@@ -399,9 +423,17 @@ correctness of this new algorithm is argued in
\label{algo:ldsi:fixpoint:eval}
\end{algorithm}
+This procedure is exactly the same as the equivalent function in the
+W-DFS algorithm. By injecting this function as the variable-lookup
+function we can easily track dependencies between fixpoint
+variables. It both calculates a new value for the variable $y$ (via
+the $\solve \fixpoint$ call) and records that $x$ depends on the
+value of $y$.
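+
+As an illustration (with an invented right-hand side): if the
+sub-expression currently selected by $\sigma$ for a variable $x$ is
+$y + 1$, then evaluating it with $\eval \fixpoint$ as the lookup
+function will call $\solve \fixpoint(y)$ to obtain an up-to-date
+value for $y$ and will record $x \in I_{X,X}[y]$, so that a later
+invalidation of $y$ knows that $x$ must be destabilised as well.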
+
\begin{algorithm}[H]
\begin{algorithmic}
- \Function {invalidatefixpoint} {$x$}
+ \Function {invalidatefixpoint} {$x \in X$}
+ \Comment{Invalidate a fixpoint variable}
\If {$x \not \in U_X$}
\State $U_X = U_X \cup \{x\}$
\State $D[x] = \infty$
@@ -418,6 +450,16 @@ correctness of this new algorithm is argued in
\label{algo:ldsi:fixpoint:invalidate}
\end{algorithm}
+This procedure is not called during the fixpoint-iteration process
+itself; it is rather the method by which the $\max$-strategy
+iteration communicates with the fixpoint-iteration. It allows the
+$\max$-strategy iteration to inform the fixpoint-iteration which
+values have changed and will require re-evaluation. This makes it
+possible to re-evaluate only part of the system (and even that
+re-evaluation is delayed until it is requested through the
+$\solve \fixpoint$ procedure).
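+
+Continuing the illustration from above: a call to
+$\invalidate \fixpoint(y)$ adds $y$ to $U_X$, resets $D[y]$ to
+$\infty$ and then invalidates every variable recorded in
+$I_{X,X}[y]$ (here $x$, thanks to the dependency noted during
+evaluation). Variables with no transitive dependency on $y$ are left
+untouched, which is what makes the later partial re-evaluation
+possible.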
+
+
+
\begin{algorithm}[H]
\begin{algorithmic}
\Function {solvefixpoint} {$x$}
@@ -425,7 +467,7 @@ correctness of this new algorithm is argued in
\If {$x \in U_X$}
\State $f = \system[x]$
\State $U_X = U_X \backslash \{x\}$
- \State $v = f( \lambda y . \eval(x, y) )$
+        \State $v = \sigma[f]( \lambda y . \eval \fixpoint(x, y) )$
\If {$v \ne D[x]$}
\State $D = \{ x \mapsto v \} \oplus D$
\State $W = I_{X,X}[x]$
@@ -446,62 +488,110 @@ correctness of this new algorithm is argued in
After an evaluation of the $\solve \fixpoint$ procedure, the variable
supplied as its argument will have been stabilised within the current
-$\max$-strategy.
+$\max$-strategy. This means that it will have taken on the same value
+as it would take in the greatest fixpoint of the system induced by
+the current strategy. Because the $\solve \fixpoint$ calls need not
+be made immediately, it is possible for a variable to be invalidated
+by several successive $\max$-strategy improvements without being
+re-evaluated in between.
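+
+For instance (with invented strategies), the $\max$-strategy
+iteration might improve the strategy for some right-hand side from
+$-\infty$ to a constant and later to a sub-expression involving
+other variables, invalidating the corresponding variable both times;
+that variable is nonetheless only re-evaluated once, at the point
+where its value is next requested through $\solve \fixpoint$.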
+
-After an evaluation of the $\stabilise \fixpoint$ procedure all
-currently ``unstable'' variables in the current fixpoint will be
-solved. This correlates to performing an entire fixpoint-iteration for
-the greatest-fixpoint of the unstable portion of the equation
-system. This is meant to be used to perform a sufficient
-fixpoint-iteration to allow for the next $\max$-strategy improvement
-step, but this relies on the $\max$-strategy solver to correctly
-invalidate the fixpoint-iteration portion of the algorithm.
\begin{algorithm}[H]
\begin{algorithmic}
- \Function {evalstrategy} {$x$, $y$}
- \Comment{Evaluate $y$ for its value and note that when $y$
- changes, $x$ must be re-evaluated}
- \State $\solve(y)$
- \State $I[y] = I[y] \cup \{x\}$
+ \Function {evalstrategy} {$x \in E_{\max}$, $y \in E_{\max}$}
+ \Comment{Evaluate $y$ for its value and note that when $y$
+ changes, $x$ must be re-evaluated}
+ \State $\solve \strategy(y)$
+ \State $I_{\max,\max}[y] = I_{\max,\max}[y] \cup \{x\}$
\State \Return $\sigma[y]$
\EndFunction
+ \Function {evalstrategyfixpoint} {$x \in E_{\max}$, $y \in X$}
+ \Comment{Evaluate $y$ for its value and note that when $y$
+ changes, $x$ must be re-evaluated}
+ \State $\solve \fixpoint(y)$
+      \State $I_{\max,\max}[\system[y]] = I_{\max,\max}[\system[y]] \cup \{x\}$
+ \State \Return $D[y]$
+ \EndFunction
+ \end{algorithmic}
+ \label{algo:ldsi:strategy:eval}
+ \caption{Evaluate a portion of the $\max$-strategy and note a
+ dependency}
+\end{algorithm}
+
+The $\eval \strategy$ function is essentially the same as the $\eval
+\fixpoint$ function, except that it records dependencies in a
+different variable, $I_{\max,\max}$. This is because the dependency
+information for the $\max$-strategy iteration is entirely separate
+from that of the fixpoint iteration.
+
+The $\solve \fixpoint$ calls in $\eval \strategy \fixpoint$ are
+top-level calls, meaning that upon their return we know that $D[y]$
+contains the value $y$ would take in the greatest fixpoint of the
+system induced by the current strategy $\sigma$. This function,
+therefore, acts as a simple intermediate layer between the
+fixpoint-iteration and the $\max$-strategy iteration, allowing
+dependencies between the two to be tracked.
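+
+As an illustration (writing $e$ for an arbitrary $\max$-expression):
+if improving the strategy for $e$ requires the current value of a
+variable $y$, that value is obtained through
+$\eval \strategy \fixpoint(e, y)$, which performs a top-level
+$\solve \fixpoint(y)$, returns $D[y]$ and records
+$e \in I_{\max,\max}[\system[y]]$, so that when $y$ is later
+reported as changed the strategy for $e$ will be re-examined.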
+
+\begin{algorithm}[H]
+ \begin{algorithmic}
\Function {invalidatestrategy} {$x \in X$} \Comment{$x$ is a
\emph{variable}}
- \If {$x \not \in U_X$}
- \State $U_X = U_X \cup \{x\}$
- \State $W = I[x]$
- \State $I[x] = \emptyset$
- \For {$v \in W$}
- \State $\invalidate \fixpoint(v)$
+      \For {$v \in I_{\max,\max}[\system[x]]$} \Comment{$v$ is
+        influenced by $x$}
+ \State $\invalidate \strategy (v)$
+ \EndFor
+ \EndFunction
+
+ \Function {invalidatestrategy} {$x \in E_{\max}$} \Comment{$x$ is
+ a \emph{$\max$-expression}}
+ \If {$x \in S_{\max}$}
+ \State $S_{\max} = S_{\max} \backslash \{x\}$
+        \For {$v \in I_{\max,\max}[x]$} \Comment {$v$ is influenced by $x$}
+ \State $\invalidate \strategy (v)$
\EndFor
\EndIf
\EndFunction
+ \end{algorithmic}
+ \label{algo:ldsi:strategy:invalidate}
+  \caption{Invalidate a portion of the $\max$-strategy}
+\end{algorithm}
+
+Invalidating the $\max$-strategy iteration is slightly more
+complicated than invalidating the fixpoint iteration. As the
+invalidation interface consists of variables, we must first translate
+the variable into a $\max$-expression (which is easily done by looking
+it up in the equation system). We must then invalidate each
+$\max$-expression which depends on this resultant
+$\max$-expression. The invalidation of a $\max$-expression consists of
+transitively invalidating everything which depends on it, as well as
+the $\max$-expression itself.
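+
+Continuing the illustration from above (with the invented dependency
+$e \in I_{\max,\max}[\system[y]]$): when the fixpoint iteration
+reports that the variable $y$ has changed, $\invalidate \strategy(y)$
+looks up $\system[y]$ and calls $\invalidate \strategy$ on each
+$\max$-expression recorded in $I_{\max,\max}[\system[y]]$, here
+$e$. The $\max$-expression variant then removes $e$ from $S_{\max}$
+and recurses over $I_{\max,\max}[e]$, so every strategy decision
+which might transitively depend on $y$'s new value will be
+re-considered by a later $\solve \strategy$ call.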
+
+
- \Function {solvestrategy} {$x$}
+\begin{algorithm}[H]
+ \begin{algorithmic}
+ \Function {solvestrategy} {$x \in E_{\max}$}
\Comment{Solve a specific variable and place its value in $\sigma$}
- \If {$x \not \in \stable$}
- \State $f = \system[x]$
- \State $\stable = \stable \cup \{x\}$
- \State $v = bestStrategy(f, \sigma(), \lambda y . \eval(x, y))$
- \If {$v \ne \sigma[x]$}
- \State $\sigma = \{ x \mapsto v \} \oplus \sigma$
- \State $W = I[x]$
- \State $I(x) = \emptyset$
- \State $\invalidate \fixpoint(\mathsf{lookupVarFor}(x))$
- \State $\stable = \stable \backslash W$
- \For {$v \in W$}
+ \If {$x \not \in S_{\max}$}
+ \State $S_{\max} = S_{\max} \cup \{x\}$
+ \State $\sigma_{dynamic} = \lambda y . \eval \strategy(x,y)$
+ \State $e = P_{\max}(\sigma_{dynamic},
+ \lambda y . \eval \strategy \fixpoint(x, y))(x)$
+ \If {$e \ne \sigma[x]$}
+ \State $\sigma = \{ x \mapsto e \} \oplus \sigma$
+ \State $\invalidate \fixpoint(\system^{-1}(x))$
+        \State $S_{\max} = S_{\max} \backslash I_{\max,\max}[x]$
+        \For {$v \in I_{\max,\max}[x]$}
\State $\solve(v)$
\EndFor
\EndIf
\EndIf
\EndFunction
\end{algorithmic}
-
\caption{The $\max$-strategy portion of the Combined W-DFS.}
- \label{algo:combined-max}
+ \label{algo:ldsi:solve}
\end{algorithm}
@@ -516,15 +606,12 @@ combination algorithm is correct. For this it is sufficient to show
that the invalidate calls in both directions preserve the correctness
of the original algorithm.
-// TODO finish this.
-General idea:
-\begin{itemize}
- \item
- Invalidate calls from fixpoint $\to$ max strategy are correct if
- the calls the other way are, because it completely re-solves the
- equations
- \item
- Invalidate calls from max strategy $\to$ fixpoint are correct
- because they effectively ``reset'' that part of the system,
- creating it to be entirely re-calculated.
-\end{itemize}
+The fixpoint-invalidation step invalidates anything that \emph{might}
+depend on $x$ while it invalidates $x$, thereby ensuring that any
+further calls to $\solve \fixpoint$ will return values that are
+correct for the current strategy.
+
+The strategy-invalidation step invalidates anything that \emph{might}
+depend on $\system[x]$ when it is informed that $x$ has changed,
+thereby ensuring that any further calls to $\solve \strategy$ will
+result in a strategy that is correct for the current fixpoint values.