Diffstat (limited to 'tex/thesis/contribution')
-rw-r--r--  tex/thesis/contribution/contribution.tex  |  250
1 file changed, 166 insertions(+), 84 deletions(-)
diff --git a/tex/thesis/contribution/contribution.tex b/tex/thesis/contribution/contribution.tex
index e41a39f..acf9cdd 100644
--- a/tex/thesis/contribution/contribution.tex
+++ b/tex/thesis/contribution/contribution.tex
@@ -18,10 +18,14 @@
\newcommand\all{\mathsf{all}}
\newcommand\fix{\mathsf{fix}}
-\newcommand\soln{\mathsf{soln}}
+\newcommand\solnFP{\mathsf{soln^{FP}}}
+\newcommand\RFP{\mathsf{R^{FP}}}
+\newcommand\solnSI{\mathsf{soln^{SI}}}
+\newcommand\RSI{\mathsf{R^{SI}}}
\newcommand\post{\mathsf{post}}
\newcommand\pre{\mathsf{pre}}
+\newcommand\last{\mathsf{last}}
\newcommand\Operators{\mathcal{O}}
\newcommand\Systems{\mathbb{E}}
@@ -62,7 +66,7 @@ The existing algorithm as presented in Section
fixpoint iteration and max-strategy iteration. Each of these
operations consists of naively ``evaluating'' the system repeatedly
until a further evaluation results in no change. Gawlitza et
-al.~proved that these iterations must converge in a finite number of
+al.~proved that these iterations converge in a finite number of
steps\cite{Gawlitza:2007:PFC:1762174.1762203}, but in practice this
naive approach performs a large number of re-calculations of results
which are already known.
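For intuition, the naive scheme described here is Kleene-style iteration:
re-evaluate every right-hand side until a full pass produces no change. A
minimal Python sketch, assuming the system is represented as a dict mapping
each variable to a function of the current assignment (a hypothetical
representation, not the thesis' implementation):

    # Naive iteration: keep re-evaluating every equation until nothing changes.
    # `system` maps each variable to a function of the whole assignment; `top`
    # is the initial (greatest) value, e.g. float("inf") for the CZ domain.
    def naive_solve(system, top):
        d = {x: top for x in system}       # start above the greatest fixpoint
        changed = True
        while changed:                     # finite for monotonic, finite-height systems
            changed = False
            for x, rhs in system.items():
                v = rhs(d)                 # recompute x under the current assignment
                if v != d[x]:
                    d[x] = v
                    changed = True
        return d

Every pass recomputes every variable, which is exactly the redundant work the
demand-driven approach below avoids.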
@@ -293,7 +297,7 @@ presented in Listing \ref{algo:wdfs}.
\If {$v \ne D[x]$}
\State $D = \{ x \mapsto v \} \oplus D$
\State $W = \inflFP[x]$
- \State $\inflFP(x) = \emptyset$
+ \State $\inflFP[x] = \emptyset$
\State $\stableFP = \stableFP \backslash W$
\For {$v \in W$}
\State $\solve(v)$
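The W-DFS step shown in this fragment can be read as the following Python
sketch. The representation is assumed, not taken from the thesis: D is the
current assignment, stable the set of stable or in-progress variables, infl a
dictionary of influence sets, and each right-hand side is a function that
receives a variable-lookup callback:

    # Sketch of the W-DFS fixpoint step for one variable x. Assumed state:
    #   D       : current variable assignment
    #   stable  : variables that are stable or currently being computed
    #   infl    : dict of sets; infl[y] holds the variables whose last
    #             evaluation read y
    #   system  : maps x to a right-hand side taking a lookup callback
    def solve(x, system, D, stable, infl):
        if x in stable:
            return
        stable.add(x)

        def eval_var(y):                       # lookup used by right-hand sides
            solve(y, system, D, stable, infl)  # bring y up to date first
            infl[y].add(x)                     # record that x depends on y
            return D[y]

        v = system[x](eval_var)
        if v != D[x]:
            D[x] = v
            work = infl[x].copy()              # everything that read the old value
            infl[x].clear()
            stable.difference_update(work)     # destabilise the influenced variables
            for y in work:
                solve(y, system, D, stable, infl)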
@@ -323,7 +327,7 @@ the values of other variables through the $\eval$ function.
In the case of mutually-recursive variables this would result in
infinite recursion, as each time the variable's value is calculated we
-must first calculate the variable's value. In order to prevent this a
+first calculate the variable's value. In order to prevent this, a
$\stable$ set is maintained of variables whose values are either
currently being computed or whose values have already been
computed. This set provides a ``base'' for the recursion to terminate,
@@ -423,9 +427,10 @@ In the particular case of \emph{monotonic}, \emph{expansive} operators
it has been shown\cite{Gawlitza:2007:PFC:1762174.1762203} that this
process will take a finite number of iterations if we ensure at each
iteration that we make a \emph{strict} improvement on our
-strategy. This means that each time we improve our strategy we must
-make it greater in at least one $\max$-expression, and no worse in the
-others.
+strategy. This means that each time we improve our strategy we make it
+greater in at least one $\max$-expression, and no worse in the
+others. That is, we must satisfy $\sigma_{n+1}(\system)(\rho) >
+\sigma_n(\system)(\rho)$ for the current variable assignment $\rho$.
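The strictness requirement can be phrased as a simple check. A hedged Python
sketch, where evaluate(e, rho) is a hypothetical helper giving the value of
expression e under the assignment rho:

    # A candidate strategy is a strict improvement if, under the current
    # assignment rho, it is no worse for every max-expression and strictly
    # better for at least one.
    def is_strict_improvement(sigma_old, sigma_new, rho, evaluate):
        old = {e: evaluate(sigma_old[e], rho) for e in sigma_old}
        new = {e: evaluate(sigma_new[e], rho) for e in sigma_old}
        return (all(new[e] >= old[e] for e in sigma_old)
                and any(new[e] > old[e] for e in sigma_old))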
A new function, $\improve: ((E_{\max} \to E) \times (X \to D)) \to
(E_{\max} \to E)$, is used below as a \emph{strategy improvement
@@ -539,6 +544,8 @@ solve $\max$-strategy iteration problems.
$\stableSI$ & $\subseteq E_{\max}$, the set of all
$\max$-expressions whose strategies are stable \\
+ $\rho$ & $: X \to \CZ$, a variable assignment \\
+
$\system$ & $\in \Systems$, an equation system \\
$\improve$ & $: ((E_{\max} \to E) \times (X \to \CZ)) \to (E_{\max}
@@ -570,17 +577,24 @@ solve $\max$-strategy iteration problems.
\Procedure {solve} {$x \in E_{\max}$}
\If {$x \not \in \stableSI$}
\State $\stableSI = \stableSI \cup \{x\}$
- \State $\rho = \solve \fixpoint(\sigma(\system))$
\State $v = \improve(\lambda y . \eval(x, y), \rho)[x]$
\If {$v \ne \sigma[x]$}
\State $\sigma = \{ x \mapsto v\} \oplus \sigma$
\State $W = \inflSI[x]$
- \State $\inflSI(x) = \emptyset$
+ \State $\inflSI[x] = \emptyset$
\State $\stableSI = \stableSI \backslash W$
\For {$v \in W$}
- \State $\solve(v)$
+ \State $\solve(v)$
\EndFor
\EndIf
+ \State $\rho_{\last} = \rho$
+ \State $\rho = \solve \fixpoint(\sigma(\system))$
+ \For {$y \in X$}
+          \If {$\rho_{\last}[y] \ne \rho[y]$}
+ \State $\stableSI = \stableSI \backslash \{\system[y]\}$
+ \State $\solve(\system[y])$
+ \EndIf
+ \EndFor
\EndIf
\EndProcedure
\end{algorithmic}
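In Python-like form the body of this procedure reads roughly as follows. This
is a sketch only: module-level state mirrors the listing's globals, and
improve is simplified to take the whole strategy rather than the listing's
anonymous $\eval$ wrapper:

    # sigma      : current strategy (max-expression -> chosen subexpression)
    # stable_si, infl_si : stability and influence bookkeeping for strategies
    # rho        : last computed fixpoint assignment
    # system[y]  : the max-expression forming y's right-hand side
    def solve_strategy(x):
        global rho
        if x in stable_si:
            return
        stable_si.add(x)
        v = improve(sigma, rho)[x]            # try to improve the choice at x only
        if v != sigma[x]:
            sigma[x] = v
            work = infl_si[x].copy()
            infl_si[x].clear()
            stable_si.difference_update(work)
            for e in work:                    # re-examine influenced expressions
                solve_strategy(e)
        rho_last, rho = rho, solve_fixpoint(sigma)   # greatest fixpoint of sigma(system)
        for y in rho:
            if rho_last[y] != rho[y]:         # value changed under the new strategy
                stable_si.discard(system[y])  # destabilise its max-expression
                solve_strategy(system[y])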
@@ -594,7 +608,8 @@ fixpoints, but applies them to $\max$-strategy iteration
problems. Dependencies between $\max$-expressions are considered and
an improvement on the strategy at an expression $e$ is only attempted
if some of the $\max$-expressions which influence $e$ themselves
-improve strategy.
+improve strategy. We also destabilise $\max$-expressions whose
+values changed during the fixpoint iteration step.
For this particular algorithm to work, however, we must assume another
property of $\improve$. In Listing \ref{algo:adapted-wdfs} we take the
@@ -622,7 +637,8 @@ result in the $\max$-strategy improvement at each step being more
expensive. The extra work to improve other portions of the strategy
will be discarded with only $\improve(...)[x]$ being used. The
particular $\improve$ that we are using (see Listing \ref{algo:pmax})
-satisfies this property by way of its anonymous function.
+satisfies this property using an anonymous function to delay the
+computation.
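The delaying trick can be sketched as follows, with improve_at(e, sigma, rho)
a hypothetical helper that improves the strategy for a single
$\max$-expression; the sketch returns a callable rather than the listing's
indexable mapping:

    # Only the entry that is actually requested is computed; the work for all
    # other max-expressions is never performed.
    def improve(sigma, rho):
        return lambda e: improve_at(e, sigma, rho)

    # improve(sigma, rho)(x) improves just the choice at x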
\section{Local Demand-driven Strategy Improvement (LDSI)}
@@ -997,6 +1013,13 @@ themselves destabilised as well.
\label{algo:ldsi:solve}
\end{algorithm}
+$\solve \strategy$ is very similar to the other $\solve$ calls we have
+already seen. The procedure attempts a strategy improvement for the
+$\max$-expression $x$ and, if any improvement is possible, it improves
+the strategy $\sigma$ before invalidating and re-evaluating any other
+$\max$-expressions which may depend on $x$. Here we assume that
+$\improve$ has the same properties as those assumed in Section
+\ref{sec:adapted-wdfs}.
@@ -1011,107 +1034,166 @@ and $\max$-strategy iteration.
The fixpoint iteration portion of the algorithm is concerned with
-finding the greatest fixpoint, $\soln(s) : X \to D$ for the system $s
+finding the greatest fixpoint, $\solnFP(s) : X \to D$ for the system $s
= \sigma(\system)$. If we consider all valid fixpoints, $\fix(s) = \{
-\rho \mid \rho = s(\rho) \}$, then we find that $\soln(s) \in \fix(s)$
-and $\soln(s) \ge \rho ~ \forall \rho \in \fix(s)$. Our general
-approach will be to consider our variable assignment $D \ge \soln(s)$
+\rho \mid \rho = s(\rho) \}$, then we find that $\solnFP(s) \in \fix(s)$
+and $\solnFP(s) \ge \rho ~ \forall \rho \in \fix(s)$. Our general
+approach will be to consider our variable assignment $D \ge \solnFP(s)$
and show that it decreases until $D \in \fix(s)$ and thus $D =
-\soln(s)$.
+\solnFP(s)$.
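As a purely illustrative example of these definitions, consider a toy
two-variable system over a small finite domain; brute force enumerates
$\fix(s)$, and descending from the top element reaches $\solnFP(s)$ (a Python
sketch, unrelated to the thesis implementation):

    from itertools import product

    def s(rho):                               # a toy monotonic system
        return {"x": min(rho["y"], 2), "y": rho["x"]}

    # fix(s): every assignment mapped to itself, here (0,0), (1,1) and (2,2)
    fixpoints = [{"x": a, "y": b} for a, b in product(range(4), repeat=2)
                 if s({"x": a, "y": b}) == {"x": a, "y": b}]

    # descending from the top element reaches the greatest of them, soln(s)
    rho = {"x": 3, "y": 3}
    while s(rho) != rho:
        rho = s(rho)                          # (3,3) -> (2,3) -> (2,2)
    assert all(rho[v] >= f[v] for f in fixpoints for v in ("x", "y"))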
-We begin by stating two invariants of the LDSI algorithm. The set
-$\depFP^{+}(x)$ is the set of all variables which $x$ may depend
+We begin by stating three invariants of the LDSI algorithm. We take
+$\depFP^{+}[x]$ to be the set of all variables which $x$ may depend
on. $\depFP^{+}$ is defined as the transitive closure of $\depFP$,
-which is defined by $\depFP(x) = \{ y \in X \mid x \in \inflFP(y)
-\}$. $R \subseteq X$ is defined as the variables currently on our
-recursion stack.
+$\depFP[x] = \{ y \in X \mid x \in \inflFP[y] \}$. $\RFP \subseteq X$
+is defined as the set of variables currently on our recursion stack.
\begin{align}
- D &\ge \soln(s) \label{eqn:fp-inv1} \\
+ D &\ge \solnFP(s) \label{eqn:fp-inv1} \\
%
- D &= \soln(s)[x] & \mbox{if }
- x \in \stableFP,
- x \not \in R,
- \depFP^{+}(x) \cap R = \emptyset
- \label{eqn:fp-inv2} \\
+ x &\in \inflFP[y] & \forall x \in \stableFP, y \in \depFP[x]
+ \label{eqn:fp-inv3} \\
%
- x \in \inflFP[y] & \forall x \in \stableFP, y \in \depFP(x)
- \label{eqn:fp-inv3}
+  D[x] &= \solnFP(s)[x] & \mbox{if }
+ x \not \in \RFP,
+ \depFP^{+}[x] \cap \RFP = \emptyset,
+ x \in \stableFP
+ \label{eqn:fp-inv2}
\end{align}
(\ref{eqn:fp-inv1}) follows from monotonicity. We begin our iteration
-at some variable assignment $D \ge \soln(s)$ and monotonically descend
-from there. Each evaluation of a variable remains greater than
-$\soln(s)[x]$ because our variable assignment $D$ always satisfies $D
-\ge soln(s)$. We cannot jump below the greatest fixpoint because to do
-so would imply that we could ``turn around'' for at least one
-variable, thereby breaking our monotonicity assumption.
+at some variable assignment $D \ge \solnFP(s)$ and monotonically descend
+from there. We know that $s(D)[x] \ge \solnFP(s)[x] ~\forall D \ge
+\solnFP(s), x \in X$, thus each evaluation of $s[x]$ in the context of
+$D$ will result in a new value for $D[x]$ which is either closer to or
+equal to $\solnFP(s)[x]$. As we only ever evaluate in the context of
+variables taken from $D$, and $D[x]$ is set to $\infty$ when $x$ is
+invalidated, we know that $D \ge \solnFP(s)$ holds.
+
+(\ref{eqn:fp-inv3}) follows from the use of the $\eval \fixpoint$
+function. Each variable lookup results in a direct insertion into
+$\inflFP$, so during the evaluation of $x$ all relevant $\inflFP$ sets
+are updated to reflect the current dependencies.
For (\ref{eqn:fp-inv2}) we will examine what has happened when each of
the conditions is false:
\begin{itemize}
\item
- If $x \not \in \stableFP$ then a re-evaluation of $x$ will occur
- with the next call to $\solve \fixpoint(x)$. This may result in a
- change to the value of $D[x]$, or it may leave $D[x]$ stable for the
- moment. In either case, the value of $D[x]$ will be decreased, given
- the value of the other variables in $X \backslash {x}$. If we denote
- the variable assignment before the evaluation of $x$ by $D_{\pre}$,
- and the variable assignment after the evaluation of $x$ by
- $D_{\post}$ then we find that $D_{\post} \le D_{\pre}$.
+ If $x \in \RFP$ then we are currently in the context of an earlier
+ evaluation of $x$, so the value of $D[x]$ will be changed again by
+ that earlier call. In addition, we conclude that $x \in
+ \depFP^{+}[x]$ and thus, from the following condition, that $x$ will
+ be re-evaluated at some point after this one.
\item
- If $\depFP^{+}(x) \cap R \ne \emptyset$ then a variable which $x$
- depends on is in $R$ at the moment. This will result in the
+ If $\depFP^{+}[x] \cap \RFP \ne \emptyset$ then a variable which $x$
+ depends on is in $\RFP$ at the moment. This will result in the
re-evaluation of $x$ if the value of any of the variables in
- $\depFP^{+}(x) \cap R$ have changed.
+ $\depFP^{+}[x] \cap \RFP$ have changed.
\item
- If $x \in R$ then we are currently in the context of an earlier
- evaluation of $x$, so the value of $D[x]$ will be changed again by
- that earlier call. In addition, we conclude that $x \in
- \depFP^{+}(x)$ and thus, from the previous condition, that $x$ will
- be entirely re-evaluated at some point after this one.
+ If $x \not \in \stableFP$ then a re-evaluation of $x$ will occur
+ with the next call to $\solve \fixpoint(x)$. This may result in a
+ change to the value of $D[x]$, or it may leave $D[x]$ stable for the
+ moment. If we denote the variable assignment before the evaluation
+ of $x$ by $D_{\pre}$, and the variable assignment after the
+ evaluation of $x$ by $D_{\post}$ then we find that $D_{\post} \le
+ D_{\pre}$, due to the monotonicity of $s$. After each change of
+ $D[x]$, everything which \emph{may} have depended on $D[x]$ is
+ re-evaluated (see the previous two items), leading to a final point
+ of stability when $D[x] = \solnFP(s)[x]$, by the definition of
+ $\solnFP(s)[x]$ and (\ref{eqn:fp-inv1}).
\end{itemize}
-(\ref{eqn:fp-inv3}) follows from the use of the $\eval \fixpoint$
-function. Each variable lookup results in a direct insertion into
-$\inflFP$, so during the evaluation of $x$ all relevant $\inflFP$ sets
-are updated.
-
After an evaluation of $\solve \fixpoint(x)$ we know that $x \in
-\solve \fixpoint(x)$. If it is also the case that $R = \emptyset$, as
-is the case for top-level calls to $\solve \fixpoint$, then we know
-that $D[x] = \soln(s)[x]$. This means that a function $\lambda x
+\stableFP$. If it is also the case that $\RFP = \emptyset$, as is the
+case for top-level calls to $\solve \fixpoint$, then we know that
+$D[x] = \solnFP(s)[x]$. This means that the function $\lambda x
. (\solve \fixpoint(x); D[x])$ will act as a variable assignment
solving for the greatest fixpoint of $s = \sigma(\system)$, as is
required by our $\max$-strategy iteration. As the $\max$-strategy
iteration changes $\sigma$ it also induces a change in $s =
\sigma(\system)$, so in order to maintain the correctness of this
-algorithm we must show that our above are maintained by this process.
-
-When the $\max$-strategy iteration portion invalidates a part of the
-fixpoint iteration it does two things: it removes the variable $x$
-from the $\stableFP$ set, and it sets $D[x] = \infty$. The
-invalidation of variables is then propagated to each variable which
-transitively depends on $x$: $\{ y \in X \mid x \in \depFP^{+}(y) \}$,
-or $\{ y \in X \mid y \in \inflFP^{+}(x) \}$, where $\inflFP^{+}$ is
-the transitive closure of $\inflFP$. We know from (\ref{eqn:fp-inv3})
-that this transitive closure of $\inflFP$ will identify the entire
-subsystem which may depend on the value of $x$. The invalidation of
-transitive dependencies ensures that (\ref{eqn:fp-inv1}) and
-(\ref{eqn:fp-inv2}) hold for the entire subsystem which is affected by
-the change in strategy, and thus that our invariants hold for our
-entire system.
-
-This is sufficient for us to be confident that the fixpoint iteration
-portion of the algorithm is correct. The invariants ensure that $D$ is
-approaching $\soln(s)$ and we know that our invariants are maintained
-by the $\max$-strategy invalidating the fixpoint iteration.
-
-
+algorithm we must show that our above invariants are maintained by
+this process.
+
+When the $\max$-strategy iteration changes the current $\max$-strategy
+$\sigma$ at the variable $x$ it changes the equation system $s =
+\sigma(\system)$ that the fixpoint iteration uses. The $\max$-strategy
+iteration then invalidates the affected portion of the fixpoint
+iteration by doing two things: it removes the variable $x$ from the
+$\stableFP$ set, and it sets $D[x] = \infty$. The invalidation of
+variables is then propagated to each variable which transitively
+depends on $x$: $\{ y \in X \mid x \in \depFP^{+}[y] \} = \{ y \in X
+\mid y \in \inflFP^{+}[x] \}$, where $\inflFP^{+}$ is the transitive
+closure of $\inflFP$. We know from (\ref{eqn:fp-inv3}) that this
+transitive closure of $\inflFP$ will identify the entire subsystem
+which may depend on the value of $x$. The invalidation of transitive
+dependencies ensures that (\ref{eqn:fp-inv1}) holds for the changed
+subsystem, as $\infty \ge z ~ \forall z \in \CZ$. From
+(\ref{eqn:fp-inv1}) we can then conclude that (\ref{eqn:fp-inv2})
+holds as well: the removal of $x$ and its transitive dependents from
+$\stableFP$ means that (\ref{eqn:fp-inv2}) places no requirement on
+them, and nothing has changed for the remaining variables. These
+invariants now hold for the affected subsystem and remain true for
+the unaffected subsystem. Thus our invariants hold for the entire
+system, and the fixpoint iteration remains correct in the presence of
+invalidation by the $\max$-strategy iteration.
+
+
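A minimal Python sketch of this invalidation step, under the same assumed
state as in the earlier sketches (D, stable_fp for $\stableFP$, infl_fp for
$\inflFP$); it resets the value and stability of $x$ and of every transitive
dependant reached through the influence sets:

    import math

    def invalidate(x, D, stable_fp, infl_fp, seen=None):
        seen = set() if seen is None else seen
        if x in seen:
            return
        seen.add(x)
        D[x] = math.inf                    # locally restores D >= soln(s)
        stable_fp.discard(x)               # forces a later re-evaluation of x
        for y in infl_fp[x]:               # every variable that read x's value
            invalidate(y, D, stable_fp, infl_fp, seen)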
+We move now to the $\max$-strategy iteration. We define $\rho: X \to
+\CZ$ as $\rho = \lambda x . (\solve \fixpoint(x); D[x])$, a variable
+assignment which will always calculate the greatest fixpoint of
+$\sigma(\system)$ for the current strategy $\sigma$. Each time $\rho$
+is queried for a variable's value it will also record which variables
+have had their values changed, whether or not those changed values are
+final, in the set $\touchedFP$.
+
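The demand-driven $\rho$ described above can be sketched as follows, assuming
a hypothetical interface in which solve_fixpoint(x) returns the set of
variables whose values it changed while stabilising $x$:

    touched_fp = set()                     # over-approximates the changed variables

    def rho(x):
        changed = solve_fixpoint(x)        # re-stabilise x, report changed variables
        touched_fp.update(changed)
        return D[x]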
+We can establish similar invariants for the $\max$-strategy iteration
+as we had for the fixpoint iteration. We denote the optimal strategy
+by $\solnSI(\system)$ and we define $\RSI \subseteq E_{\max}$ to be
+the set of $\max$-expressions on our recursion stack.
+\begin{align}
+ e &\in \inflSI[y] & \forall e \in \stableSI, y \in \depSI[e]
+ \label{eqn:si-inv1} \\
+%
+  \sigma[e] &= \solnSI(\system)[e] & \mbox{if }
+ e \not \in \RSI,
+ \depSI^{+}[e] \cap \RSI = \emptyset,
+ e \in \stableSI
+ \label{eqn:si-inv2}
+\end{align}
-We move now to the $\max$-strategy iteration.
-% TODO: finish the proof
+(\ref{eqn:si-inv1}) follows from the same logic as (\ref{eqn:fp-inv3})
+for the fixpoint iteration. The $\eval \strategy$ function directly
+updates the $\inflSI$ sets whenever they are used, which ensures that
+our dependency information is correct.
+
+(\ref{eqn:si-inv2}) follows a similar argument to (\ref{eqn:fp-inv2})
+for the fixpoint iteration. At each step we must make a strategy
+improvement, such that $\sigma_{n+1}(\system)(\rho) >
+\sigma_n(\system)(\rho)$. Every step must be an improvement and, by the
+assumptions made in Section \ref{sec:adapted-wdfs}, we know that we
+will make an improvement for each individual variable if it is
+possible to do so. The invalidation for a changed strategy is
+identical to that which is performed for variables in the fixpoint
+iteration, so it will not be re-considered here. An additional
+complexity is that $\rho$ is not a constant mapping during
+$\max$-strategy iteration. In order to maintain (\ref{eqn:si-inv2}) we
+ensure that on changes in $\rho$ we invalidate some of our
+$\max$-expressions. The set $\touchedFP$ over-approximates the set of
+changed variables, so we are certain that all changes will be found by
+enumerating $\touchedFP$. We only invalidate those $\max$-expressions
+which are the right-hand side of a stable, changed variable. If a
+variable has not stabilised then its value has not been used
+directly (see (\ref{eqn:fp-inv2})) and thus cannot directly influence
+the $\max$-strategy. If the variable has stabilised to the same value
+as it had previously then it has not changed and thus has no effect on
+the $\max$-strategy.
+
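The handling of $\touchedFP$ described above can be sketched as follows
(hypothetical names, with system[y] the $\max$-expression on $y$'s right-hand
side): only a variable that has stabilised and actually changed value causes
its $\max$-expression to be destabilised.

    def destabilise_changed(rho_last, rho_now, touched_fp,
                            stable_fp, stable_si, system):
        for y in touched_fp:
            # unstable variables were never read directly (cf. fp-inv2), and
            # unchanged variables cannot affect the max-strategy
            if y in stable_fp and rho_last[y] != rho_now[y]:
                stable_si.discard(system[y])
        touched_fp.clear()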
+If both the $\max$-strategy iteration and the fixpoint iteration
+correctly solve their respective portions of the problem, and the
+communication between them is correct, then we know that the overall
+algorithm is correct.
\subsection{Implementation} \label{sec:ldsi:implementation}
+