Diffstat (limited to 'tex/thesis/contribution/contribution.tex')
-rw-r--r--  tex/thesis/contribution/contribution.tex  | 152
1 file changed, 72 insertions, 80 deletions
diff --git a/tex/thesis/contribution/contribution.tex b/tex/thesis/contribution/contribution.tex
index acf9cdd..3e3ed6c 100644
--- a/tex/thesis/contribution/contribution.tex
+++ b/tex/thesis/contribution/contribution.tex
@@ -57,7 +57,7 @@ In this chapter we will begin by presenting the Work-list Depth First
Search (W-DFS) fixpoint algorithm developed by Seidl et
al.\cite{DBLP:tr/trier/MI96-11}. We will then discuss a modification
of the algorithm to allow it to perform $\max$-strategy iteration. The
-chapter will then conclude with our Local Demand-driven Strategy
+chapter will conclude with our Local Demand-driven Strategy
Improvement (LDSI) algorithm to perform efficient strategy iteration
and fixpoint iteration. % TODO: fix me, or something
@@ -74,6 +74,7 @@ which are already known.
\section{Preliminaries}
+% TODO: something about equation systems, variables and domains (very generally)
In order to aid our explanation of these algorithms we will now define
a few terms and notations. We will assume a fixed, finite set $X$ of
variables. We will also define $\CZ = \Z \cup \{-\infty, \infty\}$.
@@ -86,7 +87,7 @@ variables. We will also define $\CZ = \Z \cup \{-\infty, \infty\}$.
variable assignment $\{ x \mapsto a \mid x \in X \}$.
Variable assignments $\rho : X \to \CZ$ and $\varrho : X \to \CZ$
- may be composed with the $\oplus$ operator in the following way:
+ may be composed with the $\oplus$ operator as follows:
\begin{align*}
(\rho \oplus \varrho)(x) = \left\{\begin{array}{lc}
\varrho(x) & x \in \mathsf{domain}(\varrho) \\
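For example, with $\rho = \{ x \mapsto 1, y \mapsto 2 \}$ and $\varrho
= \{ y \mapsto 5 \}$ we get $\rho \oplus \varrho = \{ x \mapsto 1, y
\mapsto 5 \}$: $\varrho$ takes precedence wherever it is defined and
$\rho$ is used elsewhere.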
@@ -96,9 +97,7 @@ variables. We will also define $\CZ = \Z \cup \{-\infty, \infty\}$.
\end{definition}
\begin{definition}
- \textbf{Expressions:}
-
- The class of expressions, $E$, is defined inductively:
+ The class of \textbf{expressions}, $E$, is defined inductively:
\begin{itemize}
\item
If $z \in \CZ$ then $z \in E$
@@ -111,7 +110,7 @@ variables. We will also define $\CZ = \Z \cup \{-\infty, \infty\}$.
Nothing else is in $E$.
\end{itemize}
- We then define the semantics of these expressions by:
+ We define the semantics of these expressions by:
\begin{align*}
\llb z \rrb &= \lambda \rho . z & \forall z \in \CZ \\
\llb x \rrb &= \lambda \rho . \rho(x) & \forall x \in X \\
@@ -119,8 +118,8 @@ variables. We will also define $\CZ = \Z \cup \{-\infty, \infty\}$.
\lambda \rho . f(\llb e_1 \rrb(\rho),...,\llb e_k \rrb(\rho))
\end{align*}
- By abusing our notation we will consider expressions, $e \in E$, as
- $e : (X \to \CZ) \to \CZ$ defined by $e = \llb e \rrb$.
+ By abusing our notation we consider expressions, $e \in E$, as $e :
+ (X \to \CZ) \to \CZ$ defined by $e = \llb e \rrb$.
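For example, taking $e = \max\{ x, 3 \}$ and $\rho = \{ x \mapsto 5
\}$, the semantics above give $e(\rho) = \llb \max\{ x, 3 \}
\rrb(\rho) = \max(\rho(x), 3) = 5$.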
The subset of expressions of the form $\max\{ e_1, e_2, ..., e_k \}$,
with $e_1, e_2, ..., e_k \in E$ is referred to
@@ -128,8 +127,8 @@ variables. We will also define $\CZ = \Z \cup \{-\infty, \infty\}$.
We define a function $\parents: E \to 2^E$, taking an expression,
$e$, to the set of expressions containing $e$, $\{ e_\parent \mid
- e_\parent = f\{ ..., e, ... \}, f: \CZ^k \to \CZ \}$. $e$ is then
- said to be a \emph{subexpression} of $e_{\parent}$.
+ e_\parent = f( ..., e, ... ), f: \CZ^k \to \CZ \}$. The expression
+ $e$ is said to be a \emph{subexpression} of $e_{\parent}$.
\end{definition}
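For example, with $e_\parent = \max\{ x, y + 1 \}$ we have $e_\parent
\in \parents(x)$ and $e_\parent \in \parents(y + 1)$; both $x$ and
$y + 1$ are subexpressions of $e_\parent$.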
\begin{definition}
@@ -146,15 +145,15 @@ variables. We will also define $\CZ = \Z \cup \{-\infty, \infty\}$.
e_x(\rho)$, taking a variable assignment $\rho$ and returning the
result of each $e_x$'s evaluation in $\rho$. We use this second form
as a convenience for when we are evaluating the entire equation
- system in the context of a variable assignment: we get a new value
- for each variable and thus a new variable assignment as a result.
+ system in the context of a variable assignment: we obtain a new
+ value for each variable and thus a new variable assignment as a
+ result.
- These two functional forms are equivalent and correspond to changing
+ The two functional forms are equivalent and correspond to changing
the argument order in the following way: $\system(\rho)[x] =
- \system[x](\rho)$. We use both forms as a convenience for when we
- only wish to use one argument. The representations are
- differentiated by context (as the arguments are of differing types)
- as well as by whether parentheses or brackets are used.
+ \system[x](\rho)$. The representations are differentiated by context
+ (as the arguments are of differing types) as well as by whether
+ parentheses or brackets are used.
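As a small illustration (our example), consider the system $\system =
\{ x = \max\{ 0, y \}, y = x + 1 \}$ with $\rho = \{ x \mapsto 0, y
\mapsto 0 \}$: the bracket form selects a single equation, giving
$\system[y](\rho) = \rho(x) + 1 = 1$, while the parenthesis form
evaluates them all, giving $\system(\rho) = \{ x \mapsto 0, y \mapsto
1 \}$.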
The expressions in $E_\system$ are referred to as ``right hand
sides'' of the equation system $\system$. We denote the set of all
@@ -171,8 +170,8 @@ variables. We will also define $\CZ = \Z \cup \{-\infty, \infty\}$.
\end{definition}
\begin{definition}
- \textbf{Dependencies:} A variable or expression $x$ is said to
- \emph{depend on} $y$ if $x(\rho) \ne x(\varrho)$ and
+ A variable or expression $x$ is said to \textbf{depend} on $y$ if,
+ for some variable assignments $\rho, \varrho: X \to \CZ$,
+ $x(\rho) \ne x(\varrho)$ and
\begin{align*}
\rho(z) &= \varrho(z) & \forall z \in X \backslash \{y\} \\
\rho(y) &\ne \varrho(y)
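For example, the expression $y + 1$ depends on $y$, as any two
assignments differing only in $y$ give it different values, whereas a
constant expression such as $5$ depends on no variable.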
@@ -399,7 +398,7 @@ other subsystem. To compute the value of $x$ in the greatest fixpoint
it is unnecessary to compute the value of the variable $d$, but it is
necessary to compute the value of $y$. The W-DFS algorithm will only
evaluate the necessary subsystem for a requested variable, leaving the
-other system untouched. In order to capture this property we will
+other system unevaluated. In order to capture this property we will
define the notion of a \emph{local} solver.
\begin{definition}
@@ -408,35 +407,31 @@ define the notion of a \emph{local} solver.
solve for $e_x$ without evaluating every element of $E_\system$.
\end{definition}
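To make locality concrete, the following Python sketch (our
illustration only, not W-DFS itself, which is demand-driven and
worklist-based) computes a fixpoint while only ever touching the
subsystem reachable from the requested variable. The names `system',
`deps' and `local_kleene' are ours, and, as in the surrounding theory,
right-hand sides are assumed monotonic with only finitely many
strictly descending steps:

    INF = float("inf")

    def reachable(deps, x):
        # Variables transitively reachable from x; deps[v] lists the
        # variables occurring in v's right-hand side.
        seen, stack = set(), [x]
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend(deps[v])
        return seen

    def local_kleene(system, deps, x):
        # system[v] maps an assignment (a dict) to v's new value.
        # Only equations reachable from x are ever evaluated; the
        # rest of the system is left untouched.
        sub = reachable(deps, x)
        rho = {v: INF for v in sub}   # start at top: greatest fixpoint
        changed = True
        while changed:
            changed = False
            for v in sub:
                new = system[v](rho)
                if new != rho[v]:
                    rho[v] = new
                    changed = True
        return rho[x]

In the example above, a request for $x$ evaluates the equations for
$x$ and $y$ but never the one for $d$.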
-W-DFS provides us with a \emph{local}, \emph{demand-driven} solver for
-greatest-fixpoint problems.
-
-
\section{$\max$-strategy Iteration}
-The problem of $\max$-strategy iteration is trying to find a mapping
-$\sigma: E_{\max} \to E$ from each $\max$-expression to its greatest
-subexpression. We will use $\sigma: \Systems \to \Systems$ as a
-shorthand to denote the system obtained by replacing each
-$\max$-expression, $x$, in $\system$ with $\sigma(x)$. Similarly
-$\sigma_{\all}: E \to E$ will denote the expression obtained by
-replacing each $\max$-subexpression in $e$ with $\sigma(e)$.
+$\max$-strategy iteration finds a mapping $\sigma: E_{\max} \to E$
+from each $\max$-expression to its greatest subexpression. We use
+$\sigma: \Systems \to \Systems$ as a shorthand to denote the system
+obtained by replacing each $\max$-expression, $x$, in $\system$ with
+$\sigma(x)$. Similarly, $\sigma_{\all}: E \to E$ denotes the function
+taking each expression $e$ to the expression obtained by replacing
+each $\max$-subexpression $e_{\max}$ of $e$ with $\sigma(e_{\max})$.
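For instance, if $e = \max\{ x + 1, 10 \} \in E_{\max}$ then one
possible strategy has $\sigma(e) = 10$, and applying $\sigma$ to a
system replaces this occurrence of $e$ with the expression $10$.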
In the particular case of \emph{monotonic}, \emph{expansive} operators
it has been shown\cite{Gawlitza:2007:PFC:1762174.1762203} that this
process will take a finite number of iterations if we ensure at each
iteration that we make a \emph{strict} improvement on our
-strategy. This means that each time we improve our strategy we make it
-greater in at least one $\max$-expression, and no worse in the
-others. That is, we must satisfy $\sigma_{n+1}(\rho) >
-\sigma_n(\rho)$, for the $\rho$ in which we are improving.
+strategy. Each time we improve our strategy we make it greater in at
+least one $\max$-expression, and no worse in the others. We must
+satisfy $\sigma_{n+1}(\rho) > \sigma_n(\rho)$, for the $\rho$ in which
+we are improving.
A new function, $\improve: ((E_{\max} \to E) \times (X \to D)) \to
(E_{\max} \to E)$, is used below as a \emph{strategy improvement
operator}. $\improve$ takes a $\max$-strategy and a variable
assignment and returns a new $\max$-strategy which constitutes an
-`improvement' of the strategy in the variable assignment $\rho$. If
+`improvement' of the strategy in the variable assignment $\rho$. If we
have a $\max$-strategy $\sigma$, a variable assignment $\rho$ and
$\varsigma = \improve(\sigma, \rho)$, then
\begin{align*}
@@ -449,8 +444,7 @@ $\sigma$ unchanged. One possible implementation of $\improve$ is
presented in Listing \ref{algo:pmax}. This implementation tries
every subexpression of the $\max$-expression in question and then
returns the best. The use of an anonymous function as the returned
-strategy is so the evaluation of each strategy is only done when
-requested.
+strategy is so the evaluation of each strategy is demand-driven.
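The following Python sketch captures the shape of such an operator
(our illustration only, not the listing's implementation; all
parameter names are assumptions). For each $\max$-expression it picks
the subexpression with the greatest value under $\rho$, and it returns
a function so that no choice is computed until it is requested:

    def improve(sigma, rho, subexprs, evaluate):
        # sigma: current strategy, max-expression -> chosen subexpression
        # subexprs: max-expression -> list of its subexpressions
        # evaluate: (expression, assignment) -> value
        def improved(m):
            best = sigma(m)
            for e in subexprs[m]:
                if evaluate(e, rho) > evaluate(best, rho):
                    best = e          # strictly better in rho
            return best
        # Returning a function keeps the evaluation demand-driven.
        return improved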
\begin{algorithm}
\begin{algorithmic}
@@ -476,9 +470,9 @@ requested.
The problem of $\max$-strategy iteration is one of constantly
improving until no further improvement can be made, so a simple
-approach is simply to perform a straightforward loop, improving the
-strategy at each step, until an equilibrium point is reached. This is
-the approach presented in Listing \ref{algo:naive-strategy}.
+approach is to perform a loop, improving the strategy at each step,
+until an equilibrium point is reached. The approach is presented in
+Listing \ref{algo:naive-strategy}.
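A Python rendering of this loop might look as follows (a sketch under
the same assumptions as the $\improve$ sketch above, with $\improve$
partially applied to its static arguments and `solve_fixpoint'
assumed given):

    def naive_strategy_iteration(sigma, max_exprs, solve_fixpoint, improve):
        while True:
            rho = solve_fixpoint(sigma)    # fixpoint under current strategy
            better = improve(sigma, rho)
            if all(better(m) == sigma(m) for m in max_exprs):
                return sigma, rho          # equilibrium: no improvement left
            sigma = better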
\begin{algorithm}[H]
\begin{algorithmic}
@@ -512,24 +506,24 @@ the approach presented in Listing \ref{algo:naive-strategy}.
\label{algo:naive-strategy}
\end{algorithm}
-This approach is quite similar to that of Kleene iteration, and from
-the results of Gawlitza et al.\cite{Gawlitza:2007:PFC:1762174.1762203}
-it is known that this iteration is guaranteed to terminate. This naive
+This approach is similar to that of Kleene iteration, and from the
+results of Gawlitza et al.\cite{Gawlitza:2007:PFC:1762174.1762203} it
+is known that this iteration is guaranteed to terminate. This naive
approach is inefficient, however, in the same way as Kleene
iteration. A large amount of time will be spent attempting to improve
portions of the strategy for which no improvement can be made.
\subsection{Adapted W-DFS algorithm} \label{sec:adapted-wdfs}
-The $\max$-strategy iteration can be viewed as an accelerated fixpoint
+The $\max$-strategy iteration can be seen as an accelerated fixpoint
problem. We are attempting to find a strategy, $\sigma: E_{\max} \to
-E$ that will result in the greatest value for each $e \in
-E_{\max}$. Therefore if we consider our ``variables'' to be
+E$ that will result in the greatest value for each $\max$-expression
+$e \in E_{\max}$. Therefore, if we consider our ``variables'' to be
$\max$-expressions and our ``values'' to be their subexpressions then
we can solve for the best $\max$-strategy using a similar approach to
the W-DFS algorithm presented above in Section \ref{sec:wdfs}. Listing
-\ref{algo:adapted-wdfs} presents one variation on W-DFS to allow it to
-solve $\max$-strategy iteration problems.
+\ref{algo:adapted-wdfs} presents one variation on W-DFS to solve
+$\max$-strategy iteration problems.
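Under this view a $\max$-expression such as $\max\{ x + 1, 10 \}$
plays the role of a ``variable'' whose ``values'' are drawn from its
finite set of subexpressions, $\{ x + 1, 10 \}$, so the
dependency-tracking machinery of W-DFS carries over essentially
unchanged.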
\begin{algorithm}[H]
\begin{algorithmic}
@@ -643,31 +637,29 @@ computation.
\section{Local Demand-driven Strategy Improvement (LDSI)}
W-DFS speeds up both the $\max$-strategy iteration and the fixpoint
-iteration as two independent processes, but each time we seek to
-improve our strategy we compute at least a subset of the greatest
-fixpoint from a base of no information. We would like to be able to
-provide some channel for communication between the two W-DFS instances
-to reduce the amount of work that must be done at each stage. By
-supplying each other with information about which parts of the system
-have changed we can jump in partway through the process of finding a
-fixpoint, thereby avoiding repeating work that we have already done
-when it is unnecessary to do so.
+iteration as two independent processes, but each time we improve our
+strategy we compute at least a subset of the greatest fixpoint from a
+base of no information. We provide a channel for communication between
+the two W-DFS instances to reduce the amount of work that must be done
+at each stage. By supplying each other with information about which
+parts of the system have changed we can jump in partway through the
+process of finding a fixpoint, thereby avoiding repeating
+calculations.
The new \emph{Local Demand-driven Strategy Improvement} algorithm,
\emph{LDSI}, seeks to do this. By adding an ``invalidate'' procedure
to the fixpoint iteration we provide a method for the $\max$-strategy
-to invalidate fixpoint variables, and we can modify the fixpoint
-iteration algorithm to report which variables it has modified with
-each fixpoint iteration step. We then have a $\max$-strategy iteration
-which, at each strategy-improvement step, invalidates a portion of the
-current fixpoint iteration which may depend on the changed
-strategy. The fixpoint iteration then re-stabilises each variable as
-it is requested, tracking which values have changed since the last
-time they were stabilised. The $\max$-strategy iteration then uses
-this list of changed variables to determine which portions of the
-current strategy must be re-stabilised once more. This process
-continues until the $\max$-strategy stabilises (at which point the
-fixpoint will also stabilise).
+to invalidate fixpoint variables, and we modify the fixpoint iteration
+algorithm to report which variables it has modified with each fixpoint
+iteration step. We then have a $\max$-strategy iteration which, at
+each strategy-improvement step, invalidates a portion of the current
+fixpoint iteration which may depend on the changed strategy. The
+fixpoint iteration then re-stabilises each variable as it is
+requested, tracking which values have changed since the last time they
+were stabilised. The $\max$-strategy iteration then uses this list of
+changed variables to determine which portions of the current strategy
+are now unstable. This process continues until the $\max$-strategy
+stabilises (at which point the fixpoint will also stabilise).
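The division of labour just described can be sketched as follows (a
simplification with our names; the real procedures, given in the
listings below, also remember last stable values so that only genuine
changes are reported):

    class LDSISketch:
        def __init__(self):
            self.stable_fp = set()   # currently stable fixpoint variables
            self.touched_fp = set()  # variables invalidated since last report
            self.infl_fp = {}        # variable -> variables that read it

        def invalidate_fixpoint(self, x):
            # Called from the strategy side when a strategy change may
            # affect x: destabilise x and everything depending on it.
            if x in self.stable_fp:
                self.stable_fp.discard(x)
                self.touched_fp.add(x)
                for y in self.infl_fp.get(x, ()):
                    self.invalidate_fixpoint(y)

        def report_changes(self):
            # Consumed by the strategy side to decide which parts of
            # the current strategy are now unstable.
            changed, self.touched_fp = self.touched_fp, set()
            return changed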
Both portions of the LDSI algorithm are \emph{demand-driven}, and so
any necessary evaluation of strategies or fixpoint values is delayed
@@ -948,13 +940,13 @@ function in Listing \ref{algo:adapted-wdfs}, except that it calls the
$\solve \strategy$ procedure.
The $\solve \fixpoint$ calls in $\eval \strategy \fixpoint$ are
-top-level fixpoint-iteration calls, meaning that upon their return we
-know that $D$ contains the value it would take in the greatest
-fixpoint of the current strategy $\sigma$. This function, therefore,
-acts as a simple intermediate layer between the fixpoint-iteration and
-the $\max$-strategy iteration to allow for dependencies to be tracked.
+top-level fixpoint-iteration calls, meaning that upon their return we
+know that the value of $D[x]$ is the value that $x$ takes in the
+greatest fixpoint of the current strategy $\sigma$. This function,
+therefore, acts as an intermediate layer between the
+fixpoint-iteration and the $\max$-strategy iteration to allow for
+dependencies to be tracked.
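In sketch form (names ours, with $D$ and the dependency map as in the
surrounding listings):

    def eval_strategy_fixpoint(e, x, solve_fixpoint, D, infl_si):
        # Top-level fixpoint call: afterwards D[x] holds x's value in
        # the greatest fixpoint of the current strategy.
        solve_fixpoint(x)
        infl_si.setdefault(x, set()).add(e)  # record that e depends on x
        return D[x]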
-Upon the conclusion of the $\solve \fixpoint$ we also must inspect the
+Upon the conclusion of the $\solve \fixpoint$ we inspect the
$\touchedFP$ set to determine which variables have values which may
have changed. If their values have stabilised since being invalidated,
and if they stabilised to a different value to their previous
@@ -983,10 +975,10 @@ $\invalidate \strategy$.
Invalidating the $\max$-strategy iteration is essentially the same as
the invalidation in the fixpoint-iteration stage, except that we do
not need to keep track of the last stable values, nor which
-$\max$-expressions we have invalidated. All we need to do is remove
-them from the $\stableSI$ set, thereby indicating that they need to be
-re-stabilised by a $\solve \strategy$ call. All $\max$-expressions which (transitively) depend on $x$ are
-themselves destabilised as well.
+$\max$-expressions we have invalidated. We remove them from the
+$\stableSI$ set, thereby indicating that they need to be re-stabilised
+by a $\solve \strategy$ call. All $\max$-expressions which
+(transitively) depend on $x$ are destabilised recursively.
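A minimal sketch of this destabilisation (names ours; `infl_si' maps
a variable or $\max$-expression to the $\max$-expressions that depend
on it):

    def invalidate_strategy(x, stable_si, infl_si):
        # Remove each dependent max-expression from the stable set and
        # recurse; unlike the fixpoint side, no old values are kept.
        for m in infl_si.get(x, ()):
            if m in stable_si:
                stable_si.discard(m)
                invalidate_strategy(m, stable_si, infl_si)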
\begin{algorithm}[H]