Diffstat (limited to 'tex')
 tex/thesis/contribution/contribution.tex | 156
 1 file changed, 82 insertions(+), 74 deletions(-)
diff --git a/tex/thesis/contribution/contribution.tex b/tex/thesis/contribution/contribution.tex
index 987d1e0..ff75e9b 100644
--- a/tex/thesis/contribution/contribution.tex
+++ b/tex/thesis/contribution/contribution.tex
@@ -7,13 +7,13 @@
\newcommand\stableSI{\mathsf{stable^{SI}}}
\newcommand\stable{\mathsf{stable}}
-\newcommand\system{\varepsilon} %\mathsf{system}}
+\newcommand\system{\mathcal{E}} %\mathsf{system}}
\newcommand\parent{\mathsf{parent}}
\newcommand\parents{\mathsf{parents}}
\newcommand\old{\mathsf{old}}
\newcommand\Operators{\mathcal{O}}
-\newcommand\Systems{\mathcal{E}}
+\newcommand\Systems{\varepsilon}
\newcommand\init{\mathsf{\textsc{init}}}
@@ -31,34 +31,34 @@
\chapter{Contribution} \label{chap:contribution}
In this chapter the main theoretical contribution of this thesis is
-presented: an improvement on a $\max$-strategy improvement algorithm
-for solving fixpoint equations over the integers with monotonic
-operators\cite{Gawlitza:2007:PFC:1762174.1762203}. The original
-algorithm is presented in Section \ref{sec:gawlitza-algorithm}. We
-employ the ideas of Seidl, et al. to design an algorithm which runs in
-considerably less time (in the best case, and in most practical cases)
-than the existing solver.
+presented: an extension of a $\max$-strategy iteration algorithm for
+solving fixpoint equations over the integers with monotonic
+operators\cite{Gawlitza:2007:PFC:1762174.1762203} (see Section
+\ref{sec:gawlitza-algorithm}). We devise an algorithm which runs in
+linear time in the best case.
In this chapter we will begin by presenting the Work-list Depth First
-Search (W-DFS) fixpoint algorithm developed by Seidl, et
-al.\cite{DBLP:tr/trier/MI96-11}. We will then present a modification
-to the algorithm to allow it to perform $\max$-strategy iteration
-rather than fixpoint iteration. The chapter will then conclude with
-our Local Demand-driven Strategy Improvement (LDSI) algorithm.
+Search (W-DFS) fixpoint algorithm developed by Seidl et
+al.\cite{DBLP:tr/trier/MI96-11}. We will then discuss a modification
+of the algorithm to allow it to perform $\max$-strategy iteration. The
+chapter will then conclude with our Local Demand-driven Strategy
+Improvement (LDSI) algorithm, which performs both strategy iteration
+and fixpoint iteration efficiently.
The existing algorithm as presented in Section
\ref{section:basic-algorithm} consists of two iterative operations:
fixpoint iteration and max-strategy iteration. Each of these
operations consists of naively ``evaluating'' the system repeatedly
-until a further evaluation yields no change. It is shown by Gawlitza,
-et al. that these iterations must converge in a finite number of
-steps\cite{Gawlitza:2007:PFC:1762174.1762203}, but in practice this
+until a further evaluation results in no change. Gawlitza et
+al.~proved that these iterations must converge in a finite number of
+steps\cite{Gawlitza:2007:PFC:1762174.1762203}, but in practice this
naive approach performs many more operations than are necessary, in
-many cases merely re-calculating results which are already known.
+many cases re-calculating results which are already known.
-By making use of some data-dependencies within the equation systems it
-is possible to reduce the amount of work that is to be done quite
-considerably.
+By considering which variables depend on, and influence, each other it
+is possible to reduce the amount of work that is to be done quite
+considerably.
\section{Preliminaries}
@@ -67,37 +67,42 @@ a few terms and notations. We will assume a fixed, finite set $X$ of
variables. We will also define $\CZ = \Z \cup \{-\infty, \infty\}$.
\begin{definition}
- \textbf{Variable Assignments:} $X \to \CZ$. A function from a
- variable to a value in our domain. An underlined value
- (eg. $\underline{\infty}$) indicates a variable assignment mapping
- everything to that value. Variable assignments may be combined with
- $\oplus$ in the following way:
+ A \textbf{Variable Assignment} is a partial function, $\rho: X \to
+ \CZ$, from a variable to a value. The domain of a variable
+ assignment, $\rho$, is represented by $\mathsf{domain}(\rho)
+ \subseteq X$.
+
+  An underlined value, $\underline{a}$, indicates the variable
+  assignment $\{ x \mapsto a \mid x \in X \}$, mapping every variable
+  to $a$.
+
+  Variable assignments $\rho : X \to \CZ$ and $\varrho : X \to \CZ$
+  may be combined with the $\oplus$ operator in the following way:
+
\begin{align*}
    (\rho \oplus \varrho)(x) = \left\{\begin{array}{lc}
\varrho(x) & x \in \mathsf{domain}(\varrho) \\
\rho(x) & \mbox{otherwise}
\end{array}\right.
- \end{align*}
+ \end{align*}
\end{definition}
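+
+As a brief illustration of $\oplus$ (the values are chosen here
+purely for exposition), let $\rho = \{ x \mapsto 1, y \mapsto 2 \}$
+and $\varrho = \{ y \mapsto 5 \}$. Then
+\begin{align*}
+  (\rho \oplus \varrho)(x) &= \rho(x) = 1 &
+  (\rho \oplus \varrho)(y) &= \varrho(y) = 5
+\end{align*}
+as $y \in \mathsf{domain}(\varrho)$, while $x$ is not.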
\begin{definition}
- \textbf{Operators:} An operator is a function $o: 2^{\CZ} \to
- \CZ$. They belong to the set $\Operators$.
-\end{definition}
-
-\begin{definition}
- \textbf{Expressions:} For the purposes of this discussion we will
- consider expressions, $e \in E$, as $e : (X \to \CZ) \to \CZ$, a
- mapping from a variable assignment to the expression's value in that
- assignment.
-
- Expressions are either of the form $e = o\{ e_1, e_2, ..., e_k \}$
- (where $o \in O$ and $e_1, e_2, ... e_k \in E$) or else they are of
- the form $e \in \CZ$ (a constant value) or $e \in X$ (a variable).
+ \textbf{Expressions:} We will consider expressions, $e \in E$, as $e
+ : (X \to \CZ) \to \CZ$, a mapping from a variable assignment to the
+ expression's value in that assignment.
+
+ Expressions have two possible forms:
+ \begin{itemize}
+ \item $e = f( e_1, e_2, ..., e_k )$, with $f: \CZ^k \to \CZ$ and
+ $e_1, e_2, ... e_k \in E$ and $e(\rho) = f(e_1(\rho),
+ e_2(\rho), ..., e_k(\rho))$.
+  \item $e = x \in X$, with $e(\rho) = \rho(x)$.
+ \end{itemize}
The subset of expressions of the form $\max\{ e_1, e_2, ... e_k \}$,
- with $e_1, e_2, ..., e_k \in E$ are referred to as
- \emph{$\max$-expressions}, denoted by $E_{\max} \subset E$.
+  with $e_1, e_2, ..., e_k \in E$, is referred to as the set
+  of \emph{$\max$-expressions}, denoted by $E_{\max} \subset E$.
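+
+  For example (with an expression and values of our own choosing),
+  consider the $\max$-expression $e = \max\{ x, y \}$ and the
+  variable assignment $\rho = \{ x \mapsto 1, y \mapsto 3 \}$. Then
+  \begin{align*}
+    e(\rho) = \max( x(\rho), y(\rho) ) = \max( \rho(x), \rho(y) ) =
+    \max( 1, 3 ) = 3.
+  \end{align*}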
For an expression $e$, the set of expressions containing it, $\{
  e_\parent \mid e_\parent = f( e_1, e_2, ..., e_k ), f : \CZ^k \to \CZ, e_i
@@ -112,27 +117,30 @@ variables. We will also define $\CZ = \Z \cup \{-\infty, \infty\}$.
function $\system : X \to ((X \to \CZ) \to \CZ)$ (which is
equivalent to the above).
- An equation system can also be considered as a function $\system
- : (X \to \CZ) \to (X \to \CZ)$, $\system(\rho)[x] = e_x(\rho)$,
- taking a variable assignment $\rho$ and returning the result of each
- $e_x$'s evaluation in $\rho$.
+  Alternatively, an equation system can be represented as a function
+ $\system : (X \to \CZ) \to (X \to \CZ)$, $\system(\rho)[x] =
+ e_x(\rho)$, taking a variable assignment $\rho$ and returning the
+ result of each $e_x$'s evaluation in $\rho$.
These two functional forms are equivalent and correspond to changing
the argument order in the following way: $\system(\rho)[x] =
- \system[x](\rho)$. It is only for convenience that we use both
- forms.
+  \system[x](\rho)$. We use both forms as a convenience, depending on
+  whether we wish to supply one argument or both. The representations
+  are differentiated by context (as the arguments are of differing
+  types) as well as by whether parentheses or brackets are used.
The expressions in $E_\system$ are referred to as ``right hand
- sides''. The set of all equation systems is denoted by $\Systems$.
+ sides'' of the equation system $\system$. We denote the set of all
+ equation systems by $\Systems$.
- An inverse mapping, $E \to X$ also exists for each system, denoted
- by $\system^{-1}$. This is defined as:
+  We define an inverse mapping, $\system^{-1} : E \to 2^X$, taking an
+  expression $e$ to the set of variables $x$ for which it is a
+  ``subexpression'' of $\system[x]$. This is captured in the following
+  definition:
\begin{align*}
    \system^{-1}(e) &= \{ x \mid \system[x] = e \} \cup
      \bigcup \{ \system^{-1}(e_\parent) \mid e_\parent \in \parents(e) \}
\end{align*}
- It provides a mapping from each expression to the variables for
- which it is a `subexpression' of the variable's right hand side.
\end{definition}
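+
+As a small worked example (the system below is our own, chosen purely
+for illustration), consider the equation system $\system$ over $X =
+\{ x, y \}$ given by
+\begin{align*}
+  x &= \max\{ y, 0 \} & y &= x
+\end{align*}
+For $\rho = \underline{0}$ we have $\system[x](\rho) = \max\{
+\rho(y), 0 \} = 0 = \system(\rho)[x]$, illustrating the equivalence
+of the two functional forms. Further, $\system^{-1}(\max\{ y, 0 \})
+= \{ x \}$, as this expression occurs only in the right hand side of
+$x$.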
\begin{definition}
@@ -142,14 +150,14 @@ variables. We will also define $\CZ = \Z \cup \{-\infty, \infty\}$.
\rho(z) &= \varrho(z) & \forall z \in X \backslash \{y\} \\
\rho(y) &\ne \varrho(y)
\end{align*}
- If $x$ depends on $y$ then $y$ is said to
- \emph{influence} $x$.
+
+ If $x$ depends on $y$ then $y$ is said to \emph{influence} $x$.
\end{definition}
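+
+For example (our own illustration), if $\system[x] = y + 1$ then any
+change to the value of $y$ alone also changes the value of
+$\system[x]$, so $x$ depends on $y$ and, equivalently, $y$ influences
+$x$. A variable $z$ which does not occur in $\system[x]$ cannot
+influence $x$ in this way.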
\begin{definition}
- \textbf{Local:} A solver is said be local if, for some system,
- $\system$, and some $e \in E_\system$, it will for $e_x$ without
- evaluating every element of $E_\system$.
+  \textbf{Local:} A solver is said to be local if, for some equation
+  system, $\system$, and some $e_x \in E_\system$, it can solve for
+  $e_x$ without evaluating every element of $E_\system$.
\end{definition}
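+
+For example (again our own illustration), in the equation system $\{
+x = 1, y = \max\{ x, 2 \}, z = z \}$ a local solver asked only for
+the value of $y$ may evaluate the right hand sides of $y$ and $x$
+while leaving the right hand side of $z$ unevaluated.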
\section{Fixpoint Iteration}
@@ -188,19 +196,19 @@ iteration would be a significant improvement.
An additional deficiency of Kleene iteration is that it is not
guaranteed to terminate for all fixpoints. An example system is
-presented in Figure \ref{fix:kleene-infinite}. In this case our first
+presented in Figure \ref{fig:kleene-infinite}.
- In some cases Kleene iteration must iterate
-an infinite number of times in order to reach a fixpoint. An example
-system is presented as Figure \ref{fig:kleene-infinite}. In this case
-$x$ will take the value of $0$ in the first iteration, then $y$ will
-evaluate to $-1$. In the next iteration $x$ will also take the value
-$-1$, thereby requiring $y$ to take the value $-2$. This will continue
-without bound, resulting in the Kleene iteration never reaching the
-greatest fixpoint of $\{ x \mapsto -\infty, y \mapsto -\infty \}$.
+In this case $x$ will take the value
+of $0$ in the first iteration, then $y$ will evaluate to $-1$. In the
+next iteration $x$ will also take the value $-1$, thereby requiring
+$y$ to take the value $-2$. This will continue without bound,
+resulting in the Kleene iteration never reaching the greatest fixpoint
+of $\{ x \mapsto -\infty, y \mapsto -\infty \}$.
\begin{figure}[H]
\begin{align*}
@@ -215,15 +223,15 @@ greatest fixpoint of $\{ x \mapsto -\infty, y \mapsto -\infty \}$.
This particular non-termination of Kleene iteration is not a
significant problem in our application, however, as it has been shown
-in the work of Gawlitza
-et. al. \cite{Gawlitza:2007:PFC:1762174.1762203}, that Kleene
-iteration in this specific case is restricted to at most $|X|$
-iterations. It does, however, require $\Theta(|X|)$ right hand side
-evaluations, which is very inefficient.
+in the work of Gawlitza et
+al.\cite{Gawlitza:2007:PFC:1762174.1762203} that Kleene iteration in
+this specific case is restricted to at most $|X|$ iterations. It does,
+however, require $\Theta(|X|)$ right hand side evaluations, which is
+very inefficient.
\subsection{W-DFS algorithm} \label{sec:wdfs}
-The W-DFS algorithm of Seidl, et al. takes into account some form of
+The W-DFS algorithm of Seidl et al.~takes into account the
data-dependencies as it solves the system. This gives it the ability
to leave portions of the system unevaluated when it is certain that
those values have not changed. The algorithm is presented in Listing
@@ -906,7 +914,7 @@ themselves destabilised as well.
\subsection{Correctness} \label{sec:ldsi:correctness}
In order to argue that LDSI is correct we must first assume that the
-W-DFS algorithm of Seidl et al. is correct and will solve the equation
+W-DFS algorithm of Seidl et al.~is correct and will solve the equation
system for its least/greatest fixpoint.