diff --git a/tex/thesis/contribution/contribution.tex b/tex/thesis/contribution/contribution.tex
new file mode 100644
index 0000000..6d4aeca
--- /dev/null
+++ b/tex/thesis/contribution/contribution.tex
@@ -0,0 +1,423 @@
+\floatname{algorithm}{Listing}
+
+\newcommand\stable{\mathsf{stable}}
+\newcommand\eval{\mathsf{\textsc{eval}}}
+\newcommand\solve{\mathsf{\textsc{solve}}}
+\newcommand\system{\mathsf{system}}
+\algblockx[Globals]{Globals}{EndGlobals}{\textbf{Globals:\\}}{}
+\algblockx[Assume]{Assumptions}{EndAssumptions}{\textbf{Assume:\\}}{}
+
+
+\chapter{Contribution} \label{chap:contribution}
+
+The main theoretical contribution of this thesis is an improvement on a
+$\max$-strategy improvement algorithm for solving fixpoint equations
+over the integers with monotonic
+operators~\cite{Gawlitza:2007:PFC:1762174.1762203}. The original
+algorithm is presented in Section \ref{section:basic-algorithm}. We
+employ the ideas of Seidl et al. to design an algorithm which runs in
+considerably less time than the existing solver.
+
+In this chapter we will begin by presenting the Work-list Depth First
+Search (W-DFS) fixpoint algorithm developed by Seidl et
+al.~\cite{DBLP:tr/trier/MI96-11}. We will then present a modification
+to the algorithm to allow it to perform $\max$-strategy iteration
+rather than fixpoint iteration. The chapter will then conclude with
+our Local Demand-driven Strategy Improvement (LDSI) algorithm.
+
+The existing algorithm as presented in Section
+\ref{section:basic-algorithm} consists of two iterative operations:
+fixpoint iteration and $\max$-strategy iteration. Each of these
+operations consists of naively ``evaluating'' the system repeatedly
+until a further evaluation yields no change. Gawlitza et al. show that
+these iterations must converge in a finite number of
+steps~\cite{Gawlitza:2007:PFC:1762174.1762203}, but in practice this
+naive approach performs many more operations than are necessary, in
+many cases merely re-calculating results which are already known.
+
+By making use of the data dependencies within the equation systems it
+is possible to reduce the amount of work to be done quite
+considerably.
+
+In order to aid our explanation of these algorithms we will now define
+a few terms and notations. All variables are taken from the set $X$
+and all values from the set $\D$.
+
+\begin{definition}
+  \textbf{Variable Assignments:} $X \to \D$. A function from a
+  variable to a value in our domain. An underlined value
+  (e.g.~$\underline{\infty}$) indicates a variable assignment mapping
+  everything to that value. Variable assignments may be combined with
+  $\oplus$ in the following way:
+  \begin{align*}
+    (\rho \oplus \varrho)(x) = \left\{\begin{array}{lc}
+      \varrho(x) & x \in \mathsf{domain}(\varrho) \\
+      \rho(x) & \mbox{otherwise}
+    \end{array}\right.
+  \end{align*}
+\end{definition}
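+
+For example, if $\rho = \{ x \mapsto 3, y \mapsto 5 \}$ and $\varrho =
+\{ y \mapsto 7 \}$ then $\rho \oplus \varrho = \{ x \mapsto 3, y
+\mapsto 7 \}$: wherever the right-hand assignment is defined it takes
+precedence over the left-hand one.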
+
+\begin{definition}
+ \textbf{Expressions:} For the purposes of this discussion we will
+ consider expressions, $e \in E$, as $e : (X \to \D) \to \D$, a
+ mapping from a variable assignment to the expression's value in that
+ assignment.
+\end{definition}
+
+\begin{definition}
+  \textbf{Equation System:} $\{ x = e_x \mid x \in X, e_x \in E \}$. An
+  equation system can also be considered as a function $\varepsilon :
+  (X \to \D) \to (X \to \D)$; $\varepsilon[\rho](x) = e_x(\rho)$.
+\end{definition}
+
+\begin{definition}
+  \textbf{Dependencies:} A variable $x$ is said to \emph{depend on}
+  $y$ if a change to the value of $y$ can induce a change in the value
+  of $x$.
+\end{definition}
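+
+As a small, purely illustrative example, consider the equation system
+$\{\, x = \min(y, 10),\ y = x + 1 \,\}$. Here $x$ depends on $y$ and
+$y$ depends on $x$, and the greatest solution is $\{ x \mapsto 10, y
+\mapsto 11 \}$.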
+
+\section{Fixpoint Iteration}
+\subsection{Kleene Iteration}
+
+A simple approach to fixpoint iteration over monotonic equations is to
+iterate over the entire system repeatedly until a re-evaluation results
+in no change to the value of any variable. This approach will always
+reach the least/greatest solution if there is one to be found, but it
+will often perform many more evaluations than are necessary. This
+algorithm is presented in Listing \ref{algo:kleene}.
+
+\begin{algorithm}
+ \begin{algorithmic}
+ \Assumptions
+ \begin{tabularx}{0.9\textwidth}{rX}
+ $\rho $:&$ X \to \D$, a variable assignment \\
+ $\varepsilon $:&$ (X \to \D) \to (X \to \D)$, an equation system
+ \end{tabularx}
+ \EndAssumptions
+
+ \State $n = 0$
+ \State $\rho_0 = \underline{\infty}$
+ \Repeat
+ \State $\rho_{n+1} = \varepsilon[ \rho_{n} ]$
+ \State $n = n + 1$
+ \Until {$\rho_{n-1} = \rho_n$}
+ \State \Return $\rho_n$
+ \end{algorithmic}
+ \caption{The Kleene iteration algorithm for solving fixpoint
+ equations for their greatest solutions.}
+ \label{algo:kleene}
+\end{algorithm}
+
+In each iteration the entire system is re-evaluated, irrespective of
+whether any value could possibly have changed. This results in
+considerable inefficiency in practice, so an approach which evaluates
+only smaller portions of the system in each iteration would be a
+sensible improvement.
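+
+To make the structure of Listing \ref{algo:kleene} concrete, the
+following Python sketch (purely illustrative; the representation of the
+equation system as a dictionary of functions is our own) performs the
+same naive iteration:
+
+\begin{verbatim}
+# Minimal sketch of Kleene iteration (illustrative only).  A variable
+# assignment is a dict; the equation system maps each variable to a
+# function of the current assignment.
+INF = float("inf")
+
+def kleene(system):
+    rho = {x: INF for x in system}           # start everything at "infinity"
+    while True:
+        new_rho = {x: e(rho) for x, e in system.items()}
+        if new_rho == rho:                    # no change: fixpoint reached
+            return rho
+        rho = new_rho
+
+# Example: x = min(y, 10), y = x + 1; greatest solution x = 10, y = 11.
+system = {"x": lambda r: min(r["y"], 10),
+          "y": lambda r: r["x"] + 1}
+print(kleene(system))                         # {'x': 10, 'y': 11}
+\end{verbatim}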
+
+\subsection{W-DFS algorithm}
+
+The W-DFS algorithm presented by Seidl et al. takes into account the
+data dependencies between variables as it solves the system. This gives
+it the ability to leave portions of the system unevaluated when it is
+certain that those values have not changed.
+
+\begin{algorithm}
+ \begin{algorithmic}
+ \Globals
+ \begin{tabularx}{0.9\textwidth}{rX}
+      $D : X \to \D$ & a mapping from variables to their current values, starting
+      at $\{ x \mapsto \infty \mid x \in X \}$ \\
+ I & A mapping from a variable to the variables which depend on
+ it in their evaluation \\
+ stable & The set of all variables whose values have stabilised \\
+ system & The equation system, a mapping from a variable to its
+ associated function \\
+ \end{tabularx}
+ \EndGlobals
+ \end{algorithmic}
+
+ \begin{algorithmic}
+ \Function {eval} {$x$, $y$}
+ \Comment{Evaluate $y$ for its value and note that when $y$
+ changes, $x$ must be re-evaluated}
+ \State $\solve(y)$
+ \State $I[y] = I[y] \cup \{x\}$
+ \State \Return $D[y]$
+ \EndFunction
+ \end{algorithmic}
+
+ \begin{algorithmic}
+ \Function {solve} {$x$}
+ \Comment{Solve a specific variable and place its value in $D$}
+ \If {$x \not \in \stable$}
+ \State $f = \system[x]$
+ \State $\stable = \stable \cup \{x\}$
+ \State $v = f( \lambda y . \eval(x, y) )$
+ \If {$v \ne D[x]$}
+          \State $D = \{ x \mapsto v,\ \alpha \mapsto D[\alpha] \mid \alpha \ne x \}$
+          \State $W = I[x]$
+          \State $I[x] = \emptyset$
+          \State $\stable = \stable \backslash W$
+          \For {$w \in W$}
+          \State $\solve(w)$
+          \EndFor
+ \EndIf
+ \EndIf
+ \EndFunction
+ \end{algorithmic}
+
+  \caption{The W-DFS algorithm alluded to in \cite{DBLP:tr/trier/MI96-11} and
+    presented in \cite{fixpoint-slides}, modified to find
+    greatest fixpoints of monotonic fixpoint equations.}
+ \label{algo:w-dfs}
+\end{algorithm}
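+
+As a concrete illustration of Listing \ref{algo:w-dfs}, the following
+Python sketch (purely illustrative; the representation of right-hand
+sides as functions of a ``get'' callback is our own) follows the same
+structure:
+
+\begin{verbatim}
+# Illustrative sketch of W-DFS (greatest-fixpoint variant).  Each
+# right-hand side is a function of a "get" callback, so the solver can
+# observe which variables an evaluation actually reads.
+INF = float("inf")
+
+class WDFS:
+    def __init__(self, system):
+        self.system = system                 # x -> (get -> value)
+        self.D = {x: INF for x in system}    # current variable values
+        self.I = {x: set() for x in system}  # x -> variables that read x
+        self.stable = set()
+
+    def eval(self, x, y):
+        # Evaluate y and note that x must be re-solved if y changes.
+        self.solve(y)
+        self.I[y].add(x)
+        return self.D[y]
+
+    def solve(self, x):
+        if x in self.stable:
+            return
+        self.stable.add(x)
+        v = self.system[x](lambda y: self.eval(x, y))
+        if v != self.D[x]:
+            self.D[x] = v
+            work = self.I[x]
+            self.I[x] = set()
+            self.stable -= work
+            for w in work:
+                self.solve(w)
+
+# Example: x = min(y, 10), y = x + 1.
+system = {"x": lambda get: min(get("y"), 10),
+          "y": lambda get: get("x") + 1}
+solver = WDFS(system)
+solver.solve("x")
+print(solver.D)    # {'x': 10, 'y': 11}
+\end{verbatim}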
+
+
+
+\section{$\max$-strategy Iteration}
+\subsection{Naive approach}
+
+The naive approach to $\max$-strategy iteration mirrors Kleene
+iteration: in each round every $\max$-expression in the system is
+examined and its strategy is improved wherever a sub-expression with a
+greater value exists under the current variable assignment, and the
+system is re-solved under each new strategy. As with Kleene iteration
+this converges in a finite number of steps, but the entire strategy is
+re-examined in every round, even when only a small portion of it can
+actually change.
+
+\subsection{Adapted W-DFS algorithm}
+
+The $\max$-strategy iteration can be viewed as a fixpoint problem. We
+are attempting to find a strategy, $\sigma: E_{\max} \to E$, that will
+result in the greatest value for each $e \in E_{\max}$. We therefore
+consider our ``variables'' to be the $\max$-expressions, our ``values''
+to be their sub-expressions, and we compare values by evaluating the
+system under the strategy they induce.
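+
+For instance, for the $\max$-expression $\max(x + 1, 2)$ the strategy
+selects either the sub-expression $x + 1$ or the sub-expression $2$;
+under a variable assignment with $x = 5$ the better choice is $x + 1$,
+since it evaluates to $6 > 2$.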
+
+This allows us to use the W-DFS algorithm to re-evaluate only those
+parts of the strategy which may have been affected by a change. Listing
+\ref{algo:w-dfs-max} presents this variation on W-DFS.
+
+\begin{algorithm}
+ \begin{algorithmic}
+ \Globals
+ \begin{tabularx}{0.9\textwidth}{rX}
+ $\sigma$ & A mapping from $\max$-expressions to their current
+ sub-expressions, starting by mapping to the first
+ sub-expression \\
+      I & A mapping from a $\max$-expression to the $\max$-expressions
+      which depend on it in their evaluation \\
+ stable & The set of all $\max$-expressions whose strategies have
+ stabilised \\
+ system & The equation system, a mapping from a variable to its
+ associated function \\
+      bestStrategy & A function $(E_{\max}, (X \to \D)) \to E$ mapping
+      from an expression and a variable \\& assignment to the greatest
+      subexpression in that context
+ \EndGlobals
+
+ \Function {eval} {$x$, $y$}
+ \Comment{Evaluate $y$ for its value and note that when $y$
+ changes, $x$ must be re-evaluated}
+ \State $\solve(y)$
+ \State $I[y] = I[y] \cup \{x\}$
+ \State \Return $\sigma[y]$
+ \EndFunction
+
+ \Function {solve} {$x$}
+ \Comment{Solve a specific expression and place its value in $\sigma$}
+ \If {$x \not \in \stable$}
+ \State $f = \system[x]$
+ \State $\stable = \stable \cup \{x\}$
+ \State $v = bestStrategy(x, \lambda y . \eval(x, y))$
+ \If {$v \ne \sigma[x]$}
+        \State $\sigma = \{ x \mapsto v,\ \alpha \mapsto \sigma[\alpha] \mid \alpha \ne x \}$
+        \State $W = I[x]$
+        \State $I[x] = \emptyset$
+        \State $\stable = \stable \backslash W$
+        \For {$w \in W$}
+        \State $\solve(w)$
+        \EndFor
+ \EndIf
+ \EndIf
+ \EndFunction
+ \end{algorithmic}
+
+ \caption{W-DFS, this time modified to find the best $\max$-strategy.}
+ \label{algo:w-dfs-max}
+\end{algorithm}
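+
+The role of $bestStrategy$ can be illustrated with a small Python
+sketch (the representation of a $\max$-expression as a list of
+sub-expression functions is purely illustrative and our own):
+
+\begin{verbatim}
+# Illustrative sketch of bestStrategy: given a max-expression (a list
+# of sub-expressions, each a function of a variable assignment) and the
+# current assignment, return the sub-expression with the greatest value.
+def best_strategy(subexpressions, rho):
+    return max(subexpressions, key=lambda e: e(rho))
+
+# Example: the max-expression max(x + 1, 2) under the assignment x = 5
+# selects the sub-expression x + 1, since 6 > 2.
+rho = {"x": 5}
+subexpressions = [lambda r: r["x"] + 1, lambda r: 2]
+chosen = best_strategy(subexpressions, rho)
+print(chosen(rho))    # 6
+\end{verbatim}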
+
+
+\section{Combined W-DFS}
+
+W-DFS can be used to speed up both the $\max$-strategy iteration and
+the fixpoint iteration as two independent processes, but each
+$\max$-strategy improvement step still requires at least one complete
+fixpoint iteration over the equation system. Ideally we would be able
+to adapt the W-DFS algorithm so that the fixpoint-iteration and
+$\max$-strategy iteration steps could provide some information to each
+other about what values have changed, so that at each step only a
+subset of the entire system would have to be evaluated.
+
+The new algorithm, \emph{Combined W-DFS}, seeks to do this. By adding
+an ``invalidate'' method to both W-DFS instances we provide an
+interface for the two sides of the algorithm to indicate which values
+have changed. This gives the other side enough information to avoid
+evaluating irrelevant portions of the equation system.
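+
+To make the shape of this interface concrete, the following Python
+sketch (purely illustrative; the names are our own) shows an invalidate
+method that could be added to the fixpoint solver sketched after
+Listing \ref{algo:w-dfs}:
+
+\begin{verbatim}
+# Illustrative sketch of the fixpoint-side invalidate hook, intended as
+# a method of the WDFS class sketched earlier.  The max-strategy side
+# calls invalidate(x) whenever a strategy change may affect variable x.
+def invalidate(self, x):
+    if x in self.stable:
+        self.stable.discard(x)
+        self.D[x] = float("inf")    # forget the old value for x
+        work = self.I[x]
+        self.I[x] = set()
+        for w in work:              # everything that read x is now suspect
+            self.invalidate(w)
+\end{verbatim}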
+
+This algorithm is presented in two parts. Listing
+\ref{algo:combined-fixpoint} presents the fixpoint-iteration portion
+of the algorithm, while Listing \ref{algo:combined-max} presents the
+$\max$-strategy portion. The correctness of this new algorithm is
+argued in Section \ref{sec:combined-correctness}.
+
+
+\begin{algorithm}
+ \begin{algorithmic}
+ \Globals
+ \begin{tabularx}{0.9\textwidth}{rX}
+      D & A mapping from variables to their current values, starting
+      at $\{ x \mapsto \infty \mid x \in X \}$ \\
+ I & A mapping from a variable to the variables which depend on
+ it in their evaluation \\
+ stable & The set of all variables whose values have stabilised \\
+ system & The equation system, a mapping from a variable to its
+ associated function \\
+ \end{tabularx}
+ \EndGlobals
+
+ \Function {eval} {$x$, $y$}
+ \Comment{Evaluate $y$ for its value and note that when $y$
+ changes, $x$ must be re-evaluated}
+ \State $\solve(y)$
+ \State $I[y] = I[y] \cup \{x\}$
+ \State \Return $D[y]$
+ \EndFunction
+
+ \Function {invalidate} {$x$}
+ \If {$x \in \stable$}
+ \State $\stable = \stable \backslash \{x\}$
+ \State $D[x] = \infty$
+ \State $W = I[x]$
+ \State $I[x] = \emptyset$
+        \For {$v \in W$}
+        \State $\mathsf{invalidate}(v)$
+        \EndFor
+ \EndIf
+ \EndFunction
+
+ \Function {solve} {$x$}
+    \Comment{Solve a specific variable and place its value in $D$}
+ \If {$x \not \in \stable$}
+ \State $f = \system[x]$
+ \State $\stable = \stable \cup \{x\}$
+ \State $v = f( \lambda y . \eval(x, y) )$
+ \If {$v \ne D[x]$}
+        \State $D = \{ x \mapsto v,\ \alpha \mapsto D[\alpha] \mid \alpha \ne x \}$
+        \State $W = I[x]$
+        \State $I[x] = \emptyset$
+        \State strategy::invalidate($x$)
+        \State $\stable = \stable \backslash W$
+        \For {$w \in W$}
+        \State $\solve(w)$
+        \EndFor
+ \EndIf
+ \EndIf
+ \EndFunction
+ \end{algorithmic}
+
+ \caption{The fixpoint portion of the Combined W-DFS.}
+ \label{algo:combined-fixpoint}
+\end{algorithm}
+
+
+\begin{algorithm}
+ \begin{algorithmic}
+ \Globals
+ \begin{tabularx}{0.9\textwidth}{rX}
+ $\sigma$ & A mapping from $\max$-expressions to their current
+ sub-expressions, starting by \\& mapping to the first
+ sub-expression \\
+ D & A \emph{dynamic} variable assignment which will stay updated
+ as $\sigma$ changes \\
+      $I$ & A mapping from a $\max$-expression to the $\max$-expressions
+      which depend on it \\& in their evaluation \\
+      $I_V$ & A mapping from a variable to the $\max$-expressions which
+      depend on it in their \\& evaluation \\
+ stable & The set of all $\max$-expressions whose strategies have
+ stabilised \\
+ system & The equation system, a mapping from a variable to its
+ associated function \\
+      bestStrategy & A function $(E_{\max}, (X \to \D)) \to E$ mapping
+      from an expression and a variable \\& assignment to the greatest
+      subexpression in that context
+ \end{tabularx}
+ \EndGlobals
+
+ \Function {eval} {$x$, $y$}
+ \Comment{Evaluate $y$ for its value and note that when $y$
+ changes, $x$ must be re-evaluated}
+ \State $\solve(y)$
+ \State $I[y] = I[y] \cup \{x\}$
+ \State \Return $\sigma[y]$
+ \EndFunction
+
+    \Function {invalidate} {$x \in X$} \Comment{$x$ is a variable}
+      \State $W = I_V[x]$
+      \State $I_V[x] = \emptyset$
+      \State $\stable = \stable \backslash W$
+      \For {$w \in W$}
+      \State $\solve(w)$
+      \EndFor
+    \EndFunction
+
+ \Function {solve} {$x$}
+    \Comment{Solve a specific expression and place its value in $\sigma$}
+ \If {$x \not \in \stable$}
+ \State $f = \system[x]$
+ \State $\stable = \stable \cup \{x\}$
+ \State $v = bestStrategy(x,
+ \lambda y . \eval(x, y))$
+ \If {$v \ne \sigma[x]$}
+        \State $\sigma = \{ x \mapsto v,\ \alpha \mapsto \sigma[\alpha] \mid \alpha \ne x \}$
+        \State $W = I[x]$
+        \State $I[x] = \emptyset$
+        \State fixpoint::invalidate$(\mathsf{lookupVarFor}(x))$
+        \State $\stable = \stable \backslash W$
+        \For {$w \in W$}
+        \State $\solve(w)$
+        \EndFor
+ \EndIf
+ \EndIf
+ \EndFunction
+ \end{algorithmic}
+
+ \caption{The $\max$-strategy portion of the Combined W-DFS.}
+ \label{algo:combined-max}
+\end{algorithm}
+
+
+\subsection{Correctness} \label{sec:combined-correctness}
+
+The correctness of this algorithm relies on the correctness of the
+underlying W-DFS algorithm, which was presented in
+\cite{DBLP:tr/trier/MI96-11}.
+
+If we assume that W-DFS is correct then we only have to prove that the
+combination of the two W-DFS instances is correct. For this it is
+sufficient to show that the invalidate calls in both directions
+preserve the correctness of the original algorithm.
+
+TODO: finish this. General idea:
+\begin{itemize}
+  \item
+  Invalidate calls from the fixpoint iteration to the $\max$-strategy
+  iteration are correct if the calls in the other direction are,
+  because the invalidated portion of the strategy is completely
+  re-solved.
+  \item
+  Invalidate calls from the $\max$-strategy iteration to the fixpoint
+  iteration are correct because they effectively ``reset'' that part of
+  the system, causing it to be entirely re-calculated.
+\end{itemize}