\floatname{algorithm}{Listing} \newcommand\inflFP{\mathsf{infl^{FP}}} \newcommand\stableFP{\mathsf{stable^{FP}}} \newcommand\touchedFP{\mathsf{touched^{FP}}} \newcommand\inflSI{\mathsf{infl^{SI}}} \newcommand\stableSI{\mathsf{stable^{SI}}} \newcommand\stable{\mathsf{stable}} \newcommand\system{\mathsf{system}} \newcommand\Systems{\mathcal{E}} \newcommand\init{\mathsf{\textsc{init}}} \newcommand\eval{\mathsf{\textsc{eval}}} \newcommand\stabilise{\mathsf{\textsc{stabilise}}} \newcommand\solve{\mathsf{\textsc{solve}}} \newcommand\invalidate{\mathsf{\textsc{invalidate}}} \newcommand\fixpoint{\mathsf{\textsc{fixpoint}}} \newcommand\strategy{\mathsf{\textsc{strategy}}} \algblockx[Globals]{Globals}{EndGlobals}{\textbf{Globals:\\}}{} \algblockx[Assume]{Assumptions}{EndAssumptions}{\textbf{Assume:\\}}{} \chapter{Contribution} \label{chap:contribution} The main theoretical contribution of this paper is an improvement on a $\max$-strategy improvement algorithm for solving fixpoint equations over the integers with monotonic operators~\cite{Gawlitza:2007:PFC:1762174.1762203}. The original algorithm is presented in Section \ref{section:basic-algorithm}. We employ the ideas of Seidl et al.\ to design an algorithm which runs in considerably less time than the existing solver in the best case, and in most practical cases. In this chapter we begin by presenting the Work-list Depth First Search (W-DFS) fixpoint algorithm developed by Seidl et al.~\cite{DBLP:tr/trier/MI96-11}. We then present a modification to the algorithm which allows it to perform $\max$-strategy iteration rather than fixpoint iteration. The chapter concludes with our Local Demand-driven Strategy Improvement (LDSI) algorithm. The existing algorithm as presented in Section \ref{section:basic-algorithm} consists of two iterative operations: fixpoint iteration and $\max$-strategy iteration.
Each of these operations consists of naively ``evaluating'' the system repeatedly until a further evaluation yields no change. Gawlitza et al.\ have shown that these iterations must converge in a finite number of steps~\cite{Gawlitza:2007:PFC:1762174.1762203}, but in practice this naive approach performs many more operations than are necessary, in many cases merely re-calculating results which are already known. By making use of the data-dependencies within the equation system it is possible to reduce the amount of work considerably. To aid our explanation of these algorithms we now define a few terms and notations. All variables are taken from the set $X$ and all values from the set $\D$. \begin{definition} \textbf{Variable Assignments:} $X \to \D$. A function from a variable to a value in our domain. An underlined value (e.g.~$\underline{\infty}$) indicates a variable assignment mapping everything to that value. Variable assignments may be combined with $\oplus$ in the following way: \begin{align*} (\rho \oplus \varrho)(x) = \left\{\begin{array}{lc} \varrho(x) & x \in \mathsf{domain}(\varrho) \\ \rho(x) & \mbox{otherwise} \end{array}\right. \end{align*} \end{definition} \begin{definition} \textbf{Expressions:} For the purposes of this discussion we will consider expressions, $e \in E$, as $e : (X \to \D) \to \D$, a mapping from a variable assignment to the expression's value in that assignment. The subset of expressions of the form $\max(e_1, e_2, ..., e_n)$, with $e_1, e_2, ..., e_n \in E$, are referred to as \emph{$\max$-expressions}, denoted by $E_{\max} \subset E$. \end{definition} \begin{definition} \textbf{Equation System:} $\{ x = e_x \mid x \in X, e_x \in E \}$. The values $x \in X$ are called ``variables'' while the values $e_x \in E$ are called ``right-hand-sides''.
The set of all equation systems is denoted by $\Systems$. An equation system can be considered as a function $\system : X \to ((X \to \D) \to \D)$; $\system[x] = e_x$, mapping each variable to its right-hand-side. An equation system can also be considered as a function $\system : (X \to \D) \to (X \to \D)$, $\system(\rho)[x] = e_x(\rho)$, taking a variable assignment $\rho$ and returning the result of each $e_x$'s evaluation in $\rho$. These two functional forms are equivalent and correspond to changing the argument order in the following way: $\system(\rho)[x] = \system[x](\rho)$. It is only for convenience that we use both forms. \end{definition} \begin{definition} \textbf{Dependencies:} A variable or expression $x$ is said to \emph{depend on} $y$ if a change to the value of $y$ induces a change in the value of $x$. If $x$ depends on $y$ then $y$ is said to \emph{influence} $x$. \end{definition} \begin{definition} \textbf{Local:} A solver is said to be local if, for some $e_x \in E$, the evaluation of $e_x$ only requires the evaluation of other variables which $e_x$ may depend on. \end{definition} \section{Fixpoint Iteration} \subsection{Kleene Iteration} A simple approach to fixpoint iteration over monotonic equations is simply to iterate over the system repeatedly until a re-evaluation results in no change to the value of any variable. This approach will always reach the least/greatest solution if there is one to be found, but it will often perform many more evaluations than are necessary. This algorithm is presented in Listing \ref{algo:kleene}.
\begin{algorithm}[H] \begin{algorithmic} \Function {solvefixpoint} {$\system \in \Systems$} \State $k = 0$ \State $\rho_0 = \underline{\infty}$ \Repeat \State $\rho_{k+1} = \system( \rho_{k} )$ \State $k = k + 1$ \Until {$\rho_{k-1} = \rho_k$} \State \Return $\rho_k$ \EndFunction \end{algorithmic} \caption{The Kleene iteration algorithm for solving fixpoint equations for their greatest solutions.} \label{algo:kleene} \end{algorithm} In each iteration the entire system is evaluated, irrespective of whether it could possibly have changed value. This results in considerable inefficiency in practice, requiring the repeated evaluation of many right-hand-sides for the same value. An approach which can evaluate smaller portions of the system in each iteration would therefore be a significant improvement. An additional deficiency of Kleene iteration is that it is not guaranteed to terminate. In some cases Kleene iteration must iterate an infinite number of times in order to reach a fixpoint. An example system is presented as Figure \ref{fig:kleene-infinite}. In this case $x$ will take the value of $0$ in the first iteration, then $y$ will evaluate to $-1$. In the next iteration $x$ will also take the value $-1$, thereby requiring $y$ to take the value $-2$. This will continue without bound, resulting in the Kleene iteration never reaching the greatest fixpoint of $\{ x \mapsto -\infty, y \mapsto -\infty \}$. \begin{figure}[H] \begin{align*} x &= \min(0, y) \\ y &= x - 1 \end{align*} \caption{An example equation system for which Kleene iteration will not terminate} \label{fig:kleene-infinite} \end{figure} This particular non-termination of Kleene iteration is not a significant problem in our application, however, as it has been shown by Gawlitza et al.~\cite{Gawlitza:2007:PFC:1762174.1762203} that Kleene iteration in this specific case is restricted to at most $|X|$ iterations.
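To make the naive iteration concrete, the following is a minimal Python sketch of Listing \ref{algo:kleene} on a small terminating system; the dict-based encoding and the example equations $x = \min(0, y)$, $y = \max(x, -5)$ are our own illustrative assumptions, not part of the original presentation.

```python
# A sketch of Kleene iteration for greatest fixpoints (illustrative encoding).
INF = float("inf")

# Example system: x = min(0, y), y = max(x, -5). Each right-hand side is a
# function from a variable assignment (a dict) to a value.
system = {
    "x": lambda rho: min(0, rho["y"]),
    "y": lambda rho: max(rho["x"], -5),
}

def solve_fixpoint(system):
    # Start from the top element so iteration descends to the greatest fixpoint.
    rho = {x: INF for x in system}
    while True:
        # Naively re-evaluate every right-hand side, whether or not it changed.
        new_rho = {x: e(rho) for x, e in system.items()}
        if new_rho == rho:  # no change: a fixpoint has been reached
            return rho
        rho = new_rho

print(solve_fixpoint(system))  # -> {'x': 0, 'y': 0}
```

Note that every iteration re-evaluates the whole system, which is exactly the inefficiency the W-DFS algorithm below avoids.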
\subsection{W-DFS algorithm} \label{sec:wdfs} The W-DFS algorithm presented by Seidl et al.\ takes data-dependencies into account as it solves the system. This gives it the ability to leave portions of the system unevaluated when it is certain that those values have not changed. The algorithm is presented in Listing \ref{algo:w-dfs}. \begin{algorithm} \begin{algorithmic} \Globals \begin{tabularx}{0.9\textwidth}{rX} $D$ & $\in (X \to \D)$, a mapping from variables to their current values \\ $\inflFP$ & $\in (X \to 2^X)$, a mapping from a variable to the variables it \emph{may} influence \\ $\stableFP$ & $\subseteq X$, a set of ``stable'' variables \\ $\system$ & $\in \Systems$, an equation system \\ \end{tabularx} \EndGlobals \end{algorithmic} \begin{algorithmic} \Function {solvefixpoint} {$s \in \Systems$} \State $D = \underline{\infty}$ \State $\inflFP = \{x \mapsto \emptyset \mid \forall x \in X\}$ \State $\stableFP = \emptyset$ \State $\system = s$ \State \Return $\lambda x . (\solve(x); D[x])$ \EndFunction \label{algo:w-dfs:init} \end{algorithmic} \begin{algorithmic} \Function {eval} {$x \in X$, $y \in X$} \Comment{Evaluate $y$, and record that $x$ depends on $y$} \State $\solve(y)$ \State $\inflFP[y] = \inflFP[y] \cup \{x\}$ \State \Return $D[y]$ \EndFunction \end{algorithmic} \begin{algorithmic} \Procedure {solve} {$x \in X$} \Comment{Solve $x$ and place its value in $D$} \If {$x \not \in \stableFP$} \State $e_x = \system[x]$ \State $\stableFP = \stableFP \cup \{x\}$ \State $v = e_x( \lambda y .
\eval(x, y) )$ \If {$v \ne D[x]$} \State $D = \{ x \mapsto v \} \oplus D$ \State $W = \inflFP[x]$ \State $\inflFP[x] = \emptyset$ \State $\stableFP = \stableFP \backslash W$ \For {$v \in W$} \State $\solve(v)$ \EndFor \EndIf \EndIf \EndProcedure \end{algorithmic} \caption{The W-DFS algorithm mentioned in \cite{DBLP:tr/trier/MI96-11} and presented in \cite{fixpoint-slides}, modified to find greatest fixpoints of monotonic fixpoint equations} \label{algo:w-dfs} \end{algorithm} The function \textsc{solvefixpoint} in Listing \ref{algo:w-dfs:init} is of type $\Systems \to (X \to \D)$. It takes an equation system and returns a function to query the greatest fixpoint. Each fixpoint value is solved on demand, evaluating only the subset of the equation system which is required to ensure the correct solution is found. The algorithm performs no evaluation of any variables unless their value is requested (whether externally or internally). Once a value is requested the right-hand-side is evaluated, which may in turn request the values of other variables (through the $\eval$ function). In the case of mutually-recursive variables this would result in infinite recursion, as each time a variable's value is calculated it must first calculate its own value. To prevent this, a set $\stableFP$ is maintained of variables whose values are either currently being computed or have already been computed. This set provides a ``base'' for the recursion to terminate, at which point the value is simply looked up in $D$. Whenever a variable's value changes it will ``destabilise'' and re-evaluate any variables which \emph{may} depend on it for their values, as they may change their value if re-evaluated (and hence are not stable). If these values change then they, too, will destabilise their dependent variables, and so on. The W-DFS algorithm provides us with a \emph{local}, \emph{demand-driven} solver for greatest fixpoints of monotonic fixpoint problems.
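The following is a minimal Python sketch of the W-DFS idea of Listing \ref{algo:w-dfs}; the class-based encoding, the callback-style right-hand-sides and the example system are illustrative assumptions, not part of the original presentation.

```python
# A sketch of W-DFS: demand-driven solving with dependency tracking.
INF = float("inf")

class WDFS:
    def __init__(self, system):
        self.system = system                    # x -> rhs, where rhs takes a lookup callback
        self.D = {x: INF for x in system}       # current values (start at top)
        self.infl = {x: set() for x in system}  # x -> variables x may influence
        self.stable = set()                     # variables computed or being computed

    def eval(self, x, y):
        # Evaluate y, and record that a change to y must destabilise x.
        self.solve(y)
        self.infl[y].add(x)
        return self.D[y]

    def solve(self, x):
        if x in self.stable:
            return                              # base case for mutual recursion
        self.stable.add(x)
        v = self.system[x](lambda y: self.eval(x, y))
        if v != self.D[x]:
            self.D[x] = v
            work, self.infl[x] = self.infl[x], set()
            self.stable -= work                 # destabilise dependents...
            for w in work:
                self.solve(w)                   # ...and re-solve them

    def __getitem__(self, x):
        # Query interface: solve on demand, then look the value up.
        self.solve(x)
        return self.D[x]

# Example: x = min(0, y), y = max(x, -5); only requested parts are evaluated.
solver = WDFS({
    "x": lambda get: min(0, get("y")),
    "y": lambda get: max(get("x"), -5),
})
print(solver["x"])  # -> 0
```

Requesting `solver["x"]` pulls in the evaluation of $y$ only because $x$'s right-hand-side actually looks it up, which is what makes the solver local.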
\section{$\max$-strategy Iteration} The $\max$-strategy iteration can be viewed as an accelerated fixpoint problem. We are attempting to find a strategy, $\sigma: E_{\max} \to E$, that will result in the greatest value for each $e \in E_{\max}$. Therefore if we consider our ``variables'' to be $\max$-expressions and our ``values'' to be their subexpressions then we can solve for the best $\max$-strategy using a similar approach to the W-DFS algorithm presented above. Because $\max$-strategy iteration is so similar to a standard fixpoint problem, it is possible to adapt the W-DFS algorithm to perform strategy iteration as well. \subsection{Naive approach} \begin{algorithm}[H] \begin{algorithmic} \Assumptions \begin{tabularx}{0.9\textwidth}{rX} $\sigma$ & $\in E_{\max} \to E$, a $\max$ strategy \\ $\system$ & $\in \Systems$, an equation system \\ $\rho$ & $\in (X \to \D)$, a variable assignment \\ $P_{\max}$ & $ \in ((E_{\max} \to E), (X \to \D)) \to (E_{\max} \to E)$, a $\max$-strategy improvement operator \\ \end{tabularx} \EndAssumptions \Function {solvestrategy} {$\system \in \Systems$} \State $k = 0$ \State $\sigma_0 = \lambda x . -\infty$ \State $\rho_0 = \underline{-\infty}$ \Repeat \State $\sigma_{k+1} = P_{\max}(\sigma_k, \rho_k)$ \State $\rho_{k+1} = \solve \fixpoint(\sigma_{k+1}(\system))$ \State $k = k + 1$ \Until {$\sigma_{k-1} = \sigma_k$} \State \Return $\sigma_k$ \EndFunction \end{algorithmic} \caption{The naive approach to strategy iteration} \label{algo:naive-strategy} \end{algorithm} \subsection{Adapted W-DFS algorithm} \label{sec:adapted-wdfs} This similarity allows us to use the W-DFS algorithm to re-evaluate only those parts of the strategy which may have changed. Listing \ref{algo:w-dfs-max} presents this variation on W-DFS.
\begin{algorithm}[H] \begin{algorithmic} \Globals \begin{tabularx}{0.9\textwidth}{rX} $\sigma$ & $\in (E_{\max} \to E)$, a mapping from $\max$-expressions to a subexpression \\ $\inflSI$ & $\in (E_{\max} \to 2^{E_{\max}})$, a mapping from a $\max$-expression to the $\max$-expressions it influences \\ $\stableSI$ & $\subseteq E_{\max}$, the set of all $\max$-expressions whose strategies are stable \\ $\system$ & $\in \Systems$, an equation system \\ $P_{\max}$ & $\in (E_{\max}, E, (X \to \D)) \to E$, a function mapping from a $\max$-expression, the current strategy and a variable assignment to a better subexpression than the current strategy in that variable assignment \\ \end{tabularx} \EndGlobals \end{algorithmic} \begin{algorithmic} \Function {solvestrategy} {$s \in \Systems$} \State $\sigma = \underline{-\infty}$ \State $\inflSI = \{x \mapsto \emptyset \mid \forall x \in E_{\max}\}$ \State $\stableSI = \emptyset$ \State $\system = s$ \State \Return $\lambda x . (\solve(x); \sigma[x])$ \EndFunction \label{algo:w-dfs-max:init} \end{algorithmic} \begin{algorithmic} \Function {eval} {$x \in E_{\max}$, $y \in E_{\max}$} \State $\solve(y)$ \State $\inflSI[y] = \inflSI[y] \cup \{x\}$ \State \Return $\sigma[y]$ \EndFunction \end{algorithmic} \begin{algorithmic} \Procedure {solve} {$x \in E_{\max}$} \If {$x \not \in \stableSI$} \State $\stableSI = \stableSI \cup \{x\}$ \State $\rho = \solve \fixpoint(\sigma(\system))$ \State $v = P_{\max}(x, \sigma[x], \lambda y .
\eval(x, y))$ \If {$v \ne \sigma[x]$} \State $\sigma = \{ x \mapsto v\} \oplus \sigma$ \State $W = \inflSI[x]$ \State $\inflSI[x] = \emptyset$ \State $\stableSI = \stableSI \backslash W$ \For {$v \in W$} \State $\solve(v)$ \EndFor \EndIf \EndIf \EndProcedure \end{algorithmic} \caption{W-DFS, this time modified to find the best $\max$-strategy.} \label{algo:w-dfs-max} \end{algorithm} \section{Local Demand-driven Strategy Improvement (LDSI)} W-DFS can be used to speed up both the $\max$-strategy iteration and the fixpoint iteration as two independent processes, but each time we seek to improve our strategy we must compute at least a subset of the fixpoint from a base of no information. Ideally we would be able to provide some channel for communication between the two W-DFS instances to reduce the amount of work that must be done at each stage. By supplying each other with information about which parts of the system have changed, each instance can resume partway through the process of finding a solution, thereby avoiding repeating work that has already been done. The new \emph{Local Demand-driven Strategy Improvement} algorithm, \emph{LDSI}, seeks to do this. By adding an ``invalidate'' procedure to the fixpoint iteration we provide a method for the $\max$-strategy iteration to invalidate fixpoint variables, and we modify the fixpoint iteration algorithm to report which variables it has modified with each fixpoint iteration step. This essentially results in a $\max$-strategy iteration which, at each strategy-improvement step, invalidates the portion of the current fixpoint which may depend on the changed strategy. The fixpoint iteration then re-stabilises each variable as it is requested, tracking which values have changed since they were last stabilised. The $\max$-strategy iteration then uses this list of changed variables to determine which portions of the current strategy must be re-stabilised once more.
This process continues until the $\max$-strategy stabilises (at which point the fixpoint will also stabilise). This entire approach is \emph{demand driven}, and so any necessary evaluation of strategies or fixpoint values is delayed until the point when it is actually required. This means that the solver is \emph{local}. This algorithm is presented in three parts. Section \ref{sec:ldsi:top-level} presents the global state and entry point to the algorithm. Section \ref{sec:ldsi:max-strategy} presents the $\max$-strategy portion of the algorithm. This portion is quite similar to the Adapted W-DFS algorithm presented in Section \ref{sec:adapted-wdfs}, with some modifications to allow for communication with the fixpoint portion of the algorithm. Section \ref{sec:ldsi:fixpoint} presents the fixpoint portion of the algorithm. This portion bears many similarities to the W-DFS algorithm presented in Section \ref{sec:wdfs}, with a few modifications to track changes and to allow external invalidation of parts of the solution. Section \ref{sec:ldsi:correctness} argues the correctness of this approach for finding the least fixpoints of monotonic, expansive equation systems involving $\max$-expressions.
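Before presenting the individual parts, it may help to see strategy improvement driving fixpoint computation on a toy example. The following Python sketch implements the naive loop of Listing \ref{algo:naive-strategy} (which LDSI refines) for the single hypothetical equation $x = \max(0, \min(x + 1, 10))$; the encoding and the simple improvement operator are illustrative assumptions.

```python
# A toy run of naive max-strategy iteration (illustrative encoding).
INF = float("inf")

# Subexpressions of the max-expression for x, each a function of rho.
subexprs = [lambda rho: 0, lambda rho: min(rho["x"] + 1, 10)]

def greatest_fixpoint(e):
    # Kleene iteration from the top element for the chosen subexpression.
    rho = {"x": INF}
    while True:
        v = e(rho)
        if v == rho["x"]:
            return rho
        rho = {"x": v}

def improve(sigma, rho):
    # A simple improvement operator in the role of P_max: switch to a
    # subexpression that is strictly better under rho, else keep sigma.
    best = max(subexprs, key=lambda e: e(rho))
    return best if best(rho) > sigma(rho) else sigma

sigma = lambda rho: -INF          # sigma_0: everything maps to -infinity
while True:
    rho = greatest_fixpoint(sigma)          # solve the current strategy...
    improved = improve(sigma, rho)          # ...then try to improve it
    if improved is sigma:                   # strategy stable: done
        break
    sigma = improved

print(rho)  # -> {'x': 10}
```

The strategy steps through $-\infty$, then the subexpression $0$, then $\min(x+1, 10)$, and each improvement step restarts the fixpoint computation from scratch; avoiding that restart is precisely the motivation for LDSI.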
\subsection{Top level} \label{sec:ldsi:top-level} \begin{algorithm}[H] \begin{algorithmic} \Globals \begin{tabularx}{0.9\textwidth}{rX} $D$ & $\in (X \to \D)$, a mapping from variables to their current values \\ $\inflFP$ & $\in (X \to 2^X)$, a mapping from a variable to the variables it \emph{may} influence \\ $\stableFP$ & $\subseteq X$, a set of ``stable'' variables \\ $\touchedFP$ & $\subseteq X$, a set of variables which have been ``touched'' by the fixpoint iteration, but not yet acknowledged by the $\max$-strategy iteration \\ \\ $\sigma$ & $\in (E_{\max} \to E)$, a mapping from $\max$-expressions to a subexpression \\ $\inflSI$ & $\in (E_{\max} \to 2^{E_{\max}})$, a mapping from a $\max$-expression to the $\max$-expressions it influences \\ $\stableSI$ & $\subseteq E_{\max}$, the set of all $\max$-expressions whose strategies are stable \\ $P_{\max}$ & $\in (E_{\max}, E, (X \to \D)) \to E$, a function mapping from a $\max$-expression, the current strategy and a variable assignment to a better subexpression than the current strategy in that variable assignment \\ \\ $\system$ & $\in \Systems$, an equation system \\ \end{tabularx} \EndGlobals \end{algorithmic} \caption{Global variables used by the LDSI algorithm} \label{algo:ldsi:fixpoint:globals} \end{algorithm} These variables are mostly a combination of the variables found in Sections \ref{sec:wdfs} and \ref{sec:adapted-wdfs}. The only exception is $\touchedFP$, a newly introduced variable which tracks the variables touched by the fixpoint iteration steps. \subsection{$\max$-strategy iteration} \label{sec:ldsi:max-strategy} \subsection{Fixpoint iteration} \label{sec:ldsi:fixpoint} \subsection{Correctness} \label{sec:ldsi:correctness} THIS IS THE END OF WHERE I'M UP TO! The stuff after this still needs to be fixed up to use the new variables and have the ``touched'' thing properly put in.
\endinput Listings \ref{algo:ldsi:fixpoint:globals}, \ref{algo:ldsi:fixpoint:eval}, \ref{algo:ldsi:fixpoint:invalidate}, \ref{algo:ldsi:fixpoint:solve} and \ref{algo:ldsi:fixpoint:stabilise} present the fixpoint-iteration portion of the algorithm, while Listing \ref{algo:combined-max} presents the $\max$-strategy portion. The correctness of this new algorithm is argued in Section \ref{sec:combined-correctness}. \begin{algorithm}[H] \begin{algorithmic} \Globals \begin{tabularx}{0.9\textwidth}{rX} $D$ & $X \to \D$ - a mapping from variables to values, starting at $\{ x \mapsto \infty \}$ \\ $\sigma$ & $E_{\max} \to E$ - a mapping from $\max$-expressions to their sub-expressions (a $\max$-strategy) \\ $I_{X,X}$ & $X \to 2^X$ - a mapping from a variable to the variables it influences \\ $I_{\max,\max}$ & $E_{\max} \to 2^{E_{\max}}$ - a mapping from a $\max$-expression to the $\max$-expressions it influences \\ $U_{X}$ & The set of all variables whose values have not stabilised to a final fixpoint value (unstable variables) \\ $S_{\max}$ & The set of all $\max$-expressions whose strategies have stabilised to their final strategy (stable $\max$-expressions) \\ $\system$ & The equation system, a mapping from a variable to its associated function \\ \end{tabularx} \EndGlobals \end{algorithmic} \caption{Global variables used by the LDSI algorithm} \label{algo:ldsi:fixpoint:globals} \end{algorithm} Of particular note among the global variables is the difference between $U_X$ being an \emph{unstable} set and $S_{\max}$ being a \emph{stable} set. The two representations are entirely equivalent, but because the fixpoint-iteration will be started as being entirely ``stable'' (with values of $-\infty$) it is of more practical benefit to avoid the extra work of populating the ``stable'' set by instead storing the unstable variables.
The other variables are just the state from each of the previous algorithms for intelligently performing $\max$-strategy iteration and fixpoint iteration (as were presented in Sections \ref{sec:wdfs} and \ref{sec:adapted-wdfs}). $D$ and $I_{X,X}$ are taken from the W-DFS algorithm, while $\sigma$ and $I_{\max,\max}$ are taken from the Adapted W-DFS algorithm. \begin{algorithm}[H] \begin{algorithmic} \Function {evalfixpoint} {$x \in X$, $y \in X$} \State $\solve \fixpoint(y)$ \State $I_{X,X}[y] = I_{X,X}[y] \cup \{x\}$ \State \Return $D[y]$ \EndFunction \end{algorithmic} \caption{Utility function used to track fixpoint variable dependencies.} \label{algo:ldsi:fixpoint:eval} \end{algorithm} This procedure is exactly the same as the equivalent method in the W-DFS algorithm. It allows us to more easily track dependencies between fixpoint variables by injecting this function as our variable-lookup function. It both calculates a new value for the variable (the $\solve \fixpoint$ call) and notes the dependency between $x$ and $y$. \begin{algorithm}[H] \begin{algorithmic} \Function {invalidatefixpoint} {$x \in X$} \Comment{Invalidate a fixpoint variable} \If {$x \not \in U_X$} \State $U_X = U_X \cup \{x\}$ \State $D[x] = \infty$ \State $W = I_{X,X}[x]$ \State $I_{X,X}[x] = \emptyset$ \For {$v \in W$} \State $\invalidate \fixpoint(v)$ \EndFor \EndIf \EndFunction \end{algorithmic} \caption{Fixpoint subroutine called from the $\max$-strategy iteration portion to invalidate fixpoint variables} \label{algo:ldsi:fixpoint:invalidate} \end{algorithm} This procedure is not called in the fixpoint iteration process, but is rather the method by which the $\max$-strategy iteration can communicate with the fixpoint-iteration. It allows the $\max$-strategy iteration to inform the fixpoint-iteration which values have changed and will require re-evaluation.
This makes it possible to re-evaluate only a partial system (the solving of which is also delayed until requested by the $\solve \fixpoint$ procedure). \begin{algorithm}[H] \begin{algorithmic} \Function {solvefixpoint} {$x \in X$} \Comment{Solve a specific variable and place its value in $D$} \If {$x \in U_X$} \State $U_X = U_X \backslash \{x\}$ \State $v = \sigma(\system[x])( \lambda y . \eval \fixpoint(x, y) )$ \If {$v \ne D[x]$} \State $D = \{ x \mapsto v \} \oplus D$ \State $W = I_{X,X}[x]$ \State $I_{X,X}[x] = \emptyset$ \State $\invalidate \strategy(x)$ \State $U_X = U_X \cup W$ \For {$v \in W$} \State $\solve \fixpoint(v)$ \EndFor \EndIf \EndIf \EndFunction \end{algorithmic} \caption{The subroutine of the fixpoint iteration responsible for solving for each variable} \label{algo:ldsi:fixpoint:solve} \end{algorithm} After an evaluation of the $\solve \fixpoint$ procedure, the variable supplied as its argument will have been stabilised within the current $\max$-strategy. This means that it will have taken on the same value as it would take in the greatest fixpoint of this system. Because the $\solve \fixpoint$ calls need not be made immediately, it is possible for a variable to be invalidated by several $\max$-strategy improvements without being re-evaluated.
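The following is a minimal Python sketch of the invalidation behaviour of Listing \ref{algo:ldsi:fixpoint:invalidate}, assuming the dict/set encoding used in the earlier sketches; the example influence relation is an illustrative assumption.

```python
# A sketch of transitive invalidation of fixpoint variables.
INF = float("inf")

D = {"x": 0, "y": 0, "z": 5}                 # current fixpoint values
infl = {"x": {"y"}, "y": {"z"}, "z": set()}  # who each variable influences
unstable = set()                             # the U_X set

def invalidate_fixpoint(x):
    # Mark x unstable, reset its value to the top element, and transitively
    # invalidate everything that may depend on it.
    if x not in unstable:
        unstable.add(x)
        D[x] = INF
        work, infl[x] = infl[x], set()
        for v in work:
            invalidate_fixpoint(v)

# Invalidating x also invalidates y (which depends on x) and z (on y).
invalidate_fixpoint("x")
print(sorted(unstable))  # -> ['x', 'y', 'z']
```

No re-evaluation happens here: the invalidated variables simply wait in the unstable set until a later demand-driven solve call reaches them.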
\begin{algorithm}[H] \begin{algorithmic} \Function {evalstrategy} {$x \in E_{\max}$, $y \in E_{\max}$} \Comment{Evaluate $y$ for its value and note that when $y$ changes, $x$ must be re-evaluated} \State $\solve \strategy(y)$ \State $I_{\max,\max}[y] = I_{\max,\max}[y] \cup \{x\}$ \State \Return $\sigma[y]$ \EndFunction \Function {evalstrategyfixpoint} {$x \in E_{\max}$, $y \in X$} \Comment{Evaluate $y$ for its value and note that when $y$ changes, $x$ must be re-evaluated} \State $\solve \fixpoint(y)$ \State $I_{\max,\max}[\system[y]] = I_{\max,\max}[\system[y]] \cup \{x\}$ \State \Return $D[y]$ \EndFunction \end{algorithmic} \label{algo:ldsi:strategy:eval} \caption{Evaluate a portion of the $\max$-strategy and note a dependency} \end{algorithm} The $\eval \strategy$ function is essentially the same as the $\eval \fixpoint$ function, except that it notes the dependencies in a different variable, $I_{\max,\max}$. This is because the dependency information for the $\max$-strategy iteration is entirely separate from that of the fixpoint iteration. The $\solve \fixpoint$ calls in $\eval \strategy \fixpoint$ are top-level calls, meaning that upon their return we know that $D$ contains the value it would take in the greatest fixpoint of the current strategy $\sigma$. This function, therefore, acts as a simple intermediate layer between the fixpoint-iteration and the $\max$-strategy iteration, allowing dependencies to be tracked.
\begin{algorithm}[H] \begin{algorithmic} \Function {invalidatestrategy} {$x \in X$} \Comment{$x$ is a \emph{variable}} \For {$v \in I_{\max,\max}[\system[x]]$} \Comment{$v$ is influenced by $x$} \State $\invalidate \strategy (v)$ \EndFor \EndFunction \Function {invalidatestrategy} {$x \in E_{\max}$} \Comment{$x$ is a \emph{$\max$-expression}} \If {$x \in S_{\max}$} \State $S_{\max} = S_{\max} \backslash \{x\}$ \For {$v \in I_{\max,\max}[x]$} \Comment {$v$ is influenced by $x$} \State $\invalidate \strategy (v)$ \EndFor \EndIf \EndFunction \end{algorithmic} \label{algo:ldsi:strategy:invalidate} \caption{Invalidate a portion of the $\max$-strategy} \end{algorithm} Invalidating the $\max$-strategy iteration is slightly more complicated than invalidating the fixpoint iteration stage. As the invalidation interface consists of variables, we must first translate a variable into a $\max$-expression (which is easily done by looking it up in the equation system). We must then invalidate the strategies for each $\max$-expression which depends on this resultant $\max$-expression. The invalidation for $\max$-expressions consists of transitively invalidating everything which depends on the $\max$-expression, as well as the $\max$-expression itself. \begin{algorithm}[H] \begin{algorithmic} \Function {solvestrategy} {$x \in E_{\max}$} \Comment{Solve a specific $\max$-expression and place its strategy in $\sigma$} \If {$x \not \in S_{\max}$} \State $S_{\max} = S_{\max} \cup \{x\}$ \State $\sigma_{dynamic} = \lambda y . \eval \strategy(x,y)$ \State $e = P_{\max}(\sigma_{dynamic}, \lambda y .
\eval \strategy \fixpoint(x, y))(x)$ \If {$e \ne \sigma[x]$} \State $\sigma = \{ x \mapsto e \} \oplus \sigma$ \State $\invalidate \fixpoint(\system^{-1}(x))$ \State $S_{\max} = S_{\max} \backslash I_{\max,\max}[x]$ \For {$v \in I_{\max,\max}[x]$} \State $\solve \strategy(v)$ \EndFor \EndIf \EndIf \EndFunction \end{algorithmic} \caption{The $\max$-strategy portion of the Combined W-DFS.} \label{algo:ldsi:solve} \end{algorithm} \subsection{Correctness} \label{sec:combined-correctness} This algorithm relies on the correctness of the underlying W-DFS algorithm, which was presented in \cite{DBLP:tr/trier/MI96-11}. If we assume that W-DFS is correct then we only have to prove that the combination of the two instances is correct. For this it is sufficient to show that the invalidate calls in both directions preserve the correctness of the original algorithm. The fixpoint-iteration step invalidates anything that \emph{might} depend on $x$ while it invalidates $x$, thereby ensuring that any further calls to $\solve \fixpoint$ will result in a correct value for the given strategy. The strategy-iteration step invalidates anything that \emph{might} depend on $\system[x]$ while it invalidates $x$, thereby ensuring that any further calls to $\solve \strategy$ will result in a correct value for the given strategy.