\floatname{algorithm}{Listing}
\newcommand\stable{\mathsf{stable}}
\newcommand\eval{\mathsf{\textsc{eval}}}
\newcommand\solve{\mathsf{\textsc{solve}}}
\newcommand\system{\mathsf{system}}
\algblockx[Globals]{Globals}{EndGlobals}{\textbf{Globals:\\}}{}
\algblockx[Assume]{Assumptions}{EndAssumptions}{\textbf{Assume:\\}}{}
\chapter{Contribution} \label{chap:contribution}

The main theoretical contribution of this thesis is an improvement on
a $\max$-strategy improvement algorithm for solving fixpoint equations
over the integers with monotonic
operators~\cite{Gawlitza:2007:PFC:1762174.1762203}. The original
algorithm is presented in Section \ref{section:basic-algorithm}. We
employ the ideas of Seidl et al. to design an algorithm which runs in
considerably less time than the existing solver.
In this chapter we will begin by presenting the Work-list Depth First
Search (W-DFS) fixpoint algorithm developed by Seidl et
al.~\cite{DBLP:tr/trier/MI96-11}. We will then present a modification
to the algorithm that allows it to perform $\max$-strategy iteration
rather than fixpoint iteration. The chapter concludes with our Local
Demand-driven Strategy Improvement (LDSI) algorithm.
The existing algorithm as presented in Section
\ref{section:basic-algorithm} consists of two iterative operations:
fixpoint iteration and $\max$-strategy iteration. Each of these
operations consists of naively ``evaluating'' the system repeatedly
until a further evaluation yields no change. Gawlitza et al. have
shown that these iterations converge in a finite number of
steps~\cite{Gawlitza:2007:PFC:1762174.1762203}, but in practice this
naive approach performs many more operations than are necessary, in
many cases merely re-calculating results which are already known. By
exploiting the data dependencies within the equation system it is
possible to reduce the amount of work quite considerably.
In order to aid our explanation of these algorithms we will now define
a few terms and notations. All variables are taken from the set $X$
and all values from the set $\D$.
\begin{definition}
  \textbf{Variable Assignment:} a function $X \to \D$ from variables
  to values in our domain. An underlined value
  (e.g.~$\underline{\infty}$) denotes the constant variable assignment
  mapping every variable to that value. Variable assignments $\rho$
  and $\varrho$ may be combined with $\oplus$, defined pointwise as
  follows:
  \begin{align*}
    (\rho \oplus \varrho)(x) = \left\{\begin{array}{lc}
        \varrho(x) & x \in \mathsf{domain}(\varrho) \\
        \rho(x) & \mbox{otherwise}
      \end{array}\right.
  \end{align*}
\end{definition}
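As a concrete illustration, variable assignments can be modelled as
Python dictionaries, with $\oplus$ letting the (possibly partial)
right-hand assignment override the left. This is a minimal sketch; the
names \texttt{rho\_oplus} and \texttt{INF} are our own, not from the
text.

```python
# Minimal sketch: variable assignments as dicts over a domain with infinity.
# rho_oplus implements the combination (rho ⊕ varrho) defined above:
# bindings in varrho take precedence, all others fall back to rho.
INF = float("inf")

def rho_oplus(rho, varrho):
    combined = dict(rho)      # start from rho(x) for every x
    combined.update(varrho)   # override wherever varrho is defined
    return combined

rho = {"x": INF, "y": INF}    # the underlined-infinity assignment on {x, y}
varrho = {"y": 3}             # a partial assignment defined only on y
print(rho_oplus(rho, varrho))  # {'x': inf, 'y': 3}
```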
\begin{definition}
\textbf{Expressions:} For the purposes of this discussion we will
consider expressions, $e \in E$, as $e : (X \to \D) \to \D$, a
mapping from a variable assignment to the expression's value in that
assignment.
\end{definition}
\begin{definition}
  \textbf{Equation System:} $\{ x = e_x \mid x \in X, e_x \in E \}$. An
  equation system can also be considered as a function $\varepsilon :
  (X \to \D) \to (X \to \D)$ with $\varepsilon[\rho](x) = e_x(\rho)$.
\end{definition}
\begin{definition}
  \textbf{Dependencies:} A variable $x$ is said to \emph{depend on}
  $y$ if a change to the value of $y$ can induce a change in the value
  of $x$.
\end{definition}
\section{Fixpoint Iteration}
\subsection{Kleene Iteration}

A simple approach to fixpoint iteration over monotonic equations is to
iterate over the whole system repeatedly until a re-evaluation results
in no change to the value of any variable. This approach will always
reach the greatest solution if there is one to be found, but it will
often perform many more evaluations than are necessary. This
algorithm is presented in Listing \ref{algo:kleene}.
\begin{algorithm}
\begin{algorithmic}
\Assumptions
\begin{tabularx}{0.9\textwidth}{rX}
$\rho $:&$ X \to \D$, a variable assignment \\
$\varepsilon $:&$ (X \to \D) \to (X \to \D)$, an equation system
\end{tabularx}
\EndAssumptions
\State $n = 0$
\State $\rho_0 = \underline{\infty}$
\Repeat
\State $\rho_{n+1} = \varepsilon[ \rho_{n} ]$
\State $n = n + 1$
\Until {$\rho_{n-1} = \rho_n$}
\State \Return $\rho_n$
\end{algorithmic}
\caption{The Kleene iteration algorithm for solving fixpoint
equations for their greatest solutions.}
\label{algo:kleene}
\end{algorithm}
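The iteration above can be sketched in a few lines of Python. This is
a hedged illustration only; the example equation system is invented
for demonstration and is not taken from the text.

```python
# A Python sketch of Kleene iteration: start from the all-infinity
# assignment and re-evaluate the whole system until a pass produces no
# change. The example system below is illustrative.
INF = float("inf")

def kleene(system):
    """system: dict mapping each variable to a function of the assignment."""
    rho = {x: INF for x in system}                        # rho_0
    while True:
        new_rho = {x: f(rho) for x, f in system.items()}  # rho_{n+1}
        if new_rho == rho:                                # no change: done
            return rho
        rho = new_rho

# x = min(x, 5); y = x + 1  --  greatest solution is x = 5, y = 6
system = {
    "x": lambda r: min(r["x"], 5),
    "y": lambda r: r["x"] + 1,
}
print(kleene(system))  # {'x': 5, 'y': 6}
```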
On each iteration the entire system is evaluated, irrespective of
whether any of it could possibly have changed value. This results in
considerable inefficiency in practice, so an approach which evaluates
only smaller portions of the system in each iteration would be a
sensible improvement.
\subsection{W-DFS algorithm}

The W-DFS algorithm presented by Seidl et al. takes data dependencies
into account as it solves the system. This gives it the ability to
leave portions of the system unevaluated when it is certain that those
values have not changed.
\begin{algorithm}
\begin{algorithmic}
\Globals
\begin{tabularx}{0.9\textwidth}{rX}
      $D : X \to \D$ & a mapping from variables to their current values,
      starting at $\{ x \mapsto \infty \mid x \in X \}$ \\
I & A mapping from a variable to the variables which depend on
it in their evaluation \\
stable & The set of all variables whose values have stabilised \\
system & The equation system, a mapping from a variable to its
associated function \\
\end{tabularx}
\EndGlobals
\end{algorithmic}
\begin{algorithmic}
\Function {eval} {$x$, $y$}
\Comment{Evaluate $y$ for its value and note that when $y$
changes, $x$ must be re-evaluated}
\State $\solve(y)$
\State $I[y] = I[y] \cup \{x\}$
\State \Return $D[y]$
\EndFunction
\end{algorithmic}
\begin{algorithmic}
\Function {solve} {$x$}
\Comment{Solve a specific variable and place its value in $D$}
\If {$x \not \in \stable$}
\State $f = \system[x]$
\State $\stable = \stable \cup \{x\}$
\State $v = f( \lambda y . \eval(x, y) )$
\If {$v \ne D[x]$}
          \State $D = D \oplus \{ x \mapsto v \}$
          \State $W = I[x]$
          \State $I[x] = \emptyset$
\State $\stable = \stable \backslash W$
\For {$v \in W$}
\State $\solve(v)$
\EndFor
\EndIf
\EndIf
\EndFunction
\end{algorithmic}
  \caption{The W-DFS algorithm alluded to in \cite{DBLP:tr/trier/MI96-11}
    and presented in \cite{fixpoint-slides}, modified to find the
    greatest fixpoints of monotonic equation systems}
\label{algo:w-dfs}
\end{algorithm}
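The demand-driven behaviour of W-DFS can be sketched in Python as
follows. This is a hedged illustration of the listing above, not the
original authors' implementation; the class and helper names are our
own.

```python
# A sketch of the W-DFS solver: solve() evaluates a variable's
# right-hand side through eval(), which records the dependency x -> y
# before returning y's value; a change to D[x] destabilises exactly
# the variables recorded in I[x].
INF = float("inf")

class WDFS:
    def __init__(self, system):
        self.system = system                   # variable -> rhs function
        self.D = {x: INF for x in system}      # current values
        self.I = {x: set() for x in system}    # x -> variables depending on x
        self.stable = set()

    def eval(self, x, y):
        # Evaluate y, noting that x must be re-solved if y ever changes.
        self.solve(y)
        self.I[y].add(x)
        return self.D[y]

    def solve(self, x):
        if x in self.stable:
            return
        self.stable.add(x)
        v = self.system[x](lambda y: self.eval(x, y))
        if v != self.D[x]:
            self.D[x] = v
            W, self.I[x] = self.I[x], set()    # destabilise x's dependants
            self.stable -= W
            for w in W:
                self.solve(w)

# Only the variables demanded by solve("x") are ever evaluated:
system = {
    "y": lambda get: 3,
    "x": lambda get: get("y") + 1,
    "z": lambda get: 0,          # never demanded, so never touched
}
solver = WDFS(system)
solver.solve("x")
print(solver.D)  # {'y': 3, 'x': 4, 'z': inf}
```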
\section{$\max$-strategy Iteration}
\subsection{Naive approach}

The naive approach to $\max$-strategy iteration mirrors Kleene
iteration: in each pass, every $\max$-expression in the system is
re-examined and switched to a better subexpression where one exists,
and passes continue until one yields no improvement anywhere. As with
Kleene iteration, each pass evaluates the entire system, so much of
the work merely re-confirms choices which are already optimal.
\subsection{Adapted W-DFS algorithm}
The $\max$-strategy iteration can be viewed as a fixpoint problem: we
are attempting to find a strategy, $\sigma: E_{\max} \to E$, that
results in the greatest value for each $e \in E_{\max}$. We therefore
consider our ``variables'' to be $\max$-expressions, our ``values'' to
be their subexpressions, and our ``comparisons'' to be carried out
using the result of evaluating the system with the current strategy.

This allows us to use the W-DFS algorithm to re-evaluate only those
parts of the strategy which may have changed. Listing
\ref{algo:w-dfs-max} presents this variation on W-DFS.
\begin{algorithm}
\begin{algorithmic}
\Globals
\begin{tabularx}{0.9\textwidth}{rX}
$\sigma$ & A mapping from $\max$-expressions to their current
sub-expressions, starting by mapping to the first
sub-expression \\
I & A mapping from a $\max$-expression to the sub-expressions
which depend on it in their evaluation \\
stable & The set of all $\max$-expressions whose strategies have
stabilised \\
system & The equation system, a mapping from a variable to its
associated function \\
      bestStrategy & A function $(E_{\max} \times (X \to \D)) \to E$
      mapping from an expression and a variable assignment to the
      greatest subexpression in that context
\EndGlobals
\Function {eval} {$x$, $y$}
\Comment{Evaluate $y$ for its value and note that when $y$
changes, $x$ must be re-evaluated}
\State $\solve(y)$
\State $I[y] = I[y] \cup \{x\}$
\State \Return $\sigma[y]$
\EndFunction
\Function {solve} {$x$}
\Comment{Solve a specific expression and place its value in $\sigma$}
\If {$x \not \in \stable$}
\State $f = \system[x]$
\State $\stable = \stable \cup \{x\}$
\State $v = bestStrategy(x, \lambda y . \eval(x, y))$
\If {$v \ne \sigma[x]$}
          \State $\sigma = \sigma \oplus \{ x \mapsto v \}$
          \State $W = I[x]$
          \State $I[x] = \emptyset$
\State $\stable = \stable \backslash W$
\For {$v \in W$}
\State $\solve(v)$
\EndFor
\EndIf
\EndIf
\EndFunction
\end{algorithmic}
\caption{W-DFS, this time modified to find the best $\max$-strategy.}
\label{algo:w-dfs-max}
\end{algorithm}
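The bestStrategy helper assumed by the listing above can be sketched
in Python: given the subexpressions of a $\max$-expression and a
variable assignment, it selects the subexpression that evaluates
greatest under that assignment. The representation of expressions as
lists of callables is our own assumption for illustration.

```python
# A sketch of the bestStrategy helper: treat a max-expression as a
# list of subexpressions (callables on a variable assignment) and
# select the one with the greatest value in the current context.
def best_strategy(subexpressions, rho):
    return max(subexpressions, key=lambda e: e(rho))

# max(x + 1, 3) under {x: 5}: the subexpression x + 1 wins with value 6
e1 = lambda r: r["x"] + 1
e2 = lambda r: 3
chosen = best_strategy([e1, e2], {"x": 5})
print(chosen({"x": 5}))  # 6
```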
\section{Combined W-DFS}

W-DFS can be used to speed up both the $\max$-strategy iteration and
the fixpoint iteration as two independent processes, but each
$\max$-strategy improvement step still requires at least one complete
evaluation of the equation system by the fixpoint iteration. Ideally
we would adapt the W-DFS algorithm so that the fixpoint iteration and
the $\max$-strategy iteration could inform each other of which values
have changed, so that at each step only the affected subset of the
system would have to be re-evaluated.
The new algorithm, \emph{Combined W-DFS}, seeks to do this. By adding
an ``invalidate'' procedure to both W-DFS instances we provide an
interface through which the two sides of the algorithm can indicate
which values have changed. This gives the other side enough
information to avoid evaluating irrelevant portions of the equation
system.

This algorithm is presented in two parts. Listing
\ref{algo:combined-fixpoint} presents the fixpoint-iteration portion
of the algorithm, while Listing \ref{algo:combined-max} presents the
$\max$-strategy portion. The correctness of this new algorithm is
argued in Section \ref{sec:combined-correctness}.
\begin{algorithm}
\begin{algorithmic}
\Globals
\begin{tabularx}{0.9\textwidth}{rX}
      D & A mapping from variables to their current values, starting
      at $\{ x \mapsto \infty \mid x \in X \}$ \\
I & A mapping from a variable to the variables which depend on
it in their evaluation \\
stable & The set of all variables whose values have stabilised \\
system & The equation system, a mapping from a variable to its
associated function \\
\end{tabularx}
\EndGlobals
\Function {eval} {$x$, $y$}
\Comment{Evaluate $y$ for its value and note that when $y$
changes, $x$ must be re-evaluated}
\State $\solve(y)$
\State $I[y] = I[y] \cup \{x\}$
\State \Return $D[y]$
\EndFunction
\Function {invalidate} {$x$}
\If {$x \in \stable$}
\State $\stable = \stable \backslash \{x\}$
\State $D[x] = \infty$
\State $W = I[x]$
\State $I[x] = \emptyset$
      \For {$v \in W$}
        \State invalidate($v$)
      \EndFor
\EndIf
\EndFunction
\Function {solve} {$x$}
\Comment{Solve a specific expression and place its value in $D$}
\If {$x \not \in \stable$}
\State $f = \system[x]$
\State $\stable = \stable \cup \{x\}$
\State $v = f( \lambda y . \eval(x, y) )$
\If {$v \ne D[x]$}
          \State $D = D \oplus \{ x \mapsto v \}$
          \State $W = I[x]$
          \State $I[x] = \emptyset$
\State strategy::invalidate($x$)
\State $\stable = \stable \backslash W$
\For {$v \in W$}
\State $\solve(v)$
\EndFor
\EndIf
\EndIf
\EndFunction
\end{algorithmic}
\caption{The fixpoint portion of the Combined W-DFS.}
\label{algo:combined-fixpoint}
\end{algorithm}
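The invalidate procedure at the heart of the combination can be
sketched over the same state as the fixpoint W-DFS: destabilise $x$,
reset its value to $\infty$, and recurse on the recorded dependants.
The concrete state below is invented for illustration.

```python
# A sketch of the fixpoint-side invalidate: remove x from the stable
# set, reset its value to infinity, and transitively invalidate every
# variable that used x in its last evaluation.
INF = float("inf")

D = {"x": 4, "y": 3}          # current values (x was computed from y)
I = {"y": {"x"}, "x": set()}  # y's value was used to evaluate x
stable = {"x", "y"}

def invalidate(x):
    if x in stable:
        stable.discard(x)
        D[x] = INF            # forget the now-suspect value
        W, I[x] = I[x], set()
        for v in W:
            invalidate(v)     # everything built on x is suspect too

invalidate("y")
print(D, stable)  # {'x': inf, 'y': inf} set()
```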
\begin{algorithm}
\begin{algorithmic}
\Globals
\begin{tabularx}{0.9\textwidth}{rX}
$\sigma$ & A mapping from $\max$-expressions to their current
sub-expressions, starting by \\& mapping to the first
sub-expression \\
D & A \emph{dynamic} variable assignment which will stay updated
as $\sigma$ changes \\
$I$ & A mapping from a $\max$-expression to the sub-expressions
which depend on it \\& in their evaluation \\
      $I_V$ & A mapping from a variable to the $\max$-expressions which
      depend on it in their \\& evaluation \\
stable & The set of all $\max$-expressions whose strategies have
stabilised \\
system & The equation system, a mapping from a variable to its
associated function \\
bestStrategy & A function $(E_{\max}, (X \to D)) \to E$ mapping
from an expression and a variable \\& assignment to the greatest
subexpression in that context
\end{tabularx}
\EndGlobals
\Function {eval} {$x$, $y$}
\Comment{Evaluate $y$ for its value and note that when $y$
changes, $x$ must be re-evaluated}
\State $\solve(y)$
\State $I[y] = I[y] \cup \{x\}$
\State \Return $\sigma[y]$
\EndFunction
  \Function {invalidate} {$x \in X$}
    \Comment{Invalidate the $\max$-expressions which depend on the
      variable $x$}
    \State $\stable = \stable \backslash I_V[x]$
    \State $W = I_V[x]$
    \State $I_V[x] = \emptyset$
    \For {$v \in W$}
      \State $\solve(v)$
    \EndFor
  \EndFunction
\Function {solve} {$x$}
\Comment{Solve a specific variable and place its value in $\sigma$}
\If {$x \not \in \stable$}
\State $f = \system[x]$
\State $\stable = \stable \cup \{x\}$
\State $v = bestStrategy(x,
\lambda y . \eval(x, y))$
\If {$v \ne \sigma[x]$}
          \State $\sigma = \sigma \oplus \{ x \mapsto v \}$
          \State $W = I[x]$
          \State $I[x] = \emptyset$
\State fixpoint::invalidate$(\mathsf{lookupVarFor}(x))$
\State $\stable = \stable \backslash W$
\For {$v \in W$}
\State $\solve(v)$
\EndFor
\EndIf
\EndIf
\EndFunction
\end{algorithmic}
\caption{The $\max$-strategy portion of the Combined W-DFS.}
\label{algo:combined-max}
\end{algorithm}
\subsection{Correctness} \label{sec:combined-correctness}

This algorithm relies on the correctness of the underlying W-DFS
algorithm, which was presented in \cite{DBLP:tr/trier/MI96-11}.

If we assume that W-DFS is correct then it only remains to show that
the combination of the two instances is correct. For this it is
sufficient to show that the invalidate calls in both directions
preserve the correctness of the original algorithm. The general idea
is as follows:
\begin{itemize}
\item
  Invalidate calls from the fixpoint iteration to the $\max$-strategy
  iteration preserve correctness provided the calls in the other
  direction do, because the $\max$-strategy side completely re-solves
  the invalidated equations.
\item
  Invalidate calls from the $\max$-strategy iteration to the fixpoint
  iteration preserve correctness because they effectively ``reset''
  the affected part of the system, causing it to be entirely
  re-calculated.
\end{itemize}