\floatname{algorithm}{Listing}

\newcommand\inflFP{\mathsf{infl^{FP}}}
\newcommand\stableFP{\mathsf{stable^{FP}}}
\newcommand\touchedFP{\mathsf{touched^{FP}}}
\newcommand\inflSI{\mathsf{infl^{SI}}}
\newcommand\stableSI{\mathsf{stable^{SI}}}

\newcommand\stable{\mathsf{stable}}
\newcommand\system{\mathcal{E}} %\mathsf{system}}
\newcommand\parent{\mathsf{parent}}
\newcommand\parents{\mathsf{parents}}
\newcommand\old{\mathsf{old}}

\newcommand\Operators{\mathcal{O}}
\newcommand\Systems{\varepsilon}


\newcommand\init{\mathsf{\textsc{init}}}
\newcommand\eval{\mathsf{\textsc{eval}}}
\newcommand\stabilise{\mathsf{\textsc{stabilise}}}
\newcommand\solve{\mathsf{\textsc{solve}}}
\newcommand\invalidate{\mathsf{\textsc{invalidate}}}
\newcommand\fixpoint{\mathsf{\textsc{fixpoint}}}
\newcommand\strategy{\mathsf{\textsc{strategy}}}

\algblockx[Globals]{Globals}{EndGlobals}{\textbf{Globals:\\}}{} 
\algblockx[Assume]{Assumptions}{EndAssumptions}{\textbf{Assume:\\}}{} 


\chapter{Contribution} \label{chap:contribution}

In this chapter the main theoretical contribution of this thesis is
presented: an extension of the $\max$-strategy iteration algorithm of
Gawlitza et al.~\cite{Gawlitza:2007:PFC:1762174.1762203} for solving
fixpoint equations over the integers with monotonic operators (see
Section \ref{sec:gawlitza-algorithm}). We devise an algorithm which
runs in linear time in the best case.

We will begin by presenting the Work-list Depth First Search (W-DFS)
fixpoint algorithm developed by Seidl et
al.~\cite{DBLP:tr/trier/MI96-11}. We will then discuss a modification
of the algorithm to allow it to perform $\max$-strategy iteration. The
chapter will then conclude with our Local Demand-driven Strategy
Improvement (LDSI) algorithm, which combines efficient strategy
iteration and fixpoint iteration.

The existing algorithm, as presented in Section
\ref{section:basic-algorithm}, consists of two iterative operations:
fixpoint iteration and $\max$-strategy iteration. Each of these
operations consists of naively ``evaluating'' the system repeatedly
until a further evaluation results in no change. Gawlitza et
al.~proved that these iterations must converge in a finite number of
steps~\cite{Gawlitza:2007:PFC:1762174.1762203}, but in practice this
naive approach performs many more operations than are necessary, in
many cases re-calculating results which are already known.

\section{Preliminaries}

In order to aid our explanation of these algorithms we will now define
a few terms and notations. We will assume a fixed, finite set $X$ of
variables. We will also define $\CZ = \Z \cup \{-\infty, \infty\}$.

\begin{definition}
  A \textbf{Variable Assignment} is a partial function, $\rho: X \to
  \CZ$, from a variable to a value. The domain of a variable
  assignment, $\rho$, is represented by $\mathsf{domain}(\rho)
  \subseteq X$.

  An underlined value, $\underline{a}$, denotes the constant variable
  assignment $\{ x \mapsto a \mid x \in X \}$.

  Variable assignments $\rho \in X \to \CZ$ and $\varrho \in X \to
  \CZ$ may be composed with the $\oplus$ operator in the following
  way:

  \begin{align*}
    (\rho \oplus \varrho)(x) = \left\{\begin{array}{lc}
        \varrho(x) & x \in \mathsf{domain}(\varrho) \\
        \rho(x) & \mbox{otherwise}
      \end{array}\right.
  \end{align*} 
\end{definition}
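Operationally, $\oplus$ is a right-biased merge of partial functions. A small Python sketch (the dictionary encoding is ours, purely for illustration):

```python
def oplus(rho, varrho):
    """Compose two variable assignments: wherever varrho is defined
    its value wins; elsewhere we fall back to rho."""
    composed = dict(rho)      # start from rho ...
    composed.update(varrho)   # ... and override on domain(varrho)
    return composed

rho = {"x": 1, "y": 2}
varrho = {"y": 10}
assert oplus(rho, varrho) == {"x": 1, "y": 10}
```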

\begin{definition}
  \textbf{Expressions:} We will consider expressions, $e \in E$, as $e
  : (X \to \CZ) \to \CZ$, a mapping from a variable assignment to the
  expression's value in that assignment.

  Expressions have two possible forms:
  \begin{itemize}
    \item $e = f( e_1, e_2, ..., e_k )$, with $f: \CZ^k \to \CZ$ and
      $e_1, e_2, ..., e_k \in E$ and $e(\rho) = f(e_1(\rho),
      e_2(\rho), ..., e_k(\rho))$.
    \item $e = x$ for some $x \in X$, with $e(\rho) = \rho(x)$
  \end{itemize}
  
  The subset of expressions of the form $\max\{ e_1, e_2, ... e_k \}$,
  with $e_1, e_2, ..., e_k \in E$ are referred to
  as \emph{$\max$-expressions}, denoted by $E_{\max} \subset E$.

  For an expression $e$, the set of expressions directly containing
  it, $\{ e_\parent \mid e_\parent = o( e_1, e_2, ... e_k ), o \in
  \Operators, e_i = e \text{ for some } i \}$, is denoted
  $\parents(e)$.
\end{definition}
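Expressions of both forms can be modelled directly as functions from a variable assignment to a value. A brief sketch in Python, with illustrative helper names (\texttt{var}, \texttt{apply}, \texttt{const}) of our own:

```python
def var(x):
    """The expression e = x: evaluation looks x up in the assignment."""
    return lambda rho: rho[x]

def apply(f, *subexprs):
    """The expression e = f(e1, ..., ek): evaluate each sub-expression
    in rho and combine the resulting values with f."""
    return lambda rho: f(*(e(rho) for e in subexprs))

def const(c):
    """A constant expression, ignoring the assignment."""
    return lambda rho: c

# e = max(x, min(y, 0))
e = apply(max, var("x"), apply(min, var("y"), const(0)))
assert e({"x": -3, "y": 5}) == 0
```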

\begin{definition}
  \textbf{Equation System:} An equation system is a mapping from
  variables to expressions. $\system : X \to E_\system$, where
  $E_\system \subset E$. As each element of $E$ is also a function $(X
  \to \CZ) \to \CZ$, so an equation system can be considered as a
  function $\system : X \to ((X \to \CZ) \to \CZ)$ (which is
  equivalent to the above).

  Alternatively an equation system can be represented as a function
  $\system : (X \to \CZ) \to (X \to \CZ)$, $\system(\rho)[x] =
  e_x(\rho)$, taking a variable assignment $\rho$ and returning the
  result of each $e_x$'s evaluation in $\rho$.

  These two functional forms are equivalent and correspond to changing
  the argument order in the following way: $\system(\rho)[x] =
  \system[x](\rho)$. We use both forms as a convenience for when we
  only wish to use one argument. The representations are
  differentiated by context (as the arguments are of differing types)
  as well as by whether parentheses or brackets are used.

  The expressions in $E_\system$ are referred to as ``right hand
  sides'' of the equation system $\system$. We denote the set of all
  equation systems by $\Systems$.

  We define an inverse mapping, $\system^{-1} : E \to 2^X$, taking an
  expression $e$ to the set of variables $x$ for which $e$ is a
  ``subexpression'' of $\system[x]$. This is captured in the following
  definition:
  \begin{align*}
    \system^{-1}(e) &= \{ x \mid \system[x] = e \} \cup
    \bigcup_{e_\parent \in \parents(e)} \system^{-1}(e_\parent)
  \end{align*}
\end{definition}

\begin{definition}
  \textbf{Dependencies:} A variable or expression $x$ is said to
  \emph{depend on} a variable $y$ if there exist variable assignments
  $\rho, \varrho$ with $x(\rho) \ne x(\varrho)$, where
  \begin{align*}
    \rho(z) &= \varrho(z) & \forall z \in X \backslash \{y\} \\
    \rho(y) &\ne \varrho(y)
  \end{align*}

  If $x$ depends on $y$ then $y$ is said to \emph{influence} $x$.
\end{definition}
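This definition can be checked directly on a candidate witness pair: $x$ depends on $y$ exactly when some pair of assignments differing only at $y$ gives $x$ different values. A hypothetical check in Python:

```python
def differs_at(expr, y, rho, alternative):
    """Evaluate expr under rho and under the assignment agreeing with
    rho everywhere except at y; report whether the value changes."""
    varrho = dict(rho)
    varrho[y] = alternative
    return expr(rho) != expr(varrho)

expr = lambda rho: min(0, rho["y"])  # the right-hand side of x

# A witness pair exists, so x depends on y ...
assert differs_at(expr, "y", {"y": -5}, -7)
# ... even though some pairs leave the value unchanged (clamped at 0).
assert not differs_at(expr, "y", {"y": 5}, 7)
```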

\section{Fixpoint Iteration}
\subsection{Kleene Iteration}

A simple approach to fixpoint iteration over monotonic equations is
to iterate over the whole system repeatedly until a re-evaluation
results in no change to the value of any variable. This approach will
always reach the least/greatest solution if there is one to be found,
but it will often perform many more evaluations than are
necessary. This algorithm is presented in Listing \ref{algo:kleene}.

\begin{algorithm}[H]
  \begin{algorithmic}
    \Function {solvefixpoint} {$\system \in \Systems$}
      \State $k = 0$
      \State $\rho_0 = \underline{\infty}$
      \Repeat
        \State $\rho_{k+1} = \system( \rho_{k} )$
        \State $k = k + 1$
      \Until {$\rho_{k-1} = \rho_k$}
      \State \Return $\rho_k$
    \EndFunction
  \end{algorithmic}
  \caption{The Kleene iteration algorithm for solving fixpoint
    equations for their greatest solutions.}
  \label{algo:kleene}
\end{algorithm}

In each iteration the entire system is evaluated, which is
considerably inefficient in practice: many right-hand sides are
repeatedly re-evaluated only to yield the same value. An approach
which evaluates only smaller portions of the system in each iteration
would therefore be a significant improvement.
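As an illustration only (not the thesis implementation), Listing \ref{algo:kleene} can be rendered in Python with the system encoded as a dictionary of right-hand-side functions:

```python
import math

def solve_fixpoint(system):
    """Kleene iteration for greatest solutions: start every variable
    at +infinity and re-evaluate the whole system until a pass
    changes nothing."""
    rho = {x: math.inf for x in system}
    while True:
        new_rho = {x: rhs(rho) for x, rhs in system.items()}
        if new_rho == rho:
            return rho
        rho = new_rho

# x = min(0, y); y = x -- the greatest solution is x = y = 0
system = {
    "x": lambda rho: min(0, rho["y"]),
    "y": lambda rho: rho["x"],
}
assert solve_fixpoint(system) == {"x": 0, "y": 0}
```

Note that every iteration re-evaluates every right-hand side, which is precisely the inefficiency discussed above.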

An additional deficiency of Kleene iteration is that it is not
guaranteed to terminate for all fixpoints. An example system is
presented in Figure \ref{fig:kleene-infinite}, where our first
iteration will give $\{ x \mapsto 0, y \mapsto 0 \}$, leading to a
second iteration resulting in $\{ x \mapsto -1, y \mapsto 0\}$. The
iteration will continue without bound, tending towards $\{ x \mapsto
-\infty, y \mapsto 0 \}$ rather than the greatest fixpoint of $\{
x \mapsto -\infty, y \mapsto -\infty \}$.

\begin{figure}[H]
  \begin{align*}
    x &= \min(0, x - 1) \\
    y &= \min(0, \infty \oplus x) &
    \text{where } \infty \oplus -\infty &= -\infty
  \end{align*}
  \caption{An example equation system for which Kleene iteration will
    not terminate}
  \label{fig:kleene-infinite}
\end{figure}

This particular non-termination of Kleene iteration is not a
significant problem in our application, however: it has been shown by
Gawlitza et al.~\cite{Gawlitza:2007:PFC:1762174.1762203} that Kleene
iteration in this specific case is restricted to at most $|X|$
iterations. It does, however, require $\Theta(|X|^3)$ right-hand-side
evaluations, which is very inefficient.

\subsection{W-DFS algorithm} \label{sec:wdfs}

The W-DFS algorithm of Seidl et al.~takes into account dependencies
between variables as it solves the system. By taking into account
dependencies it can leave portions of the system unevaluated when it
is certain that those values have not changed. The algorithm is
presented in Listing \ref{algo:wdfs}.

\begin{algorithm}
  \begin{algorithmic}
    \Globals
    \begin{tabularx}{0.9\textwidth}{rX}
      $D$ & $: X \to \CZ$, a mapping from variables to their current
      values \\
      
      $\inflFP$ & $: X \to 2^X$, a mapping from a variable to the
      variables it \emph{may} influence \\

      $\stableFP$ & $\subseteq X$, a set of ``stable'' variables \\

      $\system$ & $\in \Systems$, an equation system \\
    \end{tabularx}
    \EndGlobals
  \end{algorithmic}

  \begin{algorithmic}
    \Function {init} {$s \in \Systems$}
      \State $D = \underline{\infty}$
      \State $\inflFP = \{x \mapsto \emptyset \mid \forall x \in X\}$
      \State $\stableFP = \emptyset$
      \State $\system = s$
      \State \Return $\lambda x . (\solve(x); D[x])$
    \EndFunction
    \label{algo:wdfs:init}
  \end{algorithmic}

  \begin{algorithmic}
    \Function {eval} {$x \in X$, $y \in X$}
    \Comment{Evaluate $y$ and record that $x$ depends on $y$}
      \State $\solve(y)$
      \State $\inflFP[y] = \inflFP[y] \cup \{x\}$
      \State \Return $D[y]$
    \EndFunction 
  \end{algorithmic}

  \begin{algorithmic}
    \Procedure {solve} {$x \in X$}
    \Comment{Solve $x$ and place its value in $D$}
    \If {$x \not \in \stableFP$}
      \State $e_x = \system[x]$
      \State $\stableFP = \stableFP \cup \{x\}$
      \State $v = e_x( \lambda y . \eval(x, y) )$
      \If {$v \ne D[x]$}
        \State $D = \{ x \mapsto v \} \oplus D$
        \State $W = \inflFP[x]$
        \State $\inflFP[x] = \emptyset$
        \State $\stableFP = \stableFP \backslash W$
        \For {$v \in W$}
          \State $\solve(v)$
        \EndFor
      \EndIf
    \EndIf
    \EndProcedure
  \end{algorithmic}

  \caption{The W-DFS mentioned in \cite{DBLP:tr/trier/MI96-11} and
    presented in \cite{fixpoint-slides}, modified to find
    greatest-fixpoints of monotonic fixpoint equations}
  \label{algo:wdfs}
\end{algorithm}

The function $\init: \Systems \to (X \to \CZ)$ in
Listing \ref{algo:wdfs:init} takes an equation system as its argument
and returns a function which queries the greatest fixpoint. Each
fixpoint value is solved on demand, evaluating only the subset of the
equation system which is required to ensure that the correct solution
is found.

The algorithm performs no evaluation of any variables unless their
value is requested (whether externally or internally). Once a value is
requested the right-hand-side is evaluated, which may in turn request
the values of other variables (through the $\eval$ function).

In the case of mutually-recursive variables this would result in
infinite recursion, as the evaluation of each variable would first
require the evaluation of the other. In order to prevent this, a set
$\stableFP$ is maintained of variables whose values are either
currently being computed or have already been computed. This set
provides a ``base'' for the recursion to terminate, at which point the
value is simply looked up in $D$.

Whenever a variable's value changes it will ``destabilise'' and
re-evaluate any variables which \emph{may} depend on it for their
values, as they may change their value if re-evaluated (and hence are
not certainly stable). If these values change then they, too, will
destabilise their dependent variables, and so on.

The W-DFS algorithm provides us with a \emph{local},
\emph{demand-driven} solver for greatest fixpoints of monotonic
fixpoint problems.

\begin{figure}[H]
  \begin{align*}
    x &= \min(0, y) \\
    y &= x \\
    a &= \min(0, b) \\
    b &= c \\
    c &= d \\
    d &= e \\
    e &= a
  \end{align*}
  \caption{Example equation system for a W-DFS solve}
  \label{fig:wdfs-example}
\end{figure}

Because each value is only computed when requested, the solver avoids
calculating results which are irrelevant to the final outcome. To
provide a more concrete example, consider the equation system
presented in Figure \ref{fig:wdfs-example}. This equation system can
be separated into two independent equation systems, one consisting of
$\{x, y\}$ and the other of $\{a, b, c, d, e\}$. In order to find the
value of a variable in either of these sub-systems it is necessary to
evaluate the entire sub-system, but it is not necessary to evaluate
the other sub-system. To compute the value of $x$ in the greatest
fixpoint it is unnecessary to compute the value of the variable $d$,
but it is necessary to compute the value of $y$. The W-DFS algorithm
will only evaluate the necessary sub-system for a requested variable,
leaving the other sub-system untouched. In order to capture this property
we will define the notion of a \emph{local} solver.

\begin{definition}
  \textbf{Local:} A solver is said to be \emph{local} if, for some
  equation system, $\system$, and some $x \in X$, it can solve for
  $\system[x]$ without evaluating every element of $E_\system$.
\end{definition}

By this definition, then, W-DFS is a \emph{local} solver for
greatest-fixpoint problems.
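To make the control flow of Listing \ref{algo:wdfs} concrete, the following Python sketch mirrors its $\init$/$\eval$/$\solve$ structure (an illustrative encoding of ours: each right-hand side receives a lookup function playing the role of $\eval$):

```python
import math

def init(system):
    """Return a query function over the greatest fixpoint of `system`,
    solving variables on demand as in W-DFS."""
    D = {x: math.inf for x in system}
    infl = {x: set() for x in system}
    stable = set()

    def eval_(x, y):
        # evaluate y, and record that x depends on y
        solve(y)
        infl[y].add(x)
        return D[y]

    def solve(x):
        if x in stable:
            return
        stable.add(x)
        v = system[x](lambda y: eval_(x, y))
        if v != D[x]:
            D[x] = v
            work, infl[x] = infl[x], set()
            stable.difference_update(work)  # destabilise dependents
            for w in work:
                solve(w)

    def query(x):
        solve(x)
        return D[x]

    return query

evaluated = set()  # records which of the second sub-system's
                   # right-hand sides were ever evaluated
system = {
    "x": lambda get: min(0, get("y")),
    "y": lambda get: get("x"),
    "a": lambda get: (evaluated.add("a"), min(0, get("b")))[1],
    "b": lambda get: (evaluated.add("b"), get("a"))[1],
}
query = init(system)
assert query("x") == 0
assert evaluated == set()   # the {a, b} sub-system is never touched
```

Querying $x$ solves only the $\{x, y\}$ sub-system; the right-hand sides of the other sub-system are never evaluated, which is exactly the locality property defined above.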



\section{$\max$-strategy Iteration}

The problem of $\max$-strategy iteration is to find a mapping
$\sigma: E_{\max} \to E$ from each $\max$-expression to its greatest
sub-expression. We will use $\sigma: \Systems \to \Systems$ as a
shorthand to denote the system obtained by replacing each
$\max$-expression, $e$, in $\system$ with $\sigma(e)$.

In the particular case of \emph{monotonic}, \emph{expansive} operators
it has been shown\cite{Gawlitza:2007:PFC:1762174.1762203} that this
process will take a finite number of iterations if we ensure at each
iteration that we make a \emph{strict} improvement on our
strategy. This means that each time we improve our strategy we must
make it greater in at least one $\max$-expression, and no worse in the
others.

To this end a new function, $P_{\max}: ((E_{\max} \to E) \times (X \to \CZ))
\to (E_{\max} \to E)$, is used below as a ``strategy improvement
operator''. $P_{\max}$ takes a $\max$-strategy and a variable
assignment and returns a new $\max$-strategy which constitutes an
``improvement'' of the strategy in the variable assignment $\rho$. If
we have a $\max$-strategy $\sigma$, a variable assignment $\rho$, and
$\varsigma = P_{\max}(\sigma, \rho)$, then:
\begin{align*}
  \sigma(\system)(\rho) \le \varsigma(\system)(\rho)
\end{align*}

If it is possible for $\sigma$ to be improved then $P_{\max}$ will
return an improvement, otherwise it will return $\sigma$ unchanged.
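For intuition, one way such an operator may behave is sketched below, under a toy encoding of ours in which each $\max$-expression is a list of sub-expression functions and a strategy picks an index:

```python
def improve(strategy, rho, max_exprs):
    """One strategy-improvement step: for each max-expression, switch
    to a sub-expression whose value under rho strictly beats the
    currently chosen one; otherwise leave the choice unchanged."""
    improved = dict(strategy)
    for name, subexprs in max_exprs.items():
        current = subexprs[strategy[name]](rho)
        for i, e in enumerate(subexprs):
            if e(rho) > current:
                improved[name] = i
                break
    return improved

# max(x, 3) with x currently 0: the constant arm is an improvement
max_exprs = {"m": [lambda rho: rho["x"], lambda rho: 3]}
assert improve({"m": 0}, {"x": 0}, max_exprs) == {"m": 1}
# A second application at the same rho finds nothing better
assert improve({"m": 1}, {"x": 0}, max_exprs) == {"m": 1}
```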

\subsection{Naive approach}

The problem of $\max$-strategy iteration is one of repeatedly
improving the strategy until no further improvement can be made. A
simple approach is therefore a straightforward loop which improves the
strategy at each step until an equilibrium point is reached. This is
the approach presented in Listing \ref{algo:naive-strategy}.

\begin{algorithm}[H]
  \begin{algorithmic}
    \Assumptions
      \begin{tabularx}{0.9\textwidth}{rX}
        $\sigma$ & $: E_{\max} \to E$, a $\max$-strategy \\

        $\system$ & $\in \Systems$, an equation
        system \\

        $\rho$ & $: X \to \CZ$, a variable assignment \\

        $P_{\max}$ & $: ((E_{\max} \to E) \times (X \to \CZ)) \to (E_{\max}
        \to E)$, a $\max$-strategy improvement operator \\
      \end{tabularx}
    \EndAssumptions

    \Function {solvestrategy} {$\system \in \Systems$}
      \State $k = 0$
      \State $\sigma_0 = \{ e \mapsto -\infty \mid e \in E_{\max} \}$
      \State $\rho_0 = \underline{-\infty}$
      \Repeat
        \State $\sigma_{k+1} = P_{\max}(\sigma_k, \rho_k)$
        \State $\rho_{k+1} = \solve \fixpoint(\sigma_{k+1}(\system))$
        \State $k = k + 1$
      \Until {$\sigma_{k-1} = \sigma_k$}
      \State \Return $\sigma_k$
    \EndFunction
  \end{algorithmic}
  \caption{The naive approach to strategy iteration}
  \label{algo:naive-strategy}
\end{algorithm}

This approach is quite similar to that of Kleene iteration, and from
the results of Gawlitza et al.\cite{Gawlitza:2007:PFC:1762174.1762203}
it is known that this iteration is guaranteed to terminate. This naive
approach is an inefficient way to solve this problem, however, for
many of the same reasons as Kleene iteration. A large amount of time
will be spent attempting to improve portions of the strategy for which
no improvement can be made.
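Under the same kind of toy encoding (each variable's right-hand side a $\max$ over a list of \texttt{arms}, the fixpoint solved by naive re-evaluation as in Kleene iteration), the loop of Listing \ref{algo:naive-strategy} can be sketched as:

```python
def solve_strategy(arms, sigma):
    """Naive strategy iteration: solve the fixpoint of the system
    induced by the current strategy, take one improvement step, and
    repeat until the strategy no longer changes."""
    while True:
        # fixpoint of the induced system, by naive re-evaluation
        rho = {x: float("-inf") for x in arms}
        while True:
            new = {x: arms[x][sigma[x]](rho) for x in arms}
            if new == rho:
                break
            rho = new
        # one strategy-improvement step
        new_sigma = dict(sigma)
        for x, subexprs in arms.items():
            for i, e in enumerate(subexprs):
                if e(rho) > subexprs[sigma[x]](rho):
                    new_sigma[x] = i
                    break
        if new_sigma == sigma:
            return sigma, rho
        sigma = new_sigma

# x = max(3, 7): iteration settles on the second arm and its fixpoint
arms = {"x": [lambda rho: 3, lambda rho: 7]}
sigma, rho = solve_strategy(arms, {"x": 0})
assert sigma == {"x": 1} and rho == {"x": 7}
```

Each pass re-solves the fixpoint from scratch and re-examines every arm of every $\max$-expression, which is exactly the wasted work the text describes.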

\subsection{Adapted W-DFS algorithm} \label{sec:adapted-wdfs}

The $\max$-strategy iteration can be viewed as an accelerated fixpoint
problem. We are attempting to find a strategy, $\sigma: E_{\max} \to
E$ that will result in the greatest value for each $e \in
E_{\max}$. Therefore if we consider our ``variables'' to be
$\max$-expressions and our ``values'' to be their subexpressions then
we can solve for the best $\max$-strategy using a similar approach to
the W-DFS algorithm presented above in Section \ref{sec:wdfs}. Listing
\ref{algo:adapted-wdfs} presents one variation on W-DFS to allow it to
solve $\max$-strategy iteration problems.

\begin{algorithm}[H]
  \begin{algorithmic}
    \Globals
    \begin{tabularx}{0.9\textwidth}{rX}
      $\sigma$ & $: (E_{\max} \to E)$, a mapping from
      $\max$-expressions to a subexpression \\

      $\inflSI$ & $: E_{\max} \to 2^{E_{\max}}$, a mapping from a
      $\max$-expression to the $\max$-expressions it influences \\

      $\stableSI$ & $\subseteq E_{\max}$, the set of all
      $\max$-expressions whose strategies are stable \\

      $\system$ & $\in \Systems$, an equation system \\
      
      $P_{\max}$ & $: ((E_{\max} \to E) \times (X \to \CZ)) \to (E_{\max}
      \to E)$, a $\max$-strategy improvement operator \\
    \end{tabularx}
    \EndGlobals
  \end{algorithmic}

  \begin{algorithmic}
    \Function {init} {$s \in \Systems$}
      \State $\sigma = \underline{-\infty}$
      \State $\inflSI = \{x \mapsto \emptyset \mid \forall x \in E_{\max}\}$
      \State $\stableSI = \emptyset$
      \State $\system = s$
      \State \Return $\lambda x . (\solve(x); \sigma[x])$
    \EndFunction
    \label{algo:adapted-wdfs:init}
  \end{algorithmic}

  \begin{algorithmic}
    \Function {eval} {$x \in E_{\max}$, $y \in E_{\max}$}
      \State $\solve(y)$
      \State $\inflSI[y] = \inflSI[y] \cup \{x\}$
      \State \Return $\sigma[y]$
    \EndFunction
  \end{algorithmic}

  \begin{algorithmic}
    \Procedure {solve} {$x \in E_{\max}$}
    \If {$x \not \in \stableSI$}
      \State $\stableSI = \stableSI \cup \{x\}$
      \State $\rho = \solve \fixpoint(\sigma(\system))$
      \State $v = P_{\max}(\lambda y . \eval(x, y), \rho)[x]$
      \If {$v \ne \sigma[x]$}
        \State $\sigma = \{ x \mapsto v\} \oplus \sigma$
        \State $W = \inflSI[x]$
        \State $\inflSI[x] = \emptyset$
        \State $\stableSI = \stableSI \backslash W$
        \For {$v \in W$}
        \State $\solve(v)$
        \EndFor
      \EndIf
    \EndIf
    \EndProcedure
  \end{algorithmic}

  \caption{W-DFS, this time modified to find the best $\max$-strategy.}
  \label{algo:adapted-wdfs}
\end{algorithm}

This approach retains the benefits of the W-DFS algorithm for solving
fixpoints, but applies them to $\max$-strategy iteration problems. It
will take into account dependencies between the $\max$-expressions and
only attempt to improve the strategy for a given $\max$-expression if
one of the $\max$-expressions which may influence it has changed.

For this particular algorithm to work, however, we must assume another
property of $P_{\max}$. In Listing \ref{algo:adapted-wdfs} we take the
value of $P_{\max}(\lambda y . \eval(x, y), \rho)[x]$; we improve the
strategy, then use only the strategy for $x$. In order for this
algorithm to be correct $P_{\max}$ must always improve the strategy at
$x$ if it is possible to do so. If $P_{\max}$ did not improve the
strategy at $x$, while a permissible improvement existed, then we
would consider $x$ to be ``stable'' and would not attempt to improve
our strategy at $x$ until the improvement of another portion of our
strategy invalidated $x$. It is not guaranteed that another strategy
improvement will invalidate the strategy at $x$, however, so the
strategy at $x$ may still admit an improvement when the algorithm
terminates. If the strategy at $x$ can still be improved then we have
not reached the solution of our $\max$-strategy iteration.

Additionally, in order to remain efficient, $P_{\max}$ should not
attempt to improve the strategy for any $\max$-expression until that
expression is requested. Whether or not $P_{\max}$ is lazy in this way
does not affect the correctness of the algorithm; it only makes each
$\max$-strategy improvement step more expensive, as the extra work
spent improving other portions of the strategy is discarded, with only
$P_{\max}(...)[x]$ being used.

\section{Local Demand-driven Strategy Improvement (LDSI)}

W-DFS can be used to speed up both the $\max$-strategy iteration and
the fixpoint iteration as two independent processes, but each time we
seek to improve our strategy we compute at least a subset of the
greatest fixpoint from a base of no information. We would like to be
able to provide some channel for communication between the two W-DFS
instances to reduce the amount of work that must be done at each
stage. By supplying each other with information about which parts of
the system have changed, we can resume partway through the process of
finding a fixpoint, thereby avoiding the repetition of work which has
already been done.

The new \emph{Local Demand-driven Strategy Improvement} algorithm,
\emph{LDSI}, seeks to do this. By adding an ``invalidate'' procedure
to the fixpoint iteration we provide a method for the $\max$-strategy
to invalidate fixpoint variables, and we can modify the fixpoint
iteration algorithm to report which variables it has modified with
each fixpoint iteration step. We then have a $\max$-strategy iteration
which, at each strategy-improvement step, invalidates a portion of the
current fixpoint iteration which may depend on the changed
strategy. The fixpoint iteration then re-stabilises each variable as
it is requested, tracking which values have changed since the last
time they were stabilised. The $\max$-strategy iteration then uses
this list of changed variables to determine which portions of the
current strategy must be re-stabilised once more. This process
continues until the $\max$-strategy stabilises (at which point the
fixpoint will also stabilise).

Both portions of the LDSI algorithm are \emph{demand driven}, and so
any necessary evaluation of strategies or fixpoint values is delayed
until the point when it is required. This means that the solver will
be \emph{local} (see Section \ref{sec:wdfs}).






This algorithm is presented in four parts.

\begin{itemize}
\item
  Section \ref{sec:ldsi:top-level} presents the global state and entry
  point to the algorithm.

\item
  Section \ref{sec:ldsi:fixpoint} presents the fixpoint portion of the
  algorithm. This portion bears many similarities to the W-DFS
  algorithm presented in Section \ref{sec:wdfs}, with a few
  modifications to track changes and allow external invalidations of
  parts of the solution.

\item
  Section \ref{sec:ldsi:max-strategy} presents the $\max$-strategy
  portion of the algorithm. This portion is quite similar to the
  Adapted W-DFS algorithm presented in Section \ref{sec:adapted-wdfs},
  with some modifications to allow for communication with the fixpoint
  portion of the algorithm.

\item
  Section \ref{sec:ldsi:correctness} argues the correctness of this
  approach for finding the least fixpoints of monotonic, expansive
  equation systems involving $\max$-expressions.
\end{itemize}

\subsection{Top level} \label{sec:ldsi:top-level}

\begin{algorithm}[H]
  \begin{algorithmic}
    \Globals
    \begin{tabularx}{0.9\textwidth}{rX}
      $D$ & $: X \to \CZ$, a mapping from variables to their current
      value \\

      $D_\old$ & $: X \to \CZ$, a mapping from variables to their
      last stable value \\
      
      $\inflFP$ & $: X \to 2^X$, a mapping from a variable to the
      variables it \emph{may} influence \\

      $\stableFP$ & $\subseteq X$, a set of ``stable'' variables \\

      $\touchedFP$ & $\subseteq X$, a set of variables which have been
      ``touched'' by the fixpoint iteration, but not yet acknowledged
      by the $\max$-strategy iteration \\

      \\

      $\sigma$ & $: E_{\max} \to E$, a mapping from
      $\max$-expressions to a subexpression \\

      $\inflSI$ & $: E_{\max} \to 2^{E_{\max}}$, a mapping from a
      $\max$-expression to the sub-expressions it influences \\

      $\stableSI$ & $\subseteq E_{\max}$, the set of all
      $\max$-expressions whose strategies are stable \\

      $P_{\max}$ & $: ((E_{\max} \to E) \times (X \to \CZ)) \to
      (E_{\max} \to E)$, a $\max$-strategy improvement operator \\
      
      \\

      $\system$ & $\in \Systems$, an equation system \\
    \end{tabularx}
    \EndGlobals
  \end{algorithmic}
  \caption{Global variables used by the LDSI algorithm}
  \label{algo:ldsi:globals}
\end{algorithm}

These variables are mostly just a combination of the variables found
in Sections \ref{sec:wdfs} and \ref{sec:adapted-wdfs}. The exceptions
are $\touchedFP$, a newly introduced variable which tracks the
variables touched by fixpoint iteration steps, and $D_\old$, which
records each variable's last stable value.

\begin{algorithm}[H]
  \begin{algorithmic}
    \Function {init} {$s \in \Systems$}
      \State $D = \underline{-\infty}$
      \State $D_\old = \underline{-\infty}$
      \State $\inflFP = \{x \mapsto \emptyset \mid \forall x \in X\}$
      \State $\stableFP = X$
      \State $\touchedFP = \emptyset$

      \State $\sigma = \underline{-\infty}$
      \State $\inflSI = \{x \mapsto \emptyset \mid \forall x \in E_{\max}\}$
      \State $\stableSI = \emptyset$

      \State $\system = s$

      \State \Return $\lambda x . (\solve \strategy (x); \solve
      \fixpoint (x) ; D[x])$
    \EndFunction
  \end{algorithmic}
  \caption{LSDI init function and entry point}
  \label{algo:ldsi:init}
\end{algorithm}

The $\init$ function is, similarly, just a combination of the $\init$
functions presented in Sections \ref{sec:wdfs} and
\ref{sec:adapted-wdfs}. The $\touchedFP$ and $D_\old$ variables are
also initialised, as they were not present in either of the previous
$\init$ functions.

$D$ and $D_\old$ are both initialised to $\underline{-\infty}$ and
$\stableFP$ is initially the entire set of variables. These initial
values correspond exactly to the result of a complete fixpoint
iteration over every variable under the initial $\max$-strategy, which
maps each $\max$-expression to $-\infty$.

The function returned by $\init$ has type $X \to \CZ$: it performs
the entire process of $\max$-strategy iteration and fixpoint
iteration, calculating the final value of each variable in the least
fixpoint and returning that value in $\CZ$.



\subsection{Fixpoint iteration} \label{sec:ldsi:fixpoint}

The process of fixpoint iteration is extremely similar to the fixpoint
iteration algorithm presented in Section \ref{sec:wdfs}. The only
difference is that we have an $\invalidate \fixpoint$ procedure for
the $\max$-strategy iteration to invalidate a portion of the
system. By invalidating only a portion of the system we reuse the
values which have already been computed to avoid performing additional
work. The last stable value is also stored in this procedure to
identify \emph{changed} values, rather than only \emph{unstable} ones.

\begin{algorithm}[H]
  \begin{algorithmic}
    \Function {evalfixpoint} {$x \in X$, $y \in X$}
      \State $\solve \fixpoint(y)$
      \State $\inflFP[y] = \inflFP[y] \cup \{x\}$
      \State \Return $D[y]$
    \EndFunction
  \end{algorithmic}
  \caption{Utility function used to track fixpoint variable
    dependencies. This function is very similar to the $\eval$
    function presented in Listing \ref{algo:wdfs}}
  \label{algo:ldsi:fixpoint:eval}
\end{algorithm}

This function is similar to the equivalent function in the W-DFS
algorithm, except that $\solve$ has been renamed to $\solve
\fixpoint$. $\eval \fixpoint$ serves exactly the same purpose as the
$\eval$ function in Listing \ref{algo:wdfs}.
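As a concrete illustration, this dependency-recording evaluation can
be sketched in Python as follows. All names are illustrative
assumptions, not the thesis code, and $y$ is taken to be already
stable so that \texttt{solve\_fixpoint} is a no-op here.

```python
D = {"y": 5}
infl_fp = {"y": set()}

def solve_fixpoint(y):
    pass                      # y is already stable in this toy setting

def eval_fixpoint(x, y):
    """Evaluate y on behalf of x, noting the dependency."""
    solve_fixpoint(y)         # stabilise y before reading it
    infl_fp[y].add(x)         # y now influences x
    return D[y]

value = eval_fixpoint("x", "y")
```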

\begin{algorithm}[H]
  \begin{algorithmic}
    \Procedure {invalidatefixpoint} {$x \in X$}
    \Comment{Invalidate a fixpoint variable}
    \If {$x \in \stableFP$}
      \State $\stableFP = \stableFP \backslash \{x\}$
      \State $D_\old[x] = D[x]$
      \State $D[x] = \infty$
      \State $W = \inflFP[x]$
      \State $\inflFP[x] = \emptyset$
      \State $\touchedFP = \touchedFP \cup \{ x \}$
      \For {$v \in W$}
        \State $\invalidate \fixpoint(v)$
      \EndFor
    \EndIf
    \EndProcedure
  \end{algorithmic}
  \caption{Fixpoint subroutine called from the $\max$-strategy
    iteration portion to invalidate fixpoint variables}
  \label{algo:ldsi:fixpoint:invalidate}
\end{algorithm}

This procedure is not called in the fixpoint iteration process, but is
rather the method by which the $\max$-strategy iteration can
communicate with the fixpoint-iteration. A call to $\invalidate
\fixpoint$ indicates to the fixpoint-iteration that a variable's value
\emph{may} have changed as a result of a $\max$-strategy improvement,
and that it will require re-calculation. The invalidation provided by
this procedure permits us to re-evaluate only a partial system (the
solving of which is also delayed until requested by the $\solve
\fixpoint$ procedure).

While invalidating variables we also note their $\old$ value, and mark
them as ``touched''. As the $\old$ value is only set in this
procedure, and only when $x$ is stable, it gives us the ability to see
whether a variable has stabilised to a new value, or whether it has
merely re-stabilised to the same value as last time. This can prevent
us from doing extra work in the $\max$-strategy iteration phase.
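The invalidation cascade, together with the old-value and touched-set
bookkeeping, can be sketched in Python as follows. The two-variable
system and all names here are hypothetical illustrations of the
listing above.

```python
import math

D = {"x": 3, "y": 5}
D_old = {}
infl_fp = {"x": {"y"}, "y": set()}   # y reads x, so x influences y
stable_fp = {"x", "y"}
touched_fp = set()

def invalidate_fixpoint(x):
    """Destabilise x and, transitively, everything x influences."""
    if x in stable_fp:
        stable_fp.discard(x)
        D_old[x] = D[x]              # remember the last stable value
        D[x] = math.inf
        work = infl_fp[x]
        infl_fp[x] = set()
        touched_fp.add(x)            # mark x for later change detection
        for v in work:
            invalidate_fixpoint(v)

invalidate_fixpoint("x")
```

Invalidating \texttt{"x"} also destabilises \texttt{"y"} through the
influence set, and both end up in the touched set with their old
values recorded.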



\begin{algorithm}[H]
  \begin{algorithmic}
    \Procedure {solvefixpoint} {$x \in X$}
    \If {$x \not \in \stableFP$}
      \State $\stableFP = \stableFP \cup \{ x \}$
      \State $v = \sigma(\system[x])( \lambda y . \eval \fixpoint(x, y) )$
      \If {$v \ne D[x]$}
        \State $D = \{ x \mapsto v \} \oplus D$
        \State $W = \inflFP[x]$
        \State $\inflFP[x] = \emptyset$
        \State $\stableFP = \stableFP \backslash W$
        \For {$w \in W$}
          \State $\solve \fixpoint(w)$
        \EndFor
      \EndIf
    \EndIf
    \EndProcedure
  \end{algorithmic}
  \caption{The subroutine of the fixpoint iteration responsible for
    solving for each variable. This is very similar to the $\solve$
    procedure in Listing \ref{algo:wdfs}}
  \label{algo:ldsi:fixpoint:solve}
\end{algorithm}

The $\solve \fixpoint$ procedure is extremely similar to the $\solve$
procedure presented in Listing \ref{algo:wdfs}. There are two main
differences: the self-recursive call has been renamed to reflect the
change in the function's name, and instead of using $\system[x]$ we
must use $\sigma(\system[x])$. We use $\sigma$ to provide the
fixpoint-iteration with the current $\max$-strategy. $\sigma$ is
stable for the duration of the $\solve \fixpoint$ execution, so it
will result in fixpoint iteration being performed over a stable
system.

After an evaluation of the $\solve \fixpoint$ procedure, the variable
$x$ will have its value in the current $\max$-strategy's greatest
fixpoint stabilised, and that value will be stored in
$D[x]$. Stabilising this may result in other related variables also
being stabilised.
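The following Python sketch shows this memoised solving with
destabilisation on a toy two-variable system under a fixed strategy.
The right-hand sides stand in for $\sigma(\system[x])$, and all names
are assumptions for illustration only.

```python
import math

NEG_INF = -math.inf

# Right-hand sides under a fixed strategy sigma (illustrative):
#   x = max(1, y),  y = min(x + 1, 3)
system = {
    "x": lambda get: max(1, get("y")),
    "y": lambda get: min(get("x") + 1, 3),
}
D = {v: NEG_INF for v in system}       # all values start at -infinity
infl = {v: set() for v in system}      # reverse dependencies
stable = set()

def eval_fp(x, y):
    """Stabilise y, record that y influences x, and return y's value."""
    solve_fixpoint(y)
    infl[y].add(x)
    return D[y]

def solve_fixpoint(x):
    if x not in stable:
        stable.add(x)                  # mark stable up front (breaks cycles)
        v = system[x](lambda y: eval_fp(x, y))
        if v != D[x]:
            D[x] = v
            work = infl[x]
            infl[x] = set()
            stable.difference_update(work)
            for w in work:             # re-solve everything x influenced
                solve_fixpoint(w)

solve_fixpoint("x")
```

Solving \texttt{"x"} drags \texttt{"y"} along with it, and the two
variables destabilise each other until both settle at the fixpoint
$x = 3$, $y = 3$.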



\subsection{$\max$-strategy iteration} \label{sec:ldsi:max-strategy}



\begin{algorithm}[H]
  \begin{algorithmic}
    \Function {evalstrategy} {$x \in E_{\max}$, $y \in E_{\max}$}
      \State $\solve \strategy(y)$
      \State $\inflSI[y] = \inflSI[y] \cup \{x\}$
      \State \Return $\sigma[y]$
    \EndFunction

    \Function {evalstrategyfixpoint} {$x \in E_{\max}$, $y \in X$}
      \State $\solve \fixpoint(y)$
      \State $\inflSI[\system[y]] = \inflSI[\system[y]] \cup \{ x \}$
      \For {$v \in \touchedFP$}
        \If {$v \in \stableFP$ and $D[v] \ne D_\old[v]$}
          \For {$w \in \inflSI[\system[v]]$}
            \State $\invalidate \strategy(w)$
          \EndFor
        \EndIf
      \EndFor
      
      \State \Return $D[y]$
    \EndFunction
  \end{algorithmic}
  \caption{Functions which evaluate a portion of the
    $\max$-strategy/fixpoint and note a dependency}
  \label{algo:ldsi:strategy:eval}
\end{algorithm}

The $\eval \strategy$ function is essentially the same as the $\eval$
function in Listing \ref{algo:adapted-wdfs}, except that it calls the
$\solve \strategy$ procedure.

The $\solve \fixpoint$ calls in $\eval \strategy \fixpoint$ are
top-level fixpoint-iteration calls, meaning that upon their return we
know that $D$ contains the value it would take in the greatest
fixpoint of the current strategy $\sigma$. This function, therefore,
acts as a simple intermediate layer between the fixpoint-iteration and
the $\max$-strategy iteration to allow for dependencies to be tracked.

Upon the conclusion of the $\solve \fixpoint$ call we must also
inspect the $\touchedFP$ set to determine which variables' values may
have changed. If their values have stabilised since being invalidated,
and if they have stabilised to a different value from their previous
stabilisation, then we will invalidate all strategies which depend on
them. We do not have to invalidate the variable's right hand side
directly, but if it is dependent on its own value (as in $\{ x = x + 1
\}$, for example), then it will be destabilised by the transitivity of
$\invalidate \strategy$.
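A Python sketch of this post-solve inspection might look as follows.
The sets, values, and the \texttt{invalidate\_strategy} stub are all
hypothetical placeholders standing in for the structures in the
listing above.

```python
touched_fp = {"x", "y", "z"}
stable_fp = {"x", "y"}            # z has not restabilised yet; skip it
D = {"x": 3, "y": 5, "z": None}
D_old = {"x": 3, "y": 4, "z": 1}  # y changed (4 -> 5); x restabilised unchanged
infl_si = {"y": {"e1", "e2"}}     # max-expressions that read y's value

invalidated = []

def invalidate_strategy(expr):
    invalidated.append(expr)      # stand-in for the real transitive routine

# Only variables that restabilised to a *new* value trigger invalidation.
for v in touched_fp:
    if v in stable_fp and D[v] != D_old[v]:
        for w in infl_si.get(v, set()):
            invalidate_strategy(w)
```

Here only \texttt{"y"} both restabilised and changed value, so only
the expressions depending on it are invalidated.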

\begin{algorithm}[H]
  \begin{algorithmic} 
    \Procedure {invalidatestrategy} {$x \in E_{\max}$}
      \If {$x \in \stableSI$}
        \State $\stableSI = \stableSI \backslash \{x\}$
        \For {$v \in \inflSI[x]$}
          \State $\invalidate \strategy (v)$
        \EndFor
      \EndIf
    \EndProcedure
  \end{algorithmic}    
  \caption{Invalidate a portion of the $\max$-strategy, destabilising
    all dependent $\max$-expressions}
  \label{algo:ldsi:strategy:invalidate}
\end{algorithm}

Invalidating the $\max$-strategy iteration is essentially the same as
the invalidation in the fixpoint-iteration stage, except that we do
not need to keep track of the last stable values, nor which
$\max$-expressions we have invalidated. All we need to do is remove
them from the $\stableSI$ set, thereby indicating that they need to be
re-stabilised by a $\solve \strategy$ call. All $\max$-expressions which (transitively) depend on $x$ are
themselves destabilised as well.
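In Python this transitive destabilisation could be sketched as follows,
over a hypothetical three-expression dependency chain. Note that, in
contrast to fixpoint invalidation, nothing is recorded beyond removal
from the stable set.

```python
stable_si = {"e1", "e2", "e3"}
infl_si = {"e1": {"e2"}, "e2": {"e3"}, "e3": set()}  # e1 -> e2 -> e3

def invalidate_strategy(x):
    """Remove x from the stable set and recurse on its dependents."""
    if x in stable_si:
        stable_si.discard(x)
        for v in infl_si[x]:
            invalidate_strategy(v)

invalidate_strategy("e1")
```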


\begin{algorithm}[H]
  \begin{algorithmic}    
    \Procedure {solvestrategy} {$x \in E_{\max}$}
    \If {$x \not \in \stableSI$}
      \State $\stableSI = \stableSI \cup \{x\}$
      \State $e = P_{\max}(\lambda y . \eval \strategy(x,y),
                          \lambda y . \eval \strategy \fixpoint(x, y))[x]$
      \If {$e \ne \sigma[x]$}
        \State $\sigma = \{ x \mapsto e \} \oplus \sigma$
        \State $\invalidate \fixpoint(\system^{-1}(x))$
        \State $\stableSI = \stableSI \backslash \inflSI[x]$
        \For {$v \in \inflSI[x]$}
          \State $\solve \strategy(v)$
        \EndFor
      \EndIf
    \EndIf
    \EndProcedure
  \end{algorithmic}
  \caption{The $\max$-strategy portion of the Combined W-DFS.}
  \label{algo:ldsi:solve}
\end{algorithm}

The $\solve \strategy$ procedure mirrors the structure of $\solve
\fixpoint$: an unstable $\max$-expression is re-evaluated using the
strategy improvement operator $P_{\max}$ and, if its strategy choice
changes, the relevant portion of the fixpoint iteration is invalidated
and all dependent $\max$-expressions are re-solved.

\subsection{Correctness} \label{sec:ldsi:correctness}

In order to argue that LDSI is correct we must first assume that the
W-DFS algorithm of Seidl et al., presented in
\cite{DBLP:tr/trier/MI96-11}, is correct and will solve an equation
system for its least/greatest fixpoint.

If we take the correctness of W-DFS as given, then it remains only to
prove that the combination of the two W-DFS instances is correct. For
this it is sufficient to show that the $\invalidate$ calls in both
directions preserve the correctness of the original algorithm.

The fixpoint iteration step invalidates anything that \emph{might}
depend on $x$ while it invalidates $x$, thereby ensuring that any
further calls to $\solve \fixpoint$ will result in a correct value for
the given strategy.

The strategy-iteration step invalidates anything that \emph{might}
depend on $\system[x]$ while it invalidates $x$, thereby ensuring that
any further calls to $\solve \strategy$ will result in a correct
$\max$-strategy for the current variable assignment.