Applications and Demos

(NOTE: The server that previously ran the interpreter is currently down due to my departure from UCI. You can consult the code, but unfortunately you cannot run it. Sorry.)

[At this time, JuliusC is an interpreter. It takes a C program as input, analyzes it, and generates a type-DAG as output. The analysis may require executing the input but, in general, this execution is only partial.]


  1. Considerations

    The type-DAG has several practical applications, and we present some of them in the following. We present six types of applications, each one representative of a particular feature or concept. We generate the type-DAG on demand using a CGI script, but with some limitations.


  2. Notations

    How do we read the output? Consider the example below:

     
    Parsing ...
    Semantic Analysis ...
    Binding Function Calls to Function Definitions ... 
    ----------> get time 0 sec<------
    Static Type Checking ... 
    ----------> get time 0 sec<------
    Checking Return statments... 
    ----------> get time 0 sec<------
    Call Graph from Main .... 
    ----------> get time 0 sec<------
    Function calls properties .... 
    ----------> get time 0 sec<------
    Data Dependency Analysis ... 
    ----------> get time 0.02 sec<------
    Data dependecy result .... 
    ----------> get time 0 sec<------
    Marking D&C formals ... 
    ----------> get time 0 sec<------
    Check the formals on the function definitions ... 
    Interpretation ...
    ----------> get time 3.1 sec<------
    On count
    Reuse Ratio 605379
    ----------> get time 0.01 sec<------
    main<> [1] {} |Rec|
       fractal_matrix_multiply<117,117,117> [1] {0} |Rec|
          fractal_matrix_multiply<59,59,59> [1] {0} |Rec|
             fractal_matrix_multiply<30,30,30> [20196] {0} |Rec|
                fractal_matrix_multiply<15,15,15> [223424] {0} |Rec|
                   fractal_matrix_multiply<8,8,8> [441333] {0} |Rec|
                      fractal_matrix_multiply<4,4,4> [13889186] {0} |Rec|
                         mm<4,4,4> [13889186] {0} |LeaF|
     
    ....
    

    We begin with time statistics (Wall Time expressed in seconds):

    1. Binding Function Calls to Function Definitions
    2. Static Type Checking
    3. Checking Return statements (Java like)
    4. Function calls properties (call-graph construction)
    5. Data Dependency Analysis (basic interprocedural data dependency analysis)
    6. Marking D&C formals (this is the core of the compiler: statically, we determine in each function definition the formal parameters that specify the divide-and-conquer procedure).
    7. Interpretation (we create the type-DAG by running the application, very partially)
    8. On count (we use the type-DAG for statistics purposes)

      Each function call has an entry that we can describe using a single line:

       
      name< size > [ number of times this function is called ] { value } |Rec| or |LeaF|  
      

      The problem fractal_matrix_multiply has problem size <15,15,15>; it is called 223424 times; if dynamic programming is on, { this value } reports the stored value (see binomial.c or factors.2.c); the last attribute is |Rec|, which stands for recursive.

      The problem mm has problem size <4,4,4>; it is called 13889186 times; if dynamic programming is on, { this value } reports the stored value (see binomial.c or factors.2.c); the last attribute is |LeaF|, which means non-recursive.

      Indentation: we use indentation to present the relationships among calls.

      • The node main is the root of all function calls.
      • Two nodes with the same indentation are siblings.
      • A node indented one level deeper than the node above it is a child of that node; for example, in

                          fractal_matrix_multiply<4,4,4> [13889186] {0} |Rec|
                             mm<4,4,4> [13889186] {0} |LeaF|
        
        mm() is a child of fractal_matrix_multiply().

  3. Demos

    Let's get started

    1. Debug Tool: whether or not a program halts is an undecidable problem.

      The type-DAG can be used to help with this problem. Indeed, a recursive algorithm does not stop when, at some point, the execution visits a function call multiple times with the same inputs. If the inputs determine the flow of the execution, this creates a loop in the type-DAG.

      JuliusC is insensitive to non-terminating recursion: it stops when it reaches a computation leaf or when it finds a node with the same characteristics as one already visited. In the latter case, the type-DAG is not actually a DAG because it has a cycle. We find a cycle by determining a topological order of the type-DAG and identifying a back edge.

      cyclic_recursion.c is the input and Run JuliusC
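
      For illustration, a non-terminating input of this kind could be as small as the sketch below; this is hypothetical code, not the actual cyclic_recursion.c. Running it would recurse forever, but JuliusC detects the cycle as soon as it revisits a node with the same characteristics.

        /* Hypothetical sketch of a non-terminating recursion: f() is always
           called again with the same argument, so the execution visits the
           node f<10> repeatedly and the type-DAG contains the cycle
           f<10> -> f<10>. */
        int f(int n)
        {
            if (n == 0)
                return 0;
            return f(n);    /* same input: the recursion never makes progress */
        }

        int main(void)
        {
            return f(10);
        }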

    2. Dynamic Programming: several textbook authors introduce dynamic programming as a solution to some of the limitations of divide-and-conquer algorithms. Some algorithms divide a problem into unbalanced sub-problems and recompute partial solutions that have already been computed.

      We present two examples: one is the classical binomial computation, the other computes a balanced factorization of an integer n.

      • The binomial coefficient n over k has value n!/((n-k)!k!). JuliusC allows us to solve the binomial problem in polynomial time and space (a minimal recursive sketch is given after this list).

        binomial.c is the input and Run JuliusC

      • For the implementation of the Cooley-Tukey FFT, we need to determine the factorization of an integer n into two factors p and q such that n=pq. Recursively, we then need to determine the factors of p and q.

        We present two factorization algorithms, very similar in nature. Both determine a balanced factorization: one is exact, the other is a heuristic (the exact one is sketched after this list).

        factors.2.c is the input and Run JuliusC
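
      To make the first bullet concrete, here is a minimal sketch of the classical recursive binomial; it is a simplified stand-in, not the actual binomial.c. Without memoization the call tree is exponential, but every node with the same <n,k> is shared in the type-DAG, which is what makes polynomial time and space possible.

        #include <stdio.h>

        /* binomial(n, k) = binomial(n-1, k-1) + binomial(n-1, k):
           a divide-and-conquer formulation with heavily repeated sub-problems */
        long binomial(int n, int k)
        {
            if (k == 0 || k == n)
                return 1;
            return binomial(n - 1, k - 1) + binomial(n - 1, k);
        }

        int main(void)
        {
            printf("%ld\n", binomial(20, 10));
            return 0;
        }

      For the second bullet, a sketch of an exact balanced factorization could look like the following (again a simplified stand-in, not the actual factors.2.c): n is split into the two closest factors p and n/p, each factor is factorized recursively, and primes are the leaves. Repeated factor sizes become shared nodes of the type-DAG.

        #include <stdio.h>

        /* Exact balanced factorization: find the largest divisor p of n with
           p <= sqrt(n), then recurse on p and n/p; primes (and 1) are leaves. */
        void factorize(int n)
        {
            int p;
            for (p = 1; p * p <= n; p++)
                ;                           /* first p with p*p > n */
            for (p = p - 1; p > 1; p--)     /* largest divisor not exceeding sqrt(n) */
                if (n % p == 0)
                    break;
            if (p <= 1) {                   /* n is prime (or 1): a leaf */
                printf("%d ", n);
                return;
            }
            factorize(p);                   /* two balanced sub-problems */
            factorize(n / p);
        }

        int main(void)
        {
            factorize(360);                 /* prints the prime leaves reached by the balanced splits */
            printf("\n");
            return 0;
        }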

    3. FFT: we present the Cooley-Tukey Fourier transform with a balanced factorization.

      Vitter et al. proposed this algorithm, and it takes O(n log log n) steps for n=2^m. This algorithm is extremely interesting because it minimizes the number of function calls for any n (and therefore the overhead). We present the type-DAG for two algorithms: one using an exact balanced factorization and one using a heuristic.
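
      As a structural illustration only, the recursive decomposition can be sketched as follows; this is hypothetical code, not the demo input, and the twiddle-factor arithmetic is omitted so that only the call structure JuliusC would record remains.

        /* Cooley-Tukey call structure: n is split into two balanced factors
           p and n/p, and the transform of size n is expressed through n/p
           transforms of size p and p transforms of size n/p. */
        static int balanced_factor(int n)
        {
            int p;
            for (p = 1; (p + 1) * (p + 1) <= n; p++)
                ;                           /* largest p with p*p <= n */
            while (p > 1 && n % p != 0)
                p--;
            return p;                       /* 1 means n is prime */
        }

        void fft(int n)
        {
            int i, p = balanced_factor(n);
            if (p == 1)
                return;                     /* base case: direct DFT of prime size (omitted) */
            for (i = 0; i < n / p; i++)
                fft(p);                     /* n/p sub-transforms of size p */
            /* twiddle-factor multiplications would go here */
            for (i = 0; i < p; i++)
                fft(n / p);                 /* p sub-transforms of size n/p */
        }

        int main(void)
        {
            fft(1024);
            return 0;
        }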

    4. Fractal Matrix Multiply: C+=AB: this is a classic example of a recursive algorithm.

      Matrix multiplication has several algorithms (e.g., Strassen's). We present a recursive algorithm with a simplified implementation using a non-standard layout (fractal, or Z-Morton, layout). The result presented here is not correct, but the algorithm has the same sequence of function calls and the same complexity as the correct algorithm.

      fractal_matrix_multiply.c is the input and Run JuliusC
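
      A structural sketch of the recursion might look like the following (hypothetical code, not the actual fractal_matrix_multiply.c): every dimension is split into two near-equal halves, giving eight sub-products, until the tile is small enough for a non-recursive leaf kernel mm(). The data movement on the Z-Morton layout and the leaf arithmetic are omitted; the problem sizes, which are what the type-DAG records, follow the pattern 117, 59, 30, 15, 8, 4 printed in the example output above.

        #define LEAF 4

        /* Leaf kernel: a plain triple loop on a small tile (arithmetic omitted) */
        void mm(int m, int n, int k)
        {
            (void)m; (void)n; (void)k;      /* unused in this size-only sketch */
        }

        /* Halve every dimension and recurse on the eight sub-products of C += A*B;
           only the problem sizes matter for the type-DAG in this sketch. */
        void fractal_matrix_multiply(int m, int n, int k)
        {
            int m1, n1, k1, i, j, l;

            if (m <= LEAF && n <= LEAF && k <= LEAF) {
                mm(m, n, k);
                return;
            }
            m1 = (m + 1) / 2;
            n1 = (n + 1) / 2;
            k1 = (k + 1) / 2;
            for (i = 0; i < 2; i++)
                for (j = 0; j < 2; j++)
                    for (l = 0; l < 2; l++)
                        fractal_matrix_multiply(i ? m - m1 : m1,
                                                j ? n - n1 : n1,
                                                l ? k - k1 : k1);
        }

        int main(void)
        {
            fractal_matrix_multiply(117, 117, 117);
            return 0;
        }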

    5. Fractal All-Pairs Shortest Paths (APSP): we know that the all-pairs shortest paths problem is computationally equivalent to matrix multiplication (the classic O(n^3) matrix multiply). This means that we cannot compute either one asymptotically faster than the other.

      We present an algorithm for APSP that is a version of the fractal matrix multiplication (the two algorithms are more than computationally equivalent). This algorithm uses a non-standard layout and it is simplified. The output is incorrect, but the algorithm has the same sequence of function calls and the same complexity as the correct algorithm.

      transitive.c is the input and Run JuliusC
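
      The connection can be made concrete with a sketch of the leaf kernel only (hypothetical code): the recursive structure is the one of the fractal matrix multiply above, but the leaf works over the (min,+) semiring, taking a minimum where the matrix multiply takes a sum and an addition where it takes a product.

        #define T 4                         /* leaf tile size */

        /* (min,+) leaf kernel for APSP: D[i][j] = min(D[i][j], A[i][k] + B[k][j]) */
        void mm_min_plus(double D[T][T], double A[T][T], double B[T][T])
        {
            int i, j, k;

            for (i = 0; i < T; i++)
                for (j = 0; j < T; j++)
                    for (k = 0; k < T; k++)
                        if (A[i][k] + B[k][j] < D[i][j])
                            D[i][j] = A[i][k] + B[k][j];
        }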

    6. Fractal LU-Factorization: this algorithm is fairly complex. It uses a non-standard layout, it is based on fractal matrix multiplication, and it does not perform pivoting.

      We compute the LU factorization of a small 50x50 matrix. Without pivoting, this algorithm is extremely unstable, and for this reason it may take a long time to run on this architecture.

      lu.c is the input and Run JuliusC
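
      A structural sketch of the recursion, with all arithmetic and data movement omitted, might look like the following (hypothetical code, not the actual lu.c): the leading block is factored, the off-diagonal blocks are updated by triangular solves, the trailing block is updated through a matrix multiply, and the trailing block is then factored recursively.

        /* Stand-ins for the triangular solves and the multiply used in the update;
           only the problem sizes are kept, as in the sketches above. */
        void triangular_solve(int rows, int cols)         { (void)rows; (void)cols; }
        void fractal_matrix_multiply(int m, int n, int k) { (void)m; (void)n; (void)k; }

        /* Recursive LU without pivoting on an n x n matrix:
           A = [A11 A12; A21 A22], factor A11, solve for L21 and U12,
           update A22 -= L21*U12, then factor the updated A22. */
        void lu(int n)
        {
            int n1;

            if (n <= 1)
                return;                         /* 1x1 block: nothing left to factor */
            n1 = (n + 1) / 2;
            lu(n1);                             /* factor the leading n1 x n1 block */
            triangular_solve(n - n1, n1);       /* L21 from A21 */
            triangular_solve(n1, n - n1);       /* U12 from A12 */
            fractal_matrix_multiply(n - n1, n - n1, n1);   /* Schur update of A22 */
            lu(n - n1);                         /* factor the trailing block */
        }

        int main(void)
        {
            lu(50);                             /* same size as the demo matrix */
            return 0;
        }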