My Weblog

Blog about programming and math

Propositional Logic course ( Coursera )

Recently I took a logic course on Coursera. Although I was already a bit familiar with logic, this course was great. I got a clearer grasp of propositional logic, proof trees and a bit of Prolog, and while learning about circuit minimisation I came to know about the Quine–McCluskey algorithm ( on my TODO list to implement ). In one of the assignments we have to tell whether a given formula is a tautology, a contradiction or contingent, so I wrote some quick Haskell code to solve the assignment

import Text.Parsec.Prim
import Text.Parsec.Expr
import Text.Parsec.Char
import Text.Parsec.String ( Parser )
import Control.Applicative hiding ( ( <|> ) , many )
import qualified Data.Map as M
import Data.Maybe
import Data.List
import Control.Monad

data LExpr = Lit Char
           | Not LExpr       
           | And LExpr LExpr
           | Or LExpr LExpr
           | Imp LExpr LExpr  -- ( P =>  Q )
           | Red LExpr LExpr  -- ( P <=  Q )
           | Eqi LExpr LExpr  -- ( P <=> Q ) 
           deriving Show

exprCal :: Parser LExpr
exprCal = buildExpressionParser table atom 

-- operator precedence: earlier rows bind tighter
table = [  [  Prefix ( Not <$ string  "~"  ) ]
         , [  Infix  ( And <$ string  "&"  ) AssocLeft ]
         , [  Infix  ( Or  <$ string  "|"  ) AssocLeft ]
         , [  Infix  ( Eqi <$ try ( string "<=>" ) ) AssocLeft 
            , Infix  ( Imp <$  string "=>"         ) AssocLeft 
            , Infix  ( Red <$  string "<="         ) AssocLeft 
           ]
        ]

atom :: Parser LExpr
atom =  char '(' *>  exprCal   <* char ')' 
     <|> ( Lit <$> letter )

assignment :: LExpr -> [ M.Map Char Bool ]
assignment expr = map ( M.fromList . zip vs ) ps where
    vs = variables expr
    ps = replicateM ( length vs ) [ True, False]
 

variables :: LExpr -> [ Char ]
variables expr = map head . group . sort . vars expr $ []  where
    vars ( Lit c )    xs = c : xs
    vars ( Not expr ) xs = vars expr xs
    vars ( And exprf exprs ) xs = vars exprf xs ++ vars exprs xs
    vars ( Or  exprf exprs ) xs = vars exprf xs ++ vars exprs xs
    vars ( Imp exprf exprs ) xs = vars exprf xs ++ vars exprs xs
    vars ( Red exprf exprs ) xs = vars exprf xs ++ vars exprs xs
    vars ( Eqi exprf exprs ) xs = vars exprf xs ++ vars exprs xs


expEval :: LExpr -> M.Map Char  Bool -> Bool
expEval ( Lit v   )         mp =  fromMaybe False ( M.lookup v mp )
expEval ( Not expr  )       mp =  not . expEval  expr $ mp
expEval ( And exprf exprs ) mp =  expEval exprf mp && expEval  exprs mp
expEval ( Or  exprf exprs ) mp =  expEval exprf mp || expEval  exprs mp
expEval ( Imp exprf exprs ) mp =  (  not . expEval  exprf $ mp  ) || expEval exprs mp 
expEval ( Red exprf exprs ) mp =  expEval ( Imp exprs exprf  ) mp 
expEval ( Eqi exprf exprs ) mp =  expEval exprf mp == expEval exprs mp

values :: LExpr -> [ Bool ]
values expr = map ( expEval expr ) ( assignment expr ) 

isTautology :: LExpr -> Bool
isTautology = and . values

isContradiction :: LExpr -> Bool
isContradiction  = not . or . values

isContingent :: LExpr -> Bool
isContingent expr = not ( isTautology expr || isContradiction expr )
  
calculator :: String -> LExpr
calculator expr = case parse  exprCal ""  expr of
                       Left msg  -> error ( show msg )
                       Right val -> val

Mukeshs-MacBook-Pro:Compilers mukeshtiwari$ ghci LogicPraser.hs 
GHCi, version 7.8.1: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
[1 of 1] Compiling Main             ( LogicPraser.hs, interpreted )
Ok, modules loaded: Main.
*Main> calculator "p=>q|q=>p"
Loading package transformers-0.3.0.0 ... linking ... done.
Loading package array-0.5.0.0 ... linking ... done.
Loading package deepseq-1.3.0.2 ... linking ... done.
Loading package containers-0.5.5.1 ... linking ... done.
Loading package bytestring-0.10.4.0 ... linking ... done.
Loading package mtl-2.1.3.1 ... linking ... done.
Loading package text-1.1.1.1 ... linking ... done.
Loading package parsec-3.1.5 ... linking ... done.
Imp (Imp (Lit 'p') (Or (Lit 'q') (Lit 'q'))) (Lit 'p')
*Main> calculator "(p=>q)|(q=>p)"
Or (Imp (Lit 'p') (Lit 'q')) (Imp (Lit 'q') (Lit 'p'))
*Main> isTautology ( Or (Imp (Lit 'p') (Lit 'q')) (Imp (Lit 'q') (Lit 'p')) )
True
*Main> calculator "(p=>q)&(q=>p)"
And (Imp (Lit 'p') (Lit 'q')) (Imp (Lit 'q') (Lit 'p'))
*Main> isTautology ( And (Imp (Lit 'p') (Lit 'q')) (Imp (Lit 'q') (Lit 'p')) )
False
*Main> isCont ( And (Imp (Lit 'p') (Lit 'q')) (Imp (Lit 'q') (Lit 'p')) )
isContingent     isContradiction
*Main> isContingent  ( And (Imp (Lit 'p') (Lit 'q')) (Imp (Lit 'q') (Lit 'p')) )
True
*Main> isCont  ( And (Imp (Lit 'p') (Lit 'q')) (Imp (Lit 'q') (Lit 'p')) )
isContingent     isContradiction
*Main> isContradiction   ( And (Imp (Lit 'p') (Lit 'q')) (Imp (Lit 'q') (Lit 'p')) )
False

Proof trees for propositional logic were something I learned from this course. Why is the proof tree method great? Let's take an example to understand it more clearly.

p ⇒ (q ∨ r)
s ⇒ ¬r
----------------
(p ∧ s) ⇒ q

A set of premises Δ logically entails a conclusion ϕ (written as Δ |= ϕ) if and only if every interpretation that satisfies the premises also satisfies the conclusion. The formulas above the line are the premises and the formula below the line is the conclusion.
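In terms of the Haskell functions above, this definition can be written down almost verbatim. Here is a small sketch ( the helper name entails is mine, not from the original code; it reuses expEval and assignment defined earlier ):

entails :: [ LExpr ] -> LExpr -> Bool
-- Δ |= ϕ: every assignment that makes all the premises true also makes the
-- conclusion true. The folded formula is used only to collect the variables.
entails premises conclusion = all check ( assignment ( foldr And conclusion premises ) )
  where
    check mp = any ( \p -> not ( expEval p mp ) ) premises || expEval conclusion mp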

p q r s | p ⇒ (q ∨ r) | s ⇒ ¬r | (p ∧ s) ⇒ q
--------+-------------+--------+------------
0 0 0 0 |      1      |   1    |      1
0 0 0 1 |      1      |   1    |      1
0 0 1 0 |      1      |   1    |      1
0 0 1 1 |      1      |   0    |      1
0 1 0 0 |      1      |   1    |      1
0 1 0 1 |      1      |   1    |      1
0 1 1 0 |      1      |   1    |      1
0 1 1 1 |      1      |   0    |      1
1 0 0 0 |      0      |   1    |      1
1 0 0 1 |      0      |   1    |      0
1 0 1 0 |      1      |   1    |      1
1 0 1 1 |      1      |   0    |      0
1 1 0 0 |      1      |   1    |      1
1 1 0 1 |      1      |   1    |      1
1 1 1 0 |      1      |   1    |      1
1 1 1 1 |      1      |   0    |      1

We can see from the truth table that every interpretation which satisfies the premises also satisfies the conclusion. It works, but it is lengthy.
Proof tree rules

A ∧ B : add A and B to the same branch.
A ∨ B : split into two branches, one with A and one with B.
A ⇒ B : split into two branches, one with ¬A and one with B.
A ≡ B : split into two branches, one with A and B, the other with ¬A and ¬B.
A , ¬A : a branch containing both A and ¬A is closed ( × ).

¬( A ∧ B ) : split into two branches, one with ¬A and one with ¬B.
¬( A ∨ B ) : add ¬A and ¬B to the same branch.
¬( A ⇒ B ) : add A and ¬B to the same branch.
¬( A ≡ B ) : split into two branches, one with A and ¬B, the other with ¬A and B.
¬¬A : add A.
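These rules translate almost directly into Haskell over the LExpr type defined earlier. Below is a minimal sketch of a tableau-style satisfiability test ( my own, not part of the original post; tableau is a hypothetical name ): a branch closes as soon as it contains a literal and its negation, and a set of formulas is satisfiable when at least one branch stays open.

-- lits: signed literals already on the branch; second argument: formulas
-- still to be expanded. Returns True iff some branch stays open.
tableau :: [ ( Char , Bool ) ] -> [ LExpr ] -> Bool
tableau _    []         = True                       -- nothing left to expand: branch is open
tableau lits ( f : fs ) = case f of
    Lit c           -> extend ( c , True  )
    Not ( Lit c )   -> extend ( c , False )
    Not ( Not a )   -> tableau lits ( a : fs )
    And a b         -> tableau lits ( a : b : fs )
    Not ( Or  a b ) -> tableau lits ( Not a : Not b : fs )
    Not ( Imp a b ) -> tableau lits ( a : Not b : fs )
    Not ( Red a b ) -> tableau lits ( Not a : b : fs )
    Or  a b         -> branch ( a : fs )         ( b : fs )
    Not ( And a b ) -> branch ( Not a : fs )     ( Not b : fs )
    Imp a b         -> branch ( Not a : fs )     ( b : fs )
    Red a b         -> branch ( a : fs )         ( Not b : fs )
    Eqi a b         -> branch ( a : b : fs )     ( Not a : Not b : fs )
    Not ( Eqi a b ) -> branch ( a : Not b : fs ) ( Not a : b : fs )
  where
    branch l r = tableau lits l || tableau lits r    -- split into two branches
    extend ( c , v )
      | ( c , not v ) `elem` lits = False            -- A and ¬A on the same branch: closed
      | otherwise                 = tableau ( ( c , v ) : lits ) fs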

Theorem: Δ |= ϕ if and only if Δ ∪ {¬ϕ} is unsatisfiable. Following the theorem, our problem translates into proving that

p ⇒ (q ∨ r)
s ⇒ ¬r
¬( (p ∧ s) ⇒ q )

is unsatisfiable. We can see that the tree is closed, so there is no interpretation which makes it satisfiable. Choose any branch and move up the tree: if it contains both A and ~A then we close the branch, because it is not possible to make A and ~A true simultaneously. You can see the course notes.
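As a quick sanity check, the running example can be verified both ways, using the hypothetical entails and tableau helpers sketched above together with the parser:

-- Both of these should evaluate to True for the running example.
viaEntailment :: Bool
viaEntailment = entails [ calculator "p=>(q|r)" , calculator "s=>(~r)" ]
                        ( calculator "(p&s)=>q" )

viaProofTree :: Bool
viaProofTree = not ( tableau [] [ calculator "p=>(q|r)"
                                , calculator "s=>(~r)"
                                , calculator "~((p&s)=>q)" ] )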
See the Propositional logic library on Hackage. Source code is on GitHub.

May 2, 2014 | Haskell, Programming

Counting Inversions

The first week's programming assignment of the algorithms course is to compute the inversion count of a given array. We are given an array A[0…n – 1] of n distinct positive integers and we have to count all the pairs ( i , j ) such that i < j and A[i] > A[j].
1. Brute Force
The brute force idea is very simple: run two loops and count the pairs for which i < j and A[i] > A[j]. The complexity of brute force is O( n^2 ).

bruteForce :: [ Int ] -> Int
bruteForce [] = 0
bruteForce [_] = 0
bruteForce ( x : xs ) = ( length . filter ( x > ) $ xs ) + bruteForce xs

*Main> bruteForce [ 3 , 1 , 2]
Loading package array-0.4.0.1 ... linking ... done.
Loading package deepseq-1.3.0.1 ... linking ... done.
Loading package bytestring-0.10.0.0 ... linking ... done.
2
*Main> bruteForce [ 2,3,8,6,1]
5

2. Divide and Conquer
We can modify merge sort to compute the inversion count. The idea is to compute the inversion count during the merge phase. The inversion count of an array C resulting from merging two sorted arrays A and B is the sum of the inversion count of A, the inversion count of B and their cross inversions. The inversion count of a single element is 0 ( base case ). The only trick is to compute the cross inversions. What will be the inversion count of the following arrays if we merge them?
A = [ 1 , 2 , 3 ]
B = [ 4 , 5 , 6 ]
Clearly 0, because every element of A is less than the first element of B. What is the inversion count of C = [ 4 , 5 , 6 ] and D = [ 1 , 2 , 3 ] ? The first element of D ( 1 ) is less than the first element of C ( 4 ), so we get three pairs ( 4 , 1 ) ( 5 , 1 ) ( 6 , 1 ). Similarly for 2 and 3 ( 9 cross inversions in total ). This reveals that if some element y of D is less than a certain element x of C, then all the elements of C from x onwards are also greater than y, and we get that many cross inversions. A clearer explanation is available here. I wrote some quick Haskell code and got the answer.

import Data.List
import Data.Maybe ( fromJust )
import qualified Data.ByteString.Lazy.Char8 as BS

-- merge two sorted lists: ret accumulates the merged output in reverse and
-- cnt counts the cross inversions ( length u is added whenever an element of
-- the second list is placed before the remaining elements of the first )
inversionCnt :: [ Int ] -> [ Int ] -> [ Int ] -> Int -> ( Int , [ Int ] )
inversionCnt [] ys  ret  cnt = ( cnt , reverse ret ++ ys ) 
inversionCnt xs []  ret  cnt = ( cnt , reverse ret ++ xs )
inversionCnt u@( x : xs ) v@( y : ys )  ret  cnt
      | x <= y = inversionCnt xs v  (  x : ret   )   cnt
      | otherwise = inversionCnt u ys  ( y : ret )  ( cnt + length u )  

merge ::  ( Int , [ Int ] ) -> ( Int , [ Int ] ) -> ( Int , [ Int ] )
merge ( cnt_1 , xs ) ( cnt_2 , ys ) = ( cnt_1 + cnt_2 + cnt , ret ) where
        ( cnt , ret ) = inversionCnt xs ys []   0

mergeSort ::  [ Int ] -> ( Int , [ Int ] )
mergeSort []    = ( 0 , [] )
mergeSort [ x ] = ( 0 , [ x ] )
mergeSort  xs   = merge ( mergeSort xs' ) ( mergeSort ys' ) where
           ( xs' , ys' ) = splitAt ( div ( length xs ) 2 ) xs
          
main =  BS.interact $ ( \x -> BS.pack $ show x ++ "\n" ) . fst 
        .  mergeSort . map ( fst . fromJust .  BS.readInt :: BS.ByteString -> Int  ) 
        . BS.lines 


Mukeshs-MacBook-Pro:Puzzles mukeshtiwari$ ghc  -fforce-recomp -O2  Inversion.hs 
[1 of 1] Compiling Main             ( Inversion.hs, Inversion.o )
Linking Inversion ...
Mukeshs-MacBook-Pro:Puzzles mukeshtiwari$ time ./Inversion <  IntegerArray.txt 
2407905288

real	0m8.002s
user	0m7.881s
sys	0m0.038s

I compared this solution with the GeeksforGeeks solution written in C and the result was a complete surprise to me.

Mukeshs-MacBook-Pro:Puzzles mukeshtiwari$ time ./a.out <  IntegerArray.txt 
 Number of inversions are 2407905288 

real	0m0.039s
user	0m0.036s
sys	0m0.002s

Almost 200 times faster! My Haskell intuition told me that I was certainly doing something wrong. I started analyzing the code and found the problem: in the inversionCnt function I am doing extra work computing the length of the list u ( O( n ) ), so rather than doing this extra work, I can pass the length along.

import Data.List
import Data.Maybe ( fromJust )
import qualified Data.ByteString.Lazy.Char8 as BS


inversionCnt :: [ Int ] -> [ Int ] -> [ Int ] -> Int -> Int -> ( Int , [ Int ] )
inversionCnt [] ys  ret _ cnt = ( cnt , reverse ret ++ ys ) 
inversionCnt xs []  ret _ cnt = ( cnt , reverse ret ++ xs )
inversionCnt u@( x : xs ) v@( y : ys )  ret n cnt
      | x <= y = inversionCnt xs v  (  x : ret   )  ( pred n ) cnt
      | otherwise = inversionCnt u ys  ( y : ret ) n ( cnt + n )  

merge ::  ( Int , [ Int ] ) -> ( Int , [ Int ] ) -> ( Int , [ Int ] )
merge ( cnt_1 , xs ) ( cnt_2 , ys ) = ( cnt_1 + cnt_2 + cnt , ret ) where
        ( cnt , ret ) = inversionCnt xs ys [] ( length xs )  0

mergeSort ::  [ Int ] -> ( Int , [ Int ] )
mergeSort []    = ( 0 , [] )
mergeSort [ x ] = ( 0 , [ x ] )
mergeSort  xs   = merge ( mergeSort xs' ) ( mergeSort ys' ) where
           ( xs' , ys' ) = splitAt ( div ( length xs ) 2 ) xs
          
main =  BS.interact $ ( \x -> BS.pack $ show x ++ "\n" ) . fst 
        .  mergeSort . map ( fst . fromJust .  BS.readInt :: BS.ByteString -> Int  ) 
        . BS.lines 

Mukeshs-MacBook-Pro:Puzzles mukeshtiwari$ time ./Inversion <  IntegerArray.txt 
2407905288

real	0m0.539s
user	0m0.428s
sys	0m0.031s

Now it is only about 10 times slower, and there is still a lot of room for improvement, but for now I am happy with this :). You can also try to solve INVCNT. Run this code on Ideone.

July 4, 2013 | Haskell, Programming | 2 Comments

My experience with online learning

I am one big confused soul who is always curious to learn, and I have found Coursera and Udacity very helpful. I try to take as many courses as possible; recently I took Heterogeneous Parallel Programming by Professor Wen-mei W. Hwu and it was a fun course. The video lectures were excellent and the assignments were not very hard; they were designed to make us familiar with the basics of CUDA programming, and apart from that I got a certificate signed by Professor Wen-mei W. Hwu :). While doing some more research about heterogeneous computing, which involves CPU, GPU and FPGA all together in a single system, today I found FPGA Programming for the Masses. There are two promising technologies, one from Altera and the other Liquid Metal from IBM.

Altera OpenCL-to-FPGA Framework.
Altera proposed a framework for synthesizing bitstreams from OpenCL. The framework consists of a kernel compiler, a host library, and a system integration component. The kernel compiler, based on the open source LLVM compiler infrastructure, synthesizes OpenCL kernel code into hardware. The host library provides APIs and bindings for establishing communication between the host part of the application running on the processor and the kernel part of the application running on the FPGA. The system integration component wraps the kernel code with memory controllers and a communication interface (such as PCIe).
Liquid Metal.
The Liquid Metal project at IBM aims to provide not only high-level synthesis for FPGAs, but also, more broadly, a single unified language for programming heterogeneous architectures. The project offers a language called Lime, which can be used to program hardware (FPGAs), as well as software running on conventional CPUs and more exotic architectures such as GPUs. The Lime language was founded on principles that eschew the complicated program analysis that plagues C-based frameworks, while also offering powerful programming models that espouse the benefits of functional and stream-oriented programming.
Heterogeneous parallel programming looks very promising, but there is still a long way to go. My certificate 😉


March 9, 2013 | Programming