I haven’t slept yet, so bear with me. I’ve decided to give functional programming a try in order to learn more about Haskell and expand my problem-solving process. My background is in object-oriented and some aspects of imperative programming. Object-oriented means that data and functions are encapsulated together to form modules, and those modules are thought of as objects. Imperative programming means that the program’s state changes as it runs, through side effects. A side effect is anything a function does beyond computing its return value, such as mutating state or writing output. For instance, the print function in Python (or System.out.print in Java) does its real work through a side effect: writing to the screen.

print "tomato"
-> 'tomato'
print 23
-> 23

Notice how we call the same print function on different values and different text appears on the screen. That output is not a return value at all; it is the side effect.
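To make that concrete, here is a small sketch in Python 3 syntax (where print is a function rather than a statement, unlike the snippets above): the value print hands back is the same every time, None; what differs is the text written to the screen, and that writing is the side effect.

```python
# print writes text to the screen; that writing is the side effect.
# Its return value is always the same: None.
result_a = print("tomato")   # screen shows: tomato
result_b = print(23)         # screen shows: 23

# Both calls return None, even though the printed output differed.
assert result_a is None
assert result_b is None
```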

Functional programming is different in that it emphasizes procedures and functions over state changes and mutability. If you've ever done calculus, you may have noticed that when you pass a value to a function, the result is always predictable: the same input yields the same output. This is the essence of functional programming. Haskell takes it a step further and goes into verified logic programming, but I'm still wrapping my head around functional programming, so that will be the topic of a later discussion.
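As a quick illustration of that predictability (a toy example of my own, not from any library), compare a pure function with one that leans on hidden state:

```python
# Pure: like a calculus function f(x) = x^3, the same input
# always produces the same output.
def cube(x):
    return x * x * x

# Impure: the result depends on hidden state, so calling it
# twice with the same argument gives two different answers.
_calls = 0
def cube_and_count(x):
    global _calls
    _calls += 1          # side effect: mutates module-level state
    return x * x * x + _calls

assert cube(2) == cube(2)                      # predictable: 8 both times
assert cube_and_count(2) != cube_and_count(2)  # 9 the first time, 10 the next
```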

I have decided to compare and contrast imperative and functional programming, using Python and Scheme (the dialect of Lisp used in the SICP book), by finding an approximation of a cube root with the Newton-Raphson method.

Looking at the source code below, I'm going to explain how the Newton-Raphson technique works from my imperative background. First, a best guess for the cube root of n is chosen; we'll start with 1.0, which is a safe starting point for any positive n. Then, while the absolute difference between the guess cubed and our input n is greater than or equal to epsilon (our precision threshold), the guess is fed through Newton's update rule for the cube root, g = (2g + n/g^2)/3, and keeps getting reassigned until the guess cubed is within 0.0001 of n. That final guess is then returned as our approximate cube root.
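For the record, that update rule is just the general Newton-Raphson step applied to f(x) = x^3 - n (a standard derivation, spelled out here for completeness):

```latex
x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}
        = x_k - \frac{x_k^3 - n}{3 x_k^2}
        = \frac{2 x_k + \frac{n}{x_k^2}}{3}
```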

def newt_cube(n):
    x = n
    g = 1.0
    eps = 0.0001
    if n == 1:
        return 1
    while abs(x**3 - n) >= eps:
        # refine the guess with Newton's update rule
        g = (2*g + n/g**2) / 3.0
        x = g
    return x

print newt_cube(1232342)

As you can see, setting x = n and then reassigning x = g later on is what's called a destructive update. In a functional programming language, this is usually not allowed.
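As a bridge between the two styles, the same computation can be written in Python without any destructive update by making each refinement a recursive call that binds a fresh guess (a sketch of my own, using the same epsilon and update rule as above):

```python
def cbrt(n, guess=1.0, eps=0.0001):
    """Approximate the cube root of n with no reassignment."""
    # Base case: the guess cubed is already within eps of n.
    if abs(guess**3 - n) < eps:
        return guess
    # Recursive case: pass the improved guess to a fresh call
    # instead of overwriting a variable.
    return cbrt(n, (2*guess + n/guess**2) / 3.0, eps)
```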

Now, observing the Scheme version:

(define (square x) (* x x))
(define (cube x) (* x x x))

; iterate until the guess is good enough, passing the improved
; guess to the next call instead of mutating a variable
(define (cbrt-iter guess x)
  (if (good-enough? guess x)
      guess
      (cbrt-iter (improve guess x)
                 x)))

; Newton's update rule: (2g + x/g^2) / 3
(define (improve guess x)
  (/ (+ (/ x (square guess)) (* 2 guess)) 3))

(define (average x y)
  (/ (+ x y) 2))

(define (good-enough? guess x)
  (< (abs (- (cube guess) x)) 0.001))

(define (cbrt x)
  (cbrt-iter 1.0 x))

Notice that instead of destructive updates, the cube root approximation is found through procedures. Also, instead of a loop (the while loop in the Python above), we use a procedure called cbrt-iter that calls itself, successively refining the approximation through a series of procedure calls. Still much to learn about functional programming! But I'm tired now, and shall go off and nap.