Algorithm Analysis 2, computer science homework help

Q1

2.1-3 Consider the searching problem: Input: A sequence of n numbers A = ⟨a1, a2, . . . , an⟩ and a value v. Output: An index i such that v = A[i] or the special value NIL if v does not appear in A. Write pseudocode for linear search, which scans through the sequence, looking for v. Using a loop invariant, prove that your algorithm is correct. Make sure that your loop invariant fulfills the three necessary properties.

Solution:

LINEAR-SEARCH(A, v)
1  for i ← 1 to length[A]
2      do if A[i] = v
3          then return i
4  return NIL
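
For reference, here is a minimal runnable translation of the same pseudocode into Python (my own sketch: the name linear_search, the 0-based indexing, and None in place of NIL are assumptions, not part of the exercise):

def linear_search(A, v):
    # Return an index i with A[i] == v, or None if v does not appear in A.
    for i in range(len(A)):   # scan the sequence from left to right
        if A[i] == v:
            return i
    return None               # corresponds to NIL

For example, linear_search([3, 1, 4, 1, 5], 4) returns 2.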

Correctness: Loop invariant: At the start of each iteration of the for loop we have A[j] ≠ v for all j < i.

Initialization: Before the first iteration we have i = 1, so there is no j < i and the invariant holds vacuously.

Maintenance: The loop invariant is maintained at each iteration. Suppose it failed at the start of the i-th iteration; then there would be some j < i with A[j] = v. But in that case the j-th iteration would already have returned j, so the i-th iteration would never be reached, a contradiction.

Termination: When the loop terminates, there are two cases. Either the procedure returns some index i ≤ length(A), in which case the if test ensures that A[i] = v; or i exceeds length(A), in which case the loop invariant gives A[j] ≠ v for all j ≤ length(A), so returning NIL is correct.

Q2

2.2-3 Consider linear search again (see Exercise 2.1-3). How many elements of the input sequence need to be checked on the average, assuming that the element being searched for is equally likely to be any element in the array? How about in the worst case? What are the average-case and worst-case running times of linear search in Θ-notation? Justify your answers.

Solution:

Since the probability of v = A[i] is 1/n for all i = 1, . . . , n, and we need to check exactly i elements when v = A[i], the expected number of checks is

(1/n)(1 + 2 + · · · + n) = (1/n) · n(n + 1)/2 = (n + 1)/2.

In the worst case it is n. The average-case running time is c · (n + 1)/2 = Θ(n), since for all n ≥ 1 we have

n/2 ≤ (n + 1)/2 ≤ n.

The worst case running time is also Θ(n).
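
As a quick sanity check on the (n + 1)/2 average, here is a small Python experiment (my own sketch; it assumes, as the exercise does, that the search value is drawn uniformly from the array):

def checks_until_found(A, v):
    # Count how many elements linear search inspects before finding v.
    for count, x in enumerate(A, start=1):
        if x == v:
            return count

n = 1000
A = list(range(n))
average = sum(checks_until_found(A, v) for v in A) / n
print(average, (n + 1) / 2)   # both print 500.5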

Q3

2.3-5 Referring back to the searching problem (see Exercise 2.1-3), observe that if the sequence A is sorted, we can check the midpoint of the sequence against v and eliminate half of the sequence from further consideration. The binary search algorithm repeats this procedure, halving the size of the remaining portion of the sequence each time. Write pseudocode, either iterative or recursive, for binary search.

Solution:

BINARY-SEARCH(A, p, q, v)
1  if q < p
2      then return NIL
3  m ← p + ⌊(q − p)/2⌋
4  if A[m] = v
5      then return m
6  if A[m] > v
7      then return BINARY-SEARCH(A, p, m − 1, v)
8      else return BINARY-SEARCH(A, m + 1, q, v)

The initial call is BINARY-SEARCH(A, 1, length(A), v).
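
A minimal runnable Python version of the same recursion (my own translation; it uses 0-based, half-open bounds lo, hi instead of the 1-based inclusive p, q above, and None in place of NIL):

def binary_search(A, v, lo=0, hi=None):
    # Return an index of v in the sorted list A, or None if v is absent.
    if hi is None:
        hi = len(A)               # search the half-open range [lo, hi)
    if hi <= lo:
        return None
    m = lo + (hi - lo) // 2       # midpoint of the remaining range
    if A[m] == v:
        return m
    if A[m] > v:
        return binary_search(A, v, lo, m)        # recurse on the left half
    return binary_search(A, v, m + 1, hi)        # recurse on the right half

For example, binary_search([1, 3, 5, 7, 9], 7) returns 3.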

Q4

2-1 Insertion sort on small arrays in merge sort

Although merge sort runs in Θ(n lg n) worst-case time and insertion sort runs in Θ(n^2) worst-case time, the constant factors in insertion sort can make it faster in practice for small problem sizes on many machines. Thus, it makes sense to coarsen the leaves of the recursion by using insertion sort within merge sort when subproblems become sufficiently small. Consider a modification to merge sort in which n/k sublists of length k are sorted using insertion sort and then merged using the standard merging mechanism, where k is a value to be determined.

a. Show that insertion sort can sort the n/k sublists, each of length k, in Θ(nk) worst-case time.

b. Show how to merge the sublists in Θ(n lg(n/k)) worst-case time.

c. Given that the modified algorithm runs in Θ(nk + n lg(n/k)) worst-case time, what is the largest value of k as a function of n for which the modified algorithm has the same running time as standard merge sort, in terms of Θ-notation?

d. How should we choose k in practice?

Answer:

A. Sorting sublists

Note that sorting each sublist takes ak^2 + bk + c for some constants a, b and c. We have n/k of those, thus:

(n/k)(ak^2 + bk + c) = ank + bn + cn/k = Θ(nk).

B. Merging sublists

Merging a sorted sublists of length k each takes

T(a) = 0                if a = 1,
T(a) = 2T(a/2) + ak     if a = 2^p for some p > 0.

This makes sense, since merging one sublist is trivial, and merging a sublists means dividing them into two groups of a/2 lists each, merging each group recursively, and then combining the results in ak steps, since we merge two arrays, each of length (a/2)·k.

We claim that T(a) = ak lg a when a is a power of 2, and prove it by induction:

Base. Simple as ever:

T(1) = 1 · k · lg 1 = k · 0 = 0,

which agrees with the recurrence.

Step. Assume that T(a) = ak lg a and calculate T(2a):

T(2a) = 2T(a) + 2ak = 2(T(a) + ak) = 2(ak lg a + ak)
      = 2ak(lg a + 1) = 2ak(lg a + lg 2) = 2ak lg(2a).

This proves the claim. Now we substitute the number of sublists, n/k, for a:

T(n/k) = (n/k) · k · lg(n/k) = n lg(n/k).

While this is exact only when n/k is a power of 2, it tells us that the overall time complexity of the merge is Θ(n lg(n/k)).

C. The largest value of k

The largest value is k = lg n. If we substitute, we get:

Θ(n lg n + n lg(n / lg n)) = Θ(n lg n).

If k = f(n) grows asymptotically faster than lg n, the complexity becomes Θ(n·f(n)), which is a larger running time than that of merge sort.

D. The value of k in practice

In practice, k should be the largest sublist length on which insertion sort is faster than merge sort, which is best determined by benchmarking on the target machine (see the sketch below).
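
A small sketch of the modified algorithm in Python (my own illustration; the cutoff K = 32 and all function names are assumptions, not values given in the exercise). It sorts chunks of length at most K with insertion sort as in part (a), then merges them pairwise as in part (b):

K = 32  # assumed cutoff; in practice pick it by benchmarking

def insertion_sort(a):
    # Sort the list a in place; fast on small inputs despite the Θ(k^2) worst case.
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def merge(left, right):
    # Standard merge of two sorted lists.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:]); out.extend(right[j:])
    return out

def hybrid_merge_sort(a):
    # Part (a): sort n/k runs of length at most K with insertion sort.
    runs = [a[i:i + K] for i in range(0, len(a), K)]
    for run in runs:
        insertion_sort(run)
    # Part (b): lg(n/k) levels of pairwise merging, Θ(n) work per level.
    while len(runs) > 1:
        runs = [merge(runs[i], runs[i + 1]) if i + 1 < len(runs) else runs[i]
                for i in range(0, len(runs), 2)]
    return runs[0] if runs else []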

Q5

2-3 Correctness of Horner’s rule

The following code fragment implements Horner’s rule for evaluating a polynomial

P(x) = Σ_{k=0}^{n} a_k x^k = a_0 + x(a_1 + x(a_2 + · · · + x(a_{n−1} + x·a_n) · · · )),

given the coefficients a_0, a_1, . . . , a_n and a value for x:

1  y = 0
2  for i = n downto 0
3      y = a_i + x · y

a. In terms of Θ-notation, what is the running time of this code fragment for Horner’s rule?

b. Write pseudocode to implement the naive polynomial-evaluation algorithm that computes each term of the polynomial from scratch. What is the running time of this algorithm? How does it compare to Horner’s rule?

d. Conclude by arguing that the given code fragment correctly evaluates a polynomial characterized by the coefficients a_0, a_1, . . . , a_n.

Answer:

A- This code fragment iterates once over the n + 1 coefficients and is therefore Θ(n).

B-

Assume the coefficients a_0, a_1, . . . , a_n are stored in A[1 . . A.length]:

y = A[1]
for i = 2 to A.length
    x' = 1
    for j = 1 to i - 1
        x' = x' * x
    y = A[i] * x' + y

This is a Θ(n^2) worst-case algorithm due to the for loop nested within the outer for loop: for each coefficient other than the first, it takes a full pass of the inner loop to build the corresponding power of x. This is much worse than Horner’s rule with its Θ(n) running time. While this isn’t terrible for low-order polynomials, for a polynomial with 100 terms the difference would be large because of the growth rate of a Θ(n^2) running-time function.
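
A small runnable comparison of the two approaches (my own sketch; the coefficient list is ordered a_0, a_1, . . . , a_n as in the problem, and the test polynomial is arbitrary):

def horner(coeffs, x):
    # Evaluate a_0 + a_1*x + ... + a_n*x^n by Horner's rule, Θ(n).
    y = 0
    for a in reversed(coeffs):   # i = n downto 0
        y = a + x * y
    return y

def naive_eval(coeffs, x):
    # Recompute each power of x from scratch, Θ(n^2).
    y = coeffs[0]
    for i in range(1, len(coeffs)):
        xp = 1
        for _ in range(i):
            xp *= x
        y += coeffs[i] * xp
    return y

coeffs = [1, -2, 0, 3]           # 1 - 2x + 3x^3
assert horner(coeffs, 2) == naive_eval(coeffs, 2) == 21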

D- At the start of the algorithm the variable y is initialized to zero. During each iteration the loop folds the next coefficient into the running result using Horner’s rule, and it stops once i would reach -1, at which point there are no more coefficients to add and y holds the value of the polynomial.

"Is this question part of your assignment? We can help"

ORDER NOW