The first thing we need in order to solve this question is to determine the length of a microsecond relative to the other units of time. A quick search on the web gives the conversions we need:
• 1 day = 24 hours × 60 minutes × 60 seconds = 8.64 × 10^10 microseconds
• 1 year = 365.25 days ≈ 3.16 × 10^13 microseconds
Now, if each function takes f(n) microseconds to complete, we wish to find the largest n such that the function completes within the given limit of time. We can simply equate the maximum amount of time we are given to our function and solve for n. For example, let's figure out how large n can be such that the function f(n) = 2^n completes within one second, i.e. f(n) = 2^n = 1.0 × 10^6:
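Taking base-2 logarithms of both sides gives

    2^n = 1.0 × 10^6  ⟹  n = lg(10^6) = 6 lg(10) ≈ 19.93,

so the largest integer n for which f(n) completes within one second is n = 19 (since 2^20 = 1048576 > 10^6).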
For other functions such as f(n) = n lg(n) we have to get more creative, since we don't have a closed form for the solution; that is, we cannot attain a solution through algebraic manipulation. Here is one way to approach the problem f(n) = n lg(n) = 1.0 × 10^6:
f(n) = n lg(n) = 1.0 × 10^6  ⟺  n log(n) = 1.0 × 10^6 × log(2) ≈ 301029,

where log denotes the base-10 logarithm (we multiplied both sides by log(2), since lg(n) = log(n)/log(2)).
Why do we do this? Because we are all familiar with the base-10 logarithm, which when rounded up counts the number of digits in a number. As an example, the above number has 6 digits, as we can see when we round up the value of its base-10 logarithm (log(301029) ≈ 5.48). Thus we have transformed our problem into finding the number n such that n log(n) = 301029. But we know that this number has 6 digits, which will be our approximation for log(n). Hence we arrive at the approximation 6n = 301029, which gives n = 50171.5. Is this our answer? Let's check:
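    50171.5 × log(50171.5) ≈ 50171.5 × 4.70 ≈ 2.36 × 10^5 ≠ 301029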
No, but we are reasonably close. Here is where we turn to the computer for help. As suggested by our professor, let's write a program to figure out this magic number. Here is a C++ program which starts with our guess and computes the correct (integer) value of 62746:
#include <cmath>
#include <iostream>

int main() {
    // Start from our analytic guess and advance while n lg(n) still fits
    // within the budget of 10^6 microseconds (one second).
    double n = 50171.0;
    while ((n + 1) * std::log2(n + 1) <= 1.0e6)
        n++;
    std::cout << n << std::endl;  // prints 62746
    return 0;
}
The other functions can be solved similarly. Here is the complete table:
Here we only consider the order of growth (i.e. ignoring the constant coefficients). The n^3 term dominates this function in terms of order, thus we can express the given function as Θ(n^3).
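As a made-up illustration (the exercise's actual polynomial is not reproduced here), if the function were f(n) = 3n^3 + 2n^2, then for all n ≥ 1 we would have 3n^3 ≤ f(n) ≤ 5n^3, so the constants c1 = 3 and c2 = 5 witness f(n) = Θ(n^3).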
This really depends on the context of the algorithm we are to modify. If for example we need to modify a search algorithm, we can check as a special case whether the item we are looking for is at the beginning of the array we are searching, and simply return in constant time. If our algorithm needs to sort an array of integers, we can check as a special case whether the array is already sorted, in which case our best-case running time is linear, as sketched below. Finally, if our algorithm does some numerical computation, we can pre-compute some values and return immediately via table lookup when those values are requested.
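As a minimal sketch of the sorting example (the wrapper below is our own illustration, not part of the text), a single Θ(n) scan detects an already sorted input before falling back to a general-purpose sort:

#include <algorithm>
#include <vector>

void sortWithLinearBestCase(std::vector<int>& a) {
    // Best case: the input is already sorted, so the linear scan returns early.
    if (std::is_sorted(a.begin(), a.end()))
        return;
    std::sort(a.begin(), a.end());  // otherwise, the usual O(n lg n) sort
}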
Let's be precise and define what max(f(n), g(n)) means:

max(f(n), g(n)) = f(n) if f(n) ≥ g(n), and g(n) otherwise.
In order to show max(f(n), g(n)) = Θ(f(n) + g(n)) it suffices to find constants c1, c2 > 0 and n0 such that for all n ≥ n0
c1(f(n) + g(n)) ≤ max(f(n), g(n)) ≤ c2(f(n) + g(n))
By our definition of the function max we can see that max(f(n), g(n)) ≤ f(n) + g(n) for all n, thus taking c2 = 1 will work. Now we also know from the definition that

max(f(n), g(n)) ≥ (f(n) + g(n))/2,

since the larger of two non-negative values is at least their average.
Hence taking c1 = 0.5 gives us the desired result.
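As a quick sanity check, take f(n) = n and g(n) = n^2: then max(f(n), g(n)) = n^2 for n ≥ 1, and indeed 0.5(n^2 + n) ≤ n^2 ≤ 1 · (n^2 + n) for all n ≥ 1.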
Using these facts we wish to find constants c1, c2, n0 > 0 such that

c1 n^b ≤ (n + a)^b ≤ c2 n^b for all n ≥ n0.

But we already know from the above two inequalities that

n/2 ≤ n + a ≤ 2n whenever n ≥ 2|a|.

Raising each term to the power b preserves the inequalities, since every term is non-negative and b is assumed to be strictly positive, hence we have

(1/2)^b n^b ≤ (n + a)^b ≤ 2^b n^b.

Thus we can take c1 = (1/2)^b, c2 = 2^b, and n0 = 2|a| to satisfy the definition of Θ.
We must be careful with the definition of O(n^2), which is the set of functions f(n) for which there exist constants c, n0 > 0 such that 0 ≤ f(n) ≤ cn^2 for all n ≥ n0. This set of functions includes the function g(n) = 0 for all n, i.e. the constant zero function. Since the running time of an algorithm has to be non-negative, saying that "The running time of algorithm A is at least O(n^2)" tells us nothing about the running time of this algorithm.
(a) True, 2^(n+1) = O(2^n). To show this we must find constants c, n0 > 0 such that 0 ≤ 2^(n+1) ≤ c·2^n for all n ≥ n0. Note that 2^(n+1) = 2 · 2^n, thus we can satisfy the definition of O by taking c = 2, n0 = 1.
(b) False, 2^(2n) ≠ O(2^n). Assume for a contradiction that 2^(2n) = O(2^n), or in other words, that there exist c, n0 such that 0 ≤ 2^(2n) ≤ c·2^n for all n ≥ n0. But then 2^(2n) = (2^n)^2 = 2^n · 2^n ≤ c·2^n ⇒ 2^n ≤ c for all n ≥ n0. But we know that this is not true for any n > lg(c), leading to a contradiction.
In order to prove this theorem let us recall the definition of these three constructs:
f(n) = Θ(g(n)) ⇔ ∃c1, c2, n0 > 0 such that 0 ≤ c1 g(n) ≤ f(n) ≤ c2 g(n) for all n ≥ n0
f(n) = O(g(n)) ⇔ ∃c, n0 > 0 such that 0 ≤ f(n) ≤ c g(n) for all n ≥ n0
f(n) = Ω(g(n)) ⇔ ∃c, n0 > 0 such that 0 ≤ c g(n) ≤ f(n) for all n ≥ n0
Now from the above definitions we can see that (ignoring the ∃ and ∀ quantifiers for simplicity):
f(n) = Θ(g(n)) ⇔ c1 g(n) ≤ f(n) ≤ c2 g(n)
⇔ c1 g(n) ≤ f(n) and f(n) ≤ c2 g(n)
⇔ f(n) = Ω(g(n)) and f(n) = O(g(n)),
proving the theorem due to the bi-directional equivalence (⇔).
This is a rather trivial problem, as the statement follows directly from Theorem 3.1, which we proved in the previous question.
Once again, let us recall the definition of the two constructs:
f(n) = o(g(n)) ⇔ ∀c1 > 0, ∃n0 > 0 such that 0 ≤ f(n) < c1 g(n) for all n ≥ n0
f(n) = ω(g(n)) ⇔ ∀c2 > 0, ∃n0 > 0 such that 0 ≤ c2 g(n) < f(n) for all n ≥ n0
Now let’s construct the set o(g(n)) ∩ ω(g(n)):
o(g(n)) ∩ ω(g(n)) = { f(n) | ∀c1, c2 > 0, ∃n0 > 0 such that c2 g(n) < f(n) < c1 g(n) for all n ≥ n0 }
Since this must hold for any c1 and c2, we can choose to set them equal to a common value c. But then the membership condition requires c·g(n) < f(n) < c·g(n), i.e. c·g(n) < c·g(n), which is never true; hence this set is clearly empty.
(a) False. Take f(n) = 1, g(n) = n.
(b) False. Take f(n) = 1, g(n) = n.
(c) True. Suppose f(n) = O(g(n)), with the assumption that lg(g(n)) ≥ 1 and f(n) ≥ 1 for sufficiently large n. Then by the definition of O we have that for some c, n0 > 0, f(n) ≤ c·g(n) for all n ≥ n0. Using the fact that the logarithm is a monotone increasing function and taking the base-2 logarithm of both sides, we get that
lg(f(n)) ≤ lg(c·g(n)) = lg(c) + lg(g(n)) ≤ lg(c)·lg(g(n)) + lg(g(n)) = (lg(c) + 1)·lg(g(n))
since lg(g(n)) ≥ 1 and we may assume c ≥ 1, so that lg(c) ≥ 0. Thus taking c′ = lg(c) + 1 gives us that lg(f(n)) = O(lg(g(n))).
(d) False. Take f(n) = 2n, g(n) = n. Then f(n) = O(g(n)), but 2^(f(n)) = 2^(2n) ≠ O(2^n) = O(2^(g(n))); the counterexample follows by Exercise 3.1-4 (b) above.
By the definition of f(n) = O(g(n)), there exist c, n0 > 0 such that f(n) ≤ c·g(n) for all n ≥ n0. Taking c′ = 1/c > 0 we get that c′·f(n) ≤ g(n) for all n ≥ n0, which by definition means that g(n) = Ω(f(n)).
By the definition of g(n) = o(f(n)), for any c > 0 there exists n0 > 0 such that g(n) < c·f(n) for all n ≥ n0. Relaxing the strict inequality and adding f(n) to both sides yields
f(n) + g(n) ≤ c·f(n) + f(n) = (c + 1)·f(n)
Now, by taking a sufficiently large n0 we can assume that g(n) ≥ 0, hence

f(n) ≤ f(n) + g(n).

Finally, combining the above two inequalities gives us
f(n) ≤ f(n) + g(n) ≤ c·f(n) + f(n) = (c + 1)·f(n)
which by definition means f(n) + g(n) = Θ(f(n)), with c1 = 1, c2 = c + 1, and n0 sufficiently large. Since g(n) was arbitrary, the result follows.
Let T(n) denote the time it takes to sort an array of n elements. Trivially, an array of length 1 is already sorted. The time needed to sort an array of length n − 1 is exactly T(n − 1). In order to insert an element into a sorted array of length n we must make at most n comparisons (in the case where we try to insert a new maximal element). Thus the recurrence is:

T(1) = Θ(1), and T(n) = T(n − 1) + Θ(n) for n > 1.
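Here is a hedged C++ sketch of the recursive variant described above (the function name and interface are ours): sort the first n − 1 elements recursively, then insert A[n − 1] into the sorted prefix.

#include <vector>

void recursiveInsertionSort(std::vector<int>& A, std::size_t n) {
    if (n <= 1) return;                // an array of length 1 is already sorted
    recursiveInsertionSort(A, n - 1);  // costs T(n - 1)
    int key = A[n - 1];
    std::size_t i = n - 1;
    while (i > 0 && A[i - 1] > key) {  // at most n - 1 comparisons and shifts
        A[i] = A[i - 1];
        --i;
    }
    A[i] = key;
}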
While we can improve the search for where to insert the element into the subarray A[1 . . j − 1], this will not improve the asymptotic efficiency of the algorithm, since lines 5-7 not only look for the correct place to insert the given element, but also shift the elements that are larger than A[j] one index to the right. This step can take as much as Θ(j) time, since A[j] could be smaller than A[1]. Since we cannot avoid this shifting of elements by using binary search, binary search will not help us improve the running time.
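To make the point concrete, here is a sketch (our own illustration) in which std::upper_bound finds the insertion position in Θ(lg j) comparisons, yet std::rotate must still shift up to j elements, so each iteration remains Θ(j) in the worst case:

#include <algorithm>
#include <vector>

void binaryInsertionSort(std::vector<int>& A) {
    if (A.size() < 2) return;
    for (auto j = A.begin() + 1; j != A.end(); ++j) {
        auto pos = std::upper_bound(A.begin(), j, *j);  // Θ(lg j) comparisons
        std::rotate(pos, j, j + 1);                     // but still Θ(j) shifts
    }
}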
Here is an intuitive algorithm for solving this problem:
• Sort the elements in S in increasing order. (Θ(n lg(n)))
• For each element y ∈ S, perform a binary search for x − y. (Θ(n lg(n)))
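A hedged C++ sketch of this procedure (the function name and signature are ours), using std::lower_bound for the binary search and taking care that the two matched elements are distinct, for a total of Θ(n lg(n)):

#include <algorithm>
#include <vector>

bool hasPairWithSum(std::vector<int> S, int x) {
    std::sort(S.begin(), S.end());                           // Θ(n lg n)
    for (std::size_t i = 0; i < S.size(); ++i) {
        int y = x - S[i];
        auto it = std::lower_bound(S.begin(), S.end(), y);   // Θ(lg n)
        if (it == S.end() || *it != y) continue;
        // Make sure we found an element other than S[i] itself.
        if (static_cast<std::size_t>(it - S.begin()) != i) return true;
        if (it + 1 != S.end() && *(it + 1) == y) return true;  // duplicate of S[i]
    }
    return false;
}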
(a) Insertion sort runs in Θ(k^2) time on a k-element list in the worst case; thus sorting n/k lists of k elements each requires Θ((n/k) · k^2) = Θ(nk) time in the worst case.
(b) To merge the lists in Θ(n lg(n/k)) time we work on pairs of lists. Merging one level of pairs takes Θ(n) time in total. Since we start with n/k lists and halve the number of lists with each level of merging, there are lg(n/k) levels, hence the total running time for merging is Θ(n lg(n/k)).
(c) First note that any value of k asymptotically larger than lg(n) would not work, for the nk term in the expression would then be asymptotically larger than n lg(n). Since lg(n) is an upper bound for k, we just have to show that k = lg(n) produces the same running time as standard merge sort. This is quite straightforward, as

Θ(nk + n lg(n/k)) = Θ(nk + n lg(n) − n lg(k)) = Θ(n lg(n) + n lg(n) − n lg(lg(n))) = Θ(n lg(n)).
(d) In practice, we should choose k to be the largest list length on which insertion sort runs faster than merge sort; this is best determined empirically, as sketched below.
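A hedged sketch of the hybrid scheme (the names and the cutoff value 32 are made-up placeholders; in practice k would be chosen empirically as described above):

#include <algorithm>
#include <vector>

const std::size_t kCutoff = 32;  // placeholder for the empirically chosen k

void hybridMergeSort(std::vector<int>& A, std::size_t lo, std::size_t hi) {
    if (hi - lo <= kCutoff) {
        // Insertion sort wins on small ranges despite its Θ(k^2) worst case.
        for (std::size_t j = lo + 1; j < hi; ++j) {
            int key = A[j];
            std::size_t i = j;
            while (i > lo && A[i - 1] > key) { A[i] = A[i - 1]; --i; }
            A[i] = key;
        }
        return;
    }
    std::size_t mid = lo + (hi - lo) / 2;
    hybridMergeSort(A, lo, mid);
    hybridMergeSort(A, mid, hi);
    std::inplace_merge(A.begin() + lo, A.begin() + mid, A.begin() + hi);
}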
We can solve this by a recursion tree approach (not drawn):
T(n) = T(n − 1) + n
     = T(n − 2) + (n − 1) + n
     = T(n − 3) + (n − 2) + (n − 1) + n
     ⋮
     = 1 + 2 + 3 + ⋯ + (n − 2) + (n − 1) + n
     = n(n + 1)/2
     = Θ(n^2)
We will use the substitution method to solve this problem. We first guess that the solution is Ω(n lg(n)). We must show T(n) ≥ cn lg(n) for some constant c > 0. We assume that this bound holds for ⌊n/2⌋; substituting into our recurrence gives us:

T(n) ≥ cn lg(n) + n − cn − 2c lg(n) − 2c
     ≥ cn lg(n) + n − 5cn
     ≥ cn lg(n)
The last line holds true if and only if c ≤ 1/5. We must also check the base cases, i.e. T(2) ≥ 2c lg(2) and T(3) ≥ 3c lg(3), both of which are trivial assuming T(1) = 1.
By the master theorem (or by looking at page 74) we know the answer for the even case: integer division by 2 is exact, thus the recurrence is T(n) = 2T(n/2) + Θ(n) and T(n) = Θ(n lg(n)). Now all that is left is to prove the odd case, and we can do this trivially by making use of the fact that our recurrence function T is a monotone increasing function of n, hence

T(n − 1) ≤ T(n) ≤ T(n + 1).
Now if n is odd, then n − 1 and n + 1 are both even, so T(n) is sandwiched between two functions that are Θ(n lg(n)); thus T(n) = Θ(n lg(n)) as before.
(a) T(n) = Θ(√n) by case 1 of the Master Theorem.
(b) T(n) = Θ(√n lg(n)) by case 2 of the Master Theorem.
(c) T(n) = Θ(n) by case 3 of the Master Theorem.
(d) T(n) = Θ(n^2) by case 3 of the Master Theorem.
With an application of the master theorem, and using the fact that Strassen's algorithm runs in Θ(n^lg(7)), we have the inequality

log_4(a) < lg(7).

Solving this inequality, lg(a) < 2 lg(7) = lg(49), i.e. a < 49, so the largest admissible integer value is a = 48.
The result follows directly from the Master Theorem by case 2.
No, this cannot be solved with the Master Theorem. The reason is that the ratio f(n)/n^(log_b a) = lg(n), which is asymptotically smaller than any polynomial n^ε, hence the Master Theorem does not apply to this recurrence. In order to solve the recurrence we can use the recursion tree method (or the extended Master Theorem with k = 1) to yield Θ(n^2 lg^2(n)).
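Assuming the recurrence in question is T(n) = 4T(n/2) + n^2 lg(n) (so that n^(log_b a) = n^2, consistent with the ratio above), level i of the recursion tree contains 4^i subproblems of size n/2^i, contributing 4^i · (n/2^i)^2 · lg(n/2^i) = n^2 (lg(n) − i); hence

    T(n) = Σ_{i=0}^{lg(n)−1} n^2 (lg(n) − i) + Θ(n^2) = n^2 · Θ(lg^2(n)) = Θ(n^2 lg^2(n)).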
• (a) T(n) = T(n/2) + Θ(1), which has a running time of Θ(lg(n))
  (b) T(n) = T(n/2) + Θ(N), which has a running time of Θ(N lg(n))
  (c) T(n) = T(n/2) + Θ(n), which has a running time of Θ(n)
• (a) T(n) = 2T(n/2) + Θ(n), which has a running time of Θ(n lg(n))
  (b) T(n) = 2T(n/2) + Θ(n) + Θ(N), which has a running time of Θ((n + N) lg(n))
  (c) T(n) = 2T(n/2) + Θ(n), which has a running time of Θ(n lg(n))