Algorithm Runtime Efficiency

What is an algorithm's time complexity?

You might make algorithm A run two times faster. This is good, but you can achieve the same speedup with a faster computer too, and if the size of the problem becomes 10 times bigger, the execution time of the algorithm still increases at the same rate.

An improvement is much more noticeable when you improve the algorithm's big O complexity, which means that increasing the size of the problem no longer increases the execution time at the same rate.

But how do you measure big O? Consider an algorithm A1 with one loop:

for i = 1 to N do
    ...

Here, if N increases 10 times, the algorithm runs 10 times slower, so the complexity is O(N). But in algorithm A2 with two nested loops:

for i = 1 to N do
    for j = 1 to 2N do
        ...

an increase of N by a factor of 10 results in a computation 100 times slower, so the complexity is O(N^2).

We do not distinguish between O(2N^2) and O(N^2); constant factors do not matter. Logarithmic factors do count, however: for example, quicksort has an average complexity of O(N log N).

Sorting algorithms have a meaningful comparison table; it is worth having a look at it.
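Those two pseudocode loops translate directly into Java. Here is a small sketch (class and method names are my own, made up for illustration) that just counts loop-body executions, so the 10x-versus-100x growth is visible directly:

public class GrowthDemo {
    // A1: one loop over N iterations -> O(N)
    static long a1(int n) {
        long ops = 0;
        for (int i = 1; i <= n; i++) {
            ops++; // stand-in for the loop body
        }
        return ops;
    }

    // A2: two nested loops, N * 2N iterations -> O(N^2)
    static long a2(int n) {
        long ops = 0;
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= 2 * n; j++) {
                ops++;
            }
        }
        return ops;
    }

    public static void main(String[] args) {
        // Growing N by 10x multiplies a1's count by 10 and a2's by 100.
        System.out.println(a1(1000) + " " + a1(10000)); // 1000 10000
        System.out.println(a2(1000) + " " + a2(10000)); // 2000000 200000000
    }
}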

What is the best algorithm in terms of run-time complexity for checking if a given set is a subset of another given set?

If you add all of the members of one set to a HashMap, it takes n operations. If you then check each member of the other set against the HashMap, it takes another n operations: n + n = 2n -> O(n).

If every lookup for the second set finds a value in the HashMap, the second set is a subset of the first; if any lookup fails, it is not.

See: http://stackoverflow.com/a/9659474
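A minimal sketch of that idea in Java, using a HashSet (which is HashMap-backed) since only membership matters here; the names and sample data are just for illustration:

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class SubsetCheck {
    // Returns true if candidate is a subset of container, in O(n) expected time.
    static <T> boolean isSubset(Set<T> candidate, Set<T> container) {
        Set<T> lookup = new HashSet<>(container); // n insertions
        for (T item : candidate) {                // n lookups
            if (!lookup.contains(item)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Set<Integer> a = new HashSet<>(Arrays.asList(1, 2));
        Set<Integer> b = new HashSet<>(Arrays.asList(1, 2, 3));
        System.out.println(isSubset(a, b)); // true
        System.out.println(isSubset(b, a)); // false
    }
}

In practice, java.util.Set.containsAll does the same check in a single call.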

An algorithm processes an input of size n. If n = 4,096, the run time is 512 milliseconds; if n = 16,384, the run time is 8,192 ms. What are the efficiency and big O notation?

Big-O notation does not concern itself with milliseconds. Algorithmic complexity is a mathematical idealization divorced from any particular machine.

Trying to determine complexity from two samples is just silly. Let's estimate the complexity of Rado's sigma function. [math]\Sigma(2)=4, \Sigma(3)=6[/math]. What's the complexity? Before you answer, you might want to know that [math]\Sigma(6)[/math] is still an open question, but is known to be greater than [math]3.5\times10^{18267}[/math] (Busy beaver - Wikipedia).

If this is a homework question (of course it's a homework question), just put down "not enough information."

Update: Kelly Kochanski asks in the comments about how many samples I recommend for algorithm characterization. I spend several paragraphs trying to dig my way out of that bit of sarcasm.
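For what it's worth, if you grant, purely for the homework's sake, the unjustified assumption that the run time follows a power law [math]t = c\,n^a[/math], the two samples do pin down an exponent:

[math]\frac{t_2}{t_1} = \left(\frac{n_2}{n_1}\right)^a \implies \frac{8192}{512} = \left(\frac{16384}{4096}\right)^a \implies 16 = 4^a \implies a = 2[/math]

So the answer being fished for is presumably O(n^2), but, as argued above, nothing in two data points justifies the power-law assumption in the first place.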

Which sorting algorithm has the best asymptotic run-time complexity?

There are quite a lot of sorting algorithms, but a few can be considered the best in terms of run-time complexity.

The best of them all, in my opinion, would be counting sort, because it takes only O(n + k) time to sort the given array, where k is the range of the values. The problem with counting sort is that it is only a good choice when the range of the elements in the array is not too large.

Suppose we take an array with elements [2, 4, 3, 5, 12, 7]. The range is max value - min value, i.e. 12 - 2 = 10, which is not large, so we can consider counting sort here. But where the range is, say, 1,000 or more, counting sort is not so suitable, because the number of iterations needed to build the count array will be large, and hence the total time will be large. (Keep in mind that as the time complexity decreases, the space complexity increases, and vice versa; the space complexity of counting sort is O(k), which can be quite large.)

For the general case, the best might be merge sort, quicksort, or heapsort, as they take only O(n log n) time on average and are definitely better than primitive sorting algorithms like selection, insertion, and bubble sort.
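Here is a minimal counting sort sketch in Java for the example array above (the class and method names are just for illustration); note the count array has one slot per value in the range, which is exactly where the O(k) space goes:

import java.util.Arrays;

public class CountingSortDemo {
    // Sorts in O(n + k) time and O(k) extra space, where k = max - min.
    static void countingSort(int[] a) {
        if (a.length == 0) return;
        int min = Arrays.stream(a).min().getAsInt();
        int max = Arrays.stream(a).max().getAsInt();
        int[] count = new int[max - min + 1];    // one slot per possible value
        for (int x : a) count[x - min]++;        // tally occurrences
        int idx = 0;
        for (int v = 0; v < count.length; v++) { // write values back in order
            while (count[v]-- > 0) a[idx++] = v + min;
        }
    }

    public static void main(String[] args) {
        int[] a = {2, 4, 3, 5, 12, 7};
        countingSort(a);
        System.out.println(Arrays.toString(a)); // [2, 3, 4, 5, 7, 12]
    }
}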

What is the time complexity of Ford-Fulkerson's algorithm?

Note that the run time [math]O(Ef)[/math] applies only when the edge capacities are integers.

First, let me define an augmenting path: an augmenting path is a path from the start vertex ([math]s[/math]) to the end vertex ([math]t[/math]) that can receive additional flow without going over capacity.

Now, the basic idea of the Ford-Fulkerson algorithm is to find any augmenting path in the graph and to add the same amount of flow to each edge in that path, namely the path's bottleneck residual capacity. It repeats this process until it cannot find any more augmenting paths. At the end, flow is maximized.

You may be tempted to think that the run time is [math]O(Vf)[/math], because each augmenting path has [math]O(V)[/math] edges and each of those edges can have [math]O(f)[/math] increases in its flow in the worst case.

However, finding an augmenting path requires a depth-first search of the graph, which takes [math]O(E)[/math] time, and we have to find a new augmenting path each time the algorithm does another iteration. Since each augmentation increases the total flow by at least 1 (with integer capacities), we can do at most [math]f[/math] iterations, and each iteration takes [math]O(E + V)[/math] time, so the worst-case run time is [math]O((E+V)f)[/math], which is [math]O(Ef)[/math].
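If it helps, here is a minimal Java sketch of the algorithm as described above: integer capacities in an adjacency-matrix residual graph, DFS to find an augmenting path, and the bottleneck amount pushed along it. The example graph is made up for illustration.

public class MaxFlow {
    // Finds an augmenting path u->t by DFS on the residual graph and
    // returns the bottleneck capacity along it (0 if no path exists).
    static int dfs(int[][] cap, boolean[] seen, int u, int t, int bottleneck) {
        if (u == t) return bottleneck;
        seen[u] = true;
        for (int v = 0; v < cap.length; v++) {
            if (!seen[v] && cap[u][v] > 0) {
                int pushed = dfs(cap, seen, v, t, Math.min(bottleneck, cap[u][v]));
                if (pushed > 0) {
                    cap[u][v] -= pushed; // use up forward residual capacity
                    cap[v][u] += pushed; // allow the flow to be undone later
                    return pushed;
                }
            }
        }
        return 0;
    }

    // Each DFS costs O(E); with integer capacities there are at most f
    // augmentations, giving the O(Ef) bound discussed above.
    static int maxFlow(int[][] cap, int s, int t) {
        int flow = 0, pushed;
        do {
            boolean[] seen = new boolean[cap.length];
            pushed = dfs(cap, seen, s, t, Integer.MAX_VALUE);
            flow += pushed;
        } while (pushed > 0);
        return flow;
    }

    public static void main(String[] args) {
        // Tiny example: vertex 0 is the source, vertex 3 the sink.
        int[][] cap = {
            {0, 3, 2, 0},
            {0, 0, 1, 2},
            {0, 0, 0, 2},
            {0, 0, 0, 0},
        };
        System.out.println(maxFlow(cap, 0, 3)); // 4
    }
}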

What is the complexity of Dijkstra's algorithm?

In the worst case the graph will be a complete graph, i.e. total edges = v(v-1)/2, where v is the number of vertices. That is, e >> v and e ~ v^2.

The time complexity of Dijkstra's algorithm is:

1. With adjacency list and priority queue: O((v + e) log v). In the worst case e >> v, so this is O(e log v).

2. With matrix and priority queue: O(v^2 + e log v). In the worst case e ~ v^2, so O(v^2 + e log v) ~ O(e + e log v) ~ O(e log v).

3. With Fibonacci heap and adjacency list: O(e + v log v).
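A minimal sketch of case 1 in Java, using java.util.PriorityQueue with the common lazy-deletion variant (re-inserting entries instead of decrease-key, which PriorityQueue lacks); the small example graph is made up for illustration:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class DijkstraDemo {
    record Edge(int to, int weight) {}

    // O((v + e) log v): every edge relaxation may push one queue entry.
    static int[] dijkstra(List<List<Edge>> adj, int src) {
        int v = adj.size();
        int[] dist = new int[v];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[src] = 0;
        // Queue entries are {distance, vertex}; stale entries are skipped.
        PriorityQueue<int[]> pq = new PriorityQueue<>(Comparator.comparingInt((int[] a) -> a[0]));
        pq.add(new int[]{0, src});
        while (!pq.isEmpty()) {
            int[] top = pq.poll();
            int d = top[0], u = top[1];
            if (d > dist[u]) continue; // stale entry: a shorter path was found
            for (Edge e : adj.get(u)) {
                if (dist[u] + e.weight() < dist[e.to()]) {
                    dist[e.to()] = dist[u] + e.weight();
                    pq.add(new int[]{dist[e.to()], e.to()});
                }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        List<List<Edge>> adj = new ArrayList<>();
        for (int i = 0; i < 4; i++) adj.add(new ArrayList<>());
        adj.get(0).add(new Edge(1, 1));
        adj.get(0).add(new Edge(2, 4));
        adj.get(1).add(new Edge(2, 2));
        adj.get(2).add(new Edge(3, 1));
        System.out.println(Arrays.toString(dijkstra(adj, 0))); // [0, 1, 3, 4]
    }
}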

I wanted to run a program in MATLAB, but it is very time-consuming. My friend told me to run it in C++ instead. Why?

The answer to your question is... it depends.

C++ is very fast, especially if you enable optimization when you build. But if you don't write your program to be speed-optimized, its runtime will not be comparable to MATLAB's.

As the last person said, it also depends on what you are doing. If you are writing simulation or computer-vision software, make sure to use libraries such as the STL and Boost uBLAS. The components of these libraries are optimized and will run as fast as, or faster than, similar operations native to MATLAB.

Whether or not your program runs comparably to MATLAB will depend on the implementation of your algorithm and on memory management. Make sure you create things such as free lists to cut down on dynamic memory allocation time.

Hope this helps.

What are some easy ways to understand and calculate the time complexity of algorithms?

Big O is the most commonly used asymptotic notation for comparing algorithms, but two other notations are often used alongside it: Big Omega (Ω) and Big Theta (Θ). Big O is the most useful because it represents worst-case behavior, so it guarantees that the program will terminate within a certain time period; it may stop earlier, but never later.

So, how do we calculate the time complexity of an algorithm using Big O notation? We can start with a simple example and move forward to more complex ones.

A Simple Example

int sum (int n) {
1   int totalSum = 0;
2   for (int i = 0; i < n; i++)
3       totalSum += i;
4   return totalSum;
}
The analysis of this piece of code is simple. Lines 1 and 4 count for one unit of time each.

Line 2 counts for one unit for the initialization, N + 1 for the tests, and N for the increments. The test is executed N + 1 times, where the last time is when the condition is false, and the increment is executed N times. So, the total cost is 2N + 2.

Line 3 counts for 2 units (one addition and one assignment) and is executed N times, for a total of 2N units.

Therefore, we have 2 + 2N + 2 + 2N = 4N + 4.

Fortunately, since we are going to use Big O notation, there are a lot of shortcuts we can take to simplify the analysis: we throw away any constants and also low-order terms. The terms are essentially the things separated by plus/minus signs. We drop the +4, because it is obviously lower than 4N (that is the low-order term), and we also drop any constant factors. So we end up with a complexity of O(N) instead of O(4N + 4). A nested-loop example is sketched below.

I've written a complete tutorial on Big O notation, The Big Scary O Notation. There you can find how to measure the complexity of an algorithm (loops, nested loops, statements, if-else, etc.) and the common time complexities.

Hope it helps.
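To see the same counting approach on nested loops, here is a made-up example in the same style as the code above (not taken from the tutorial):

int pairSum (int n) {
    int total = 0;
    for (int i = 0; i < n; i++)       // outer loop runs N times
        for (int j = 0; j < n; j++)   // inner loop runs N times per outer pass
            total += i + j;           // executed N * N times
    return total;
}

Counting line by line as above yields cN^2 plus lower-order terms for some constant c; dropping the constants and low-order terms leaves O(N^2).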
