
How can we calculate the actual error in the bisection method?

I presume you want to find [math]x* \in [a,b][/math], the solution of [math]f(x*)=0[/math], and that you know [math]f(a)*f(b)<0[/math], that is, [math]f(a)>0[/math] and [math]f(b)<0[/math] or vice versa.

In the bisection method we proceed by halving the interval [math][a,b][/math]: we compute the value [math]f(c)[/math] at the midpoint [math]c=(a+b)/2[/math] and use [math]c[/math] as a new interval bound, replacing either [math]a[/math] or [math]b[/math]. If [math]f(c)[/math] has the same sign as [math]f(a)[/math], then [math]c[/math] replaces [math]a[/math]; otherwise [math]c[/math] replaces [math]b[/math]. This way the signs of [math]f()[/math] at the interval limits always remain opposite.

So there are two errors to be considered:

- the error in [math]x*[/math], already mentioned in other answers, which can be bounded by [math](1/2)(b-a)/2^N[/math], where [math]N[/math] is the number of iterations of the bisection method;

- the error in the function value, [math]f(x_n)\neq 0[/math], where [math]x_n[/math] is the value under scrutiny at iteration [math]n[/math]. This one you cannot control unless you know how [math]f(x)[/math] behaves on the interval [math][a,b][/math].

In theory, applying the bisection method requires [math]f(x)[/math] to be continuous and bounded on that interval. But in a practical application you often do not know what the function is!

So, for the sake of showing a twisted example, suppose that [math]f(x)=1/x[/math] and the starting interval is [math][-2,5][/math]. The function is neither bounded nor continuous there, so consider it a "surprise! function", as happens in real life. When we start halving the initial interval, we get midpoints near zero, on either side of it, and the value of [math]f(x_n)[/math] runs off to [math]\pm\infty[/math]! And in fact there is no solution of [math]f(x*)=0[/math] in this contrived example. So, although one keeps reducing the uncertainty (the error in [math]x[/math]), the actual error in the function value grows towards infinity.

HTH
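The two kinds of error are easy to see numerically. A minimal sketch in Python (the helper name and iteration counts are my own choices), contrasting a well-behaved function with the 1/x example above:

```python
def bisect_midpoints(f, a, b, n_iter):
    """Run n_iter bisection steps on [a, b] and return the midpoints visited."""
    mids = []
    for _ in range(n_iter):
        c = (a + b) / 2.0
        mids.append(c)
        if f(a) * f(c) > 0:    # f(c) has the same sign as f(a): c replaces a
            a = c
        else:                  # otherwise c replaces b
            b = c
    return mids

# Well-behaved case: f(x) = x^2 - 3 on [1, 2] homes in on sqrt(3),
# and the error in x shrinks by half at every step.
mids = bisect_midpoints(lambda x: x * x - 3, 1.0, 2.0, 40)
print(mids[-1])                 # close to 1.7320508...

# "Surprise! function": f(x) = 1/x on [-2, 5].  The sign change at the
# endpoints comes from a pole, not a root, so the interval still halves
# but |f(midpoint)| blows up instead of shrinking.
mids = bisect_midpoints(lambda x: 1.0 / x, -2.0, 5.0, 40)
print(abs(1.0 / mids[-1]))      # enormous: the function error diverges
```

The interval uncertainty shrinks in both runs; only in the first does the function value shrink with it.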

Newton’s method and bisection, which one is more effective theoretically and practically? Why?

The bisection method does require a little knowledge of the function. In particular, it requires an interval [a,b] such that f(a) and f(b) have opposite signs, because its correctness relies on the intermediate value theorem. The advantage of bisection is that, given such an interval, it is guaranteed to converge, with a predictable error bound, and it only requires the function to be continuous on that interval.

Newton's method also requires a good initial guess, but it demands much more of the function in a neighborhood around that guess. In particular, you must know how to compute the derivative. One may approximate it with finite differences, but that introduces additional error. That said, Newton's method theoretically converges faster than bisection in a good neighborhood. It is worth pointing out that Newton's method is a kind of fixed-point iteration, and there are general methods for determining whether a fixed-point iteration is converging, so in practice it is feasible to know when to start over and pick a different initial guess.

Thus, given a good neighborhood, the bisection method theoretically works for a broader class of functions than Newton's method. Practically, if one is working with analytic functions, or functions where automatic differentiation can be used, Newton's method is faster.
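The trade-off is easy to quantify on a toy problem. A sketch in Python (the function f(x) = x² - 3, tolerances, and helper names are my own), counting iterations of each method:

```python
def bisection(f, a, b, tol=1e-12):
    """Bracketing: needs only continuity and a sign change on [a, b]."""
    steps = 0
    while b - a > tol:
        c = (a + b) / 2.0
        if f(a) * f(c) <= 0:   # root is in the left half
            b = c
        else:                  # root is in the right half
            a = c
        steps += 1
    return (a + b) / 2.0, steps

def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Needs the derivative, but converges quadratically near a root."""
    x = x0
    for steps in range(1, max_iter + 1):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:
            return x_new, steps
        x = x_new
    return x, max_iter

f  = lambda x: x * x - 3        # root at sqrt(3) ~ 1.7320508
df = lambda x: 2 * x

print(bisection(f, 1.0, 2.0))   # about 40 halvings to squeeze below 1e-12
print(newton(f, df, 2.0))       # only a handful of iterations
```

Bisection needs roughly log2((b-a)/tol) steps regardless of the function; Newton's step count barely grows as the tolerance tightens, provided the guess is good.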

How do you find a rational approximation of [math]\sqrt 3[/math] without a calculator?

Last night I used ancient Pythagorean musical methodology involving the harmonic (or subcontrary) and arithmetic means (HM and AM) as a pathway to the geometric mean (GM) in order to derive √2 here: Demetrios E. Lekkas's answer to Why can the square root of 2 only be approximated even though the length of the diagonal of a unit square can be measured?

This method will do for any square root. Here goes for √3. First of all, to locate the geometric mean, i.e. the radical in question, between 1 and 3, one should compute their two other means: the harmonic or subcontrary one (HM) and the arithmetic one (AM). These are, respectively, 3/2 and 2. As a first musical remark, note that, musically, 3/1 is the interval of a perfect twelfth, and the two means, 3/2 and 2, are, respectively, the perfect fifth and the octave; √3 sits in between those two, corresponding to something like 19 tempered quarter-tones or 9.5 semitones, precisely halfway between a major sixth and a minor seventh. To continue, we observe that the fifth and the octave leave a gap of a perfect fourth, equaling ratio 4/3; time to take the harmonic and arithmetic means of that and multiply. Cutting a long story short:

step 1: 3/2 < √3 < 2/1 (or 1.5000000… < √3 < 2.0000000…); gap (2/1 ÷ 3/2): 4/3; HM: 8/7, AM: 7/6;

step 2: 3/2 × 8/7 < √3 < 3/2 × 7/6 ↔ 12/7 < √3 < 7/4 (or 1.7142857… < √3 < 1.7500000…); new gap: 49/48; new HM: 98/97, new AM: 97/96;

step 3: 12/7 × 98/97 < √3 < 12/7 × 97/96 ↔ 168/97 < √3 < 97/56 (or 1.7319588… < √3 < 1.7321429…); new gap: 9409/9408; new HM: 18818/18817, new AM: 18817/18816;

step 4: 168/97 × 18818/18817 < √3 < 168/97 × 18817/18816 ↔ 32592/18817 < √3 < 18817/10864 (or 1.7320508… < √3 < 1.7320508…), and so on.

A really fast convergence. As this methodology was available to the Pythagoreans, Archimedes must have known it.
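This squeeze mechanizes nicely with exact rational arithmetic. A sketch in Python (the function name and the use of the fractions module are my own), reproducing the bounds of steps 1 through 4 above:

```python
from fractions import Fraction

def sqrt_bounds(n, steps):
    """Squeeze sqrt(n) between rationals: start from the harmonic and
    arithmetic means of 1 and n, then repeatedly take the HM and AM of
    the current gap ratio and multiply them back into the lower bound."""
    lo = Fraction(2 * n, n + 1)      # harmonic mean of 1 and n
    hi = Fraction(n + 1, 2)          # arithmetic mean of 1 and n
    for _ in range(steps):
        gap = hi / lo
        hm = 2 * gap / (1 + gap)     # harmonic mean of 1 and gap
        am = (1 + gap) / 2           # arithmetic mean of 1 and gap
        lo, hi = lo * hm, lo * am    # lo * hi stays equal to n
    return lo, hi

lo, hi = sqrt_bounds(3, 3)
print(lo, hi)                        # 32592/18817 18817/10864, as in step 4
print(float(lo), float(hi))          # both agree with sqrt(3) to 8 digits
```

Since HM × AM = gap, the product lo × hi = n is invariant, which is why the bounds trap the geometric mean at every step.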

What is the method to calculate a square root by hand?

For the square root there is a very old method, called the Babylonian method, which turns out to be much faster than the rule that is so well known, and so cumbersome, that it is taught in school. In fact I do not remember what that rule is like, because the Babylonian one is much simpler.

It was used to lay out and delimit square surfaces of known area. Today it is used to compute square roots in a simple way. Let's see it with an example, step by step.

Suppose that we want to calculate the square root of 3, so R = 3. We will use two auxiliary values that we will call B and H. To start, we set B = 3 and H = 1. It must always hold that B * H = R, that is, 3 in our case; indeed B * H = 3.

Then we calculate a new value for B: the new B is the average of the previous values of B and H. Therefore B is replaced by B → (B + H) / 2 = (3 + 1) / 2 = 2. B is now 2.

The new value of H is the ratio between R and the new B: H → R / B = 3 / 2 = 1.5. So we have B = 2 and H = 1.5.

Next step, we do the same thing again: B → (2 + 1.5) / 2 = 1.75 and, following the rule, H → 3 / 1.75 = 1.714285. We have B = 1.75 and H = 1.714285.

We do the same thing again: B → (1.75 + 1.714285) / 2 = 1.732142 and H → 3 / 1.732142 = 1.731959. So now B = 1.732142 and H = 1.731959.

This is known in mathematics as an "iterative formula". We stop calculating when we obtain the desired precision, and we take as the value the common part between B and H. At this point the value of the root of 3 would be 1.73. Let's go one step further: B → (1.732142 + 1.731959) / 2 = 1.732050 and H → 3 / 1.732050 = 1.732051.

We can therefore use 1.73205 as the value of the root of 3. In fact (1.73205)^2 = 2.9999972, so we have achieved good precision.

—————————————————————————————————

Like everything in this life, this method has its problems, and the most important one is that it can converge slowly, so it can take you a long time to get an acceptable result. The trick is to start from an approximate root for the first B. Suppose we want to find the root of 237, an ugly number if ever there was one. If you start with B = 237 and H = 1, you will see that it takes a while. Instead, start with an approximate root, for example B = 15, since 15^2 = 225. We then calculate H, which would now be 237 / 15 = 15.8, and start the iteration from there. It converges faster.

I hope you liked it. Regards
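A minimal Python sketch of the iteration described above (the function name, tolerance, and optional starting guess are my own choices):

```python
def babylonian_sqrt(r, b=None, tol=1e-10):
    """Babylonian (Heron's) method: repeatedly replace B by the average
    of B and H = R / B.  B * H = R holds at every step, and B and H
    squeeze the square root between them."""
    b = float(r) if b is None else float(b)
    h = r / b
    while abs(b - h) > tol:
        b = (b + h) / 2.0    # new B: average of B and H
        h = r / b            # new H: keeps B * H = R
    return (b + h) / 2.0

print(babylonian_sqrt(3))        # ≈ 1.7320508
print(babylonian_sqrt(237, 15))  # good starting guess: converges in a few steps
```

We stop when B and H agree to the desired precision, exactly as in the worked example.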

When does the Newton-Raphson method fail?

The Newton-Raphson method is an iterative root-finding algorithm for computing the roots of functions numerically. But it has four well-known pitfalls, or failure cases:

Case 1: the initial guess x0 lands on an inflection point of the function, f''(x0) = 0, which can make the iterates diverge from the root.

Case 2: the initial guess x0 is close to a root other than the one sought, so the iteration jumps to a different root.

Case 3: the initial guess x0, or some later iterate, hits a point of zero slope, f'(x) = 0, so the tangent line shoots off horizontally and the next iterate is undefined or far away.

Case 4: the initial guess x0 lies between a local maximum and a local minimum, where the iterates can oscillate without ever converging.
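Two of these failure modes are easy to reproduce. A sketch in Python (the example functions x³ - 2x + 2 and arctan(x), and the helper name, are my own illustrative choices, not from the answer above):

```python
import math

def newton_iterates(f, df, x0, n):
    """Return the first n Newton-Raphson iterates starting from x0."""
    xs = [x0]
    for _ in range(n):
        x = xs[-1]
        d = df(x)
        if d == 0:                 # zero slope: the next step is undefined
            break
        xs.append(x - f(x) / d)
    return xs

# Oscillation: f(x) = x^3 - 2x + 2 with x0 = 0 bounces between 0 and 1
# forever and never reaches the real root near x = -1.77.
f  = lambda x: x**3 - 2*x + 2
df = lambda x: 3*x**2 - 2
print(newton_iterates(f, df, 0.0, 6))   # oscillates: 0.0, 1.0, 0.0, 1.0, ...

# Shooting off: f(x) = arctan(x) with a large-ish x0 overshoots further
# on every step because the slope flattens out away from the root.
g  = math.atan
dg = lambda x: 1.0 / (1.0 + x * x)
print(newton_iterates(g, dg, 2.0, 4))   # iterates grow without bound
```

In both runs the iteration is perfectly well defined at every step; it simply never converges, which is why checking the behavior of the iterates (and restarting from a new guess) matters in practice.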

What are some clever applications of binary search?

Software development examples:

- Debugging a somewhat linear piece of code: if the code has many steps mostly executed in sequence and there is a bug, you can isolate it by binary-searching for the earliest step at which the code produces results different from the expected ones.

- Cherry-picking a bad code change out of a release candidate: when pushing a large release to production, one sometimes finds that there is a problem with the binary. If reverting the whole release is not an option, the release engineer can binary-search through the code change ids to find the earliest change that introduces the bug.

- Figuring out resource requirements for a large system: one can run load tests and binary-search for the minimum number of CPUs required to handle a predicted load. (This approach is better than random guessing, but much worse than analyzing your system and making good educated guesses.)

Algorithmic puzzle examples:

- Given a sorted array A, find out how many times x occurs in A.

- Given a real number x, find its cube root.

- Given a sorted array A with distinct numbers, find an i such that A[i] == i.

- Given the +, -, *, /, sqrt operations and a real number x, find an algorithm to compute log_2(x).

- Given an array of distinct numbers A such that A[0] > A[1] and A[n-1] > A[n-2], find a local minimum (an i such that A[i-1] > A[i] < A[i+1]).

- Let A be a sorted array with distinct elements, rotated k positions to the right (k is unknown). Find k.

- Let A be a sorted array with distinct elements, rotated k positions to the right (k is unknown). Find out whether A contains a number x.

- Given two sorted arrays of lengths n and m, find the kth element of their sorted union.

- Given an array A comprised of an increasing sequence of numbers followed immediately by a decreasing one, determine whether a given number x is in the array.

- Given an array of N distinct int values in ascending order, determine whether a given integer is in the array, using only additions, subtractions, and a constant amount of extra memory.

- Player A chooses a secret number n. Player B can guess a number x, and A replies how x compares to n (equal, larger, smaller). What is an efficient strategy for B to guess n?
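Sketches of three of the puzzles above in Python (function names are mine; the first leans on the standard bisect module):

```python
import bisect

def count_occurrences(a, x):
    """How many times x occurs in sorted list a: two binary searches."""
    return bisect.bisect_right(a, x) - bisect.bisect_left(a, x)

def cube_root(x, tol=1e-10):
    """Binary-search the cube root of a non-negative real x."""
    lo, hi = 0.0, max(1.0, x)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if mid ** 3 < x:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def fixed_point(a):
    """In a sorted list of distinct integers, find i with a[i] == i (or -1).
    a[i] - i is monotone, so we can binary-search for where it hits zero."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == mid:
            return mid
        if a[mid] < mid:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(count_occurrences([1, 2, 2, 2, 5], 2))   # 3
print(round(cube_root(27.0), 6))               # 3.0
print(fixed_point([-3, 0, 2, 5, 9]))           # 2
```

The common thread: each problem hides a monotone predicate (position of x, sign of mid³ - x, sign of a[i] - i), and binary search is just repeated halving against that predicate.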

Is there a way to calculate the median of a series of non-clustered data without performing sorting?

This was an interview question given to me by a major technology company. (I won't say which one, but you have likely heard of them.) An additional criterion was that the series of data was too big to fit in RAM (e.g. a series of 1 TB of numbers).

I attempted to find a clever solution by traversing the series once or twice, but failed. Note that only a single traversal is required to calculate both the mean and the mode, and both also require only a small amount of memory.

After thinking for a while I offered a relatively inefficient solution, which surprisingly turned out to be correct:

Make an initial guess of the median based on your knowledge of the data (or calculate the median of the first 100 numbers). Then traverse the list, keeping track of how many numbers fall above and below your guess.

If more numbers fall above your guess, shift your guess higher; if fewer, shift it lower, and traverse again. On every traversal you can adjust your guess in smaller increments (the bisection method will cut the increment in half each time) until you have equal numbers above and below your guess.

When you have equal numbers above and below, you are almost done. If you have an odd number of observations equal to your final guess, that is the median; otherwise, average the two closest observations (which you can keep track of during each traversal).
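A minimal sketch of this counting approach in Python (here the data sits in a list for illustration; in the interview setting each pass would stream from disk, keeping only the counters in memory, and the final tie-breaking between equal closest observations is simplified away):

```python
def median_by_bisection(nums, tol=1e-9):
    """Median by bisection on the value axis: each full pass over the
    data only counts elements below the current guess, so memory use
    is constant no matter how large the series is."""
    lo, hi = min(nums), max(nums)    # one initial pass for the range
    n = len(nums)
    while hi - lo > tol:
        guess = (lo + hi) / 2.0
        below = sum(1 for v in nums if v < guess)   # one traversal
        if 2 * below < n:
            lo = guess               # too few below: shift guess higher
        else:
            hi = guess               # too many below: shift guess lower
    return (lo + hi) / 2.0

print(median_by_bisection([7, 1, 3, 9, 5]))   # ≈ 5
```

Each pass halves the search range on the value axis, so the number of full traversals is logarithmic in the range divided by the tolerance, independent of how many numbers there are.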

How do I write a program for finding the square root of a number without using the sqrt function?

The simple logic is that the sum of the first k odd numbers equals k^2, so the number of consecutive odd numbers you can add while the running total stays within n is the integer square root of n.

For example:
1 = 1, the square of 1
1 + 3 = 4, the square of 2 (adding two odd numbers)
1 + 3 + 5 = 9, the square of 3 (adding three odd numbers)

Here is my simple code:

#include <stdio.h>

int main(void)
{
    long long int n, x = 0, count = 0, i;
    scanf("%lld", &n);
    /* Add consecutive odd numbers while the total stays <= n;
       the count of odd numbers added is floor(sqrt(n)). */
    for (i = 1; x + i <= n; i += 2)
    {
        x = x + i;
        count++;
    }
    printf("%lld\n", count);
    return 0;
}
