
What Would This Partial Function Equal

Piecewise function and partial derivatives?

a) Graph

The graph is more complicated to describe in words than to understand.

Take the plane z=0 and remove the x and y axes. This pair of crossing lines (the coordinate axes) is then elevated to z=1, and that's all: the z=0 plane excluding the lines x=0 and y=0, together with the lines x=0, z=1 and y=0, z=1.

b)
Partial derivatives.

∂f/∂x = 0 at every point of the plane except on the line x=0 (the y-axis), where it is undefined.
∂f/∂y = 0 at every point of the plane except on the line y=0 (the x-axis), where it is undefined.

Note that f is constant (= 0 off the axes, = 1 on them), so its partial derivatives are zero, except across an axis, where the jump makes them undefined.
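For reference, here is one explicit way to write the function being described (assuming, as the graph description suggests, that f is 1 on the coordinate axes and 0 elsewhere):

f(x,y) = 1 if x = 0 or y = 0, and f(x,y) = 0 otherwise.

On the x-axis, for instance, f(x,0) = 1 for every x, so the difference quotient (f(a+h,0) - f(a,0))/h is 0 and ∂f/∂x exists and equals 0 there; at a point (0,b) away from the origin, f(x,b) jumps from 1 to 0 as x leaves 0, so the quotient has no limit and ∂f/∂x is undefined.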

This question contains an incorrect assumption. The derivative of two equal functions is indeed equal: f completely determines f', so f = g => f' = g'. However, as is all too well known, f' = g' does not imply f = g; it only implies that f and g differ by a constant.

What is more interesting in such matters, however, is whether controlling f only approximately (for instance asymptotically) allows us to control f' to a similar degree. For instance, suppose that f and g are two strictly positive functions such that f/g approaches one as the variable approaches infinity. Will f'/g' also behave in the same asymptotic fashion? Not at all, and this is the crucial point: it is possible for a function to have extremely violent oscillations, of diminishing amplitude but even faster increasing frequency, which lead to very large values of the derivative. Consider for example

f(t) = 1 + e^(-t),              f'(t) = -e^(-t)
g(t) = 1 + e^(-t) cos(e^(4t)),  g'(t) = -4e^(3t) sin(e^(4t)) - e^(-t) cos(e^(4t)).

Clearly g ~ f, but g'/f' can grow to as high values as you wish as the variable approaches infinity.
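Making that last step explicit: dividing the two derivatives,

g'(t)/f'(t) = ( -4e^(3t) sin(e^(4t)) - e^(-t) cos(e^(4t)) ) / ( -e^(-t) ) = 4e^(4t) sin(e^(4t)) + cos(e^(4t)),

and the factor 4e^(4t) in front of the oscillating sine makes this ratio unbounded, even though g(t)/f(t) -> 1.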

I’m parsing the question as “the derivative being zero everywhere” (correct me if I’m wrong).

If the function is continuous, then yes, it is constant.

Otherwise, consider the counterexample [math]f(x) = 0[/math] for [math]x < 0[/math] and [math]f(x) = 1[/math] for [math]0 \le x[/math]. The derivative is zero wherever it exists (anywhere but at [math]x = 0[/math]), but overall the function is not constant.

A function of two variables (x, y) is a 3D surface; think of a mountain. It has a maximum slope in one direction (the gradient direction, facing straight uphill) but gentler slopes if you pick a winding way up. If the z-axis is vertical, any pair of mutually orthogonal planes parallel to z cuts a trace on the mountain surface. If those planes define the other coordinates (x, y or r, phi), each trace curve keeps the other coordinate constant, so the slope of that trace is a partial derivative.
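In symbols (just restating the picture above): holding y fixed at b, the trace in the plane y = b is the curve z = f(x, b), and

∂f/∂x (a, b) = lim h->0 ( f(a+h, b) - f(a, b) ) / h

is exactly the slope of that trace curve at x = a; likewise for ∂f/∂y with x held fixed.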

Create a function where both partial derivatives exist at (1,1) but the function isn't differentiable at (1,1)?

Faires and Faires (later editions may have another author too) has one example of a function where the partials exist but the function is not differentiable. It is defined piecewise.

f(x,y) = { xy / (x^2 + y^2)   if (x,y) ≠ (0,0)
         { 0                  if (x,y) = (0,0)

f_x (0,0) = lim h->0 ( f(h,0) - f(0,0) )/ h = lim h->0 (0 - 0)/ h = 0
f_y (0,0) = lim k->0 ( f(0,k) - f(0,0) )/ k = lim k->0 (0 - 0)/ k = 0
But the function is not continuous at (0,0) and thus not differentiable.

They explain that the partial derivatives are the change in the function as you approach along a line parallel to a coordinate axis, but the derivative is the change in the function as you approach in any direction. For the function above, for any point along the line x = y, except at (0,0), f(x,y) = 1/2.
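Concretely, for t ≠ 0, f(t, t) = t·t / (t^2 + t^2) = 1/2, so approaching (0,0) along the line x = y the function stays at 1/2 while f(0,0) = 0, which is why it is not continuous at the origin even though both partials there are 0.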

So existence of the partials is not enough.

They prove a theorem which suggests a way to build a function satisfying your question: make it discontinuous at (1, 1). It's Theorem 14.18 on p. 800 of the 2nd edition revised.
If a function f of two variables is differentiable at (x,y), then f is continuous at (x,y).
From this it follows that if a function is discontinuous at (x,y), it can't be differentiable there. The trick, then, is to construct a discontinuous function where the partial derivatives exist.

For your question, I'm guessing simple substitutions would suffice. Use (x-1) instead of x, (y-1) instead of y. I haven't worked through the details, but I'm reasonably confident both partials will exist. It's simply shifting the origin in a sense.
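Working through that guess (my own sketch, not from the book): take

g(x,y) = (x-1)(y-1) / ( (x-1)^2 + (y-1)^2 )   if (x,y) ≠ (1,1), and g(1,1) = 0.

Then

g_x (1,1) = lim h->0 ( g(1+h,1) - g(1,1) )/ h = lim h->0 (0 - 0)/ h = 0,

and similarly g_y (1,1) = 0, so both partials exist. But along the line y = x, g(t,t) = (t-1)^2 / ( 2(t-1)^2 ) = 1/2 for t ≠ 1, so g is not continuous at (1,1) and hence, by the theorem above, not differentiable there.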

Help find the first partial derivatives of the function?

Find the first partial derivatives of the function:
u = x^(y/z)

(∂u/∂x) = (y/z)(x^((y/z)-1))

(∂u/∂y) = ______

(∂u/∂z) = ______

I figured out how to get ∂u/∂x, but I can't get the last two. For ∂u/∂y I came up with (ln x)/z, but that's wrong. For ∂u/∂z I came up with (y ln x)/z, but that's wrong too; I don't know how to get the last two.
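One way to get the last two (a sketch): write u = x^(y/z) = e^((y/z) ln x). The expressions you found are the derivatives of the exponent (y/z) ln x; by the chain rule they still have to be multiplied by u = x^(y/z) itself, and differentiating 1/z brings in a factor of -1/z^2:

(∂u/∂y) = (ln x / z) · x^(y/z)

(∂u/∂z) = -(y ln x / z^2) · x^(y/z)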

Any function of the form [math]y=ae^x[/math] should work fine.

Statement: Any function of the form [math]y=ae^x[/math] has the same derivative as itself.

Proof: Let [math]\dfrac{dy}{dx}=y[/math], where [math]y=f(x)[/math] is an arbitrary function.

[math]\implies \displaystyle \int \dfrac{dy}{y}=\int dx[/math]

[math]\implies \ln y=x+C[/math]

[math]\implies \ln y=x+\ln a[/math], where [math]C=\ln a[/math]

[math]\implies \ln y-\ln a=x[/math]

[math]\implies \ln \left(\dfrac{y}{a}\right)=x[/math]

[math]\implies \dfrac{y}{a}=e^x[/math]

[math]\implies y=ae^x[/math]

Conclusion: [math]y=ae^x[/math] is the only type of function whose derivative is itself.

Thanks for the A2A
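A quick check in the other direction: if [math]y=ae^x[/math], then [math]\dfrac{dy}{dx}=ae^x=y[/math], so every such function really does equal its own derivative. (Note that the proof above divides by [math]y[/math], so it implicitly assumes [math]y \neq 0[/math]; the constant zero function is also its own derivative and corresponds to the case [math]a = 0[/math].)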

You are really asking to solve the equation y''(x) = y(x), or y'' − y = 0. This is a second order homogeneous differential equation with constant coefficients, which is easy to solve. The method is as follows.

You form the "characteristic equation": that means, given the differential equation

ay'' + by' + cy = 0,

you consider the quadratic

am^2 + bm + c = 0.

You solve for m. Let's call the roots m1 and m2. The general solution of the differential equation will then be y = ce^(m1 x) + ke^(m2 x), where c and k are constants (this is for the case of 2 real roots).

In our case, y'' − y = 0, so a = 1, b = 0, c = −1, and we get m^2 − 1 = 0. Thus m = +1 and −1. Therefore the solution in our case is

y = ce^x + ke^(−x),

i.e., all linear combinations of e^x and e^(−x).

P.S.: A special case of this would be cosh(x) and sinh(x).

Hope this solves your problem. Have a good day. Cheers!
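If you want to double-check this with a computer algebra system, here is a small SymPy sketch (the symbol names are mine, not part of the answer above):

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Solve y'' - y = 0; SymPy returns the general solution C1*exp(-x) + C2*exp(x),
# matching the linear combinations of e^x and e^(-x) described above.
solution = sp.dsolve(sp.Eq(y(x).diff(x, 2), y(x)), y(x))
print(solution)  # Eq(y(x), C1*exp(-x) + C2*exp(x))

# cosh(x) is the special case c = k = 1/2; verify that it satisfies y'' = y.
print(sp.simplify(sp.cosh(x).diff(x, 2) - sp.cosh(x)))  # 0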

While the first derivative of a function being 0 may occur where the function is at the minimum (such as for f(x) = x² + 1), it is not necessarily so, for several reasons:

- The function may be at a maximum value (such as f(x) = 1 − x²).
- The function may be merely at a point of inflection (such as f(x) = x³).
- You refer to "the" minimum as if you are seeking the global minimum, but the first-derivative technique cannot distinguish between a local minimum and the global minimum (nor maximum either). For example, f(x) = 3x⁴ − 4x³ − 12x² + 5 has a global minimum at x = 2, a relative minimum at x = −1, and a local maximum at x = 0.

There are extensions to the technique that will tell you whether you have a local maximum, a local minimum, or a point of inflection (one of them, the second-derivative test, is sketched below). In general you must find all of the relative minima to determine the global minimum (and likewise for maxima).

The converse situation also applies. The minimum (or maximum) of a function might occur where the derivative of the function is not 0, for two reasons:

- The minimum may occur where the function is not differentiable (does not even have a derivative), such as f(x) = |x|, which has a minimum at x = 0 but a sharp bend there, so the derivative is not just nonzero; it does not exist at all.
- The function's domain is bounded and the minimum occurs at an endpoint of the domain, such as f(x) = 2x − 1 restricted to the domain [−1, 1], which has its minimum at x = −1 and a right-derivative of 2 there.

Do not memorize formulas as rules. That tends to show a lack of understanding and will lead to mistakes in special cases, of which there are several for the concept of extrema of functions. Learn the concepts (such as what I just presented) and any needed formulas will pop out.
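As a small illustration of the second-derivative test applied to the quartic example above, here is a SymPy sketch (my own, not from the answer):

import sympy as sp

x = sp.symbols('x', real=True)
f = 3*x**4 - 4*x**3 - 12*x**2 + 5

# Critical points are the roots of f'(x) = 12x^3 - 12x^2 - 24x = 12x(x - 2)(x + 1).
critical_points = sp.solve(sp.diff(f, x), x)  # -1, 0, 2
for c in critical_points:
    second = sp.diff(f, x, 2).subs(x, c)
    kind = "local min" if second > 0 else "local max" if second < 0 else "inconclusive"
    print(c, f.subs(x, c), kind)

# Output: x = -1 gives f = 0 (local min), x = 0 gives f = 5 (local max),
# x = 2 gives f = -27 (local min); comparing the two minima confirms the
# global minimum is at x = 2, as stated above.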

Oh, yeah. In fact, something much weirder exists (which is what I assume you really meant): functions that are everywhere smooth (i.e. all derivatives exist), constant on some intervals, but not constant everywhere.

Bump functions are the classic examples. Here is the usual example given:

[math] f(x) = \begin{cases} e^{-\frac{1}{1-x^2}} & \text{ if } -1 < x < 1 \\ 0 & \text{ otherwise} \end{cases} [/math]

Its graph is a single smooth bump supported on [−1, 1].

You can actually build something even weirder: a function that is 1 on some interval, 0 outside of some other interval, and transitions smoothly between the two in the gaps. I will leave this as an exercise to the reader (it can be done by modifying the function that I have given).

Real analysis is absolutely full of bizarre functions that should not exist but do anyway.
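To see how flat the bump is near the endpoints, here is a tiny Python sketch (my own) that just evaluates the formula above at a few points:

import math

def bump(x):
    # e^(-1/(1 - x^2)) inside (-1, 1), and 0 outside, as in the formula above
    if -1 < x < 1:
        return math.exp(-1.0 / (1.0 - x * x))
    return 0.0

for x in (0.0, 0.5, 0.9, 0.99, 1.0, 2.0):
    print(x, bump(x))

# The values shrink extraordinarily fast as x approaches 1 (about 1.5e-22 at
# x = 0.99), which is why the function can be glued to 0 at the endpoints with
# every derivative matching, keeping it smooth.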
