

How can I prove A = Pe^(rt) using e = lim (1 + 1/n)^n?

A little clarification: what you posted is not the value of e. The value of e is the limit of (1 + 1/n)^n as n approaches infinity. In other words, as n gets larger and larger, the value of (1 + 1/n)^n approaches a certain value. That value is called e. (It's approximately 2.718281828459045, but that's irrelevant here.)

To derive A = Pe^(rt), I assume you already have A = P(1 + r/n)^(nt), the formula for compound interest. The formula A = Pe^(rt) is for continuously compounded interest, i.e. when n approaches infinity.

To convert A = P(1 + r/n)^(nt) to A = Pe^(rt), you need to do some algebraic hand-waving.

First, rewrite the nt as nt/r * r:

A = P(1 + r/n)^(nt/r * r)

Next, rewrite as follows, using standard exponent rules:

A = P[(1 + r/n)^(n/r)]^ (rt)

Now, rewrite r/n as 1/(n/r):

A = P[(1 + 1/(n/r))^(n/r)]^ (rt)

Now for more trickery: let x = n/r. Notice that as n approaches infinity, x also approaches infinity. Now you have this:

A = P[(1 + 1/x)^x]^ (rt)

Since x is approaching infinity, the expression [(1 + 1/x)^x] approaches e. Thus,

A = Pe^(rt)
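As a numerical sanity check (not part of the derivation), you can watch the compound-interest value approach Pe^(rt) as n grows; the values of P, r, and t below are arbitrary examples, not from the question:

```python
import math

# Sanity check: P(1 + r/n)^(nt) should approach P*e^(rt) as n grows.
# P, r, t are arbitrary example values.
P, r, t = 1000.0, 0.05, 10.0
for n in (1, 12, 365, 10**6):
    print(n, P * (1 + r / n) ** (n * t))
print("limit:", P * math.exp(r * t))
```

Each row gets closer to the continuous-compounding limit printed on the last line.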

Hope this helps :-)

Proof: Sequences?

Let {a_n} be the sequence

1/2, 1/3, 2/3, 1/4, 2/4, 3/4, 1/5, 2/5, 3/5, 4/5, 1/6, 2/6, ...

Suppose that 0 ≤ a < b ≤ 1. Let N(n; a, b) be the number of integers j ≤ n such that a_j is in [a, b]. (Thus N(2; 1/3, 2/3) = 2, and N(4; 1/3, 2/3) = 3.)

Prove that

lim [n → ∞] N(n; a, b)/n = b - a

This is a random problem I came across in my text and haven't been able to prove it yet.

Help would be great!
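This is not a proof, but a quick computation can build intuition for why the limit should be b - a. The sketch below (helper names `sequence` and `N` are my own) enumerates the sequence and estimates N(n; a, b)/n for a = 1/3, b = 2/3:

```python
from fractions import Fraction

def sequence(n):
    """First n terms of 1/2, 1/3, 2/3, 1/4, 2/4, 3/4, ...: k/d for 1 <= k < d."""
    terms = []
    d = 2
    while len(terms) < n:
        for k in range(1, d):
            terms.append(Fraction(k, d))
            if len(terms) == n:
                break
        d += 1
    return terms

def N(n, a, b):
    """Count of j <= n with a_j in [a, b]."""
    return sum(1 for x in sequence(n) if a <= x <= b)

n = 20000
print(N(n, Fraction(1, 3), Fraction(2, 3)) / n)  # near 2/3 - 1/3 = 1/3
```

For the actual proof, the key observation is that among the terms with denominator d, the fraction lying in [a, b] is within 2/d of b - a, and averaging over denominators makes the error vanish.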

I need help with this proof: 1. p -> r v (~s) 2. s -> q 3. (s ^ q) -> ~s Conclusion: p -> (~s)?

I assume that you want to prove the following. Suppose [math]p \Rightarrow r \vee \neg s[/math] and [math]s \Rightarrow q[/math] and [math](s \wedge q) \Rightarrow \neg s[/math]. We want to prove that [math]p \Rightarrow \neg s[/math]. One can show this by contradiction: suppose [math]p[/math] holds but so does [math]s[/math]. Then we get from (2) that [math]q[/math] must hold, and then from (3) that [math]\neg s[/math] holds, which is a contradiction. Note that we do not use (1) here, which makes me wonder if that is intentional.
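Since only finitely many truth assignments exist, the entailment can also be machine-checked by brute force; a minimal sketch:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# In every assignment satisfying premises (1)-(3), check that p -> ~s holds.
entailed = all(
    implies(p, not s)
    for p, q, r, s in product([False, True], repeat=4)
    if implies(p, r or not s) and implies(s, q) and implies(s and q, not s)
)
print(entailed)  # True
```

This confirms semantic validity, which is what a syntactic proof must mirror.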

I need help solving this proof: 1. p ^ (~q) -> s 2. ~ s 3. p Conclusion: s -> q?

Proof:

1. [math](p \wedge \neg q) \supset s[/math] [Given]
2. [math]\neg s[/math] [Given]
3. [math]p[/math] [Given]
4. [math]\neg (p \wedge \neg q)[/math] [1,2 modus tollens]
5. [math]\neg p \vee q[/math] [4 tautology]
6. [math]q[/math] [3,5 resolution]
7. [math]\neg s[/math] [Assumption]
8. [math]q \vee \neg s[/math] [6,7 [math]\vee[/math] intro]
9. [math]s \supset q[/math] [8 tautology]
10. [math]\neg s \supset (s \supset q)[/math] [7-9 [math]\supset[/math] intro]
11. [math]s \supset q[/math] [2,10 modus ponens]

The proof is quite simple. We are given that [math](p \wedge \neg q) \supset s[/math] and [math]\neg s[/math]. The only way an implication can be true when its consequent is false is if its premise is also false. This means that (line 4) [math]\neg (p \wedge \neg q)[/math] must hold. But since [math]\neg (p \wedge \neg q) \equiv \neg p \vee q[/math], line 5 follows. We are also given [math]p[/math], so by resolution [math]q[/math] follows (line 6). If we further assume [math]\neg s[/math] (line 7), we can use or-introduction to yield [math]q \vee \neg s[/math], which by tautology is the same as [math]s \supset q[/math]. So the assumption of [math]\neg s[/math] leads us to the conclusion that [math]s \supset q[/math], which means that [math]\neg s \supset (s \supset q)[/math]. But we are also given that [math]\neg s[/math] indeed holds, so one more inference gives us the conclusion [math]s \supset q[/math].
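The entailment behind this proof can also be verified semantically with a small brute-force check over all truth assignments:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# Every assignment satisfying (p ^ ~q) -> s, ~s, and p must satisfy s -> q.
entailed = all(
    implies(s, q)
    for p, q, s in product([False, True], repeat=3)
    if implies(p and not q, s) and not s and p
)
print(entailed)  # True
```

(Indeed, since [math]\neg s[/math] is a premise, [math]s \supset q[/math] holds vacuously in every model of the premises.)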

Prove lim as n--->infinity of P(1+(r/n))^nt=Pe^rt using L'Hospital's rule?

You actually don't need L'Hopital's Rule to prove this limit. I will provide a proof with L'Hopital's and then one without it.

Let L = lim (n-->infinity) P(1 + r/n)^(nt). Then, taking the natural logarithm of both sides gives:

ln(L) = lim (n-->infinity) ln[P(1 + r/n)^(nt)]
= lim (n-->infinity) {ln(P) + ln[(1 + r/n)^(nt)]}
= ln(P) + lim (n-->infinity) ln[(1 + r/n)^(nt)]
= ln(P) + lim (n-->infinity) nt*ln(1 + r/n)
= ln(P) + t*lim (n-->infinity) ln(1 + r/n)/(1/n).

This limit now takes the form 0/0. To make this easier to differentiate, note that:

lim (n-->infinity) ln(1 + r/n)/(1/n)
= lim (n-->infinity) ln[(n + r)/n]/(1/n)
= lim (n-->infinity) [ln(n + r) - ln(n)]/(1/n).

Applying L'Hopital's Rule gives:

lim (n-->infinity) [ln(n + r) - ln(n)]/(1/n)
= lim (n-->infinity) [1/(n + r) - 1/n]/(-1/n^2)
= lim (n-->infinity) [n - n^2/(n + r)]
= lim (n-->infinity) [n(n + r) - n^2]/(n + r)
= lim (n-->infinity) rn/(n + r)
= lim (n-->infinity) r/(1 + r/n)
= r.

Finally:

ln(L) = ln(P) + t*lim (n-->infinity) ln(1 + r/n)/(1/n) = ln(P) + rt
==> L = e^[ln(P) + rt] = e^[ln(P)] * e^(rt) = P*e^(rt), as required.
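You can check the inner limit numerically: the quotient ln(1 + r/n)/(1/n), i.e. n·ln(1 + r/n), really does approach r (r = 0.07 below is an arbitrary example value):

```python
import math

# n * ln(1 + r/n) should approach r as n grows; r is an arbitrary example.
r = 0.07
for n in (10, 1000, 10**6):
    print(n, n * math.log(1 + r / n))
```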

The second (and the easiest, in my opinion) is to use the definition of e.

e = lim (x-->infinity) (1 + 1/x)^x.

We can let 1/x = r/n <==> n = rx. Then, as n --> infinity, x --> infinity. The limit now becomes:

lim (n-->infinity) P(1 + r/n)^(nt)
= lim (x-->infinity) P(1 + 1/x)^(rxt)
= P * lim (x-->infinity) (1 + 1/x)^(rxt)
= P * lim (x-->infinity) [(1 + 1/x)^x]^(rt)
= P * [lim (x-->infinity) (1 + 1/x)^x]^(rt)
= P*e^(rt), by the definition of e.

I hope this helps!

Formal logic proof P -> Q v R |- (P ->Q) v (P ->R)?

Hi

If I remember correctly, it says in the book I use that it is possible to construct a 14-line proof for this argument, but I've never been able to see how. If your teacher shows you how to construct a 14-line proof, I'd be very interested to see how it goes. This is the best I can come up with.

(1) 1. P > (Q v R) Premise
(2) 2. ~((P > Q) v (P > R)) Assumption
(3) 3. P Assumption
(1,3) 4. Q v R 1,3 MP
(5) 5. Q Assumption
(6) 6. ~R Assumption
(3,6) 7. P & ~R 3,6 &I
(3,5,6) 8. Q & (P & ~R) 5,7 &I
(3,5,6) 9. Q 8 &E
(5,6) 10. P > Q 3,9 CP
(5,6) 11. (P > Q) v (P > R) 10 vI
(5) 12. ~R > ((P > Q) v (P > R)) 6,11 CP
(2,5) 13. ~~R 2,12 MT
(2,5) 14. R 13 DNE
(15) 15. R Assumption
(1,2,3) 16. R 4,5,14,15,15 vE
(1,2) 17. P > R 3,16 CP
(1,2) 18. (P > Q) v (P > R) 17 vI
(1,2) 19. ((P > Q) v (P > R)) & ~((P > Q) v (P > R)) 18,2 &I
(1) 20. ~~((P > Q) v (P > R)) 2,19 RAA
(1) 21. (P > Q) v (P > R) 20 DNE
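Whatever the shortest natural-deduction proof turns out to be, the sequent itself is easy to verify semantically; a brute-force sketch:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# Every model of P -> (Q v R) satisfies (P -> Q) v (P -> R).
entailed = all(
    implies(p, q) or implies(p, r)
    for p, q, r in product([False, True], repeat=3)
    if implies(p, q or r)
)
print(entailed)  # True
```

This is the classical (non-intuitionistic) validity that forces the proof above to go through RAA/DNE.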

Why is (-1)*(-1)=1? Is there any formal proof for this?

I think there's a useful distinction to be made here with respect to the word "proof". Once we define what it means to multiply two natural numbers together, there are all kinds of things we can prove about it. We can show that there are infinitely many primes, for example. Or that every natural number is the sum of four squares. Those are facts that are based on the fundamental notions of addition and multiplication of natural numbers, and they can be proved using (more, or less) clever logical arguments and arithmetic manipulations.

But the fact that [math](-1)\times(-1)=1[/math] is of a different nature. It has to do with how we choose to define multiplication of integers, including negative numbers. It's not an observation that requires proof, it's a definition.

Now, we choose this definition because we wish for certain things to remain true, like the associative and distributive laws. For instance, [math]2 \times 3 = 6[/math], so [math]2 \times (4+(-1)) = 6[/math] as well. Now if the distributive law is to hold, this means that [math]8 + 2\times(-1) = 6[/math] must be true, and this has us want to make [math]2 \times (-1) = -2[/math]. For that reason we define the product of a positive number and a negative number the way we do, and subsequently we define the product of two negative numbers by [math](-a) \times (-b) = a \times b[/math] for the exact same reason.

So yes, it's true that you can "prove" this via the distributive property, but logically this is putting things backwards. We define multiplication of integers in such a way as to make the distributive property hold because it's convenient and useful, as opposed to statements like "there are infinitely many primes", which are just true and require proof.
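For what it's worth, the ring-axiom calculation this answer alludes to can be written in one line: once distributivity and the behavior of 0 and 1 are required to hold, the value of (-1)·(-1) is forced.

```latex
0 = (-1)\cdot 0
  = (-1)\cdot\bigl(1 + (-1)\bigr)
  = (-1)\cdot 1 + (-1)\cdot(-1)
  = -1 + (-1)\cdot(-1)
```

Adding 1 to both sides gives [math](-1)\cdot(-1) = 1[/math].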

How do you prove that if p is a prime, then (p-1)! ≡-1 (mod p)?

Two ways to prove Wilson's theorem, [math](p-1)! \equiv -1\pmod{p}[/math], are given by Alon Amit and Ted Alper. These are arguably the two easiest ways to prove the theorem, and they require minimal knowledge of modular arithmetic. What I have to add requires a deeper result, which I will quote as a fact and not prove here:

Theorem. If [math]p[/math] is a prime, then there exists [math]a[/math] such that [math]\text{ord}_p a = p-1[/math].

This means that, in addition to [math]a^{p-1} \equiv 1\pmod{p}[/math] (satisfied by all integers [math]a[/math] for which [math]p \nmid a[/math]), we have [math]a^n \not\equiv 1\pmod{p}[/math] for every positive integer [math]n[/math] less than [math]p-1[/math]. In other words, the multiplicative group of units in [math]{\mathbb Z}_p[/math] is cyclic, and the element [math]a[/math] serves as a generator of this cyclic group.

Assume [math]p>2[/math]. Since [math]p \mid (a^{p-1}-1)=(a^{(p-1)/2}-1)(a^{(p-1)/2}+1)[/math] implies [math]p \mid (a^{(p-1)/2}-1)[/math] or [math]p \mid (a^{(p-1)/2}+1)[/math] (because [math]p[/math] is prime), and since the first divisibility is ruled out (as a consequence of the theorem), we must have [math]a^{(p-1)/2} \equiv -1\pmod{p}[/math].

Wilson's theorem is easily verified for [math]p=2[/math]. Henceforth assume [math]p>2[/math], and choose [math]a[/math] such that [math]\text{ord}_p a=p-1[/math]. Note that this implies [math]p \nmid a[/math]. Thus [math]a^m \equiv a^n\pmod{p}[/math] for [math]1 \le n
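A quick computational check of Wilson's theorem for small moduli (in both directions: the congruence holds exactly at the primes):

```python
from math import factorial

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# (n-1)! ≡ -1 (mod n) holds if and only if n is prime.
for n in range(2, 30):
    assert (factorial(n - 1) % n == n - 1) == is_prime(n)
print("Wilson's theorem verified for 2 <= n < 30")
```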

How to prove E(X) = 1/p?

Let X have the geometric distribution. X is the number of trials until the first success.

X ~ Geom(p) where p is the success probability

the probability mass function is:

f(x) = P(X = x) = p * (1 - p) ^ (x - 1) for x = 1, 2, 3, 4, 5, ....
f(x) = 0 otherwise

the mean of the geometric is 1/p
the variance of the geometric is (1-p)/p²

Proof:

Let q = 1 - p.

The mean is:

E(X) = ∑ [x = 1 to ∞] x · P(X = x) = ∑ [x = 1 to ∞] x · p · q^(x - 1)

(a term for x = 0 would contribute zero anyway, so nothing is lost by starting the index at 1)

factor out a p:

p · ∑ [x = 1 to ∞] x · q^(x - 1)

note that the argument in the sum is a derivative:

∂[q^x]/∂q = x · q^(x - 1)

so the sum becomes:

p · ∑ [x = 1 to ∞] ∂[q^x]/∂q

It is not a trivial task in general to interchange a summation and a differentiation, but it is valid in this case (a power series may be differentiated term by term for |q| < 1):

p · ∂/∂q ∑ [x = 1 to ∞] q^x

the sum is a geometric series missing its first term, so it evaluates to 1/(1 - q) - 1:

p · ∂/∂q [1/(1 - q) - 1]

take the derivative:

p · 1/(1 - q)²

write in terms of p, using 1 - q = p:

p · 1/(1 - (1 - p))² = p/p² = 1/p

QED.
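The series derivation can be sanity-checked numerically by truncating the sum, since the tail vanishes geometrically (p = 0.25 below is an arbitrary example value):

```python
# Truncated version of E(X) = sum over x of x * p * q^(x-1); p is an example.
p = 0.25
q = 1 - p
mean = sum(x * p * q ** (x - 1) for x in range(1, 500))
print(mean)  # essentially 1/p = 4
```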



Proof P(AuBuC) = P(A)+P(B)+P(C)-P(AB)-P(AC)-P(B... by using (AuB) = Au(B - AB)?

I'm a little confused, here:

P(A ∪ B) = P(A) + P(B) - P(A ∩ B)

Using that, we can see:

P(A ∪ B ∪ C) = P(A ∪ (B ∪ C)) = P(A) + P(B ∪ C) - P(A ∩ (B ∪ C))
--> first P(B ∪ C) = P(B) + P(C) - P(B ∩ C), so that's easy

A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C), distributive law
--> from this we can get

P(A ∩ (B ∪ C)) = P((A ∩ B) ∪ (A ∩ C)) = P(A ∩ B) + P(A ∩ C) - P((A ∩ B) ∩ (A ∩ C))
--> hopefully you can see:

(A ∩ B) ∩ (A ∩ C) = A ∩ B ∩ C <-- think about it or use the distributive laws

Finally this gives:

P(A ∪ B ∪ C) = P(A ∪ (B ∪ C)) = P(A) + P(B ∪ C) - P(A ∩ (B ∪ C))
--> using P(B ∪ C) = P(B) + P(C) - P(B ∩ C)

P(A ∪ B ∪ C) = P(A) + P(B) + P(C) - P(BC) - P(A ∩ (B ∪ C))
--> using P(A ∩ (B ∪ C)) = P((A ∩ B) ∪ (A ∩ C)) = P(AB) + P(AC) - P(ABC)

P(A ∪ B ∪ C) = P(A) + P(B) + P(C) - P(BC) - P(AB) - P(AC) + P(ABC)
--> you can rearrange to get the exact equation

P(A ∪ B ∪ C) = P(A) + P(B) + P(C) - P(AB) - P(AC) - P(BC) + P(ABC), q.e.d.
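The identity can also be checked concretely on a small uniform probability space; the events A, B, C below are arbitrary examples:

```python
from fractions import Fraction

# Uniform probability on 12 outcomes; A, B, C are arbitrary example events.
prob = {w: Fraction(1, 12) for w in range(12)}

def P(event):
    return sum(prob[w] for w in event)

A = {0, 1, 2, 3, 4}
B = {3, 4, 5, 6}
C = {0, 4, 6, 7, 8}

lhs = P(A | B | C)
rhs = P(A) + P(B) + P(C) - P(A & B) - P(A & C) - P(B & C) + P(A & B & C)
print(lhs == rhs)  # True
```

Exact rational arithmetic via `Fraction` avoids any floating-point noise in the comparison.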
