Help With Linear Algebra Proofs

Help with Linear Algebra proof?

Let (Ci)_(i=1..n) be the columns of the matrix A, and let c ≠ 0 be the scalar.

rank(A) = dim span((Ci)_(i=1..n)).
rank(cA) = dim span((cCi)_(i=1..n)).
But span((cCi)_(i=1..n)) = span((Ci)_(i=1..n)) because c is not equal to zero.
Thus, rank(cA) = rank(A).

Remark: maybe your convention for the definition of the rank is different from the French one, which is dim(vect((Ci)_(i=1..n))), where "vect" denotes the span of the columns. Yours may instead be dim span((Li)_(i=1..m)), where the Li are the rows of A. The proof does not change: just replace columns by rows.

Linear Algebra Proof Help?

Well,

this is a simple application/verification of the Cayley–Hamilton theorem. Let
A =
(a c)
(b d)
Then the characteristic polynomial of A is defined as
p(λ) = det(λ.I2 - A) = (λ-a)(λ-d) - (-b)(-c)
= λ^2 - (a+d)λ + (ad - bc)
= λ^2 - Tr(A)λ + det(A)
therefore :
p(A) = A^2 - Tr(A)A + det(A) I2
A^2 =
(a^2+bc ac+cd)
(ab+bd bc+d^2)
-Tr(A).A =
(-a^2-ad -ac-cd)
(-ab-bd -ad-d^2)
det(A) I2 =
(ad-bc      0)
(0      ad-bc)
and we can easily verify, by adding all corresponding terms in their positions, that:
A^2 - Tr(A)A + det(A) I2 = O_2x2
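As a sanity check, the identity above can also be verified numerically; the matrix below is an assumed example, not from the question:

```python
import numpy as np

# Assumed example 2x2 matrix for checking Cayley-Hamilton numerically.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

trace = np.trace(A)          # a + d
det = np.linalg.det(A)       # ad - bc
I2 = np.eye(2)

# p(A) = A^2 - Tr(A).A + det(A).I2 should be the zero matrix.
p_of_A = A @ A - trace * A + det * I2
print(np.allclose(p_of_A, np.zeros((2, 2))))  # True
```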

et voilà !!

NB: see link: http://en.wikipedia.org/wiki/Cayley%E2%8...

Hope it'll help!!

Help with a linear Algebra proof!?

Prove the following variant of the rank-nullity theorem: If T is a linear transformation from V to W, and if ker T and im T are both finite dimensional, then V is finite dimensional as well, and dim V = dim(ker T) + dim(im T).
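Although the statement calls for a proof, the dimension count can be checked numerically for a concrete linear map T given by a matrix; the matrix below is an assumed example:

```python
import numpy as np

# Assumed example: T maps R^3 -> R^2, so dim V = 3.
T = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

rank = np.linalg.matrix_rank(T)      # dim(im T)
_, _, vt = np.linalg.svd(T)          # rows of vt past the rank span ker T
kernel_basis = vt[rank:]             # basis of the null space
nullity = kernel_basis.shape[0]      # dim(ker T)

print(T.shape[1] == rank + nullity)  # True: dim V = dim(ker T) + dim(im T)
```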

Linear algebra proof help!!?

a real-valued function f defined on the real line is called an even function if f(-t)=f(t) for each real number t. prove that the set of even functions defined on the real line with the operations of addition and scalar multiplication is a vector space.

example of add/multiplication operations:

(f+g)(s) = f(s)+g(s)
(cf)(s)= c[f(s)]

please, i just need the first step, then i can take it from there!
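The usual first step is closure: show that f + g and cf are again even (and that the zero function is even, giving you a zero vector); the remaining axioms are inherited from the space of all functions. A quick numerical sanity check of the closure step, with assumed example functions f and g:

```python
# Check evenness f(-t) = f(t) at a few sample points (assumed examples).
def is_even(f, samples=(-2.0, -1.0, 0.5, 3.0)):
    return all(abs(f(-t) - f(t)) < 1e-12 for t in samples)

f = lambda t: t**2        # even function
g = lambda t: abs(t)      # even function
c = 3.0

add = lambda s: f(s) + g(s)     # (f+g)(s) = f(s) + g(s)
scale = lambda s: c * f(s)      # (cf)(s) = c[f(s)]

print(is_even(add), is_even(scale))  # True True
```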

Help with Linear Algebra proof?

I'll sketch a proof for you. I'm assuming that n >= m and that A has full rank m (otherwise A cannot have a right inverse).

Using elementary row and column operations, we can reduce A
to the block matrix J = [ I_m | 0 ], where I_m is the m by m identity matrix.

(Let R and C be the products of the elementary row and column matrices, respectively, used to reduce A to J. Then we have RAC = J.)

With J, the problem is easy. Let K be the n x m matrix with block form
[I_m]
[ 0 ].

Then, JK = I_m, as needed.

Finally, we can make B explicit.

Since JK = I_m and RAC = J, we have
(RAC)K = I_m
==> A (CK) = R^(-1) I_m
==> A (CK) = R^(-1)
==> A (CK) R = R^(-1) R
==> A (CKR) = I_m.

Setting B = CKR, we are done.

I hope that was informative!
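The construction can be checked numerically: when A has full row rank, numpy's pseudoinverse gives one concrete choice of right inverse B (assumed example matrix below):

```python
import numpy as np

# Assumed example: A is 2 x 3 with full row rank, so a right inverse
# B with A B = I_2 exists; the pseudoinverse provides one choice.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0]])

B = np.linalg.pinv(A)                 # 3 x 2 right inverse
print(np.allclose(A @ B, np.eye(2)))  # True
```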

Linear algebra proof help!!?

I don't even know where to start with this:

Let u, v and w be 3-dimensional column vectors. Show that if any 3-dimensional column vector b can be written as a linear combination of u, v and w then any 3 × 3 matrix with u, v and w as its columns must be invertible.

What does the fact that u, v, and w can combine to make any b have to do with invertibility?

How hard is proof based linear algebra?

It's a matter of opinion. I found proof-based linear algebra quite easy. In fact, more so than with other proof-based courses, I feel that proof-based linear algebra bears a strong resemblance to computational linear algebra, just at a slightly higher level of abstraction.

In other words, in an intro linear algebra class, you spend a lot of time practicing reducing a matrix to row echelon form, or solving a system of linear equations. In a proof-based class, you do all of that with a stroke of a pen, like "Since this matrix has such-and-such property, there is a basis in which it is diagonal. Let B be such a basis..." The intuition is still the same, but you don't get mired down in calculations.

To be sure, this comment applies to any proof-based class. But for me the connection between the calculations and the abstractions is a little tighter in linear algebra than it is in, say, real analysis. A real analysis student might have a hard time connecting the abstract material to the corresponding concrete topic in calculus, at least in some spots. In linear algebra, those connections seem easier to me.

Could someone please help me with this linear algebra proof?

Warm up: Assume A² = 0.

You have the matrix I - A and you don't know what to do with it. So try squaring it -- why not?

      (I - A)² = (I - A)(I - A) = I·I - A·I - I·A + A² = I - 2A + 0

That's interesting. Notice how the A² term cancels? That leads us to try something similar:

      (I - A)(I + A) = I·I - A·I + I·A - A² = I - A + A - 0 = I

Zing! The inverse of I - A is the matrix I + A.




Now let's try to do the real problem, applying the same reasoning:

      (I - A)(I + A^9) = I - A + A^9 - A^10 = I - A + A^9

Not quite right. We'll have to be more careful ... how can we make all the terms cancel? Oh right...

      (I - A)(I + A + A² + A³ + ... + A^9)

Expand that. You will get tons of cancellation -- it's the rule for finding geometric sums, if that rings any bells -- and you're left with

      (I - A)(I + A + A² + A³ + ... + A^9)

      = I(I + A + A² + A³ + ... + A^9)
            - A(I + A + A² + A³ + ... + A^9)

      = I + A + A² + A³ + ... + A^9
            - A - A² - A³ - ... - A^10

      = I - A^10 = I - 0 = I

Thus (I - A) is invertible. Its inverse is (I + A + A² + A³ + ... + A^9).

QED



So that's the inverse. Now this idea also works in general -- the number 10 wasn't special. Do you see?
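Here is a numerical sketch of the same argument for a hypothetical nilpotent A with A³ = 0 (an assumed example, smaller than the A^10 = 0 case), where the geometric sum already stops at A²:

```python
import numpy as np

# Assumed example: A is strictly upper triangular, so A^3 = 0 and
# (I - A)^{-1} = I + A + A^2, the truncated geometric series.
A = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])

I = np.eye(3)
inv = I + A + A @ A                    # geometric sum stops once powers vanish
print(np.allclose((I - A) @ inv, I))   # True
```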

How does one best study proofs in linear algebra?

You’re in fact asking two questions, not one:

How does one best study proofs in linear algebra?
How can one learn linear algebra without explicit examples?

Luckily, your two questions share a common answer. The key lies in “linear algebra” itself. The whole theory was invented to formalize what we used to do with our hands: geometry. Linear geometry, that is.

The best way to learn a theorem is to prove it yourself, and the best help you can get for linear algebra comes from visual intuition.

Do you know what linear maps do to the space? Not what they look like, or how you can compute them, but how, visually, they modify the space? If not, I recommend the Essence of Linear Algebra series by 3Blue1Brown (YouTube); it’s a short series of 10-minute videos which are great for building an intuition for linear algebra.

In any case, I don’t know how deep into linear algebra you are, and of course theorems like “in finite dimension, any real endomorphism with empty spectrum admits a stable plane” can’t really be built from intuition.

But hopefully there will come a day when you are pretty convinced that [math]\displaystyle M_n(\mathbb R)\setminus GL_n(\mathbb R)[/math] is somehow a surface in [math]\mathbb R^{(n^2)}[/math], and therefore it’s pretty intuitive that [math]GL_n(\mathbb R)[/math] is dense in [math]M_n(\mathbb R)[/math].
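The density claim at the end can be illustrated numerically: a singular matrix becomes invertible under an arbitrarily small perturbation. The matrix below is an assumed example; perturbing along the identity works here because -ε only needs to avoid the finitely many eigenvalues of M.

```python
import numpy as np

# Assumed example of a singular matrix (second row = 2 x first row).
M = np.array([[1.0, 2.0],
              [2.0, 4.0]])

eps = 1e-8
perturbed = M + eps * np.eye(2)           # arbitrarily small perturbation

print(np.isclose(np.linalg.det(M), 0.0))  # True: M is singular
print(abs(np.linalg.det(perturbed)) > 0)  # True: perturbed M is invertible
```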
