Monday, 20 April 2015

Maths on Steroids (Accelerated Mathematics 1, 2015): 3

If-statements, for-loops, while-loops, what are they good for? Absolutely...everything in programming?

For most of you, last week would have been the first time that you've seen an if-statement in Matlab. And you were given a slightly involved task: you were asked to write up a function that takes in a matrix \(A\) and does the following:
  1. checks if \(A\) is square;
  2. checks if the columns of \(A\) each add up to \(1\);
  3. checks if the entries of \(A\) are all non-negative;
  4. puts out a message if any of the checks fail;
  5. outputs the row-reduced form of \(I-A\) if \(A\) satisfies the checks.
To begin with, let's have a think about how to write if-statements for each of the checks in Matlab.

1. If \(A\) is square.

Here's a simple if-statement to check for square-ness:

[m,n]=size(A);
if m~=n
    'Not square enough?'
end

So, what does it do? We first compute the size of the matrix A: this is a vector whose first entry is the number of rows of \(A\) and whose second entry is the number of columns. Thus, writing "[m,n]=size(A)" assigns the entries of this vector to m and n; i.e.: m=the number of rows of \(A\) and n=the number of columns of \(A\). Then, if m isn't equal to n, this little if-statement outputs the message "Not square enough?", but if m is equal to n, then nothing happens.

One question that you should be asking at this point is: what's the deal with the semi-colon ";" at the end of the first line? Why do we bother having it at all? Well, a semi-colon at the end of a command lets you run that command without displaying the result. For our purposes, we wanted to assign numbers to m and n, but we didn't really want to see them. Hence the semi-colon. On the other hand, we definitely wanted to be able to read the message "Not square enough?", so we didn't put a semi-colon after that.
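If you want to see the difference for yourself, here's a tiny throwaway example to paste into the command window:

x = 3;   % semi-colon: the assignment happens, but nothing is displayed
y = 3    % no semi-colon: Matlab displays "y = 3"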

2. If the columns of \(A\) add up to \(1\).

A first idea might be to write something like the following:

[m,n]=size(A);
if sum(A)~=ones(1,n) % THIS IS WRONG!
    'columns need to sum to one!'
end

And where this goes wrong (my mistake - I actually thought that this worked during the Tuesday lab) is that when you feed an if-statement a whole vector of conditions, Matlab treats the condition as true only if every entry is non-zero (i.e.: "true"). So this message only appears when every single column sum differs from \(1\); a matrix where just some of the columns fail the test slips straight through.

Here's the "intended" if-statement:

[m,n]=size(A);
T=sum(A);
t=0;
for k=1:n
    if T(k)~=1
        t=t+1;
    end
end
if t>0
    'columns need to sum to 1!'
end

We start by creating a vector \(T\) corresponding to the column-sums of \(A\). Now, we'd like to check if each entry of \(T\) is equal to \(1\), and the above code deals with this by creating a counter \(t\) that increases by \(1\) for every entry \(T(k)\) of \(T\) that isn't equal to \(1\). So, after going through the entries of \(T\) using the for-loop, either \(t=0\), in which case \(T\) is a row of \(1\)s, or \(t>0\), in which case \(T\) isn't a row of \(1\)s and hence the columns of \(A\) don't each sum to \(1\). Note that, again, nothing happens if the columns of \(A\) each sum to \(1\).

Aaaaaaand, now that you've understood how that worked, let me tell you about alternative ways of doing step 2. To begin with, let's understand how truth values work. You saw in this tute that writing "m==n" lets you check if m is equal to n. But have you tried typing this into Matlab and seeing what you get? Try "3==3" and "3==2" (without the quotation marks). You should have gotten "ans=\(1\)" and "ans=\(0\)" respectively: that's right! Matlab thinks of \(1\) as "true" and \(0\) as "false". Soooo, what happens if I type in something like "[3 2]==[3 3]"? Well...TRY IT! You should get in return a vector "[1 0]", signifying that this is true for the first value, and false for the second value. Sooo....how can we use this to our advantage? Here's a first idea:

[m,n]=size(A);
if sum(sum(A)==ones(1,n))~=n
    'columns need to sum to one!'
end

So, what's going on here? The command "sum(A)==ones(1,n)" checks the column-sum vector of \(A\) against a row of \(1\)s and outputs a vector filled with \(1\)s and \(0\)s signifying whether the corresponding entries of these two row vectors match up. If every column sums to \(1\), then we should expect "sum(A)==ones(1,n)" to produce a length-n vector of \(1\)s - signifying "true" in every entry. If not, then some of the entries will be \(0\). Now, the sum function not only adds up the columns of a matrix, it also adds up all the entries of a row vector (if you didn't know this, you could have still taken the transpose of the row vector and then applied the sum function). So, the above code basically checks if the sum of the entries of "sum(A)==ones(1,n)" is n, and displays a message if it's not.

And here's another variant:

[m,n]=size(A);
if sum(abs(sum(A)-ones(1,n)))>0
    'columns need to sum to one!'
end

The absolute value function "abs", when combined with "sum", adds up the absolute differences between the entries of sum(\(A\)) and a row of \(1\)s. So, if there are no differences, then this total is \(0\), and if there are any differences at all, then it'll be strictly greater than \(0\).

See if you can come up with something else?
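In case you're stuck for ideas, here's one more sketch of the same check, this time using Matlab's built-in all function (which returns "true" exactly when every entry of a vector is non-zero):

[m,n]=size(A);
if ~all(sum(A)==ones(1,n)) % all(...) is 1 precisely when every column sums to 1
    'columns need to sum to one!'
end

The "~" negates the condition, so the message appears exactly when some column fails.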

3. If every entry of \(A\) is non-negative.

To begin with, let's start with something that doesn't work:

[m,n]=size(A);
if A~=abs(A) % THIS IS WRONG!
    'slightly too negative =('
end

Again, the problem here is that Matlab treats the matrix of conditions "A~=abs(A)" as true only if every entry is true. So the message only appears when every single entry of \(A\) is negative, and a matrix with just a few negative entries slips straight through. =/

The "intended" solution is something like the following:

[m,n]=size(A);
u=0;
for j=1:m
    for k=1:n
        if A(j,k)<0
            u=u+1;
        end
    end
end
if u~=0
    'slightly too negative =('
end

Like before, the \(u\) here is a counter for how many entries are negative. The two nested for-loops let us go through all of the entries of \(A\) and check whether each one is (strictly) negative.

And, here's an alternative method - see if you can figure out what's going on:

[m,n]=size(A);
if sum(sum(A==abs(A)))<m*n
    'slightly too negative =('
end

And another variant:

[m,n]=size(A);
if sum(sum(abs(A)-A))>0
    'slightly too negative =('
end
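And, if you're hunting for even more variants, here's a sketch built around the min function (min(A) takes the minimum of each column, and applying min again takes the minimum of those):

[m,n]=size(A);
if min(min(A))<0 % the smallest entry of A is negative
    'slightly too negative =('
end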

4. Putting all of these checks together in the form of a single function, we have:

Method 1:

function B=xx_solve(A)
[m,n]=size(A);
T=sum(A);
t=0;
for k=1:n
    if T(k)~=1
        t=t+1;
    end
end
u=0;
for j=1:m
    for k=1:n
        if A(j,k)<0
            u=u+1;
        end
    end
end
if m~=n
    'Not square enough?'
elseif t>0
    'columns need to sum to 1!'
elseif u>0
    'slightly too negative =('
else
    B=rref(eye(n)-A);
end

This is the most rudimentary, and in some sense, most "universal" version. The ideas that you use here apply to most programming languages.

One thing that you might ask is why there is a semi-colon after "B=rref(eye(n)-A)", as this would withhold the displaying of the matrix \(B\) - the desired output. Well, that has to do with the first line: the fact that it's written "B=xx_solve(A)" means that \(B\) is the declared output of the function, so Matlab will display it when you call xx_solve with an input \(A\) (provided that everything goes right).

Method 2:

function B=xx_solve(A)
[m,n]=size(A);
if m~=n
    'Not square enough?'
elseif sum(sum(A)==ones(1,n))<n
    'columns need to sum to 1!'
elseif sum(sum(A==abs(A)))<m*n
    'slightly too negative =('
else
    B=rref(eye(n)-A);
end

This should be pretty self-explanatory, right?

Method 3:

function B=xx_solve(A)
[m,n]=size(A);
if m~=n
    'Not square enough?'
elseif sum(abs(sum(A)-ones(1,n)))>0
    'columns need to sum to 1!'
elseif sum(sum(abs(A)-A))>0
    'slightly too negative =('
else
    B=rref(eye(n)-A);
end

And here's a final method...it's the intended method in Matlab, but requires knowing a specialised command called "isequal" that basically tests whether two matrices are completely equal.

Method 4:

function B=xx_solve(A)
[m,n]=size(A);
if m~=n
    'Not square enough?'
elseif ~isequal(sum(A),ones(1,n))
    'columns need to sum to 1!'
elseif ~isequal(A,abs(A))
    'slightly too negative =('
else
    B=rref(eye(n)-A);
end

This is the most efficient method, but it's also pretty Matlab-specific? And to be fair...it's a little bit on the boring side...so...um, totes go with the other methods guys!?!? Totes.
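And just to round things off, here's a sketch of how you might test any of these versions from the command window - the matrix here is a made-up example that passes all three checks:

A = [0.5 0.3; 0.5 0.7];   % square, non-negative, columns sum to 1
B = xx_solve(A)           % displays the row-reduced form of I-A

(Try feeding it a non-square or slightly negative matrix too, to check that the messages come out.)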


Thursday, 16 April 2015

Maths on Steroids (Accelerated Mathematics 1, 2015): 2

The week before Easter, a friend of mine showed me the following question before my Tuesday morning tute; I solved it and decided that it'd be a good question to give to you guys. =P

Actually, we could do this question using Descartes' circle theorem, but that's probably a bit overkill. Instead, let's tackle it with basic primary school geometry, and a bit of induction and telescoping! Consider the following figure:

Problem: order the radii of all the little yellow circles up from the bottom of the center as a sequence \(\{r_n\}\) and show that \begin{align}r_n=\frac{1}{2n(n+1)}.\end{align} When doing proofs for general results like this, it can often really really help to work out a few baby cases. So, let's start by showing that \(r_1=\frac{1}{4}\).

As you can see from the diagram below, we've got a (vertical) trapezium with two right angles opposite each other and with the parallel sides of lengths \(1\) and \(r_1\). Thus, by chopping off a rectangle at the bottom of this trapezium, we're left with a right-angled triangle with sides of lengths \(1-r_1,1\) and \(1+r_1\).
And so, Pythagoras' theorem tells us that \begin{align} (1-r_1)^2+1^2&=(1+r_1)^2 \\ \Rightarrow 1&=4r_1.\end{align} Oookay, that was fairly straightforward. Let's try this again for \(r_2\):
So, we see that by repeating this trapezium trick to calculate \(r_2\), we get: \begin{align} (1-2r_1-r_2)^2+1^2&=(1+r_2)^2\\ \Rightarrow 1&=(1+r_2)^2-(1-2r_1-r_2)^2\\ &=4(r_1+r_2)(1-r_1).\end{align} Substituting in \(r_1=\frac{1}{4}\), we find that \(r_2=\frac{1}{12}\). Now, if we keep going with this trapezium trick (*), we see that to compute \(r_{n+1}\), we get: \begin{align} (1-2(r_1+\ldots+r_n)-r_{n+1})^2+1^2&=(1+r_{n+1})^2\\ \Rightarrow 1&=(1+r_{n+1})^2-(1-2(r_1+\ldots+r_n)-r_{n+1})^2\\ &=4(r_1+\ldots+r_{n+1})(1-r_1-\ldots-r_{n}).\end{align} Thus, \begin{align} r_{n+1}=\frac{1}{4-4(r_1+\ldots+r_n)}-(r_1+\ldots+r_n).\end{align} So, we've now got a recursive expression for \(r_{n+1}\), and recursion should really make us think: induction.

We're going to use a slightly stronger form (**) of mathematical induction than what you're used to seeing. Don't worry - the logic is the same. =) Okay, here goes...

Let's start off with a proposition: let the proposition \(p(n)\) be the statement that \(r_n=\frac{1}{2n(n+1)}\), and we're going to prove these statements for every \(n\in\mathbb{N}\).

Base case: \(n=1\)... well, we've actually already proven this above. We showed that \(r_1=\frac{1}{4}=\frac{1}{2\times1\times 2}\).

(Strong) induction step: now, we're going to do something a little different to showing that "\(p(k)\Rightarrow p(k+1)\)". We're actually going to show that \begin{align}p(1),p(2),\ldots, p(k)\text{ all being true }\Rightarrow p(k+1)\text{ is true.}\end{align} Take a moment to think about that logic, and I hope that you see that it's the same type of logic as you usually use in induction, except that it feels stronger because you're assuming more things for the induction step.

Okay....breathe out.

So, let's assume that \(p(1),\ldots,p(k)\) are all true. Now, we showed before that \begin{align} r_{k+1}=\frac{1}{4-4(r_1+\ldots+r_k)}-(r_1+\ldots+r_k).\end{align} And now, invoking all of our assumptions, we can try to work out what \(r_1+\ldots+r_k\) is equal to, and hence what \(r_{k+1}\) is equal to: \begin{align} r_1+\ldots+r_k&=\frac{1}{2}\left(\frac{1}{1\times 2}+\frac{1}{2\times 3}+\ldots+\frac{1}{k\times (k+1)}\right)\\ &=\frac{1}{2}\left(\frac{1}{1}-\frac{1}{2}+\frac{1}{2}-\frac{1}{3}+\frac{1}{3}-\frac{1}{4}+\ldots-\frac{1}{k}+\frac{1}{k}-\frac{1}{k+1}\right)\\ &=\frac{k}{2(k+1)}.\end{align} W000t! Telescoping! And substituting this in, we get that: \begin{align}r_{k+1}&=\frac{1}{4-\frac{2k}{k+1}}-\frac{k}{2(k+1)}\\ &=\frac{k+1}{2(k+2)}-\frac{k}{2(k+1)}\\&=\frac{(k+1)^2-k(k+2)}{2(k+1)(k+2)}=\frac{1}{2(k+1)(k+2)},\end{align} which is precisely what we want. I.E.: the proposition \(p(k+1)\) is true!

Therefore, by the principle of (strong) mathematical induction, the propositions \(p(n)\) are true for all \(n\in\mathbb{N}\).

=)
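By the way, if you'd like some numerical reassurance on top of the proof, here's a little Matlab sketch that runs the recursion and compares it against the formula (the variable names are my own):

r = zeros(1,10);            % the first ten radii
s = 0;                      % running sum r_1+...+r_(n-1)
for n = 1:10
    r(n) = 1/(4-4*s) - s;   % the recursion derived above
    s = s + r(n);
end
max(abs(r - 1./(2*(1:10).*(2:11))))   % deviation from 1/(2n(n+1)): essentially 0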

*: Bah, I didn't feel like drawing the diagram.
**: strictly speaking, it's not stronger - it's the same strength! It's just that it seems stronger.

Monday, 23 March 2015

Analysis for Realsies (2015): 1

Hey guys! Sooo, last week in Tute A y'all saw this cool fact that one way to check the divisibility of a (natural) number by \(3\) is to add up all of its digits and check to see if the resulting sum is divisible by \(3\).

E.g.: take \(832745812349\), now \(8+3+2+7+4+5+8+1+2+3+4+9=56\) and \(56\) isn't divisible by \(3\). Therefore \(832745812349\) isn't divisible by \(3\).

And in fact, this is also true for \(9\), you can add up the digits of a number and if this sum is divisible by \(9\), then the original number must also be divisible by \(9\) (and vice versa).

I then asked you to try to figure out a similar trick for \(11\), and you can find the rule (and a derivation/proof) here!

Well, I then issued like a super-ninja-hax-challenge version of this where I asked you to derive a similar rule for divisibility by \(101,1001,10001,100001,\ldots\) How did you go with this? I'll write the answer below, but I kinda want you to take a look at the above blog post first.

Here we go:

So, the trick for divisibility by \(11\) of a whole number with digit representation \(\overline{n_0 n_1 n_2\ldots n_k}\) is to take the following "alternating sum" \begin{align} n_k-n_{k-1}+n_{k-2}-\ldots+(-1)^kn_0,\end{align} and if this sum is divisible by \(11\), then the original number is divisible by \(11\) (and vice versa). Note, in particular, the order in which we've gone: we've taken the last digit, subtracted the second-last, and so on. And the reason that we've done this is because this presentation generalises more easily to \(101,1001,\ldots\)

So, what about \(101\)? Well, the rule is to take the last two digits, subtract the next pair, then add the next pair, and so forth. So, let's see an example:

the number \(74914071682\) satisfies that \begin{align} 82-16+7-14+49-7=101,\end{align} which is clearly divisible by \(101\). Thus, \(74914071682\) should be divisible by \(101\) and indeed:  \(74914071682=101\times 741723482\).

And the generalisation to \(1001\) involves us taking the last three digits, then subtracting the next triple, and adding the next triple...etc. And the pattern goes on...
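If you feel like outsourcing the arithmetic, here's a little Matlab sketch of the general rule - the function name alt_block_sum is my own invention. It takes the blocks of \(b\) digits from the right and alternately adds and subtracts them, producing the quantity whose divisibility by \(10^b+1\) we care about:

function d = alt_block_sum(N, b)
% Alternating sum of the b-digit blocks of N, taken from the right.
d = 0;
s = 1;                      % sign: +1, -1, +1, ...
while N > 0
    d = d + s*mod(N, 10^b); % grab the last b digits
    N = floor(N/10^b);      % and then drop them
    s = -s;
end

For example, alt_block_sum(74914071682, 2) returns \(101\), matching the computation above.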

What about the proof!? I hear you ask. Well, it's pretty similar to the proof for our rule regarding divisibility by \(11\). So, I'll leave this to you guys as a healthy challenge exercise. =)

Maths on Steroids (Accelerated Mathematics 1, 2015): 1

Alright, alright, alright, alright, as promised: I'mma give a more detailed explanation of how to do question 24 from your tutorials using...symmetry. =D (*)

But before we launch into that, some of you might find this blog entry from a previous year useful. Those who are in my tutes will have seen the very first thing in this blog post, but those who're only in my lab might find it interesting. I also go through a line-by-line analysis of the code at the end of last week's lab - if you totes understood everything, then great! If not, this might be a useful reference (plus, I point out a few things along the way - who knows, maybe you missed something?).

Okay, on to symmetry?

Now, we want to prove that \begin{align}\left| \begin{array}{ccc} 1 & 1 & 1 \\a & b & c \\ a^3 & b^3 & c^3\end{array}\right|=(a-b)(b-c)(c-a)(a+b+c).\end{align} And the first thing that I want to do is to make the following observations:

  1. The determinant is going to be a degree 4 polynomial in \(a,b,c\). This is because each (monomial) term in the determinant is a product of matrix entries taken from different rows and different columns (have a gooooood think about how cofactor expansion works).
  2. If any two of the three variables are equivalent, e.g.: \(a=b\), then the matrix has repeated columns and hence will have determinant equal to \(0\) (**). This tells us that \((a-b),(a-c)\) and \((b-c)\) must all be factors of the determinant.
  3. Switching any two of the variables in the matrix has the same effect as switching the two corresponding columns! This means that the determinant changes sign every time you switch two variables.
Aaaaand, that should be enough to get us started.

Now, we know that the determinant is some polynomial \(d(a,b,c)\). And observation 2 tells us that \begin{align} d(a,b,c)=(a-b)(b-c)(c-a)\times p(a,b,c),\end{align} where \(p(a,b,c)\) is another polynomial in \(a,b,c\). Combining this with observation 1, which tells us that every single term in \(d(a,b,c)\) has to be degree 4, we see that the polynomial \(p(a,b,c)\) actually has to be (homogeneous) linear: \begin{align} p(a,b,c)=\alpha a+\beta b+\gamma c\text{, for some }\alpha,\beta,\gamma\in\mathbb{R}.\end{align}

Okay, so it's lookin' preeeetty good...we're getting a good handle-hold on what \(d(a,b,c)\) looks like.

Thanks to observation 3, we know that \(d(a,b,c)=-d(b,a,c)\) and so: \begin{align} (a-b)(b-c)(c-a)p(a,b,c)&=-(b-a)(a-c)(c-b)p(b,a,c)\\&=(a-b)(b-c)(c-a)p(b,a,c)\\ \Rightarrow p(a,b,c)&=p(b,a,c)\\ \Rightarrow \alpha a +\beta b+\gamma c &= \alpha b+\beta a+\gamma c.\end{align} Since \(a\) and \(b\) are independent variables, this means that \(\alpha=\beta\). By the same logic applied to \(a\) and \(c\), we get that: \begin{align} \alpha=\beta=\gamma.\end{align} Soooo, we've just shown that the determinant may be written as \begin{align} d(a,b,c)=\alpha (a-b)(b-c)(c-a)(a+b+c),\end{align} and all we need to do now is to show that \(\alpha=1\). One way to do this is to substitute nice numbers for \(a,b,c\) and work it out. In fact, if we just set \(a=0\), we see that \begin{align}d(0,b,c)=\left| \begin{array}{ccc} 1 & 1 & 1 \\ 0 & b & c \\ 0 & b^3 & c^3\end{array}\right|=\left|\begin{array}{cc}b & c\\ b^3 & c^3\end{array}\right|=bc(c-b)(b+c).\end{align} Since we know that \begin{align}d(0,b,c)=\alpha bc(c-b)(b+c),\end{align} this gives us the desired conclusion that \(\alpha=1\).

AND WE'RE DONE! Now, this technique can also be used to do the other determinant in this question. It's a little bit trickier though...and you'll need to show, using symmetry arguments, that at some point you can get the determinant \(d(x,y,z)\) into the following form: \begin{align} d(x,y,z)=(x-y)(y-z)(z-x)[\alpha (x^2+y^2+z^2)+\beta (xy+yz+zx)],\end{align} and then show that \(\alpha=0\) and \(\beta=1\). This is a healthy exercise and you should all totes give it a try!
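If you want to double-check the determinant itself (or your own algebra along the way), here's a quick sketch using Matlab's Symbolic Math Toolbox - assuming you have it available. Not a proof, of course, but a comforting sanity check:

syms a b c
d = det([1 1 1; a b c; a^3 b^3 c^3]);
simplify(d - (a-b)*(b-c)*(c-a)*(a+b+c))   % displays 0 if the two expressions agree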


*: Note to self: smileys look vaguely wrong in a serif font... =/
**: if you didn't know this, combine the fact that determinants aren't changed by transposition and the method of computing determinants using row reduction

Tuesday, 25 November 2014

I Refuse to Name This "Complex Analysis...Made Simple": 5 (final)

Well, well well, wellity well well...I have no idea what to write about for this last blog entry. =/

Let's talk about...Riemann spheres!

So, at some point in this complex analysis course, you'll have learned that we really ought to think of the complex plane \(\mathbb{C}\) as lying on the Riemann sphere \(\hat{\mathbb{C}}:=\mathbb{C}\cup\{\infty\}\). Doing so means that meromorphic functions \(f:\mathbb{C}\rightarrow\mathbb{C}\) (i.e.: they have poles) may be thought of as holomorphic functions \(f:\mathbb{C}\rightarrow\hat{\mathbb{C}}\). Moreover, if the behaviour of \(f(z)\) is nice enough at \(z=\infty\) (i.e.: no essential singularities), then \(f\) can be extended to a holomorphic function \(f:\hat{\mathbb{C}}\rightarrow\hat{\mathbb{C}}\).

One class of these holomorphic functions between Riemann spheres that you've encountered in this course are Mobius transformations: \begin{align} z\mapsto\frac{az+b}{cz+d}\text{, where }ad-bc\neq0.\end{align} You've already seen some nice facts about them. You know that they are conformal (definition: angle-preserving) transformations, that they take circles/lines to circles/lines and you know that they're bijections. In fact, they are precisely the collection of all bijective holomorphic maps between Riemann spheres.

Let's prove this!

Consider a bijective holomorphic function \(f:\hat{\mathbb{C}}\rightarrow\hat{\mathbb{C}}\). Since \(f\) is bijective, let \(a,b\in\mathbb{C}\) respectively denote the points which map to \(0=f(a)\) and \(\infty=f(b)\). We may assume, by precomposing \(f\) with a Mobius transformation, that \(b\neq\infty\). Now \(z=a\) must be an order 1 zero, or else there'd be a small neighbourhood around it where two points map to some \(\epsilon\) close to \(0\). Likewise, we can argue that \(z=b\) must be an order 1 pole of \(f\) by considering the function \(f(\frac{1}{z})\). Thus, the function \begin{align}g(z)=\frac{(z-b)f(z)}{z-a}\end{align} is a holomorphic function on \(\hat{\mathbb{C}}\) with neither zeros nor poles. In particular, its absolute value function \(|g|\) is a continuous function on a compact domain (because spheres are totes closed and bounded) and hence must be bounded. Thus, by Liouville's theorem, the function \(g(z)\) is equal to some non-zero constant \(c\in\mathbb{C}\), and so: \begin{align}f:z\mapsto\frac{cz-ac}{z-b}.\end{align} Finally, the determinant condition \(-bc+ac\neq0\) holds, since \(c\neq0\) and since \(a=b\) would contradict the bijectivity of \(f\). Thus, this is totes a Mobius transformation.

Note that, using the same arguments, we can show that any holomorphic map between Riemann spheres takes the form of a rational function: \begin{align} \frac{a_0+a_1z+a_2z^2+\ldots+a_nz^n}{b_0+b_1z+b_2z^2+\ldots+b_mz^m}.\end{align}
In fact, I was going to play around with a few polynomials and count branch points and what not, leading us to the Riemann-Hurwitz formula; but it's getting late, and we really ought to give this semester a rest. =)

Monday, 24 November 2014

Analysis for Realsies: 8 (final)

Okay, as promised - my last words. =P

In my previous post, I promised you guys that I'd show you how to differentiate \(\sin(x)\) from first principles and that I'd also prove a cool geometric fact to do with the lunes of Alhazen.

So, you suddenly wake up in the dead of night with cold beads of sweat on your forehead aperch'd, and there is but one solitary thought plaguing your addled-mind: how the heck do you actually differentiate \(\sin(x)\)? Well, when in doubt...write it out in first principles: \begin{align}\frac{\mathrm{d}\sin(x)}{\mathrm{d}x}=&\lim_{h\to0}\frac{\sin(x+h)-\sin(x)}{h}\\=&\lim_{h\to0}\frac{\sin(x)\cos(h)+\cos(x)\sin(h)-\sin(x)}{h}\\=&\lim_{h\to0}\sin(x)\frac{\cos(h)-1}{h}+\cos(x)\frac{\sin(h)}{h}.\end{align} So, we see that we need to calculate these two limits: \begin{align}\lim_{h\to0}\frac{\sin(h)}{h}\text{ and }\lim_{h\to0}\frac{1-\cos(h)}{h}.\end{align} To calculate the former, consider the following picture:
The height of the smaller triangle is \(\sin(x)\). This is shorter than the circular arc, which is of length \(x=\frac{x}{2\pi}\times2\pi\). Moreover, the area of the circular sector is \(\frac{1}{2}x=\frac{x}{2\pi}\times\pi\times1^2\), which is less than the area of the big triangle: \(\frac{1}{2}\tan(x)\). Putting these two inequalities together, we have: \begin{align}\forall x\in(0,\tfrac{\pi}{2}),\quad \sin(x)\leq x\leq\tan(x).\end{align} Dividing by \(\sin(x)\) and taking reciprocals, we see that: \begin{align}\cos(x)\leq\frac{\sin(x)}{x}\leq 1.\end{align} Since \(\lim_{h\to0^+}\cos(h)=1\), the sandwich theorem tells us that: \begin{align}\lim_{h\to0^+}\frac{\sin(h)}{h}=1.\end{align} Since \(\frac{\sin(h)}{h}\) is an even function, the limit from the negative side of \(h=0\) must also approach \(1\). Knowing this first limit allows us to compute the second: \begin{align}\lim_{h\to0}\frac{1-\cos(h)}{h}=&\lim_{h\to0}\frac{1-\cos^2(h)}{h(1+\cos(h))}\\=&\lim_{h\to0}\frac{\sin(h)}{h}\times\frac{\sin(h)}{1+\cos(h)}=1\times0=0.\end{align} Shoving this into our first principles computation of the derivative of \(\sin(x)\), we see that \(\frac{\mathrm{d}\sin(x)}{\mathrm{d}x}=\cos(x)\). W00000t! Having assured yourself that you're perfectly capable of differentiating \(\sin(x)\), you drift back to sleep...
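And if, half-asleep, you still crave numerical reassurance about those two limits, here's a quick Matlab sketch to paste into the command window:

h = 10.^-(1:8);              % h = 0.1, 0.01, ..., 1e-8
[sin(h)./h; (1-cos(h))./h]   % the first row creeps towards 1, the second towards 0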

Only to be awoken by an infernal desire to prove that the area of the two dark shaded burgundy "lunes" (the crescent shaped moon things) in the following diagram add up to the area of the white triangle in the middle:

Orrite, let's prove this. First note that the white triangle in the middle is a right-angled triangle (this is true of any triangle inscribed in a circle so that one of its sides is a diameter), and we denote its side-lengths by \(a\leq b\leq c\), where \(c\) is the diameter of the big circle. Pythagoras' theorem tells us that \begin{align} a^2+b^2=c^2,\end{align} hence, the area of the yellow semicircle (\(\frac{1}{8}\pi c^2\)) is equal to the total area of the two (duo-toned) smaller red semicircles (\(\frac{1}{8}\pi a^2+\frac{1}{8}\pi b^2\)). Now, the area of the yellow semicircle is the same as the area of the white triangle and the two lighter red sections, and therefore:

Area of the white triangle+area of the light red bits=area of the dark red bits+area of the light red bits.

Cancelling the light red bits from both sides then yields the desired result. =)
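If you'd like to see this with actual numbers, here's a quick Matlab check on the classic \(3\)-\(4\)-\(5\) right-angled triangle (any right-angled triangle would do):

a = 3; b = 4; c = 5;                       % the sides, with c the diameter
pi*a^2/8 + pi*b^2/8 - (pi*c^2/8 - a*b/2)   % area of the two lunes
a*b/2                                      % area of the triangle: both give 6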

Well, it's been a good semester guys, I hope that you have an awesome Christmas break!

Sunday, 26 October 2014

Analysis for Realsies: 7

Hey guys! W0000, SWOTVAC! It's time to learn every subject that you've done this semester from scratch within a week! Awwww yisssssss!

A week or two ago, you'll have seen a question in your tute B prac sheets asking you if the following series converges: \begin{align}\sum_{k=1}^{\infty}\sin(k+k^{-1}).\end{align} And the answer is no, by the divergence test. That is: the sequence \((\sin(k+k^{-1}))\) doesn't converge to \(0\), so the series can't actually converge. But we didn't really show that this sequence doesn't converge to \(0\) - we just said that this is true by obviousness. There are probably a fair few ways that you guys can prove this, but let's just use something that you're all totes guns with now: \(\epsilon-N\) proofs!

And let's do this by proof-by-contradiction: assume that \(\lim_{k\to\infty}\sin(k+k^{-1})=0\). This means that for \(\epsilon=\frac{1}{3}\sin(1)>0\), there exists some \(N\in\mathbb{N}\) such that for every \(n>N\), \begin{align}|\sin(n+n^{-1})-0|<\epsilon=\frac{1}{3}\sin(1).\end{align} Consider an arbitrary integer \(k>N\), then \begin{align} |\sin(k+k^{-1})|<\epsilon\text{ and }|\cos(k+k^{-1})|>\sqrt{1-\epsilon^2}=\sqrt{1-\frac{\sin^2(1)}{9}}>\frac{\sqrt{8}}{3}.\end{align} Now, since \(k+1\) is also larger than \(N\), we see that: \begin{align}\epsilon>&\left|\sin\left(k+1+\frac{1}{k+1}\right)-0\right|\\=&\left|\sin\left(k+\frac{1}{k}+1-\frac{1}{k(k+1)}\right)\right|\\ =&\left|\sin\left(1-\frac{1}{k(k+1)}\right)\cos(k+k^{-1})+\sin(k+k^{-1})\cos\left(1-\frac{1}{k(k+1)}\right)\right|\\ \geq&\left|\left|\sin\left(1-\frac{1}{k(k+1)}\right)\cos(k+k^{-1})\right|-\left|\sin(k+k^{-1})\cos\left(1-\frac{1}{k(k+1)}\right)\right|\right|,\end{align} where this final line is due to (one of the forms of) the triangle inequality. This in turn means that: \begin{align}\epsilon=\frac{\sin(1)}{3}>&\sin\left(1-\frac{1}{k(k+1)}\right)|\cos(k+k^{-1})|-|\sin(k+k^{-1})|\\ >&\frac{\sqrt{8}}{3}\sin\left(1-\frac{1}{k(k+1)}\right)- \frac{\sin(1)}{3}, \end{align} for all \(k>N\). This is false when \(k\) is large enough, because \(\lim_{k\to\infty}\sin(1-(k(k+1))^{-1})=\sin(1)\), so the right-hand side tends to \(\frac{(\sqrt{8}-1)\sin(1)}{3}>\frac{\sin(1)}{3}=\epsilon\) (using \(\sqrt{8}-1>1\)). Soooooooo, we see that the sequence \((\sin(k+k^{-1}))\) can't possibly converge to \(0\), and we're done!
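By the way, if you want to see this non-convergence with your own eyes, here's a quick Matlab peek (an illustration, not a proof!):

k = 1:15;
sin(k + 1./k)   % the terms keep bouncing around in [-1,1], nowhere near settling at 0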

Now, if you were in my tute A classes last week, I would have given you the following two variants of the above problem:

Do the following series converge? \begin{align} \sum_{k=1}^{\infty} \sin\left(\pi k+\frac{1}{\pi k}\right)\text{ and }\sum_{k=1}^{\infty}\sin\left(2\pi k+\frac{1}{2\pi k}\right).\end{align}
Answer: yes (for the left series) and no (for the right).

To see that \(\sum \sin(\pi k+\frac{1}{\pi k})\) converges, observe that \begin{align} \sin\left(\pi k+\frac{1}{\pi k}\right)=(-1)^k\sin(\tfrac{1}{\pi k}).\end{align} The desired convergence follows from invoking the alternating series theorem. =)

To see that \(\sum \sin(2\pi k+\frac{1}{2\pi k})\) diverges, observe that \begin{align} \sin\left(2\pi k+\frac{1}{2\pi k}\right)=\sin(\tfrac{1}{2\pi k}).\end{align} Intuitively, when \(k\) is epically huge, \(\frac{1}{2\pi k}\) will be tiny. And for \(x\) really close to \(0\), \(\sin(x)\approx x\). Thus, we expect that for large \(k\), \begin{align}\sin(\tfrac{1}{2\pi k})\approx \tfrac{1}{2\pi k}.\end{align} Hence, the whole series behaves roughly like \(\frac{1}{2\pi}\) times the harmonic series...which totally blows up to infinity!

But...how can we formally set this out? Let's do it using the mean value theorem!

So, basically, the MVT tells us that there exists some \(c_k\in(0,\frac{1}{2\pi k})\) such that \begin{align}\cos(c_k)=\frac{\sin(\tfrac{1}{2\pi k})-\sin(0)}{\tfrac{1}{2\pi k} - 0}\text{, and thus:}\\ \sin(\tfrac{1}{2\pi k})=\frac{1}{2\pi k}\cos(c_k)\geq\frac{1}{2\pi k}\cos(\tfrac{1}{2\pi}). \end{align} This in turn means that: \begin{align}\sum_{k=1}^{\infty}\sin\left(2\pi k+\frac{1}{2\pi k}\right)\geq \cos(\tfrac{1}{2\pi})\sum_{k=1}^{\infty}\frac{1}{2\pi k}.\end{align} Since the right series blows up to infinity, our series \(\sum \sin(2\pi k+\frac{1}{2\pi k})\) is divergent! (Note that we're really just applying the comparison test for series.)
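And for the sceptics, here's a rough Matlab sketch comparing partial sums of the two series (again, an illustration rather than a proof):

k = 1:100000;
sum(sin(pi*k + 1./(pi*k)))      % settles near a finite value
sum(sin(2*pi*k + 1./(2*pi*k)))  % keeps growing as you raise the upper limit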

Okay...so, this blog post has gotten way longer than I had in mind...and I had hoped to show you guys how to differentiate \(\sin(x)\) from first principles and also a nice little geometric fact that Pat showed me in class...maybe...I'll do another blog post at some point...