
Strange/Interesting Mathematical Phenomena


Nathanael D. Striker


What are your favorite mathematical concepts that are strange because they violate preconceived notions, or that are just plain interesting? To get this topic started, let me show you a proof of one of my favorites: the probability of hitting a rational number in the set [0,1] is zero.

 

Before we begin, let us define the Lebesgue measure. In essence, it is a way to measure subsets of R^n. For our purposes, we are interested in the following property: the Lebesgue measure of a closed interval [a,b] is m([a,b]) = b - a, where a ≤ b.

 

The Lebesgue Measure of [0,1] is trivial to find as it is a direct application of the property in question: m([0,1]) = 1.

 

The Lebesgue measure of Q is not as straightforward to find. In order to motivate this result, let us consider the following: Q = ∪{q_i}. What this says is that Q can be thought of as the union of countably many singleton sets {q_i}, one for each rational number (there are countably infinitely many rationals). A single point is the degenerate interval [q_i, q_i], so m({q_i}) = q_i - q_i = 0. This separation of Q lets us use countable additivity together with our property above: m(Q) = m(∪{q_i}) = ∑ m({q_i}) = ∑ 0 = 0.

 

So, we have that m([0,1]) = 1 and m(Q) = 0. This is enough to show that the probability of hitting a rational number in the set [0,1] is zero, but it raises the question of what makes m([0,1]) = 1 if m(Q) = 0. Well, we can define the irrational numbers as R \ Q, which denotes the set of real numbers that are not rational. As we are concerned with the set [0,1] and not all of R, we restrict to [0,1] \ Q. Now, [0,1] is the disjoint union of the rationals in [0,1] and the irrationals in [0,1], so additivity gives m([0,1]) = m(Q ∩ [0,1]) + m([0,1] \ Q) = 0 + m([0,1] \ Q) = 1. Therefore, the Lebesgue measure of the irrationals in the set [0,1] is 1.

 

The above result not only shows that the probability of hitting a rational number in the set [0,1] is 0, it also shows that the probability of hitting an irrational number in the set [0,1] is 1. This makes sense, since P(S) = 1 for the whole sample space S, and we have taken S = [0,1] in our example.

 

And with that, let us discuss the topic at hand: what are your favorite mathematical concepts that are strange because they violate preconceived notions, or that are just plain interesting?


e^(i*π) + 1 = 0

The Maclaurin series expansion for cos(x) is given by

 

cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + ...

 

(you can plug i*x for x in the above series)

 

cos(i*x) = 1 + x^2/2! + x^4/4! + x^6/6! + ...

 

The Maclaurin series for sin(x) is

 

sin(x) = x - x^3/3! + x^5/5! - x^7/7! + ...

 

(you can plug i*x for x in the above series)

 

-i*sin(i*x) = x + x^3/3! + x^5/5! + x^7/7! + ...

 

Adding the two series gives cos(i*x) - i*sin(i*x) = 1 + x + x^2/2! + x^3/3! + ... = e^x. Now plug in x = π*i: e^(π*i) = cos(i*π*i) - i*sin(i*π*i) = cos(-π) - i*sin(-π) = -1 - 0 = -1.

 

 

Pretty simple overall
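The series manipulation above can be sanity-checked numerically. A quick Python sketch of my own (function names are illustrative) sums truncated Maclaurin series and compares against cmath:

```python
import cmath
from math import factorial

def cos_series(z, terms=30):
    """Truncated Maclaurin series for cos(z)."""
    return sum((-1)**n * z**(2 * n) / factorial(2 * n) for n in range(terms))

def sin_series(z, terms=30):
    """Truncated Maclaurin series for sin(z)."""
    return sum((-1)**n * z**(2 * n + 1) / factorial(2 * n + 1) for n in range(terms))

x = 2.0
# cos(ix) - i*sin(ix) should equal e^x:
lhs = cos_series(1j * x) - 1j * sin_series(1j * x)
print(abs(lhs - cmath.exp(x)) < 1e-9)  # True

# With x = pi*i we recover Euler's identity e^(i*pi) = -1:
x2 = cmath.pi * 1j
val = cos_series(1j * x2) - 1j * sin_series(1j * x2)
print(abs(val - (-1)) < 1e-9)          # True
```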


I'm pretty sure he knows and wanted to give a general proof for Euler's equation too. I might be wrong though.

 

And there exists a straight-forward proof for that as well.

 

Let's define y = cos(x) + i*sin(x).

Taking the derivative: dy/dx = -sin(x) + i*cos(x) = i[cos(x) + i*sin(x)] (i^2 = -1) = i*y

Separating variables: dy/y = i*dx

Integrating both sides: ln(y) = i*x + c

Exponentiating both sides: y = exp(i*x + c) (since exp(ln(y)) = y)

So, we have exp(i*x + c) = cos(x) + i*sin(x). At x = 0, exp(c) = 1 ==> c = 0. Therefore, exp(i*x) = cos(x) + i*sin(x).

 

Proving exp(i*pi) + 1 = 0 can be shown by letting x = pi in the above equation.
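The differential-equation argument can be checked numerically with Python's cmath (a quick sketch of my own, not part of the original post):

```python
import cmath

# exp(i*x) should agree with cos(x) + i*sin(x) for any real x
for x in [0.0, 0.5, 1.0, cmath.pi, -2.7]:
    lhs = cmath.exp(1j * x)
    rhs = cmath.cos(x) + 1j * cmath.sin(x)
    assert abs(lhs - rhs) < 1e-12

# Euler's identity: exp(i*pi) + 1 = 0, up to floating-point rounding
print(abs(cmath.exp(1j * cmath.pi) + 1))  # ~1.2e-16
```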


And there exists a straight-forward proof for that as well. [...]

Or that


And there exists a straight-forward proof for that as well. [...]

 

Yes I know. I'm just really confused how you can think that complex integration with several steps is any more straight-forward than equating 2 sides of a series expansion.

 

Either way my first point still stands with both variants of the proof. 


Yes I know. I'm just really confused how you can think that complex integration with several steps is any more straight-forward than equating 2 sides of a series expansion.

 

Either way my first point still stands with both variants of the proof. 

Complex integration doesn't need infinite series

 

My personal fav tho is a way to evaluate any indefinite integral that I came up with on the toilet

 

Basically I got to this result from integration by parts

∫f(x)g'(x)dx= f(x)g(x) - ∫f'(x)g(x)dx

Therefore it stands that

∫f(x)g(x)dx=f(x)∫g(x)dx-∫f'(x)(∫g(x)dx)dx

(this can be verified by differentiating both sides)

=>

f(x)g(x)=f'(x)∫g(x)dx+f(x)g(x)-f'(x)∫g(x)dx

Given: ∫f(x)g(x)dx=f(x)∫g(x)dx-∫f'(x)(∫g(x)dx)dx

set g(x)=1

∫f(x)*1 dx = ∫f(x)dx = f(x)∫1dx - ∫f'(x)(∫1dx)dx = x*f(x) - ∫f'(x)*x dx

Repeat the "product rule for integration" for ∫f'(x)*x dx

which gets you:

x*f(x) - f'(x)∫x dx + ∫f''(x)(∫x dx)dx = x*f(x) - x^2*f'(x)/2 + ∫f''(x)*x^2/2 dx

Now expand

∫f''(x)*x^2/2 dx

(you might notice a trend here)

x*f(x)/(1) - x^2*f'(x)/((2)(1)) + x^3*f''(x)/((3)(2)(1)) - ∫f'''(x)*x^3/((3)(2)(1)) dx

You can expand that too, but the pattern is pretty clear at this point

Terms alternate in sign, starting with + for the first term. Let's call this first term n=0.

You multiply the nth derivative f^(n)(x) of the function you want to integrate by x^(n+1), then divide by (n+1)!

∫f(x)dx = ∑_{n=0}^∞ (-1)^n * f^(n)(x) * x^(n+1) / (n+1)!

Benefits of this over, say, Taylor? It has an infinite radius of convergence, given you can infinitely differentiate the function at the point.
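The pattern-derived formula is easy to spot-check in Python. For f(x) = e^x every derivative is e^x, so the sum should converge to e^x - 1, the antiderivative that vanishes at 0 (a check of my own; the function names are illustrative):

```python
from math import exp, factorial

def integral_series(deriv, x, terms=40):
    """Sum of (-1)^n * f^(n)(x) * x^(n+1) / (n+1)! for n = 0..terms-1.
    deriv(n, x) must return the n-th derivative of f evaluated at x."""
    return sum((-1)**n * deriv(n, x) * x**(n + 1) / factorial(n + 1)
               for n in range(terms))

# For f(x) = e^x, every derivative is e^x:
approx = integral_series(lambda n, x: exp(x), 2.0)
print(abs(approx - (exp(2.0) - 1)) < 1e-9)  # True
```

Algebraically this works out to e^x * (1 - e^(-x)) = e^x - 1, matching the formula's prediction.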

Example: Finding an expansion of ln(x+1)

f(x)=1/(x+1)

∫1/(x+1) dx:

(-1)^0 * (x^1) * (1/(x+1)) / 1! = x/(x+1)

+

(-1)^1 * (x^2) * (-1/(x+1)^2) / 2! = x^2/(2*(x+1)^2)

+

(-1)^2 * (x^3) * (2/(x+1)^3) / 3! = x^3/(3*(x+1)^3)

Might notice another pattern

ln(x+1) = ∑_{n=1}^∞ (x/(x+1))^n / n

Let's test it:

ln(1)=0

ln(0+1) = ∑_{n=1}^∞ (0/(0+1))^n / n = 0
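The closed form can also be tested away from x = 0 with a quick Python check of my own (the geometric ratio x/(x+1) must satisfy |x/(x+1)| < 1, i.e. x > -1/2, for the sum to converge):

```python
from math import log

def ln_series(x, terms=400):
    """Approximate ln(x+1) as the sum over n >= 1 of (x/(x+1))^n / n."""
    r = x / (x + 1)
    return sum(r**n / n for n in range(1, terms + 1))

# Compare against math.log at a few points:
for x in [0.5, 1.0, 3.0]:
    assert abs(ln_series(x) - log(x + 1)) < 1e-9
print(abs(ln_series(1.0) - log(2.0)) < 1e-9)  # True
```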


Yes I know. I'm just really confused how you can think that complex integration with several steps is any more straight-forward than equating 2 sides of a series expansion. [...]

It was just integrating i, so it's just simple integration. If it were something like (3+4i)^2, that would need a little more work. And tbh, I do a lot of integration, so it just comes more naturally.


I have nothing to add but I want to say I'm weirdly happy to see this kind of thread here of all things. Good on you, nerds.

Math was always something I couldn't get into, likely due to the teachers I had for it. It's something that really needs a good teacher to interest anyone.

