Friday, February 24, 2017

The Brachistochrone Problem

           This mathematical blog post focuses on the brachistochrone problem, a famous problem dating back to the time of Newton, Leibniz, and Bernoulli. Unlike many of the other problems mentioned in this blog, this one has been solved, but it is still a great mathematical problem. The first part of the post covers the history of the problem, the second part explains Bernoulli's solution, and the post closes with a part on the geometrical representation of the problem.

            Brachistochrone comes from two Greek words meaning "shortest time." Johann Bernoulli used the term to describe a problem about traveling between two points under the effect of gravity. Imagine two points, A and B, on a plane, with point A elevated above point B. The problem asks: what is the best path for an object to roll down from point A to reach point B? "Best" here means fastest, hence the Greek term for shortest time. This is not as straightforward a problem as it seems, because two factors come into play. The first is the distance traveled: a straight line has the shortest distance, so there is less ground to cover. The second is the speed of the object: if the path dips steeply at the start, the object accelerates sooner, and the extra speed can more than make up for the longer distance. Galileo had proposed, before Bernoulli, that an arc of a circle would be the best path. The circle is better than the straight line, but it is still not the best solution; the answer lies somewhere between a circular arc and a straight line. Bernoulli came up with a solution to the problem, but rather than sharing it, he sent the problem out as a challenge to the other mathematicians of the time, including Leibniz, Newton, and Johann Bernoulli's brother Jacob. Bernoulli issued the challenge in order to show that he was the cleverest mathematician of his day. Some historians speculate that he was really only trying to rival his brother, but either way he was showing off his mathematical skills. It may have backfired on him, though, because Newton solved the problem overnight, while it had taken Bernoulli two weeks. Bernoulli, though not the fastest to solve it, came up with a rather clever way of looking at the problem.
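The trade-off between distance and speed described above can be checked numerically. The sketch below (with assumed values for the wheel radius R and gravity g, and paths discretized into short segments) compares the descent time along a straight line with the descent time along the curve that actually solves the problem, the cycloid described later in this post. The travel time is approximated segment by segment as distance divided by the speed sqrt(2*g*y) an object falling from rest has at depth y.

```python
import math

g = 9.81  # gravitational acceleration (m/s^2), assumed for the demo

def descent_time(points):
    """Approximate travel time along a polyline of (x, y) points, where y
    is measured downward from the start and the object starts at rest,
    so its speed at depth y is sqrt(2*g*y)."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        ds = math.hypot(x1 - x0, y1 - y0)       # segment length
        y_mid = 0.5 * (y0 + y1)                 # depth at segment midpoint
        total += ds / math.sqrt(2 * g * y_mid)  # time = distance / speed
    return total

n = 100_000
R = 1.0  # radius of the cycloid's rolling wheel (assumed)

# Cycloid from (0, 0) to (pi*R, 2*R): x = R(t - sin t), y = R(1 - cos t)
cycloid = [(R * (t - math.sin(t)), R * (1 - math.cos(t)))
           for t in (math.pi * i / n for i in range(n + 1))]

# Straight line between the same two endpoints
line = [(math.pi * R * i / n, 2 * R * i / n) for i in range(n + 1)]

t_cyc, t_line = descent_time(cycloid), descent_time(line)
print(t_cyc, t_line)               # the cycloid wins despite being longer
print(math.pi * math.sqrt(R / g))  # exact cycloid descent time, for comparison
```

The straight path is about 18 percent shorter, yet the cycloid delivers the object noticeably sooner, which is exactly the balance the problem is about.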

            Bernoulli came up with a way of explaining the curve using light. Fermat had proposed the principle of least time: light traveling between two points takes the path that minimizes its travel time, which is why it bends when passing between different mediums. Snell's law describes this bending quantitatively: the ratio sin(θ)/v is the same on both sides of a boundary, where θ is the angle from the vertical and v is the speed of light in that medium. Bernoulli imagined the falling object as a ray of light passing through many thin horizontal layers, where the "speed of light" in each layer is the speed a falling object would have at that depth. As light passes through a few layers, its path is a chain of line segments; as the number of layers increases, the chain becomes smoother, and as the number of layers approaches infinity, the best path for the object emerges as a smooth curve. This was Bernoulli's solution, and it is correct.
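Bernoulli's layered-light construction can be sketched numerically. The code below (a sketch, with an assumed cycloid diameter D; the constant k is the Snell invariant sin(θ)/v) steps through thin horizontal layers, applies Snell's law at each boundary, and accumulates the horizontal displacement dx = dy·tan(θ). Because the Snell updates telescope, sin(θ)/v stays constant, and the traced path lands on the analytic cycloid.

```python
import math

# Bernoulli's construction: slice the region below the start into thin
# horizontal layers. In the layer at depth y the "light" travels at the
# falling object's speed v = sqrt(2*g*y); at each layer boundary Snell's
# law applies: sin(theta_new)/sin(theta_old) = v_new/v_old, with theta
# measured from the vertical. The updates telescope, so sin(theta)/v is
# constant and the traced path is a cycloid.
g = 9.81
D = 1.0                          # diameter of the generating circle (assumed)
k = 1.0 / math.sqrt(2 * g * D)   # the invariant sin(theta)/v

n = 100_000
y_end = 0.5 * D                  # stop well before theta reaches 90 degrees
dy = y_end / n

x = 0.0
sin_t = None
v_old = None
for i in range(n):
    y_mid = (i + 0.5) * dy
    v_new = math.sqrt(2 * g * y_mid)
    if sin_t is None:
        sin_t = k * v_new        # starting angle in the first thin layer
    else:
        sin_t *= v_new / v_old   # Snell's law at each layer boundary
    v_old = v_new
    x += dy * sin_t / math.sqrt(1 - sin_t**2)  # dx = dy * tan(theta)

# Analytic cycloid through the same depth: y = D*sin(t/2)^2, x = (D/2)(t - sin t)
t = 2 * math.asin(math.sqrt(y_end / D))
x_exact = 0.5 * D * (t - math.sin(t))
print(x, x_exact)
```

The numerically refracted path and the analytic cycloid agree to several decimal places, which is the "number of mediums approaches infinity" limit in action.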


            Johann Bernoulli also saw some geometry in the curve. He recognized that at any point on the curve, the sine of the angle between the tangent line and the vertical at that point, divided by the square root of the vertical distance from that point up to the start of the curve, is constant. He recognized the curve to be what is called a cycloid. A cycloid is the path traced by a point on the rim of a rolling wheel. Imagine taping a marker to the edge of a wheel, flush with the rim so that it does not affect the roll. If the wheel is rolled along the bottom of a whiteboard, the marker draws a cycloid. However, the relation between the cycloid and the discovery made by Bernoulli is not as obvious as Bernoulli made it seem. At first sin(theta)/sqrt(y) does not seem to have any relation to a rolling wheel. But looking at the wheel and using some geometry, it can be shown that the ratio equals 1/sqrt(D), where D is the diameter of the wheel; since the diameter is constant, the ratio sin(theta)/sqrt(y) is constant as well.
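Bernoulli's constant ratio can be verified directly from the rolling-wheel parametrization. The sketch below (with an arbitrary assumed wheel radius R) samples points along one half-arch of the cycloid, computes the sine of the tangent's angle from the vertical at each point, and checks that sin(theta)/sqrt(y) is the same everywhere and equals 1/sqrt(diameter).

```python
import math

# Cycloid traced by a wheel of radius R: x = R(t - sin t), y = R(1 - cos t),
# with y measured downward from the cusp. The tangent direction is
# (dx/dt, dy/dt); theta is the angle between the tangent and the vertical.
R = 0.7  # any wheel radius (assumed)
ratios = []
for i in range(1, 200):
    t = math.pi * i / 200                  # parameter along the half-arch
    dx = R * (1 - math.cos(t))             # dx/dt
    dy = R * math.sin(t)                   # dy/dt
    y = R * (1 - math.cos(t))              # depth below the cusp
    sin_theta = dx / math.hypot(dx, dy)    # sine of angle from the vertical
    ratios.append(sin_theta / math.sqrt(y))

expected = 1 / math.sqrt(2 * R)            # 1 / sqrt(diameter)
print(min(ratios), max(ratios), expected)
```

Every sampled ratio comes out identical to 1/sqrt(2R), confirming that the constant in Bernoulli's observation is set by the wheel's diameter.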

(cycloid)






Thursday, February 23, 2017

P vs. NP

In this blog post the focus will be on the P vs. NP problem. The problem itself is easy enough to explain in a post, but its real-world applications are much too vast to be covered in a single one. So the main focus will be on the problem and a few of its applications: first the problem, then how it relates solving Sudoku puzzles to curing cancer.

            The P vs. NP problem arose in the 1970s, the period when advances in computer technology were taking off. At the time, computer scientists realized that some problems were easier to compute than others. For example, a computer could do multiplication in a small amount of time, but could not compute the next perfect chess move in a timely manner. Computer scientists started grouping these problems into classes called P and NP. Focusing first on P: P refers to problems that a program can solve in a reasonable amount of time, such as multiplication or alphabetizing a given list of names. P stands for polynomial time, meaning the time needed to solve the problem can be written as a polynomial function of the size of the input. Going back to multiplication, the bigger the numbers being multiplied get, the more time it takes, but the time can be bounded by a polynomial such as x^2 or x^3. Then, containing P but extending beyond it, is a category called NP. NP stands for nondeterministic polynomial time, and it consists of problems whose solutions can be checked quickly, in polynomial time, even if finding a solution in the first place may take a very long time. A common example is a Sudoku puzzle: it can take a while to solve a Sudoku, but only seconds to check whether a filled-in grid is valid. Ordinary 9x9 Sudokus have become easy for computers, but that does not make generalized Sudoku a P problem: if the size of the Sudoku board is scaled up, the time needed to solve it by any known algorithm grows faster than any polynomial, and the puzzles become difficult even for supercomputers.
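The "easy to check" half of NP can be made concrete. The sketch below (a hypothetical example grid, using a 4x4 board so it fits on screen; the same checks work for any n x n board with n a perfect square) verifies a filled-in Sudoku by scanning every row, column, and box once, which takes time polynomial in the board size:

```python
# A solved Sudoku grid can be verified in polynomial time: check that every
# row, every column, and every box contains each symbol exactly once. This
# cheap check is what places (generalized) Sudoku in NP.
def is_valid_sudoku(grid):
    n = len(grid)                  # board is n x n, n a perfect square
    box = int(n ** 0.5)
    symbols = set(range(1, n + 1))
    rows = all(set(row) == symbols for row in grid)
    cols = all({grid[r][c] for r in range(n)} == symbols for c in range(n))
    boxes = all(
        {grid[br + r][bc + c] for r in range(box) for c in range(box)} == symbols
        for br in range(0, n, box) for bc in range(0, n, box)
    )
    return rows and cols and boxes

solved = [
    [1, 2, 3, 4],
    [3, 4, 1, 2],
    [2, 1, 4, 3],
    [4, 3, 2, 1],
]
print(is_valid_sudoku(solved))  # True
```

Notice the asymmetry: nothing in this checker helps you *find* a solution; it only confirms one, and that gap between finding and checking is the whole P vs. NP question.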
P problems are a subcategory of NP problems, and over time computer scientists have found that some problems formerly thought to be hard NP problems were actually P problems. This is not known to be the case for all problems, but the idea that every NP problem might one day be shown to be a P problem is a real possibility, and it is a point of controversy in computer science. Some computer scientists and mathematicians think that P will one day be shown to equal NP, meaning every NP problem would have a polynomial-time algorithm, while others think the classes will remain separate. That is the P vs. NP problem: can every NP problem be solved in polynomial time, or not? Adding to the difficulty, the question itself has resisted every known proof technique, which makes it tricky to even know how to attack.

            There is another subgroup; in fact there are many other groupings, but the focus here is on the NP-complete subgroup. NP-complete refers to the "root" NP problems. In the 1970s computer scientists found that many NP problems were all essentially the same kind of problem with a little bit of variance: any one of them can be translated into any other. These root problems connect all the NP problems, and if any one of them can be turned into a P problem, then all NP problems can be turned into P problems. An initial grouping was published in the 1970s, and since then additions to the list have included generalized Sudoku, Tetris, protein folding, and even crossword puzzles. Protein folding is a part of our understanding of how to cure cancer, so if a fast way were found to solve generalized Sudoku puzzles, it could lead toward the curing of cancer. This is why the P vs. NP problem is such a big deal, and why it is a Millennium Prize problem. If it is resolved with P equal to NP, there is a possibility that the world around us could change immensely; there is also the possibility that it will not, if NP and P remain separate. The point is, no one knows, and it would be worth finding out.

Friday, February 17, 2017

The Problem with 33

   
     In this week's installment, the topic of the post is a problem concerning the number thirty-three. This is a simpler problem without a large amount of backstory, so the focus will primarily be on the problem itself rather than its contributors; this is partially due to the fact that there is limited information on the problem's originators. To compensate, the branch of math used in the problem, and its founders, will be covered second. The first part of the post deals with the problem itself.

     The problem of 33 seems simple on the outside, but it has rather complex workings behind it. This is because the problem stems from number theory, which deserves many blog posts by itself but will only be mentioned briefly in this one. The problem starts out with a list, or sequence, of numbers. The sequence starts with one, then two, then three, then skips 4 and 5. Several numbers are skipped in a way that seems a bit random, but this will be addressed later on. The sequence continues without interruption until it skips thirteen and fourteen, then twenty-two and twenty-three, and finally thirty-one and thirty-two. The sequence does continue on, but it has reached thirty-three, the important part of the problem. All the numbers in this sequence are thought to be sums of three cubes, meaning they can be expressed as the sum of three whole numbers each raised to the third power. In math terms, the numbers in the sequence can be written as a^3 + b^3 + c^3. This is a Diophantine equation, of the kind brought up in a previous post about Fermat's Last Theorem. In this particular case negative values are also allowed, in order to allow for subtraction. To illustrate, here is the solution for the number three: three can be written as one cubed plus one cubed plus one cubed, or 1 + 1 + 1. This may seem simple, and many of the smaller integers do have simple solutions that can be worked out by hand, but some numbers in the sequence have very large solutions. It was not until 1999 that a solution for the number thirty was confirmed with the use of computers: 30 = (2,220,422,932)^3 + (-2,218,888,517)^3 + (-283,059,965)^3. The size of these numbers shows the scale of some of the answers in this sequence.
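Although the 1999 representation of 30 took a serious computer search to find, checking it is instant, since Python handles integers of this size exactly:

```python
# Verify the computer-found sum-of-three-cubes representation of 30
# with exact integer arithmetic.
a, b, c = 2220422932, -2218888517, -283059965
total = a**3 + b**3 + c**3
print(total)  # should be 30
```

This is the same find-versus-check asymmetry that makes the search for 33 so hard: a solution, once found, verifies in microseconds.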

     This is where the number thirty-three comes in. Thirty-three does not yet have a solution. Not only is a solution not known; whether a solution even exists is also unknown. Not being able to determine whether a number in the sequence has a whole-number solution to the Diophantine equation is irregular: computer searches have checked values of a, b, and c up to about 10^14 without settling the question for 33. Now, backtracking to the sequence itself, there were a few numbers that were skipped. This is because they were proven to have no solution to the Diophantine equation at all. It has been proven that if a number can be written as nine multiplied by a whole number plus four, or plus five, then it cannot be written as a sum of three cubes. In mathematical terms, if the number can be written as 9n + 4 or 9n + 5, with n a whole number, then it cannot be a sum of three cubes. The reason is that every cube leaves a remainder of 0, 1, or 8 when divided by 9, and no three of those remainders can add up to 4 or 5 mod 9. So the number four was skipped because it can be written as 9(0) + 4. That is the problem concerning the number thirty-three.
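The mod-9 obstruction is small enough to check exhaustively. The sketch below lists the residues cubes can leave mod 9, forms every possible residue of a sum of three cubes, and recovers exactly the numbers skipped in the sequence:

```python
# Every cube is congruent to 0, 1, or 8 mod 9, so a sum of three cubes can
# never be congruent to 4 or 5 mod 9. This reproduces the skipped numbers.
cube_residues = {(x**3) % 9 for x in range(9)}        # {0, 1, 8}
reachable = {(a + b + c) % 9 for a in cube_residues
             for b in cube_residues for c in cube_residues}
excluded = [n for n in range(1, 34) if n % 9 not in reachable]
print(sorted(cube_residues), sorted(reachable), excluded)
```

The excluded list comes out as 4, 5, 13, 14, 22, 23, 31, 32, matching the gaps in the sequence exactly; 33 is congruent to 6 mod 9, so this test cannot rule it out, which is why it remains open.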

     As mentioned previously, this problem belongs to a branch of math called number theory, the study of integers. Many contributors' work can be seen in this problem, or in work closely related to it. Diophantine equations are the biggest connection, but the work of Euler and Gauss, with their summations, can be seen as well. The work of Ramanujan also appears, in the partitions related to this problem, partitions being the ways a number can be broken down into sums. This means that even though this problem seems rather insignificant, it has many ties to great mathematicians and great math.

      Sources:
                    https://www.youtube.com/watch?v=wymmCdLdPvM
                    http://www.mathpages.com/home/kmath071.htm

Thursday, February 16, 2017

The Good Will Hunting Problem

This installment about mathematical problems is on the problem from the movie "Good Will Hunting." The focus will first be on the story behind the actual Will Hunting, and secondly on the problem illustrated in the movie. The problem in the movie does not require an extensive amount of work to solve, but it is a popular problem that anyone can take a crack at. The hope is that this post will be more entertaining, and a little easier to read, than some of the posts made previously.

Will Hunting is more of an urban legend than an actual person. There are several candidates, but two main ones, for the actual Will Hunting. The first is thought to be George Dantzig. One day, while at university, George was late to class. He arrived, sat down, and copied some problems from the board, assuming they were homework related to the lecture. George took them home and thought the last two were rather hard, but he worked them out anyway. He turned in the assignment, and nothing happened for six weeks. Then, early one morning, he got a knock on his door: it was his professor. The professor explained that he was only supposed to do the first few problems; the last two were unsolved problems in statistics, and George had solved them as part of his homework. Later, when George was completing his PhD, he asked his professor what to write about, and was told to put those two pieces of paper in a binder and call it good. The second candidate for Will Hunting is a man by the name of William Sidis. William was a child genius, said to have been giving lectures at Harvard at the age of 11. He had a natural ability with math, but he had trouble with the law: he attended communist rallies and assaulted an officer, which landed him in jail. He was released under the condition that he see a therapist, and his therapist was his own father. When he was finished seeing his father, he described the experience as a living hell and swore to be done with academia. William Sidis never worked in a major academic position again. He found office jobs, and in particular he liked running adding machines, because they fascinated him. Though these are the candidates for the real Will Hunting, the movie is not actually based on either; in fact, the movie is said to be based on a mathematician by the name of Ramanujan.
Ramanujan deserves his own blog post, so here he will be covered only briefly, but much like Will in the movie he was a math prodigy. Ramanujan was born in India, and his work was impressive considering where he came from: he did not have a formal education in mathematics. When he was in his twenties his potential was recognized, and he attended Cambridge; sadly, he died soon after. Much of his work concerned partitions. A partition of a number is a way of breaking it down into a sum of positive whole numbers.
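Partition counting, the subject Ramanujan is most associated with here, can be sketched in a few lines of dynamic programming. The function below counts the ways to write n as a sum of positive whole numbers, ignoring order:

```python
# p(n): the number of partitions of n, counted by building up one allowed
# part size at a time (a standard dynamic-programming approach).
def partitions(n):
    ways = [1] + [0] * n   # ways[m] = partitions of m using the parts seen so far
    for part in range(1, n + 1):
        for m in range(part, n + 1):
            ways[m] += ways[m - part]
    return ways[n]

print(partitions(5))  # 7: 5, 4+1, 3+2, 3+1+1, 2+2+1, 2+1+1+1, 1+1+1+1+1
```

The count grows quickly: p(100) is already 190,569,292, which hints at why partitions rewarded serious study.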

In the movie "Good Will Hunting," an MIT professor presents a problem on the board. He claims that it took MIT professors two years to solve. This is probably an exaggeration made for the film, because it is not that difficult to solve; the hardest part is understanding what the problem is asking. The question is to draw all the homeomorphically irreducible trees of size n = 10. Breaking that down one piece at a time: a tree is a network of dots (vertices) and lines (edges) without cycles, meaning no closed loops. Homeomorphically means that a tree can be stretched and rotated and still count as the same tree; only the pattern of connections matters, which puts a finite number on the solutions. Next, irreducible means that no vertex may have exactly two lines through it: such a vertex could be removed, merging its two lines into one, without changing the tree, so it would serve no purpose. The last part, n = 10, means the trees must have ten vertices connected with lines under these parameters. There are ten trees that match the criteria explained here.
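The conditions in the problem statement can be turned into a small checker. The sketch below (with trees given as edge lists over vertices 0 through n-1; the example trees are illustrations, one valid and one not) tests that a graph is a tree (connected with n-1 edges) and irreducible (no vertex of degree two):

```python
# Check the movie problem's conditions for a candidate drawing.
def is_homeomorphically_irreducible_tree(n, edges):
    if len(edges) != n - 1:        # a tree on n vertices has exactly n-1 edges
        return False
    adj = {v: [] for v in range(n)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen, stack = {0}, [0]         # connectivity via depth-first search
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    if len(seen) != n:
        return False
    return all(len(adj[v]) != 2 for v in range(n))  # irreducibility

star = [(0, i) for i in range(1, 10)]   # hub with nine spokes: one of the ten solutions
path = [(i, i + 1) for i in range(9)]   # a simple chain: reducible, so it fails
print(is_homeomorphically_irreducible_tree(10, star))  # True
print(is_homeomorphically_irreducible_tree(10, path))  # False
```

A brute-force enumeration over all such trees, up to the "stretching" equivalence, is what yields the ten drawings the professor asks for.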


Friday, February 10, 2017

The Riemann Hypothesis Part 2

This blog post is part two on the Riemann Hypothesis; the first part is also on this blog. The first part covered zeta functions in order to give background knowledge, allowing this post to jump right into the problem and give a fuller, more in-depth explanation. However, the previous post did not mention that it is also necessary to understand complex numbers and the complex plane. Complex numbers were passed over because the concept is taught to many students around eighth grade, so hopefully most of the audience already understands it. The goal of this post is to explain the actual problem, and secondly why the problem is important.

It was mentioned in the last post that zeta functions are a big part of understanding the Riemann Hypothesis, and that Euler worked with zeta functions. The difference between Euler's and Riemann's work was what they put into the zeta function: Euler focused on real numbers, while Riemann worked with complex numbers. Complex numbers are real and imaginary number pairs, such as 3 + 4i, with 3 the real part and 4i the imaginary part. When Riemann plugged complex numbers into the zeta function, its input became a plane rather than a line; after Riemann, it was understood that there was a whole extra dimension of values that could be put through the zeta function. The inputs can be visualized on a plane, the x-axis being the real part and the y-axis the imaginary part. Like all functions the zeta function has a domain: the series defining it converges when the real part of s is greater than 1, including all values up and down the imaginary axis for x greater than one. Then Riemann proved that the function has a rather nice piece of symmetry: through analytic continuation, the domain can be extended to all values except s = 1. This leaves a strip between x = 0 and x = 1, referred to as the critical strip, and within the strip there is a critical line at x = 1/2. That information may be hard to visualize, so a graph is included to help clear up any problems.
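The "original domain" part of this can be seen directly by summing the defining series 1/1^s + 1/2^s + 1/3^s + ... numerically. The sketch below truncates the sum (the cutoff of 100,000 terms is an arbitrary choice); it converges only for real part of s greater than 1, and complex inputs work exactly like real ones:

```python
import math

# Partial sum of the series defining zeta(s); only meaningful for Re(s) > 1,
# which is the original domain before Riemann's analytic continuation.
def zeta_partial(s, terms=100_000):
    return sum(n ** -s for n in range(1, terms + 1))

print(zeta_partial(2.0))            # approaches pi^2 / 6 = 1.6449...
print(zeta_partial(complex(2, 1)))  # a complex input off the real axis
```

On the critical strip this series no longer converges, which is why the analytic continuation described above is needed before the hypothesis can even be stated.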





Now that the domain of the zeta function has been cleared up, the problem can be looked at. The million-dollar question is: where does zeta(s) = 0? Here s refers to a complex number, which can be written in the notation s = x + iy. So the search is on for the points on this plane that, when plugged into the zeta function, result in zero. There are some freebies, though, called the trivial zeros: zeros we already know about and really do not care about. Every negative even number (-2, -4, -6, and so on) gives a zero. The real question is: where are the zeros in the critical strip? Riemann hypothesized that all the zeros in the critical strip lie on the critical line, at x = 1/2. So far no one has found any zero that disproves Riemann, and nobody has proven him right either.
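One of these critical-line zeros can be probed numerically. The sketch below evaluates zeta inside the critical strip using the alternating (eta) series with Euler summation, a standard globally convergent rearrangement (the term count of 60 is an assumed accuracy knob, and 14.134725 is the known approximate height of the first nontrivial zero):

```python
import math

def zeta(s, terms=60):
    """Zeta via Euler summation of the alternating (eta) series; unlike the
    plain series, this converges inside the critical strip."""
    eta = 0.0 + 0.0j
    for k in range(terms):
        inner = sum((-1) ** j * math.comb(k, j) * (j + 1) ** -s
                    for j in range(k + 1))
        eta += inner / 2 ** (k + 1)
    return eta / (1 - 2 ** (1 - s))

# Sanity check on the real axis, then a probe of the critical line near the
# first nontrivial zero at s = 1/2 + 14.134725...i
print(abs(zeta(complex(2, 0))))            # ~ pi^2 / 6
print(abs(zeta(complex(0.5, 14.134725))))  # very close to 0
print(abs(zeta(complex(0.5, 13.0))))       # for comparison: not near a zero
```

Sweeping the imaginary part while holding x = 1/2 shows the magnitude dipping to zero at each nontrivial zero; no such dip has ever been found off the critical line.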

            The question might be asked: who cares? What does this arbitrary function have to do with anything important? Something that was not mentioned in the zeta functions post is that the zeta function can also be written in terms of prime numbers. The positions of these zeros therefore let mathematicians understand the distribution of the prime numbers. For example, if the Riemann Hypothesis were proven, the count of prime numbers from 1 to a billion, or 1 to a trillion, could be pinned down with far greater precision. The function also has ties to physics and quantum mechanics. This tells us that there are connections between math and physics that have not been made yet: if the function is linked to primes and to quantum mechanics, then how is quantum mechanics related to primes? There are lots of these connections that we do not understand at this moment, so if the Riemann Hypothesis is proven it opens the floodgates of math and physics to new ideas and connections, unraveling mysteries and understandings of nature as a whole. That is why it is a million-dollar question, and why people are so interested in it.
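The claim that zeta "can also be written in terms of prime numbers" is Euler's product formula, and it can be checked numerically. The sketch below (the prime cutoff of 10,000 is an arbitrary truncation) multiplies 1/(1 - p^-2) over the primes and compares the result with zeta(2) = pi^2/6:

```python
import math

# Euler's product formula: zeta(s) = product over primes p of 1/(1 - p**-s),
# valid for Re(s) > 1. Truncating the product at s = 2 should approach pi^2/6.
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return [p for p, is_prime in enumerate(sieve) if is_prime]

product = 1.0
for p in primes_up_to(10_000):
    product *= 1 / (1 - p ** -2.0)

print(product, math.pi ** 2 / 6)  # the two values agree closely
```

A sum over all whole numbers agreeing with a product over only the primes is exactly the bridge that lets the zeros of zeta say something about where the primes are.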

        Sources:
                      https://www.youtube.com/watch?v=VTveQ1ndH1c
                      https://www.youtube.com/watch?v=sD0NjbwqlYw
                      http://mathworld.wolfram.com/RiemannZetaFunctionZeros.html