Wikipedia:Reference desk/Mathematics

From Wikipedia, the free encyclopedia



How to ask a question
  • Search first. It's quicker, because you can find the answer in our online encyclopedia instead of waiting for a volunteer to respond. Search Wikipedia using the searchbox. A web search could help too. Common questions about Wikipedia itself, such as how to cite Wikipedia and who owns Wikipedia, are answered in Wikipedia:FAQ.
  • Sign your question. Type ~~~~ at its end.
  • Be specific. Explain your question in detail if necessary, addressing exactly what you'd like answered. For information that changes from country to country (or from state to state), such as legal, fiscal or institutional matters, please specify the jurisdiction you're interested in.
  • Include both a title and a question. The title (top box) should specify the topic of your question. The complete details should be in the bottom box.
  • Do your own homework. If you need help with a specific part or concept of your homework, feel free to ask, but please don't post entire homework questions and expect us to give you the answers.
  • Be patient. Questions are answered by other users, and a user who can answer may not be reading the page immediately. A complete answer to your question may be developed over a period of up to seven days.
  • Do not include your e-mail address. Questions aren't normally answered by e-mail. Be aware that the content on Wikipedia is extensively copied to many websites; making your e-mail address public here may make it very public throughout the Internet.
  • Edit your question for more discussion. Click the [edit] link on right side of its header line. Please do not start multiple sections about the same topic.
  • Archived questions If you cannot find your question on the reference desks, please see the Archives.
  • Unanswered questions If you find that your question has been archived before being answered, you may copy your question from the Archives into a new section on the reference desk.
  • Do not request medical or legal advice.
    Ask a doctor or lawyer instead.
After reading the above, you may
ask a new question by clicking here.

Your question will be added at the bottom of the page.
How to answer a question
  • Be thorough. Please provide as much of the answer as you are able to.
  • Be concise, not terse. Please write in a clear and easily understood manner. Keep your answer within the scope of the question as stated.
  • Link to articles which may have further information relevant to the question.
  • Be polite to users, especially ones new to Wikipedia. A little fun is fine, but don't be rude.
  • The reference desk is not a soapbox. Please avoid debating about politics, religion, or other sensitive issues.


September 10

Answer to a Problem

Ok, this is a question about a specific aspect of an extra credit thing that my math teacher gave us. We were given the following problem:

"Find an easy and useful way to determine the sum of all of the number between 1 and 500. Then, using this, determine the sum."

Now, I used the summation symbol to accomplish this as such:

$\sum_{i=1}^{n} i,$

where n = 500, and then, using the sum-sequence feature of my graphing calculator, I calculated the sum because I didn't feel like typing out all of the numbers between 1 and 500 and adding them together. Turns out she didn't like this particular solution and asked me "well, how does the summation symbol work?" I was taught (by a different though very qualified teacher) that the summation symbol is just a shorthand way of writing out 1+2+3 and so on. Any help would be very much appreciated, and if I'm not specific enough, let me know.

Deltacom1515 01:54, 10 September 2006 (UTC)[reply]

I think your problem is that the teacher is trying to get you to use your mathematical skills, not your skills at taking advantage of the computational abilities of your graphing calculator. What you've done makes it easier for you to compute that sum. As you've noted, your calculator is doing, internally, exactly what you would have done if you'd actually typed in "1+2+3+....", which is just as much work.
What your teacher almost certainly wanted you to do is describe how the sum $\sum_{i=1}^{n} i$ can be rewritten as a closed form; this is very useful for two reasons. Firstly, the closed form can be calculated much more easily (an addition, a multiplication, and a division, no matter how big n is). Secondly, rewriting in this way allows you to do all sorts of things you can't do with a summation, which come in very useful in more advanced maths.

If you're really clever you might consider using proof by induction to show that the closed form you work out is correct :)--Robert Merkel 02:24, 10 September 2006 (UTC)[reply]
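As a quick sanity check, the brute-force sum can be compared with the well-known closed form n(n+1)/2 (a minimal Python sketch):

    n = 500
    print(sum(range(1, n + 1)))   # brute force: 125250
    print(n * (n + 1) // 2)       # closed form: also 125250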

Wow, thanks for your help. I appreciate that. One thing though, which link do I click on at the closed form page? Deltacom1515 02:34, 10 September 2006 (UTC)[reply]

Well, let's put it this way - you're not using any calculus, are you? Try Solution in closed form. Confusing Manifestation 02:53, 10 September 2006 (UTC)[reply]

Nope, no calculus is necessary. Deltacom1515 03:12, 10 September 2006 (UTC) Ok, I got it on the page [1]. I didn't even know such a thing existed. You guys were a BIG help. Deltacom1515 03:30, 10 September 2006 (UTC)[reply]

IIRC, Gauss first noticed the technique you are trying to find, when he was a child. Dysprosia 05:12, 10 September 2006 (UTC)[reply]
Is that the story about the teacher giving the class busywork? --jpgordon∇∆∇∆ 05:21, 10 September 2006 (UTC)[reply]
Yes. Dysprosia 06:43, 10 September 2006 (UTC)[reply]
Are you saying that Gauss was the first to discover it (which I strongly doubt), or simply that he came up with it when he was a child (which is indeed well known)? -- Meni Rosenfeld (talk) 06:52, 10 September 2006 (UTC)[reply]
The latter. Dysprosia 07:33, 10 September 2006 (UTC)[reply]


Happy Numbers

About happy numbers: do they serve any practical function, or are they merely another way for mathematicians to amuse themselves? (This really is a serious question; I came across the article myself and was laughing all the way through it.) Autopilots 08:48, 10 September 2006 (UTC)[reply]

I have no prior knowledge of happy numbers, but this quote from the Wikipedia article suggests the latter: "The study of happy numbers is an example of recreational mathematics in that it can involve extensive mathematical knowledge, but the topic is not a central part of serious research." Note that base 10 (or any other base, other than 2) is completely arbitrary, so definitions that are based on manipulating the decimal digits of a number are very unlikely to have serious uses. -- Meni Rosenfeld (talk) 10:51, 10 September 2006 (UTC)[reply]
I met this concept some years ago in the context of teaching UK school students about mathematical investigations. It was chosen as something which would be unlikely (at that time) to have been encountered previously by students, so giving no advantage to students who might have read more widely than others. Apart from that pedagogical use, I think they only have recreational interest to mathematicians. Madmath789 11:03, 10 September 2006 (UTC)[reply]
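For the curious, the definition is easy to play with in a few lines of Python (a sketch based on the definition in the happy number article: repeatedly replace a number by the sum of the squares of its decimal digits; it is happy if this process reaches 1):

    def is_happy(n):
        # unhappy numbers fall into a cycle that never contains 1
        seen = set()
        while n != 1 and n not in seen:
            seen.add(n)
            n = sum(int(d) ** 2 for d in str(n))
        return n == 1

    print([n for n in range(1, 50) if is_happy(n)])   # 1, 7, 10, 13, 19, 23, 28, ...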

Number of permutations with at least one 2-cycle

I was asked this question: "There are 70 people at a party and they put their names into a basket. What is the probability that two people get each other's names, if each person then picks one name from the basket?" I thought this should be a high-school math puzzle, but I can't seem to find any solution to it. Mathematically, I'm asking for the number of permutations of length 70 with at least one cycle of length 2. I've looked at Stirling numbers and searched the encyclopaedia of integer sequences, but I can't find anything about this ... Any ideas? — Preceding unsigned comment added by 88.196.100.193 (talkcontribs)

Do you mean the probability that at least one pair get each other's name? Exactly one pair get each other's name? One pair in particular, and only they, get each other's name? One pair in particular, regardless of other possible matches, get each other's name? There is something of a difference.—86.132.167.165 13:00, 10 September 2006 (UTC)[reply]
It seems like the asker (a reminder: it's recommended to sign your posts by typing --~~~~) is referring to the first variant (the number of permutations ... with at least one cycle of length 2). That's an interesting question! But let's try to build a recursion. Let there be $n$ people; let the asked number of permutations be $A_n$. If we number the people $1, 2, \ldots, n$, then for any such permutation there is a unique person who is part of one such cycle (a "pair") and whose number (call it $a$) is the lowest. Let's call him/her the "lowest paired person" and the pair he/she is part of the "lowest pair". For every combination of $a$ and $n$ there are $n - a$ different ways of choosing the other person to form the lowest pair together with $a$. The people with lower numbers than $a$ may be permuted, provided they don't form any pairs with anyone (by assumption). Thus, if we denote the number of permutations with the lowest paired person $a$ and total length $n$ by $f(a, n)$, there is a relation:
$f(a, n) = (n - a)\left[(n-2)! - \sum_{b=1}^{a-1} f(b, n-2)\right],$
because removing the lowest pair from a permutation with the lowest paired person $a$ (and lowering the people's indices as necessary to fill in the two gaps), we get a new permutation of length $n-2$ either without any pairs or with a lowest paired person $\geq a$; there are $n - a$ possible places for the person $a$ is paired with. Now we need some initial conditions. Clearly, $f(n, n) = 0$. Also, for any $n \geq 2$, $f(1, n) = (n-1)(n-2)!$, because if we say that the first person is paired with anyone, then there are $n-1$ people who can be his/her "companions" and the other $n-2$ people may permute freely. To sum up:
$f(a, n) = (n - a)\left[(n-2)! - \sum_{b=1}^{a-1} f(b, n-2)\right], \qquad f(1, n) = (n-1)!, \qquad f(n, n) = 0.$
To get the asked total number, we have to sum over all the possible $a$-s:
$A_n = \sum_{a=1}^{n} f(a, n).$
All that can be done in a table:
    n \ a:    1      2      3      4      5      6   |  A_n
    1         0                                      |    0
    2         1      0                               |    1
    3         2      1      0                        |    3
    4         6      2      1      0                 |    9
    5        24     12      6      3      0          |   45
    6       120     72     48     30     15      0   |  285
So, the sequence 0, 1, 3, 9, 45, 285, … of the $A_n$-s answers your question. But is there a way to calculate it more easily (such tables are quite error-prone on paper and need quite a lot of computer memory)? Let's see... (sequence A027616 in the OEIS)... we have the answer! The formula is:
$A_n = n!\left(1 - \sum_{k=0}^{\lfloor n/2 \rfloor} \frac{(-1)^k}{2^k\, k!}\right).$
Don't ask me to prove that right now. Anyhow, combinatorics is fun!  Pt (T) 19:24, 10 September 2006 (UTC)[reply]
Since the question was about probabilities, don't we actually want $A_n/n!$? -GTBacchus(talk) 19:33, 10 September 2006 (UTC)[reply]
That's true. So, the probability is:
$p_n = \frac{A_n}{n!} = 1 - \sum_{k=0}^{\lfloor n/2 \rfloor} \frac{(-1)^k}{2^k\, k!},$
a sum which Mathematica (but not yet I) can handle:
$p_n = 1 - \frac{\Gamma\!\left(\lfloor n/2 \rfloor + 1,\ -\tfrac{1}{2}\right)}{\lfloor n/2 \rfloor!\,\sqrt{e}}.$
Here in the numerator, we're dealing with the incomplete gamma function. Putting in $n = 70$, we get:
$p_{70} \approx 0.39347.$
By the way, there exists a limit: $\lim_{n\to\infty} p_n = 1 - \frac{1}{\sqrt{e}} \approx 0.39347$. $p_{70}$ differs from it by less than $10^{-52}$!  Pt (T) 19:53, 10 September 2006 (UTC)[reply]
(somewhat prolonged edit conflict, rendering my addition rather moot) He means, the probability that at least one pair of people got each other's names. I think I've figured out a way to solve it. It's like Pascal's triangle, but more complicated. Take the number of people, n. The nth level of the triangle (more like a pyramid) is an (n+1)xfloor(n/2+1) matrix. Each element (r,c) is the number of ways r of those people could have gotten their own name while c pairs of people switched names. The pyramid is recursive, then, with each level determined from the information in the level above. Imagine lining a group of five people up and numbering them 1-5. Mix up their names. Now add a sixth person to the line. Every possible permutation of the 6-group is identical to a permutation of the 5-group or a permutation of the 5-group plus a switch between the sixth person and one of the others. Take a particular permutation of the 5-group, with a particular (r,c) category. Adding a sixth person and allowing them to keep their name adds 1 to the (r+1,c) bin of the 6-level for each such permutation. Adding a sixth person and switching their name with someone who had kept their own name adds 1 to the (r-1,c+1) bin of the 6-level, for a total of r per permutation. Adding a sixth person and switching their name with someone who had already traded names adds 1 to the (r,c-1) bin of the 6-level, for a total of 2c per permutation. Adding a sixth person and switching their name with someone who hadn't traded names but didn't have their own name adds 1 to the (r,c) bin of the 6-level, for a total of n-r-2c per permutation. And that produces every possible 6-group permutation. So, each element (r,c)n+1 = A(r-1,c)n + B(r+1,c-1)n + C(r,c+1)n + D(r,c)n where A=1, B=r+1, C=2c+2, and D=n-r-2c. Anything that involves coordinates outside the matrix (coordinates less than zero, in this case), can be ignored or set to zero, like in Pascal's regular triangle. The level 1 matrix would be {{0},{1}}, making level 2 {{0,1},{0,0},{1,0}}, level 3{{2,0},{0,3},{0,0},{1,0}}, etc. You can get the number of permutations involving no switches by adding up column zero (0+1, 0+0+1, 2+0+0+1, etc). Subtracting that from n! and dividing by n! gives the probability that there will be at least one switch. It might be possible to get this in closed form, but I don't see how. Black Carrot 19:59, 10 September 2006 (UTC)[reply]
Searching the first few terms in the OEIS yields A027616. – b_jonas 22:10, 10 September 2006 (UTC)[reply]
Oh, sorry, I just see Pt has already found that. For the record, I calculated the terms using brute force with the following J snippet:
      (+/@:(2&e.@:(#@>)@:C."1)@:(i.@:!A.i.))"0>:i.9
which gave this answer:
      0 1 3 9 45 285 1995 15855 142695
b_jonas 22:13, 10 September 2006 (UTC)[reply]
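For readers who don't read J, a rough Python equivalent of that brute-force count (a sketch; it enumerates every permutation, so it is only practical for small n):

    from itertools import permutations

    def count_with_2cycle(n):
        # permutations of n elements containing at least one 2-cycle
        return sum(any(p[p[i]] == i and p[i] != i for i in range(n))
                   for p in permutations(range(n)))

    print([count_with_2cycle(n) for n in range(1, 9)])   # [0, 1, 3, 9, 45, 285, 1995, 15855]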

One more question on the question: If a person draws their own name, do they return their name and draw again ? StuRat 22:38, 10 September 2006 (UTC)[reply]

On the subject of drawing one's own name, the probability of at least one person doing this tends to 1 - 1/e or 0.63212... as n tends to infinity, a result possibly better known from the "letter in envelope" equivalent problem. The similarity in result, with the square root missing, is noteworthy.—86.132.234.126 18:19, 11 September 2006 (UTC)[reply]
Thanks to a friend of mine I now know a way to prove and even generalize this property. But firstly let's prove the formula for pairs:
$A_n = n!\left(1 - \sum_{k=0}^{\lfloor n/2 \rfloor} \frac{(-1)^k}{2^k\, k!}\right).$
$A_n$ gives us the number of the permutations of $n$ people, where there is at least one 2-cycle. Numbering the people as I earlier did, let $P_{i,j}$ be the set of all the permutations of $n$ people, where people with numbers $i$ and $j$ are paired:
$P_{i,j} = \{\,\sigma \in S_n : \sigma(i) = j \text{ and } \sigma(j) = i\,\}.$
Here $S_n$ denotes the set of all the permutations of the first $n$ natural numbers. $A_n$ is the number of different elements in the union of all the possible $P_{i,j}$-s. Without loss of generality we may assume that $i < j$, because that way we already go through all the possible pairs. Let $|X|$ denote the cardinality of a set $X$ (that equals the number of elements in it, since we'll only be dealing with finite sets); let $[n]$ be the set of all the natural numbers $\leq n$. Then:
$A_n = \left|\, \bigcup_{i, j \in [n],\ i < j} P_{i,j} \,\right|.$
Clearly, there can be at most $\lfloor n/2 \rfloor$ pairs among $n$ people. Let's make (for every $1 \leq k \leq \lfloor n/2 \rfloor$) the set of all the unique collections of exactly $k$ pairs of people (within a collection every person can be in at most one pair) and denote it by $C_k$:
$C_k = \left\{\, \{(i_1, j_1), \ldots, (i_k, j_k)\} : i_1, j_1, \ldots, i_k, j_k \in [n] \text{ pairwise distinct},\ i_1 < j_1,\ \ldots,\ i_k < j_k \,\right\}.$
Now, by the inclusion-exclusion principle, we can write $A_n$ as:
$A_n = \sum_{k=1}^{\lfloor n/2 \rfloor} (-1)^{k+1} \sum_{\{(i_1, j_1), \ldots, (i_k, j_k)\} \in C_k} \left| P_{i_1, j_1} \cap \cdots \cap P_{i_k, j_k} \right|.$
$P_{i_1, j_1} \cap \cdots \cap P_{i_k, j_k}$ is simply the set of all possible permutations of the $n$ people, where the people $i_1$, $j_1$, …, $i_k$, $j_k$ are paired. Others may or may not be paired. Thus, the cardinality of this intersection of sets is equal to the number of possible permutations of "the others", the people not fixed with the combination of $i$-s and $j$-s. (As we fixed earlier that $i < j$, there is no way we could permute the paired people anymore.) There are, in total, $k$ $i$-s and $k$ $j$-s. So we are left with:
$\left| P_{i_1, j_1} \cap \cdots \cap P_{i_k, j_k} \right| = (n - 2k)!,$
not depending on the particular choice of $i$-s and $j$-s! So far we have:
$A_n = \sum_{k=1}^{\lfloor n/2 \rfloor} (-1)^{k+1}\, |C_k|\, (n - 2k)!.$
$|C_k|$ is the number of ways to choose $k$ non-overlapping pairs from among our $n$ people. Let's calculate it! Firstly, for the first people of the pairs (the $i$-s) there are $\frac{n!}{(n-k)!}$ different possibilities. If we, for one moment, omit the requirement that the pairs must be ordered ($i < j$) for counting, we count each pair twice, choosing the $j$-s completely freely from among the people not referred to by any $i$. As long as we take care of it a moment later, we may do so. Thus, we can choose the $j$-s from $n - k$ people, giving us $\frac{(n-k)!}{(n-2k)!}$ possibilities (since order matters here, we must use the formula for permutations, not combinations). This must now be divided by $2^k$ for the double counting of each pair and by the total number of different orderings of the pairs, $k!$, giving us:
$|C_k| = \frac{n!}{(n-2k)!\, 2^k\, k!}.$
Therefore:
$A_n = \sum_{k=1}^{\lfloor n/2 \rfloor} (-1)^{k+1}\, \frac{n!}{(n-2k)!\, 2^k\, k!}\, (n - 2k)!$
and:
$A_n = n! \sum_{k=1}^{\lfloor n/2 \rfloor} \frac{(-1)^{k+1}}{2^k\, k!}.$
For $k = 0$ the summand $\frac{(-1)^{k+1}}{2^k\, k!}$ is $-1$, so after taking a $1$ out of the sum, we get what we wanted to:
$A_n = n!\left(1 - \sum_{k=0}^{\lfloor n/2 \rfloor} \frac{(-1)^k}{2^k\, k!}\right),$
Q.E.D. This derivation can quite readily be generalized from 2-cycles to $m$-cycles, where $m$ is an arbitrary natural number. Then we simply observe the partitions into $m$-tuples etc.; the same principles apply. Thus, the number of such permutations of $n$ people that contain at least one $m$-cycle is:
$n!\left(1 - \sum_{k=0}^{\lfloor n/m \rfloor} \frac{(-1)^k}{m^k\, k!}\right).$
The probability for a permutation of $n$ people to contain an $m$-cycle is:
$1 - \sum_{k=0}^{\lfloor n/m \rfloor} \frac{(-1)^k}{m^k\, k!}.$
I'm sure this is a known result, because, for example, the formula for $m = 3$ lies neatly in the OEIS (sequence A027617 in the OEIS). Now what about the limit of this probability? Well, writing the limit of the sum as $\sum_{k=0}^{\infty} \frac{(-1/m)^k}{k!}$, it's evidently just the Taylor series for $e^x$ taken at $x = -1/m$, that is, $e^{-1/m}$. Thus:
$\lim_{n\to\infty}\left(1 - \sum_{k=0}^{\lfloor n/m \rfloor} \frac{(-1)^k}{m^k\, k!}\right) = 1 - e^{-1/m}.$
Really, it's fun!  Pt (T) 16:15, 12 September 2006 (UTC)[reply]
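A quick numerical check of the 2-cycle formula and its limit (a Python sketch of the formula derived above):

    import math

    def p_at_least_one_2cycle(n):
        # 1 - sum_{k=0}^{floor(n/2)} (-1)^k / (2^k k!)
        return 1 - sum((-1) ** k / (2 ** k * math.factorial(k)) for k in range(n // 2 + 1))

    print(p_at_least_one_2cycle(70))   # ~0.3934693...
    print(1 - math.exp(-0.5))          # the limit 1 - e^(-1/2); essentially identical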

Hello, I'm the one who originally posted the question, and I'm just so impressed with the quality of the response here that I have to say a big THANK YOU!!!! I was quite surprised that the integer sequence entry has almost the same keywords as I have in the title of this post, yet I was not able to find it! Anyway, thanks again for the explanations, and sorry for not signing; I will create an account :) --193.40.37.156 10:52, 15 September 2006 (UTC)[reply]

SPSS graph

Hi: Is it possible to do a histogram or bar chart with SPSS for a grouped frequency distribution table? Thanks much.-- Hersheysextra 20:22, 10 September 2006 (UTC)[reply]

Area Problem. Shapes etc.

Ok, well, I encountered this problem (I added the y to simplify some calculations a bit, but otherwise it's exactly the same information I got). I tried to solve it but I wasn't sure whether I had done it correctly. So I thought I'd post here for someone to maybe review it and find the faults or confirm whether it's correct. So here's what I went about doing. I numbered the steps for reference in your replies.

  1. Well firstly, since the triangles in the corners have x on both sides, we can assume they are isosceles right-angled triangles, therefore the angles are 90, 45, and 45, thus meaning that all the angles in any of the shapes in this case are either 45 or 90.
  2. (Pythagoras)
  3. The white triangle in the centre of the right hand edge is (assuming the two sides identical in length are variable z)
    1. (Pythagoras)
comment - this line should be Richard B 22:10, 10 September 2006 (UTC)[reply]
  1. the L shape can now be divided into 3 segments, one square in the centre which is , and 2 identical rectangles which are y by z. And as such the grey area consists of
  2. As such the entire grey area works out to equal this when the left part of the shape is included
  3. Therefore, the entire size of the shape should equal
  4. The two triangles with their sides against the x by 2006 shaded areas' two identical sides can be calculated (assuming the two sides identical in length are variable a)
  5. Substituting in the other formulae, to give in terms of x leaves
  6. So the formula for the entire square equals
  7. So now we have 2 formula for the entire shape, so we can substitute them together

Unfortunately this is where I got stuck. So if anyone knows where to go from here... The help would be appreciated. Philc TECI 21:15, 10 September 2006 (UTC)[reply]

You could work out the total unshaded area. It's just the sum of 5 triangles. The leftmost two triangles have a total area of 1003² - they're isosceles - and they meet at exactly half way down the shape (i.e. 2006/2). The 2 small triangles at the top and bottom right corners have a total area of x². The larger triangle on the right side has an area of (1003-x)². So the sum of all unshaded area = x² + 1003² + (1003-x)².
Total area must be 2006(1003 + 2x) - so, since 5/8 of the shape is shaded, x² + 1003² + (1003-x)² = (3/8) × 2006(1003 + 2x).
Now it's just a quadratic in x - and straightforward to solve - one of the roots of the equation will lead to a negative length - so choose the other one. Richard B 21:55, 10 September 2006 (UTC)[reply]
You can simplify the arithmetic by introducing y = x/1003, which gives 8y² - 14y + 5 = 0.
Both roots are positive, but you can discard the root that is greater than 1, as this would make x greater than 1003. Gandalf61 10:45, 11 September 2006 (UTC)[reply]
I don't get the same number for the 5th white triangle as Richard B. Rather than an area of (1003-x)², I have (based on the 1-1-root2 ratio for 45-degree right triangles). Plugging this into the resulting quadratic, I get imaginary roots, as my c term is roughly double the size of Richard's. I also attempted finding the area of the shaded stuff and had imaginary roots there, too. Am I missing something re: that (1003-x)² area triangle? — Lomn | Talk 15:35, 11 September 2006 (UTC)[reply]
But you do get the same as me for the 5th white triangle. My area is (1003-x)² - yours is;
So we actually do agree on the area of the 5th triangle Richard B 16:42, 11 September 2006 (UTC)[reply]
I seem to have skimmed right over the "squared" term in your original math. That clears it up. — Lomn | Talk 16:57, 11 September 2006 (UTC)[reply]
Take the lower half of your diagram. By symmetry, it's 5/8ths shaded as well. Label the unshaded (middle) segment on the lower edge u. Label the unshaded upper segment on the right edge v. Then we know the following things:
  1. u = 1003. It's the height of the half we kept.
  2. x+v = 1003. Same reason.
  3. (2x + u)(x+v) = the area. This is (2x + 1003)(1003).
  4. 1/2 (u^2 + v^2 + x^2) is the unshaded area. This is 1/2 (1003^2 + (x - 1003)^2 + x^2).
  5. 1 - 5/8 = 3/8 = unshaded fraction = 1/2 (1003^2 + (x - 1003)^2 + x^2) / ((2x + 1003)(1003))
Solving the resulting quadratic ( 8x^2 - 14042x + 5030045 = 0) gives x = 1003/2 and x = 5015/4. -- Fuzzyeric 23:04, 11 September 2006 (UTC)[reply]
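For anyone who wants to check the arithmetic, the quadratic can be solved exactly (a small Python sketch; the coefficients are taken from the working above):

    from fractions import Fraction
    import math

    a, b, c = 8, -14042, 5030045          # 8x^2 - 14042x + 5030045 = 0
    disc = b * b - 4 * a * c
    root = math.isqrt(disc)               # 6018, a perfect square, so the roots are rational
    print(Fraction(-b + root, 2 * a))     # 5015/4
    print(Fraction(-b - root, 2 * a))     # 1003/2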

Earwig apartment complex

This is a bit mathsy, I guess, so it goes here. If there were some tiny people whose eyes were at the level of an earwig's, and we built them a building that was the same height as the average human male (average for us, not for the earwig-people :P), how many stories would it have? Vitriol 21:17, 10 September 2006 (UTC)[reply]

As many as you give it. You didn't define a storey height. Philc TECI 21:32, 10 September 2006 (UTC)[reply]
I thought there was a standard size or something. More fool me, I guess. Vitriol 22:05, 10 September 2006 (UTC)[reply]

Well, we can approximate something, I suppose. Let's say an earwig's eyes are at 1cm, and that an average human's eyes are at 150 cm. So we're at 1/150 scale. So, if we were earwig people, your question could be rephrased as: how many storeys is a building that's the height of a giant person 150 times normal size. If normal is around 5'6" (switching from metric to imperial), then 150 x normal would be 825 feet, which is like an 80 storey building or so, roughly? -GTBacchus(talk) 22:12, 10 September 2006 (UTC)[reply]

That sounds good, except for the average story being 5'6". I'd say the interior distance from floor to ceiling is typically more like 7 feet, with an additional foot for the thickness of the floor, for a total story height of 8 feet. This is for residential structures, industrial structures tend to have stories more like 10 feet high. StuRat 22:23, 10 September 2006 (UTC)[reply]
I didn't take the average storey to be 5'6", that was the height of an average human. I took the average story to be about 10 feet: hence 825' ~ 80 storeys. If it's more like 8 feet per storey, that'd be a 100 storey apartment building. -GTBacchus(talk) 00:54, 11 September 2006 (UTC)[reply]

calculate

there is $100,000 invested in a c.d. at 5.1 interest for 5 months....how is the money earned calculated....thanks

More info is needed, is that 5.1% interest annually, compounded monthly ? StuRat 22:27, 10 September 2006 (UTC)[reply]
Sounds very much like a homework question. We do not solve such questions for people (they would learn nothing that way), but we are prepared to give hints if people tell us how far they have got, and where they hit a snag ... Madmath789 22:25, 10 September 2006 (UTC)[reply]
See compound interest and interest Richard B 22:32, 10 September 2006 (UTC)[reply]

September 11

No questions today

Aw shucks! hydnjo talk 00:05, 12 September 2006 (UTC)[reply]

That's the second day in the last two months!! — [Mac Davis] (talk) (Desk|Help me improve)
Must be the no homework policy. Tends to drive away the lazy bastards. 202.168.50.40 00:29, 12 September 2006 (UTC)[reply]
Experience shows that the "no homework policy" does not tend to drive away the lazy bastards. -- Meni Rosenfeld (talk) 17:13, 12 September 2006 (UTC)[reply]

September 12

Fermat's Factoring Method

Suppose that n is odd composite. Then, Fermat assures us that it may be written $n = m^2 - d^2$ for some integers m and d. Suppose . Then we may show that . Equivalently, . For what moduli does a theorem of this form hold, and how do we lift from a statement for a small modulus to a statement about a larger modulus without a quadratic increase in the number of cases to be retained? (I.e. if we lift to the modulus 16, then depending on the residue of n we find that one of m and d is constrained to one value (mod 4) and the other is constrained to two values (mod 8) (that are not congruent (mod 4)). If we then lift (mod 3) then we get two or four cases (depending on whether either of m or d can be congruent to zero (mod 3)), and using the Chinese remainder theorem to glue these cases to the cases derived from n (mod 16), we end up with four or eight cases -- some from residue classes (mod 12) and some from residue classes (mod 24).)

So, how do we encode the retained cases without creating an exponential explosion in the encoding of the retained cases, while retaining the ability to perform additional lifting and additional applications of the CRT? -- Fuzzyeric 03:33, 12 September 2006 (UTC)[reply]
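To illustrate the kind of residue sieving the question is about, here is a hedged Python sketch (not the asker's encoding scheme): for each small modulus it keeps the set of residues of m for which m² − n can possibly be a square, and tests candidate values of m against each modulus separately rather than gluing the sets together with the CRT:

    from math import isqrt

    def allowed_residues(n, M):
        # residues r (mod M) such that r^2 - n is a square modulo M
        squares = {(x * x) % M for x in range(M)}
        return {r for r in range(M) if (r * r - n) % M in squares}

    def fermat_factor(n, moduli=(16, 9, 25)):
        # n is assumed odd and composite; returns a nontrivial factorisation
        sieves = {M: allowed_residues(n, M) for M in moduli}
        m = isqrt(n)
        if m * m < n:
            m += 1
        while True:
            if all(m % M in sieves[M] for M in moduli):
                d2 = m * m - n
                d = isqrt(d2)
                if d * d == d2:
                    return (m - d, m + d)
            m += 1

    print(fermat_factor(5959))   # (59, 101)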

STATISTICS

Require a method to calculate confidence intervals for weighted sums. The sum is of the form SUM w_i x_i, where SUM w_i = 1.

Thank You

Gert Engelbrecht Pretoria South Africa

This is not possible without further information about the distributions of the random variables x_i.  --LambiamTalk 15:50, 12 September 2006 (UTC)[reply]
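For what it's worth, here is a sketch of the standard calculation under one common set of extra assumptions (independent, normally distributed x_i with known standard errors; these are assumptions, not something stated in the question):

    import math

    # hypothetical estimates x_i, their standard errors s_i, and weights w_i summing to 1
    x = [2.0, 3.5, 1.2]
    s = [0.4, 0.3, 0.5]
    w = [0.5, 0.3, 0.2]

    est = sum(wi * xi for wi, xi in zip(w, x))
    se = math.sqrt(sum((wi * si) ** 2 for wi, si in zip(w, s)))   # Var = sum w_i^2 s_i^2 under independence
    print(est, (est - 1.96 * se, est + 1.96 * se))                # point estimate and 95% interval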

square centimeters

need to know how many square centimeters are in a piece of tissue that measures 4cm X 16cm --need the answer - not the formula. I am not a student needing assistance with homework ---

Why do they teach pupils to do multiplication before they teach them to read the large notice at the top of the page saying 'NO HOMEWORK'? They also seem to have omitted the lesson in which pupils are taught to thank people who answer their questions. —Daniel (‽) 20:03, 12 September 2006 (UTC)[reply]
The answer is 4 * 16 = 64 square centimeters. There I have just given you the exact answer. 202.168.50.40 23:06, 12 September 2006 (UTC)[reply]

octahedral rotational symmetry

Can you draw a graph to show octahedral rotational symmetry

  1. rotation about an axis from the center of a face to the center of the opposite face by an angle of 90°: 3 axes, 2 per axis, together 6
  2. ditto by an angle of 180°: 3 axes, 1 per axis, together 3
  3. rotation about a body diagonal by an angle of 120°: 4 axes, 2 per axis, together 8

Many thanks!--82.28.195.12 20:17, 12 September 2006 (UTC)Jason[reply]

No. What variables would be plotted on the graph?
There is some discussion of symmetries in octahedron. ColinFine 23:21, 12 September 2006 (UTC)[reply]
Better still, try octahedral symmetry. It has many figures. Consider whether you wish to deliberately exclude any reflection symmetry; most simple examples naturally include it. --KSmrqT 23:28, 12 September 2006 (UTC)[reply]

Calculus, Limits and First Principles.

Hello. This is one of those problems which hits you hard when you realise you don't know how to do it.

-So- much mathematics is based on the result that d/dx(e^x) is e^x, or, written in a different way, that the integral of 1/x is ln(x). The question is, how do we prove this?

We can go back to first principles easily enough, and say that the derivative of a^x, as h tends to zero, is:

(a^(x+h) - a^x)/h

Factorise out a^x, and get:

a^x(a^h - 1)/h

Now, we know from basic calculus that differentiating this should give ln(a).a^x, so we're looking to show that the below limit is true:


(a^h - 1) / h = ln(a), as h tends to zero.

This doesn't look tricky, does it? But remember that we're trying to prove a result fundamental to calculus, so what we can use is limited (no pun); we can't use l'Hôpital's rule (which would give the right answer), as it relies on differentiating an exponential - our thing to be proven.

So, basically, I would really, really appreciate it if somebody could attempt to prove my limit is true, or comment that they could not (so I know the ability levels it's going to take).

Thank you, and remember: No circular reasoning! No proving that e^x differentiates to itself by assuming it in the first place!

Michael.blackburn 20:56, 12 September 2006 (UTC)[reply]

The first question, I suppose, is the definition of e. If you use the Taylor series for e^x,
$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!},$
as the definition of e^x, you can use term-by-term differentiation of a polynomial:
Assuming (without proof here, although it can be proven) that
$\frac{d}{dx} \sum_{n=0}^{\infty} \frac{x^n}{n!} = \sum_{n=0}^{\infty} \frac{d}{dx}\, \frac{x^n}{n!},$
you can differentiate and find the result. --TeaDrinker 21:01, 12 September 2006 (UTC)[reply]

I found an easy solution:


Using definition


so iff
This would seem to require L'Hôpital's rule, but what do you all think? M.manary 21:11, 12 September 2006 (UTC)[reply]

For reference: The proof of the fact that does not require l'Hôpital. There is a rather elementary proof, directly based on the definition of the limit of an expression. Hint: Can you simplify ? JoergenB 18:02, 18 September 2006 (UTC)[reply]

I also believe that TeaDrinker's solution will rely on a formula already using d/dx e^x, as Taylor series can ONLY be derived from that notion (try it yourself and see), so that solution is no-go. M.manary 21:15, 12 September 2006 (UTC)[reply]

Formally, I have actually used the Taylor series as the definition of ex, so no derivation of the Taylor series is needed. The proof from first principles does depend on your definition of e. --TeaDrinker 21:25, 12 September 2006 (UTC)[reply]

M.manary, I think using L'Hôpital's rule is okay here, as long as we don't use it with exponentials. It can be proven from fairly basic principles. Michael.blackburn 21:17, 12 September 2006 (UTC)[reply]


I just found a proof that you can read at: http://www.ltcconline.net/greenl/courses/106/ApproxOther/lhop.htm so: Q.E.D. M.manary 21:19, 12 September 2006 (UTC)[reply]

Nice. The way we learned it in Calculus, and the article agrees, the natural logarithm is defined as the area under the graph of 1/x from 1 to b. And the natural exponent is defined as the inverse of the natural logarithm. So,
  • f(x) = ln(x)
  • f'(x) = 1/x
  • g(x) = ex
  • f(g(x)) = x
  • f'(g(x))g'(x) = 1
  • g'(x) = 1/f'(g(x))
  • g'(x) = 1/(1/(ex))
  • g'(x) = ex
The fifth step uses the chain rule, which makes no assumptions about the functions themselves, other than that they can be differentiated in the first place. Black Carrot 01:15, 13 September 2006 (UTC)[reply]
This is fine if you replace "ex" by "exp(x)", where exp is defined to be the inverse function of ln. Next, it has to be shown that there exists a constant e such that exp(x) = ex.  --LambiamTalk 03:41, 13 September 2006 (UTC)[reply]
I was under the impression such a proof existed. And that it in no way compromised this one. Do you know it? Black Carrot 06:14, 13 September 2006 (UTC)[reply]
After defining "exp", you define power as ab = exp (b lna ). You need to show that this agrees with the definition of powers with rational exponents, which isn't hard. It is also easy to show that ln is bijective, so you define e as the preimage of 1. Then you're pretty much good to go. -- Meni Rosenfeld (talk) 08:08, 13 September 2006 (UTC)[reply]

(Another) on Calculus, with Radians.

Okay, thank you (Very) much for the answers to the previous question. This one is probably a lot more obvious, yet through searching Wikipedia and the internet I've still failed to find a solution. My mathematics teachers were also uncertain.

Basically, the question is "Why radians?"

From thinking about it, it's obvious that the Calculus can only work with a single angle measure, and obviously it's the radian. But the best explanation I've received is that solving Taylor series of Trig. functions works in radians only... but those Taylor series surely required radians in the first place.

Can -anybody- offer a sensible 'proof' or reasoning that you have to use 2pi 'units' per circle for the Calculus to work, without starting in radians in the first place?

Thank you again!

It's not a proof at all, but consider a small angle segment of a circle. The arclength = rθ - with theta in radians. But this is approximately equal to r * sin (θ). So sin(θ) ~ θ, if theta is in radians (for small angles). You can show this by simple geometry Richard B 21:36, 12 September 2006 (UTC)[reply]

Oh, so.. if you used degrees for example, and arclength is.. kθ, so sin(θ) ~ kθ in degrees (where k isn't one), and so the results go icky. Is that it? Michael.blackburn 21:44, 12 September 2006 (UTC)[reply]

So the definition of a radian is that in a circle of radius r, a central angle of one radian will define an arclength of the circle that is exactly r long. If the radius of a circle is r, then there are (by circumference) 2π radii in the arclength of the circle. So when we say an angle = 2 radians, it subtends an arclength of 2r out of its circle.

The numbers actually don't work out that well, as you may notice, because we always have to say an angle is π/3 or 2π/7 radians, which really aren't that great numbers... M.manary 23:12, 12 September 2006 (UTC)[reply]

2π/7 radians, eh? How'd you write that in degrees, 360/7°, or 51.43° or 51°25' or (51+3/7)°? If you write that, can you tell faster how many of those you need for a full cycle? Personally, I think even π/5 is easier to understand than 36 degrees. – b_jonas 11:23, 13 September 2006 (UTC)[reply]
If you look at the Taylor series defining sin and cos, you'll notice that there aren't any unnecessary coefficients. If you wanted to express the same thing in degrees, you would need to put π/180 in front of every x term (and square it for x^2, etc...). This will give you a complicated sequence of coefficients to keep track of. The radian is the simplest unit to use in this kind of calculation. There are also interesting geometric properties, and probably other reasons as well, but the main reason (to me) is that it avoids keeping track of unnecessary coefficients in calculation, or when using calculus. - Rainwarrior 23:27, 12 September 2006 (UTC)[reply]
Exactly. It's the same as why we have to use e as the base of exponentials. If you want to solve the linear differential equation $\sum_k a_k y^{(k)} = 0$, you find the roots of the polynomial $\sum_k a_k x^k$ in the form of $x = \alpha + \beta i$ and then the base solutions are $e^{\alpha t}\cos(\beta t)$ and $e^{\alpha t}\sin(\beta t)$ (not counting the cases of roots with multiplicity). Here, you have to use $e$ as the base of the exponent and the sine function using radians. – b_jonas 11:15, 13 September 2006 (UTC)[reply]
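A small numerical illustration of the sin(θ) ≈ θ point (a Python sketch):

    import math

    for deg in (0.5, 1.0, 2.0, 5.0):
        rad = math.radians(deg)
        # sin(theta)/theta -> 1 with theta in radians, but -> pi/180 with theta in degrees
        print(deg, math.sin(rad) / rad, math.sin(rad) / deg)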

Inverse

We are learning determinants and inverses of matrices in math. My teacher said that if you have a determinant of "0" then there is no inverse. I went on to say that in some remote field of mathematics, there is probably a way to find the inverse of a matrix with d=0. She said, "Maybe, but I doubt it. Why don't you look that up and tell us tomorrow." I took a look at abstract algebra, and it was kind of confusing. I did some relevant Google searches, but to no avail. My question is: am I right? Is there, in some field of mathematics, a way to find the inverse of a matrix that has a determinant of 0? schyler 23:37, 12 September 2006 (UTC)[reply]

I don't think so. If a matrix A is invertible, then AA⁻¹=I where I is the identity matrix. You can check by simultaneous equations that you get two contradictions:
If you go on to more difficult mathematics, you might understand a bit more about matrices when you learn systems of linear equations and related topics. x42bn6 Talk 02:12, 13 September 2006 (UTC)[reply]
The articles on Determinant, Invertible matrix, Identity matrix, and Matrix multiplication look nice, and link to other stuff. I don't know much about matrices, but I think I can give some general advice. First, if they say it's true, there's probably a good reason, especially in something as exhaustively studied and widely used as matrices. Even if in some obscure branch of mathematics someone decided they could do it, or made up a system in which it worked, that doesn't change that if you wind up with {{1,1},{1,1}}A={{1,0},{0,1}}, and you need A, you're screwed. I wish you luck finding the exception, though. While I was in Junior High I decided it was ridiculous to say that division by 0 was impossible, as everyone kept repeating, so I spent a few years figuring out how it works. I was, as you might imagine, rather upset to find out that limit notation and functional analysis have existed for a few centuries and nobody told me. But still, that doesn't change that 1/0 is meaningless in arithmetic and algebra. Black Carrot 02:21, 13 September 2006 (UTC)[reply]
There is a thing called a Pseudoinverse, which is the closest thing to an inverse matrix which a given matrix (even a singular or non-square one) has. If the determinant of some matrix is not 0 but an infinitesimal, you will be able to talk of an inverse with infinite entries (of course, this does not work with real numbers). -- Meni Rosenfeld (talk) 05:46, 13 September 2006 (UTC)[reply]
A nice way to see why d = 0 is a problem is to look at the following. Simplifying this somewhat, if A is a matrix and d its determinant, its inverse can be given by 1/d X, where X is some other matrix (see Application section on Adjugate matrix). Clearly if d = 0, you will have problems -- division by zero. Dysprosia 05:53, 13 September 2006 (UTC)[reply]
Another nice way to understand why an inverse matrix does not exist if det(A)=0 is to think of an nxn matrix A as representing a linear transformation of n dimensional space. So a 2x2 matrix is a linear transformation of 2-d space (i.e. the plane). The inverse matrix (if it exists) then represents the inverse of this transformation. But the inverse transformation is only defined if the original transformation is 1-1. Some linear transformations are not 1-1 because they map n dimensional space onto a linear sub-space of itself in a many-1 way - for example, the transformation represented by the matrix
$\begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}$
maps the plane onto the line y=2x, because
$\begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x+2y \\ 2x+4y \end{pmatrix} = (x+2y)\begin{pmatrix} 1 \\ 2 \end{pmatrix},$
and so all the points along each of the parallel lines x+2y=t are mapped to the single point (t,2t). A many-1 linear transformation (for which an inverse transformation cannot, by definition, exist) is always represented by a matrix with determinant 0, which in turn does not have a matrix inverse. Gandalf61 10:50, 13 September 2006 (UTC)[reply]


The question is provocative, and fruitful. Multiplication is defined for two compatible matrices, where compatibility means that the number of columns of the left matrix equals the number of rows of the right matrix. The usual algebraic definition of inverse depends on a definition of identity. If a matrix is not square, we might have two different identities (left and right), suggesting the possibility of a left or right inverse. For example, the 2×3 matrix
can be said to have a right inverse 3×2 matrix
because the product AB is the 2×2 identity matrix. This also shows that A is a left inverse for B.
Determinants, however, are only defined for square matrices. It is impossible for a square matrix to have a left inverse but not a right inverse (or vice versa), because the row rank and column rank are always equal. The determinant of a square matrix is nonzero precisely when the matrix has full rank, which means that if we look at the image space of n-vectors under the action of the n×n matrix, it also has dimension n.
Put more geometrically, a singular matrix collapses one or more dimensions, smashing them flat. The determinant measures the ratio of output volume to input volume, so a zero determinant tells us that such a collapse has occurred. And because the flattening has thrown away information in at least one dimension, we can never construct an inverse to recover that information. The rank of a matrix is simply the number of dimensions of the image space, while the nullity is the number of dimensions that are flattened. (Thus we have the rank-nullity theorem, which says that the sum of the two is the size n.)
Thus your teacher's skepticism is justified. But in some practical situations we will be satisfied with less than a full inversion. If we can only recover the dimensions that are not flattened, that's OK. The singular value decomposition of a matrix (of any shape) reveals both the rank and the nullspace of a matrix beautifully.
$A = U\,\Sigma\,V^{T},$
where A is m×n, U is an m×m orthogonal matrix, V is an n×n orthogonal matrix, and Σ is an m×n diagonal matrix with non-negative entries. (The diagonal entries of Σ are called the singular values of A.) Let Σ+ be the transpose of Σ with each nonzero entry replaced by its reciprocal. Then we may define the Moore–Penrose pseudoinverse of A to be
$A^{+} = V\,\Sigma^{+}\,U^{T}.$
This accomplishes what we hoped, and perhaps a bit more. For rectangular matrices whose columns (or rows) are linearly independent, we get the unique left (or right) inverse discussed earlier. For invertible square matrices, we get the ordinary inverse. And for singular matrices, we indeed get a matrix that inverts as much as we can. (This discussion also extends to matrices with entries that are complex rather than real.)
In applied mathematics, use of the pseudoinverse is both valuable and common. We need not wander into some remote and esoteric realm of pure mathematics to find it. So congratulations on your instincts; you may have a promising career ahead of you. (And, of course, congratulations to your teacher, whose instincts were also correct.) --KSmrqT 10:36, 13 September 2006 (UTC)[reply]
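In practice this is a single library call; for example, with NumPy, using a singular 2×2 matrix like the one in Gandalf61's example (a sketch):

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [2.0, 4.0]])       # det = 0, rank 1, so no true inverse exists

    A_pinv = np.linalg.pinv(A)       # Moore-Penrose pseudoinverse, computed via the SVD
    print(A_pinv)
    print(A @ A_pinv @ A)            # reproduces A: the pseudoinverse inverts as much as it can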

Estimating a contour map from known spot heights.

Please tell me if I've got this right, or is there a better formula I could use?

I have ten spot heights scattered irregularly over a rectangular map. I intend to estimate the height of every point on this map so that I can create a contour map.

I am going to estimate the height of each point by the weighted average of all the spot heights. The weight I am going to use is the inverse of the distance squared.

In fact I am going to use a further refinement - instead of just using the square, I am going to estimate the exact power to use by disregarding each of the ten spot heights in turn, and finding what the power is that best predicts the disregarded spot height from the nine known spot heights, and then calculate the arithmetic average of these ten powers.

So my weights will be 1/d^n where d is distance and n is the power. My questions are-

a) is there any better (ie more accurate estimator) formula to use for the weights than 1/d^n?

b) should I use an arithmetic average of the ten powers, or some other kind of average?

c) is there any better approach I could use, even though the scheme described above is easy to program? Thanks 62.253.52.8 23:55, 12 September 2006 (UTC)[reply]

I find the above idea interesting, but here's a totally different one. Get the delaunay triangulation of the ten points, then for every other point find the triangle it's in and just find the height from the plane the triangle defines. Your contour map would be all straight lines. A problem is that anything outside the convex hull of your spots is undefined, using the plane of the closest triangle should work. (You might be able to fix that and add curvature to the triangles using the adajacent triangles and spline interpolation, maybe.)
Are the points with known heights just random samples or were they chosen because they are relative maxima/minima ? Your method would work well in the latter case. However, the inability of your method to give elevations above the highest sample point or below the lowest sample point would be a problem if you are just using random points, resulting in a flatter geography than is really the case. StuRat 05:25, 13 September 2006 (UTC)[reply]
The spot heights are not max/min, so they must be the other choice. Actually they are not actually heights: I am interested in creating a contour map of house prices. The spot heights represent the prices in various towns. The house prices in the surrounding country are sometimes more, sometimes less. Another possibility would be to also weight the 'spot heights' by the population of each town, so I suppose I would get something like weights of p/d^n, where p is population.
I see three problems here:
  • There is no reason to think that house prices are a continuous function. In fact, houses prices frequently vary dramatically on the other side of some barrier, such as a river, highway, railroad tracks, or city/school district boundary. So, you are using a method that depends on having a continuous function when you don't actually have one, which will lead to poor results.
I am working with a large area, so the fine detail is unimportant. If I was working with just a town or city, then such step changes are the very things I would like my map to make clear.
  • You can only compare prices on comparable houses. For example, 20 year old, 2000 square foot, 3 bedroom, 2 car garage houses. Otherwise, you are comparing "apples and oranges", so this doesn't tell you much about the premium/penalty of placing a home in that location. To distinguish between this "location premium" and the size and quality of houses built in an area, you might want to compare vacant lot prices. This should only give you the "location premium".
The statistical series I am working with are for a very large number of sales, not individual houses. The sub-series are for different types of house. I agree that prices may vary both by the quality of the house, and the favourableness of the location. This is less of a problem when comparing year on year price changes.
  • If you only look at houses offered for sale, there might be a built in bias there, in that people will want to move more when something is wrong (basement floods continuously ?). Therefore, houses offered for sale may not be typical of the true value of houses in the area. StuRat 02:07, 16 September 2006 (UTC)[reply]
All the statistical series are for sold houses. I am in the UK: here the statistical series are I'm sure different from what you have in what I assume is the US.
Another suggestion (though I have no idea if it is a good one) is to match your data points to a polynomial of the form
$a + bx + cy + dx^2 + exy + fy^2 + gx^3 + hx^2y + ixy^2 + jy^3,$
which, conveniently, has 10 coefficients. -- Meni Rosenfeld (talk) 11:04, 13 September 2006 (UTC)[reply]
Sorry, I don't quite understand this. Would a.....j be the spot heights, and x, y the position on the plane, or what please?

Thanks. Although Delaunay triangulation is attractive, I think it would be far too difficult to program - it would require several days I expect.

I did wonder if there would be any advantage in using weights based on formulas such as:

$w = \frac{1}{x + d^n},$

where x is a constant. I think the magnitude of x would determine to what extent the estimated local height was based on the average of all heights, rather than those of the nearer spot heights. Are there any better formulas I could use?

I don't actually know how I would estimate x and n by regression or otherwise - could any help me please? Thanks.

There's a fundamental problem here. To pick a method of fitting to data, some additional requirements have to be applied. In sciences where there is a theory, the theory predicts a form for the data. Just add data, fit the form, and then you know the values of the (typically unspecified) constants. However, in the absence of a model (i.e. a form with some parameters identified as free for fitting), onehas a vastly wider range of equally bad solutions. Examples:
  1. Construct the Voronoi diagram of the locations of the data points, then for each sample, raise its height to the sampled altitude. This is a solution that from (at least) one point of view assumes the least: no intermediate altitudes are inferred and each point is the height indicated by the nearest sample.
  2. Fit the bivariate polynomial suggested by Meni, above. This has the advantage that the result is smooth. It has the disadvantage that it can "blow up" going rapidly to "unreasonable" values outside the region where the data is provided.
  3. Take the logs of your input heights, fit the bivariate polynomial to the logs, then take the exponential of the fitted values.
This much freedom should indicate that there are a lot of ways to do this, and which one is right depends on what a priori information you have (or even suspect) about the result. This is similar to trying to capture imprecise or subjective priors in Bayesian analysis.
The method you describe of omitting some data to evaluate a proposed model is a good one (q.v. Cross-validation), but... Ten data points is small to begin with, so one should expect large random effects caused by small sample sizes using this method. It may turn out that your data is "very nice" meaning that the method (surprisingly) works well.
Given that, the method for gluing the (slightly) mismatched exponents is also a problem in model choice. Perhaps a better method would be to use Bayesian inference on your 10 subsets to estimate the maximum likelihood choice for the exponent. -- Fuzzyeric 01:58, 15 September 2006 (UTC)[reply]
I have been wondering how I should best average ten different formulas of the form 1/(x + d^n). I'm not sure if taking the arithmetic average of x and perhaps the geometric average of n would necessarily be the best thing to do.
To clarify my polynomial, if you choose to give it a try - x and y are indeed the coordinates on the map, but a...j are coefficients you need to find. By substituting in the polynomial the x, y coordinates of a known point, and equating it to its known height, you get an equation in a...j. By doing it for all 10 points, you'll get ten equations, which you can solve. -- Meni Rosenfeld (talk) 04:40, 15 September 2006 (UTC)[reply]
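A sketch of the inverse-distance idea with leave-one-out cross-validation over the exponent (Python; the coordinates and prices below are made-up placeholders, and it picks the single exponent minimising the overall cross-validation error rather than averaging ten per-point exponents):

    import numpy as np

    # hypothetical town coordinates and prices (replace with the real spot heights)
    pts = np.array([[0.1, 0.2], [0.9, 0.1], [0.5, 0.5], [0.2, 0.8], [0.7, 0.9],
                    [0.3, 0.4], [0.8, 0.6], [0.6, 0.2], [0.1, 0.6], [0.4, 0.9]])
    vals = np.array([100.0, 120.0, 95.0, 110.0, 130.0, 105.0, 125.0, 98.0, 108.0, 115.0])

    def idw(query, pts, vals, power):
        # inverse-distance-weighted estimate at one query point
        d = np.linalg.norm(pts - query, axis=1)
        if np.any(d == 0):
            return vals[np.argmin(d)]
        w = 1.0 / d ** power
        return np.sum(w * vals) / np.sum(w)

    def loo_error(power):
        # mean squared leave-one-out prediction error for a given exponent
        errs = []
        for i in range(len(pts)):
            mask = np.arange(len(pts)) != i
            errs.append((idw(pts[i], pts[mask], vals[mask], power) - vals[i]) ** 2)
        return np.mean(errs)

    best = min(np.arange(0.5, 6.01, 0.25), key=loo_error)
    print("best power:", best, "RMS error:", loo_error(best) ** 0.5)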

September 13

Friday the 13th

Here is a question. What proportion of the 13ths of the month are Fridays? Is it 1/7, as most people would expect? My gut feeling tells me that I need to find a formula that turns (YYYY,MM,DD) into the day of the week. Is there such a formula? 202.168.50.40 04:50, 13 September 2006 (UTC)[reply]

There's calculating the day of the week. Frencheigh 05:11, 13 September 2006 (UTC)[reply]

In the long run, of all the 13th days of the month, 1/7th will be Fridays, yes. However, when looking at shorter time periods, like a year, the ratio may be quite a bit different. StuRat 05:18, 13 September 2006 (UTC)[reply]

That's not true. I've made the calculation, and the proportion is actually 43 / 300. -- Meni Rosenfeld (talk) 05:32, 13 September 2006 (UTC)[reply]
OK, that makes it 43/300 instead of 43/301 (one seventh). A small, but significant, difference. I stand corrected. StuRat 15:09, 13 September 2006 (UTC)[reply]
That's because not only does the structure of the years repeat every 400 years, but, as it happens (I've checked), the days of the week also repeat. So you don't get a true random distribution. For reference, the number of times Sunday through Saturday are the 13th of the month in a cycle of 400 years is {687, 685, 685, 687, 684, 688, 684}. -- Meni Rosenfeld (talk) 05:38, 13 September 2006 (UTC)[reply]
And, as you'll notice, Friday actually has a tiny advantage over the other days ... spooky, no? Confusing Manifestation 10:30, 13 September 2006 (UTC)[reply]
I've done the calculation once (with a computer of course), and indeed, as User:Meni Rosenfeld says, I've got that Friday was the most frequent day for 13th. – b_jonas 10:51, 13 September 2006 (UTC)[reply]
Perhaps it's also worth noting that the cause of all this mess is the fact that the number of days in the 400-year cycle, 146,097, is divisble by 7. Otherwise we would get a perfect 1/7 for questions like this. -- Meni Rosenfeld (talk) 19:49, 13 September 2006 (UTC)[reply]
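This is easy to verify directly (a Python sketch counting the 13ths over one full 400-year Gregorian cycle):

    from datetime import date
    from collections import Counter

    counts = Counter(date(y, m, 13).weekday()
                     for y in range(2000, 2400) for m in range(1, 13))
    for wd, name in enumerate(["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]):
        print(name, counts[wd])
    print("Friday fraction:", counts[4] / (400 * 12))   # 688/4800 = 43/300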

Probability

I need some help with a problem that has been bugging me for a while: Suppose the odds of an event happening are 1/n. If I repeat the event n times, what are the odds that the given event will happen at least once? For example, if I flip a coin twice, the odds are 3/4 that at least once I get heads. If I roll six dice, what are the odds of at least one landing on 1? If I pull one card at random from 52 different decks, what are the odds that at least one will be the ace of spades? I know the answer involves calculating the odds that it WON'T happen every trial, but that's as far as I got. Thanks! Duomillia 15:32, 13 September 2006 (UTC)[reply]

You're on the right track. The odds of "getting something at least once" are the same as one minus the odds of not getting it at all, and that's an easier thing to compute. For the coins, the odds of not getting heads are 1/2 per flip, so the odds of that happening twice are (1/2)*(1/2) = 1/4, so the odds of getting heads at least once is 1 - 1/4 = 3/4 (as you said). For the dice, the odds of not getting a one are 5/6, so the odds of that happening six times are (5/6)^6 ~ 0.335, so the odds of getting at least 1 one is 1 - (5/6)^6 ~ 0.665. You should be able to see the pattern from here. -- SCZenz 15:40, 13 September 2006 (UTC)[reply]
(Edit conflicted.) Yes, you're exactly right. This is classic introductory probability theory. If the odds of something happening are p the odds of it not happening are q = 1 - p. The odds of it not happening twice are q2. The odds of it not happening n times are qn. So to get the odds of it happening at least once (but possibly more) it is 1 - qn = 1 - (1 - p)n. The reason it is easier to calculate with the odds of it not happening is that if you calculate the odds of something happening in multiple independent trials, you have to account for when it happens (did it happen the first time or the fourteenth?) and how many times it happened. But to calculate the odds of it not happening, you just calculate the odds of the same thing (not) happening at each trial. See binomial distribution for (much) more. –Joke 15:41, 13 September 2006 (UTC)[reply]
To put it all together, the general answer is $1 - \left(1 - \frac{1}{n}\right)^{n}$. For large n, this is roughly equal to $1 - \frac{1}{e}$. -- Meni Rosenfeld (talk) 15:43, 13 September 2006 (UTC)[reply]
I think you mean 1-1/e, which would be about 0.63212. Black Carrot 19:36, 13 September 2006 (UTC)[reply]
(getting rid of evidence) Yeah, that's what I said, 1 - 1/e. (evil grin) :-) -- Meni Rosenfeld (talk) 19:44, 13 September 2006 (UTC)[reply]
So, as n approaches infinity, the odds of at least one success in n trials approaches 63% and a little? Duomillia 21:31, 13 September 2006 (UTC)[reply]
Yeah. That's cool—and see how close it is already for n=6... -- SCZenz 22:16, 13 September 2006 (UTC)[reply]
Well, 6 is a big number in the "1, 2, 3, lots, many" system ;-) -- AJR | Talk 23:18, 13 September 2006 (UTC)[reply]
In the Discworld troll counting system, they go "1, 2, 3, many, many-one, many-two, ..., many-many-many-three, lots". Although the article does point out the alternative of "one, two, many, lots" and not bothering about the rest of the numbers. Confusing Manifestation 03:45, 14 September 2006 (UTC)[reply]
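In code, the formula and its limit look like this (a trivial Python sketch):

    import math

    # probability of at least one success in n trials of probability 1/n each
    for n in (2, 6, 52, 1000):
        print(n, 1 - (1 - 1 / n) ** n)
    print(1 - 1 / math.e)   # the limit, ~0.63212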

September 14

Unexpected symbol behavior w/ LaTeX

Would someone please take a look at the following page:

http://en.wikipedia.org/wiki/Beer-Lambert_law

I created a new section entitled "Derivation" with several equations. Some look fine, but there are two types of problems with others ...

1. The equation font size differs depending on whether there is a fraction in the equation or not. When there is a fraction, the equation looks fine (e.g. 'Absorbance' or 'Transmittance"). In equations without a fraction, however, the symbols are almost too small to read. How can I make all of the math fonts the same?

2. Several of the equations have a small hyphen "-" at the end, and I can't figure out how to get rid of them.

I haven't used LaTeX before, so that may be the problem, except that I don't see the problem occurring in the first section, 'Equations', that was written by someone else.

Thanks for any help,

Axewiki 00:25, 14 September 2006 (UTC)[reply]

I've fixed the first problem - the thing is, the software Wikipedia runs on has a default setting that if it can turn an expression in <math> tags to regular text and still look ok, it will, otherwise it will turn it into a PNG image. However, the definition of "looking ok" doesn't always work, especially if you're trying to make a bunch of equations look the same, so sometimes you have to trick it into making the equation into a PNG by adding a small space in the form of \, to the end of the equation. I got that trick from m:Help:Formula#Forced PNG rendering. As to the second, I seem to remember seeing that kind of thing happen elsewhere, but I can't remember the fix. Confusing Manifestation 01:32, 14 September 2006 (UTC)[reply]
Thanks ConMan! Parts of it are much better ...
Axewiki 02:16, 14 September 2006 (UTC)[reply]
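To make the forced-PNG trick above concrete, here is a made-up example of the markup (the equation itself is arbitrary, not one from the article): <math>A = \epsilon l c</math> may be rendered as small HTML text, while <math>A = \epsilon l c \,</math>, with the trailing \,, is forced into a PNG image, so a whole block of equations can be made to render uniformly. This reflects the behaviour described at m:Help:Formula linked above.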

Cylindrical sections

I am looking for information on cylindrical sections, i.e., ellipses; specifically, I want to determine what kind of a curve the ellipse will map to if the cylinder is unrolled after a cut in its long dimension (it appears to be a sine or cosine, based on a pencil rubbing of a wood model I have). I have searched the web and Wikipedia, and have not found much on this subject.

If the angle of the cut is 45 degrees (the case I am interested in), then the long-axis, or height, dimension (assuming the cylinder is standing on its end) is a simple function of the ellipse's x coordinate, when plotted in the ellipse's own plane, but the horizontal dimension on the unrolled cylinder is much more difficult, and I have not been able to visualize it well enough to calculate the formula for the transformation. It equals the length of the circular arc subtended at that height, but that depends on the angle, which is difficult for me to see. (I hope that makes sense.) Thanks in advance for any help you can give.

-- Ed


On the cylinder, we can pick our coordinates so the z-axis is coincident with the axis of the cylinder and the plane z = 0 is at the bottom face of the cylinder. Suppose the radius of the cylinder is R, and the height of the cut above the point at angle θ is z = z0 + R tan(φ) cos(θ), where φ is the angle of the cut (45° in your case). Converting to Cartesian coordinates with the same z-axis and (positive) x-axis pointing along the ray θ = 0, we find that the point (θ, z) on the cylinder is mapped to the point (R cos θ, R sin θ, z). So the cut is mapped as z(θ) = z0 + R tan(φ) cos θ. But that's just z(s) = z0 + R tan(φ) cos(s/R), where s = Rθ is the arc length (where we've used a notational dodge to roll it for us). This tells us that the cut is straight (not wavy), equivalently planar, and sinusoidal when unrolled. -- Fuzzyeric 04:58, 14 September 2006 (UTC)[reply]
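A small numerical sketch of the unrolled curve, assuming the formula above with φ = 45° (R and z0 below are made-up values):

import math

R, z0 = 1.0, 2.0                            # made-up radius and cut height
for i in range(9):
    s = i * (2 * math.pi * R) / 8           # arc length along the unrolled circumference
    z = z0 + R * math.tan(math.radians(45)) * math.cos(s / R)
    print(round(s, 3), round(z, 3))         # the heights trace out one period of a cosine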


Fun Game

Someone in my math club proposed a variation on Nim: instead of removing n objects from a heap, you remove n objects from each of m heaps, where m and n are at least 1. Anyone have suggestions on solving it? Black Carrot 04:49, 14 September 2006 (UTC)[reply]

I'm assuming you mean "in a given step, from each remaining heap remove n objects", "there are initially m heaps", "the m heaps may have different starting sizes". Use Bouton's method, described in the article you link, and find a solution driving t to zero in each heap simultaneously. In general, this will require taking large counts to make most of the heaps vanish so that the remaining heaps can be simultaneously optimized.
If you mean one takes n objects, distributed as one likes from m heaps, then the distinction into m heaps is superfluous since there's effectively only one heap.
If you mean one must take >1 object from each of m heaps and the first player to deplete any heap is the loser, then use Bouton's method on each heap independently.
If n is upper bounded, one may not be able to reach a t=0 state in one move. For instance if the bound is <m and you must draw an object from every heap to reach a winning intermediate state. Then, similar to some more complicated games, the first moves are sort of random and the loser is the first player to make a mistake noticed by his opponent. In fact the early play is probably to mini-max the ability of the other player to drive the game out of the non-winning equilibrium. -- Fuzzyeric 05:10, 14 September 2006 (UTC)[reply]
No, I meant "in a given step", from each of "as many heaps as you feel like (m)" remove n objects. The starting condition could be whatever the players agree upon. So, for instance, from (3,2,1) I would take one from each (2,1), they might take the entire first heap (1), then I would take the remaining one and win. Or from (7,3,3,1,1) I would take three from each of the first three heaps (4,1,1), they might take one from the first and last (3,1), I would take one from the first (2,1), and it would end the same way as before. Black Carrot 15:32, 15 September 2006 (UTC)[reply]
Then you mean the variation described above as "If you mean one takes n objects, distributed as one likes from m heaps, then the distinction into m heaps is superfluous since there's effectively only one heap." This is equivalent to one heap as there is no consequence to heap boundaries. Use Bouton's method on the total number of objects, regardless of heap membership. -- Fuzzyeric 04:02, 16 September 2006 (UTC)[reply]
If that was what I meant, that's what I would have said, and I wouldn't have had any trouble solving it myself. Where did I say it could be distributed as you like? The same amount (n) must be removed from each of as many stacks as you like (m). To put it visually, it's like yanking out a rectangle of arbitrary dimensions, and it's been devilishly difficult to solve. If you had stacks (7,3,3,1,1) as above, for instance, under your interpretation, I could just remove all of them and have done with it. Under the one I've been trying to explain, you could remove 7, 6, 5, or 4 from the first one, or 3 or 2 each from any of the first three of them, or 1 each from any of them. Reduced to two stacks, it simplifies to Wythoff's Game. Black Carrot 22:02, 18 September 2006 (UTC)[reply]
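For what it's worth, the variant described here (remove the same amount n from each heap in any chosen subset of heaps) is easy to explore by brute force. A minimal, unoptimized Python sketch that classifies positions under normal play (last player to move wins); for two heaps the losing positions it finds are exactly the Wythoff's Game positions mentioned above:

from functools import lru_cache
from itertools import combinations

@lru_cache(maxsize=None)
def is_winning(heaps):
    """True if the player to move can force a win from this position."""
    heaps = tuple(sorted(h for h in heaps if h > 0))
    if not heaps:
        return False                          # no move left: the previous player took the last object
    for r in range(1, len(heaps) + 1):
        for subset in combinations(range(len(heaps)), r):
            for n in range(1, min(heaps[i] for i in subset) + 1):
                new = list(heaps)
                for i in subset:
                    new[i] -= n
                if not is_winning(tuple(sorted(x for x in new if x > 0))):
                    return True               # a move that leaves the opponent in a losing position
    return False

print(is_winning((3, 2, 1)))                  # True, matching the winning line given above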

Recurrence Relations

The recurrence relation article doesn't talk about the cases of nonlinear recurrence relations. Could anyone indicate to me what would be standard methods for solving relations such as a_{n+1} = f(a_n), where f(x) is a particular Möbius-type rational function (just an example, I don't really want to know what that particular solution is). What could the function f(x) be that would leave the recurrence relation relatively easy to solve (no Z / Laplace transforms or others...). I'm also interested in the inhomogeneous cases and those where the degree is greater, but I'll start with this... --Xedi 21:01, 14 September 2006 (UTC)[reply]

There should be an article on the "cycle structure" of solutions of a_{n+1} = f(a_n)... perhaps somewhere near Chaos theory? In any case, for Möbius transformations, such as the one you mention, the solution is fairly simple, and contained within that article. Writing f^(n) for the n-fold iterate of f, there are "obvious" properties; e.g., if f is injective, so is f^(n), and there are various techniques that work in the vicinity of a fixed point of f. Other than that, solutions usually depend on a change of coordinates; finding a function g such that, e.g. f(g(x)) = g(x + 1) (at least locally), leading to f^(n)(x) = g(g^(-1)(x) + n).
Thanks for the swift reply, I did actually notice the periodicity of the recurrence relation I gave. I'll try to understand clearly everything and then come back. Thanks again ! --Xedi 21:42, 14 September 2006 (UTC)[reply]
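As a concrete (hypothetical) illustration of the conjugation trick mentioned above: take f(x) = x/(1+x) and g(x) = 1/x. Then f(g(x)) = g(x+1), so f^(n)(x) = g(g^(-1)(x) + n) = x/(1 + nx). A quick numerical check in Python:

def f(x):
    return x / (1 + x)

x, n = 0.7, 10
y = x
for _ in range(n):
    y = f(y)                  # apply f ten times
print(y, x / (1 + n * x))     # both print 0.0875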

September 15

Quadratic forms

What is the typical way of working with quadratic forms with added linear terms? Diagonalization of the associated matrix, and then folding in the linear terms through translation?

I have not spent too much time thinking about this yet, but I wonder whether it would be more elegant to consider x^T = (1 x y), and to define a 3 by 3 matrix M so that x^T M x = 0 is what we want to work with. Is this what is typically done for such things?--HappyCamper 00:33, 15 September 2006 (UTC)[reply]

Depends what you're doing. Examples:
  • projective (homogeneous coordinates): x^T M x = 0, with x the homogeneous coordinate vector, where the matrix is usually restricted to be symmetric or triangular
  • polynomial: q(x) = x^T A x + b^T x + c, where the matrix A is almost always symmetric.
  • eliminate the linear terms -- the method is usually taught in analytic geometry as the method of moving conic sections into standard form -- i.e. rotate to eliminate cross terms, complete squares, and represent the result as, for example, x^2/a^2 + y^2/b^2 = 1.
From the point of view of the theory of quadratic forms, inclusion of linear or constant terms is an affine transformation, suggesting that the first or third is more natural when extending from that theory. However, it is equivalent to the polynomial form and sometimes the polynomial form is more convenient for expressing certain proofs. Despite this, both of the displayed forms above are inconvenient for cubics and higher, so the Einstein notation is commonly used. This notation is fundamentally equivalent to the polynomial form above, but is both more compact and easier to work with (with some practice, because initially you don't realize just how many relations you've written down with one equation). -- Fuzzyeric 02:28, 15 September 2006 (UTC)[reply]
By definition, a quadratic form has no linear terms. But suppose we have a quadratic polynomial in two variables, such as one used for the implicit equation of a rotated and displaced ellipse.
Using homogeneous coordinates we can rewrite this so all terms have total degree 2.
Et voila! We have a polynomial of degree 2 which is homogeneous in the sense that all terms have the same degree, and thus a valid candidate for a quadratic form in three variables. As with any quadratic form, we can convert this to a bilinear form, and hence to a symmetric matrix, your proposed M.
Indeed, this is a convenient and popular technique in applications like computer graphics. The matrix has uses that are not immediately obvious. For example, suppose p is a point outside the ellipse; then p^T M x = 0 (as an equation in the point x)
is the equation of a line intersecting the ellipse in the two points where a ray from p is tangent. (This line is called the "polar" of "pole" p.) If p is on the ellipse, then this line goes through p and is tangent to the ellipse there. Or consider the last column of M; it gives the homogeneous coordinates for the center of the ellipse.
So congratulations, your instincts in this are quite fruitful. --KSmrqT 05:29, 15 September 2006 (UTC)[reply]
Thanks for the elaborate responses - this is quite helpful. I think I am most interested in the homogeneous coordinate representation. What I am unsure of, is the interpretation of this result...
From above, we have x^T M x = 0. Now, since M is symmetric, there exists an orthogonal matrix O such that M = O^T D O. D is a diagonal matrix which will tell us everything about the conic section centered at the origin. Now, what does the product Ox mean? This seems to be an affine transformation on the original coordinate system. However, is it not true that we are transforming the identity element as well when this is done? This is what seems odd. --HappyCamper 16:05, 15 September 2006 (UTC)[reply]
From Morse theory every quadratic equation is equivalent to Ax^2 + By^2 when translation and rotation are removed. The equation will generally have one critical point, say (x0, y0). What happens with the transformation is that x is replaced by x − x0 and y by y − y0, that is, it leaves the critical point stationary.--Salix alba (talk) 19:10, 15 September 2006 (UTC)[reply]
Equivalently, the matrix O rotates the coordinate unit vectors to be parallel to the eigenvectors of the matrix. Equivalently, conjugating by O rotates the matrix to be diagonal (each ellipsoid axis is parallel to a coordinate axis). -- Fuzzyeric 04:05, 16 September 2006 (UTC)[reply]
The term "identity element" is misplaced. If (w:x:y) are the homogeneous coordinates of a point, each point in the plane is defined by an infinite number of triples, all non-zero scalar multiples, (σwxy), σ ≠ 0. That's the reason we often use colons (":") rather than commas (","); only the ratios matter. We are definitely not obliged to have w = 1. In fact, we have a whole line of points with w = 0, sometimes called "points at infinity".
Although we can diagonalize M with an orthogonal matrix Q, we can also use any invertible matrix A. For example, let
This will diagonalize σM, where σ = 10^−4 and M is the matrix that I used as my example, producing a circle x^2 + y^2 − w^2. (Note that the scalar σ has no effect in homogeneous coordinates.) We see that A is merely an affine change of coordinates, combining rotation, translation, and non-uniform scaling. The matrix has been reduced to a diagonal form involving only +1, −1, and 0, revealing the signature of the quadratic. Acting on points p, the effect of A is to transform them into a coordinate system where points on the ellipse become points on an origin-centered unit circle. --KSmrqT 05:37, 17 September 2006 (UTC)[reply]
This description is quite useful. The note about Morse theory is quite intriguing too. So it seems, a better way to think of this, is that in homogeneous coordinates, the introduction of w induces a convenient parameterization of the problem. I must read up on this more. This is far, far, far too interesting to be missing out on. --HappyCamper 03:05, 18 September 2006 (UTC)[reply]
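A small numpy sketch of the homogeneous-matrix bookkeeping discussed above; the conic coefficients below are made up purely for illustration (they are not anyone's example from the thread):

import numpy as np

# conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0, packed so that (1, x, y) M (1, x, y)^T = 0
a, b, c, d, e, f = 2.0, 1.0, 3.0, -4.0, 6.0, 1.0
M = np.array([[f,   d/2, e/2],
              [d/2, a,   b/2],
              [e/2, b/2, c  ]])

p = np.array([1.0, 2.0, 0.5])       # a point (w : x : y) with w = 1
print("polar line of p:", M @ p)    # coefficients of the polar line p^T M x = 0

eigenvalues, O = np.linalg.eigh(M)  # orthogonal diagonalization of the symmetric M
print("signature:", np.sign(eigenvalues))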

The red car or the blue car?

The terrorists have stolen a nuclear device and escaped with the help of a truck. Chasing after them in his Lotus are James and his two gadgeteers Q and R.

Catching up to the truck in a back alley way, James watched helplessly as the terrorists abandoned the truck for two getaway cars. The red car sped away to the east while the blue car sped away to the west.

Luckily, Q and R aimed their prototype radiation detectors (disguised as CANON and NIKON digital SLR cameras respectively) at the two separate cars as they disappeared into the distance. Q aimed at the red car while R aimed at the blue car.

Since there is only one nuclear device, it's not certain which car contains the nuke.

Q said "My detector has detected the nuke in the red car. If the red car do indeed have the nuke, my detector will say so in 85% of the time!"

R interjected "But if the red car does not have the nuke, Q's detector will give a false positive 70% of the time!"

R said "My superior detector did not detected any nuke in the blue car. If the blue car does not have the nuke, my detector will give a negative (or true reading) 98% of the time. On the otherhand if the blue car do have the nuke, my detector will also give a false negative 60% of the time."

Question: Which car should James go after?

210.49.155.134 10:04, 15 September 2006 (UTC)[reply]

Here's the chances each result would be found under each scenario:
Scenario 1     Q results   R results
==========     ==========  ==========
Red has nuke   85% chance
Blue doesn't               98% chance

Scenario 2 
========== 
Blue has nuke              60% chance
Red doesn't    70% chance
I would multiply the results, to get a (0.85)(0.98) or 83.3% chance we would get both the Q and R results if Scenario 1 is correct, and a (0.60)(0.70) or 42% chance if Scenario 2 is correct. Let's normalize those results to get 83.3/(83.3+42) or around a 66.5% chance the red car has the nuke and 42/(83.3+42) or around a 33.5% chance the blue car has the nuke. StuRat 10:25, 15 September 2006 (UTC)[reply]
Recommend using Bayesian analysis to estimate the posterior probability that the bomb is in each car given the data. This explains the method used by StuRat. -- Fuzzyeric 04:12, 16 September 2006 (UTC)[reply]
Since the problem involves the event 'positive Q' and 'negative R readings', we need to know whether Q and R detectors are independent in order to solve the problem. (Igny 16:13, 17 September 2006 (UTC))[reply]

I think it is implied that the detectors are independent. M.manary 17:20, 17 September 2006 (UTC)[reply]
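A quick numeric sketch of the Bayesian bookkeeping behind the answer above, assuming equal priors and independent detectors:

p_red, p_blue = 0.5, 0.5           # prior: the nuke is equally likely to be in either car

# likelihood of the data (Q positive on red, R negative on blue) under each hypothesis
like_red  = 0.85 * 0.98            # red has the nuke: Q true positive, R true negative
like_blue = 0.70 * 0.60            # blue has the nuke: Q false positive, R false negative

evidence = p_red * like_red + p_blue * like_blue
print("P(red has nuke | data)  =", p_red * like_red / evidence)    # about 0.665
print("P(blue has nuke | data) =", p_blue * like_blue / evidence)  # about 0.335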

September 16

Exponential functions

Take the simple equation of an exponential function.

y = a^x

why can't "a" equal 1 or be a negative real number?

a can be 1 or negative. 1^x is always 1. (-3)^x = (-1)^x · 3^x, for example. If you're working off the definition of the Exponential function article, we just call exponential functions with base a those with positive base a; it doesn't mean it can't happen. See also on that page "On the complex plane", which goes into more technical reasons for this. Dysprosia 05:35, 16 September 2006 (UTC)[reply]

I thought that when a = 1 (the equation now being y = 1^x), if this were drawn on a graph there would only be a straight line no matter what x equals. Also, if the equation becomes y = (-a)^x, what happens then? Please answer this at a maths B level (medium level); I don't want to know the technical side.

What's wrong with a straight line? It's a perfectly legitimate function. However, if you have the function (-a)^x = (-1)^x · a^x (a positive of course), you will find that it will only be a real number if x is an integer, but that doesn't mean that there's anything really wrong, because we can still find values for the function (that's the advanced bit). Dysprosia 06:25, 16 September 2006 (UTC)[reply]
In more detail, consider two specific examples of a.
  • If a = 1, then y = a^x becomes y = 1^x, which is the constant function y = 1. This is a function whose graph is a horizontal line, which is both reasonable and useful.
  • If a = −1, then for x = 1/2 we have y = √(−1). As you may be aware, and can easily verify, no real number squares to −1, because negative times negative yields positive, as does positive times positive. Although we can switch to more sophisticated tools, especially complex numbers, the conclusion is that negative values of a break the definition for most values of x.
This latter issue comes up in the context of one generalization of the Pythagorean distance function, (x^2+y^2)^(1/2). Everywhere a "2" appears, replace it with a real number, p, with the stipulation that p is at least 1. This almost works, but breaks for the reason we just saw. Instead we use (|x|^p + |y|^p)^(1/p).
By ensuring non-negativity, we get a well-defined family of interesting distance functions, including Pythagorean distance as the special case p = 2. --KSmrqT 04:48, 17 September 2006 (UTC)[reply]
Actually, if I'm interpreting the question right, if you have a = 1, it's just a straight line and doesn't really exhibit any of the properties of an exponential function. Like others said, if you have negative a, you get weird things happening in-between the "nice numbers". Even if you just look at the integers (using (-2)^x as an example), you get something like "1, (-2), 4, (-8), 16..." which also isn't really so "exponential". —AySz88\^-^ 05:22, 17 September 2006 (UTC)[reply]
The only thing that fails for 1^x is the derivative function; everything else holds trivially. You could interpret (-2)^x at the integers to be "exponential-like" in that the terms oscillate exponentially, but that's not really something formally used. Dysprosia 08:36, 17 September 2006 (UTC)[reply]
... also the flatness of the graph when a=1 means that the inverse function is not well-defined for a=1 i.e. you cannot take logs to base 1. Gandalf61 09:55, 17 September 2006 (UTC)[reply]
Well, don't forget we're dealing with "medium level" math which is probably pre-calc, and their description of "exponential function" is probably more based on the really representative case. But yeah, it's kinda misleading to completely exclude something like 1^x from the set of "exponential functions", instead of saying something like "1^x is just a really boring exponential function"... —AySz88\^-^ 19:23, 17 September 2006 (UTC)[reply]
a^0 is equal to one, right? --AstoVidatu 19:52, 17 September 2006 (UTC)[reply]

0^0 and ∞^0 are both indeterminate forms and need to be evaluated. M.manary 21:12, 17 September 2006 (UTC)[reply]


Rephrasing the problem to coincide with the domain of definition provided in Exponential function, i.e. that a^x = exp(x ln a), for a > 0. The requirement that a > 0 is equivalent to the requirement that the natural logarithm in the definition yields a real number. This comes to two cases:
Case a = 0... Consider 0^x. As x -> 0, this has the value 0. Contrariwise, a^0 -> 1 as a -> 0. Thus, the function a^x is discontinuous at 0^0 and so the value at 0^0 is a matter of choice. We may associate this behaviour with the essential singularity in the complex logarithm at zero.
Case a < 0... The complex logarithm has a branch cut along the negative real axis. A consequence is that the value of the logarithm may be continuously continued (analytic continuation) from its values along the positive real axis to values along the negative real axis. But, you get different answers if you go clockwise or counterclockwise. So there's not one answer on the negative real axis. Again, there's a choice. Typically, the "resolution" is to switch from the Argand plane (complex plane) to the Riemann surface for the logarithm, which stacks an infinite number of values over each point in the complex plane. This method extends logarithm to a multi-valued function. Then, which value you take for the value of the logarithm on the negative real axis depends on which branch of the Riemann surface you're on -- i.e. whether you continue in the clockwise or counterclockwise direction and how many times you go around zero. (This method of going from one copy of the complex plane to another induces a group structure on the Riemann surface modulo the complex plane. For the logarithm function, this group is infinite and isomorphic to addition on the integers.)
To sum up... We require a > 0 so that we can have the result be single-valued (under the conventional orientation of log's branch point and branch cut). We may extend to all complex a (except zero), but then logarithm generates an infinite number of values per input and the result of the exponentiation is multiple-valued. (... unless we take another step back in generality and don't project the Riemann surface onto the Argand plane.) -- Fuzzyeric 21:20, 17 September 2006 (UTC)[reply]
M.manary : No. Both 0^0 and ∞^0 are equal to 1. What you meant to say is that the limit of a^b, where a → 0 and b → 0 (or a → ∞ and b → 0, or a → 1 and b → ∞), is indeterminate. Regardless, it is very common to define 0^0 = 1, so a^0 = 1 for every a. About a^x for negative a, a natural definition can be given if we restrict ourselves to real x:
-- Meni Rosenfeld (talk) 05:02, 18 September 2006 (UTC)[reply]
Nope. I had exactly the limits I wanted. Of course, neither of them were limits at infinity... -- Fuzzyeric 14:41, 18 September 2006 (UTC)[reply]
Is there any clearer way to specify that I am addressing User:M.manary than writing "M.manary : " at the beginning? -- Meni Rosenfeld (talk) 15:16, 18 September 2006 (UTC)[reply]
<sigh> Sorry. Maybe if it had been linked? -- Fuzzyeric 15:28, 18 September 2006 (UTC)[reply]
Fixed :-) -- Meni Rosenfeld (talk) 15:34, 18 September 2006 (UTC)[reply]
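For what it's worth, a small Python/cmath sketch of the point made above that negative bases leave the real line (these are principal values, i.e. one particular branch of the complex logarithm):

import cmath

a = -2
for x in (2, 0.5, 1/3):
    print(x, cmath.exp(x * cmath.log(a)))   # principal value of (-2)**x

# x = 2 gives 4 (up to rounding), x = 1/2 gives about 1.414j,
# and x = 1/3 lands off the real axis entirely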

September 17

Multidimensional stochastic processes

I am studying stochastic processes in applications to biology. I need to find textbooks/publications/other sources regarding multidimensional random walks and multidimensional diffusion. When I say multidimensional I mean that it is really multi: the dimension is in the thousands. The most popular textbooks, like Cramer-Leadbetter, Feller, Gardiner, Van Kampen, etc., usually after pronouncing the word "multidimensional" immediately reduce the problem to dimension 2, sometimes 3, cite the Polya result regarding recurrence, and they are done. So far I have failed to find a good treatment of even such a simple problem as the computation of the diffusion coefficient in the Fokker–Planck equation in N dimensions. Can anybody help? Thank you

How about searching for things like "multidimensional random walk protein"? Does this help? Fokker-Planck equation might have something, but I suspect this is not what you need. --HappyCamper 02:52, 18 September 2006 (UTC)[reply]
(Edit conflict) Are you asking about a multidimensional generalization of ∂P/∂t = −∂/∂x[A(x)P] + ∂²/∂x²[B(x)P]? This is called the Fokker-Planck equation in Bluman and Kumei and bears an obvious relation to the diffusion equation. This doesn't answer your question, but have you seen this book?:
  • Berg, Howard C. (1983). Random Walks in Biology. Princeton University Press. ISBN 0-691-00064-6. (1993 reprint)
---CH 03:00, 18 September 2006 (UTC)[reply]
(I added a header to the question so that it shows up in the contents. Feel free to improve the title if you can. – b_jonas 18:36, 18 September 2006 (UTC))[reply]


Currency matters

Hello sir/mam

What is the yen? And how is it converted into rupees? Can you please explain?

Yours faithfully,

ASHOK.

For yen, see our article Yen. To convert yen to rupees, go to a large bank with a bag of yen and have them convert to rupees. Or if you just want to know the equivalent value, use any of several currency converters on the Internet. --LambiamTalk 19:21, 17 September 2006 (UTC)[reply]
The easiest one is simply by using google. Type in "100 Japanese yen in Indian rupees" and you'll get a good result. See [2]. Oskar 01:14, 18 September 2006 (UTC)[reply]
Google can do that now? Neat. --HappyCamper 02:19, 18 September 2006 (UTC)[reply]
That's amazing. I wish I had known about that last week, when I was in Canada, and I could have easily done things like this. Chuck 16:15, 21 September 2006 (UTC)[reply]

Sinc Filter?

Could someone please explain to me the concepts behind a Sinc Filter, especially how they relate to Anti-Aliasing and the function sin(x)/x. Any input is appreciated, thanks. HP 50g 23:56, 17 September 2006 (UTC)[reply]

The sinc function in the time domain is a windowing function in the frequency domain. Browse around Fourier transform for some goodies - Continuous Fourier transform might be of use too. To prevent aliasing, you want to fit the frequency content of your signal of interest inside the entire window. --HappyCamper 02:19, 18 September 2006 (UTC)[reply]
A sinc filter is a lowpass filter (with a really, really sharp cutoff). If you are resampling something below a certain frequency (i.e. scaling an image down), applying a sinc convolution filter beforehand to the original would prevent aliasing of frequencies greater than the new nyquist frequency. - Rainwarrior 07:11, 18 September 2006 (UTC)[reply]
It's hard to answer briefly, since this topic is part of a much broader discussion. In digital signal processing, a core theorem is that the frequency response of a (time-invariant, linear) filter is the Fourier transform of the filter impulse response. That is, if we feed the filter a Dirac delta function, essentially an instantaneous unit spike, and record the output for all time, then the Fourier transform of that output describes the effect of the function across frequencies. (To get a sense impression of an impulse response, stand in a large empty room with eyes closed, sharply and loudly clap your hands once, and listen.) Another core fact is that such a filter can only scale or phase shift a sinusoidal input, but not change its frequency.
When a continuous signal is digitally sampled at periodic intervals, the frequency spectrum of the input is replicated periodically as well, with period inversely proportional to the sampling period. This can produce the phenomenon called "aliasing", where high frequencies masquerade as low ones. When this occurs, we can no longer reliably reproduce the continuous signal from the sampled one. To avoid this, we need to select a single period in the frequency domain, to filter the input before we sample. The ideal filter has a simple frequency response, one that looks like a square box.
So now the question is, what filter has a frequency response that looks like a box? Yep, you guessed it. And what is its impulse response? Right again. --KSmrqT 11:17, 18 September 2006 (UTC)[reply]
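A minimal sketch of the standard windowed-sinc construction the answers above describe (sample rates below are made up; a Hamming window is one common way to tame the truncation of the ideal, infinitely long sinc):

import numpy as np

def windowed_sinc_lowpass(cutoff, fs, num_taps=101):
    """Impulse response of a Hamming-windowed sinc lowpass filter (cutoff, fs in Hz)."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = (2 * cutoff / fs) * np.sinc(2 * cutoff / fs * n)   # truncated ideal lowpass
    h *= np.hamming(num_taps)                              # taper to reduce ripple
    return h / h.sum()                                     # unit gain at DC

fs, new_fs = 1000.0, 250.0
h = windowed_sinc_lowpass(cutoff=new_fs / 2, fs=fs)        # anti-aliasing before decimation
signal = np.random.randn(5000)
decimated = np.convolve(signal, h, mode="same")[::int(fs // new_fs)]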
Thanks for the responses. I'll be sure to read about the topics you told me about. Thanks. HP 50g 14:52, 18 September 2006 (UTC)[reply]
I award KSmrq with this mathematical equation for his comprehensive contributions to the Mathematics Reference Desk. :-) --HappyCamper 15:41, 18 September 2006 (UTC)[reply]

September 18

Arithmetic pre-Fibonacci

If Fibonacci introduced the Hindu-Arabic numeral system to Europe in the 13th century, what was used before? I presume that it was Roman numerals and Roman arithmetic. Was roman arithmetic actually done as listed in our article, or is that a "modern way that would work"? Are things like the Domesday book recorded in Roman numerals? What did bookkeeping look like in those days? -- SGBailey 09:36, 18 September 2006 (UTC)[reply]

I believe the Domesday Book used words for numbers ("one hundred and forty") instead of either Arabic or Roman numerals. StuRat 09:43, 18 September 2006 (UTC)[reply]

math question

If anything divided by 0 is infinity, how come 0 divided by 0 is 1?

0/0 is undefined, as we have three different rules providing a paradox:
  • Anything divided by zero is either positive or negative infinity.
  • Anything divided by itself is one.
  • Zero divided by anything is zero.
So, since the answer can't be simultaneously equal to all of those values, the answer is undefined. StuRat 12:20, 18 September 2006 (UTC)[reply]
[Edit conflict] More generally, if x is any number, then for all a other than 0, we have x = (a·x)/a. If we want it to hold for a = 0 as well, we will get x = 0/0. Hence you will sometimes see 0/0 mentioned as being equal to anything. This is usually little more than a memory trick - as a stand-alone expression, as StuRat explained, it is almost never defined. More details can be found in the article division by zero. -- Meni Rosenfeld (talk) 12:33, 18 September 2006 (UTC)[reply]

However, when the top and bottom of a fraction both approach zero, there may very well be an answer:

  • X/X, as X approaches zero from either side, is equal to 1.
  • X/2X, as X approaches zero from either side, is equal to 1/2.
  • X/X^2, as X approaches zero from the positive side, approaches positive infinity.
  • X/X^2, as X approaches zero from the negative side, approaches negative infinity.
  • X^2/X, as X approaches zero from either side, approaches zero.

See L'Hopital's Rule for more details.

StuRat 12:27, 18 September 2006 (UTC)[reply]

Klein bottle is its own 2-fold covering?

There is such information on the main page of Mathematics portal ("Did You know..."), but there is no further information in the article "Klein bottle". Anybody has any further information? At first glance it seems impossible for a space with nontrivial fundamental group to cover itself...

Double cover indication
I've sketched up an easy construction of a double cover and it should be visible on the right. Note that the square on the right is one patch, just drawn twice. The square on the left can be put in bijection with the square on the right by translation. This situation is simple enough to demonstrate by diagram. Note that this can be continued to any n-covering by stretching even further (or by replacing sub-diagrams on the right by copies of the diagram on the right). This is equivalent to the n-covering of the torus by the torus. Unlike the torus, this construction only demonstrates odd coverings in the other direction (vertically in the drawing) but not even. -- Fuzzyeric 15:51, 18 September 2006 (UTC)[reply]

Abstract -> Practical

I guess this could go either here or at the science desk, but I imagine you guys will be of more help. Basically I'd like to know some more abstract mathematical concepts that have been discovered and understood mathematically before we knew there was a practical application. Sort of like non-euclidean geometry was worked out before we discovered that there actually existed the configuration in the universe. Thanks--152.23.204.76 15:00, 18 September 2006 (UTC)[reply]

How about things from quantum mechanics? Much of the theory can be quite abstract, but still useful. Stuff from operator theory say. Group theory is also quite useful in chemistry - you can classify the types of spectra molecules will have with group theory.
On the other hand, sometimes things go the other way around. Fourier series is one example. We have a whole field called Harmonic analysis which deals with understanding this marvelous thing. --HappyCamper 15:34, 18 September 2006 (UTC)[reply]
Yes, Fourier series seem like a perfect example (even though the order seems a bit off). I'd like specific examples from Quantum mechanics. I understand there is some complex math involved, but I'd like to read some more about examples of this. And in reply to Maelin, yep primes are also a good example along with anything involved with cryptography.--152.23.204.76 17:32, 18 September 2006 (UTC)[reply]
Prime numbers were studied extensively ever since the days of Pythagoras, but they didn't really have any practical applications until the invention of computers and public-key cryptography. Maelin 15:46, 18 September 2006 (UTC)[reply]
  • Boolean logic didn't have much practical application until computers were invented. StuRat 18:26, 18 September 2006 (UTC)[reply]
  • Wave packet theory, describing the location of electrons, is a good example. Mathematically, it says the position of electrons is indeterminate. Figuring out how this can be physically the case is a bit more challenging. StuRat 18:17, 18 September 2006 (UTC)[reply]
  • More recently, string theory basically came from the math, not from observations. StuRat 18:17, 18 September 2006 (UTC)[reply]

For me, in addition to group theory, one of the classic examples is good old Hilbert space. Quantum mechanics is basically a theory about linear operators in Hilbert space, and many fields of applied mathematics (e.g. control, optimization and filtering problems in time series) are now formulated using Hilbert space techniques. –Joke 20:15, 18 September 2006 (UTC)[reply]

x+2^x=37

How do you do problems like these without using the guess and check method?

I know the answer. I just want to know if there is a better way to get it other than substituting random numbers for x and checking them. If, in other problems, x were irrational, guess and check would be useless in getting an exact answer. --Yanwen 20:54, 18 September 2006 (UTC)[reply]

This kind of problem cannot be solved with elementary functions. You'll have to use functions like the Lambert W function (substitute u = 37 − x). However, you can use Newton's method to quickly find numerical solutions. Also, if the numerical solution appears to be a nice number, you can check that this number actually is a solution. -- Meni Rosenfeld (talk) 21:24, 18 September 2006 (UTC)[reply]
We can solve equations algebraically by applying the inverse of the operation in the equation. If the x is being added onto, we add the opposite number to both sides of the equation to solve for x. If the x is being multiplied, we multiply both sides of the equation by the reciprocal number. If x is an exponent in an exponential equation, we take the logarithm of the appropriate base of both sides of the equation. If x is the base of a power equation, we take the appropriate root of both sides of the equation. If x is the angle in a trigonometric equation, we take the inverse trig function of both sides of the equation.
However, if x occurs in more than one type of function in an equation, most of the time we have no way to solve it algebraically. However, sometimes there are methods to change a multifunction equation into a single-function equation. For example, in an equation with two different trig functions, we may be able to use a trig identity to rewrite it with only one trig function. In solving polynomial equations, we may be able to factor the polynomial into linear factors and solve each one. The theory of polynomial equations has developed formulas and methods for solving any equation of degree 2, 3 or 4; and has shown that there are no algebraic methods to solve all polynomial equations of higher degree. MathMan64 23:27, 18 September 2006 (UTC)[reply]
About those high degree polynomial equations, what is really meant by "algebraic methods"? Or, from the other perspective, if it's impossible to solve it with "algebraic methods and taking roots" (as I believe it is, right?), what methods are left? —Bromskloss 11:22, 19 September 2006 (UTC)[reply]
"Algebraic methods" usually mean addition, subtraction, multiplication, division, powers and roots. The "other methods" which are applicable to solving algebraic equations are, for example, Jacobi's elliptic functions. -- Meni Rosenfeld (talk) 11:30, 19 September 2006 (UTC)[reply]
For integer solutions, Category:modular arithmetic can (sometimes) work. In the given instance, the equation has to be true for any modulus, so we might be able to lift a solution all the way to the integers...
First, if x<0 and an integer, then the left-hand side of the equation is negative, so x>=0. Also, x=0 isn't a solution, so x is positive. Both terms on the left are >37 if x>37, so 0<x<=37.
x + 2^x ≡ 37 (mod 2) implies
x ≡ 1 (mod 2), or "x is odd". Replace x with 2xx+1.
Either x = 1 (which isn't a solution)
or the equation is equivalent to
xx + 4^xx = 18. Now, taking mods again...
xx + 4^xx ≡ 18 ≡ 0 (mod 2) implies
xx ≡ 0 (mod 2), or xx is even. Replace xx with 2xxx.
Reducing the resulting equation 2xxx + 16^xxx = 18 mod 4 implies xxx is odd.
Replacing xxx with 2xxxx+1 and reducing mod 4 again implies xxxx is a multiple of 4.
Now x = 2xx+1, xx = 2xxx, xxx = 2xxxx+1, and xxxx=4k for some k. So, xxx = 8k+1, xx = 16k+2, and x = 32k+5. The only values of k compatible with 0<x<=37 are k=0 or k=1. Equivalently, x=5, or x=37. Five works. Thirty-seven doesn't.
Therefore, x = 5 is the solution in integers. I also know that there are an infinite number of solutions in the complex plane, but this method won't find them without considerable extension. -- Fuzzyeric 16:17, 19 September 2006 (UTC)[reply]
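For the numerical route Meni mentions above, a minimal Newton's method sketch for this particular equation (the starting guess is arbitrary):

import math

def f(x):  return x + 2**x - 37
def df(x): return 1 + math.log(2) * 2**x

x = 3.0                        # arbitrary starting guess
for _ in range(20):
    x -= f(x) / df(x)
print(x)                       # converges to 5.0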

September 19

Outlier test for an inverse squared distribution

I'm analyzing the number of inbound links to news articles. The number of links to articles follows an inverse power law; the number with zero is impracticably large to count, the vast majority with any inbound links have only one link, far fewer have two, a rare handful have, say, 10 inbound links. The distribution changes from day to day, on a slow news day there will be few or no outstandingly popular articles, on a fast news day there may be many.

What I need is a statistical test for outliers on a given day. The distribution of quantities of inbound links is not normally distributed, and the depth of my statistical knowledge ends with normal distributions. I can't just pick an arbitrary threshold value because of the fast news day effect or other background effects like the internet growing larger, etc.

One crude idea I had was to copy the distribution for a given time period and reflect it over the y-axis, simulating the shape of a normal distribution, then calculate the standard deviation and use a standard test based on that. I can hear the mathematicians cringing.

Can you give me a better way?

--Phig newton 01:41, 19 September 2006 (UTC)[reply]

Your distribution is discrete. If you believe that your distribution satisfies a power law, then you want the geometric distribution. Find the mean number of links, q. Set p=1/q. The CDF of this distribution is 1-(1-p)^n. Setting this equal to x, your confidence level (typically x = 0.95, 0.99, or similar number near 1) and solving for n gives ln(1-x)/ln(1-p). This is the number of links required to exceed x "percent" of the population of pages for that set of data. -- Fuzzyeric 18:26, 19 September 2006 (UTC)[reply]
I don't see how you go from a power law probability distribution (as observed in scale-free networks) to a geometric distribution. Power law distributions are heavy-tailed, quite unlike geometric distributions. --LambiamTalk 12:08, 20 September 2006 (UTC)[reply]
You might want to have a look at this book: Barnett, V. (1984). Outliers in Statistical Data. New York: John Wiley & Sons. Vectro 18:34, 19 September 2006 (UTC)[reply]
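If you do go with the geometric-distribution threshold suggested above (noting the objection about heavy tails), the arithmetic is a one-liner; the mean below is a made-up figure for one day's data:

import math

mean_links = 1.8                       # hypothetical mean links per linked article that day
p = 1 / mean_links
confidence = 0.99
threshold = math.log(1 - confidence) / math.log(1 - p)
print(threshold)                       # articles with more links than this exceed 99% of the fit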

Basic Statistics, Lottery Odds

My generally math-capable brain takes a vacation when it comes to statistics. I can handle the first question but the second one is giving me pause. I could probably figure it by poking at the web but I'm much more interested in understanding why and how than what the actual answer is. Could anyone please help?

1. Say there is a lottery in which we have fifty balls, numbered 1 through 50. Five of the balls are selected, at random. What is the probability that an arbitrary set of numbers matches the randomly chosen numbers?

2. Say we have a similar lottery, except that the arbitrarily chosen numbers must match the randomly chosen numbers in sequence, that is, the player must choose which number is selected first, which is selected second, etc. What are the odds then?

Thanks, VermillionBird 04:06, 19 September 2006 (UTC)[reply]

Given a set of 50 balls, choosing 5 of them, there are 50!/(5!(50-5)!) combinations and 50!/(50-5)! permutations, respectively. Given a specific set of 5 balls, there is 1 combination and there are 5! possible permutations. To figure the odds, divide the number of ways it can match the desired results by the total number of possibilities. Black Carrot 05:18, 19 September 2006 (UTC)[reply]
Permutations! That's the word I forgot. Thanks. VermillionBird 22:34, 19 September 2006 (UTC)[reply]
Please refer to Permutations and combinations
1. basic combination
2. basic permutation

210.49.155.134 11:06, 19 September 2006 (UTC)[reply]

As to your question' why': There is exactly 1 chance in 50 that the first ball matches. If it doesn't, we failed; but if it did match, there is exactly 1 chance in 49 that the next ball matches; etcetera.JoergenB 11:31, 19 September 2006 (UTC)[reply]
Probability and statistics is largely about learning to count. The usual path to learning is to start with small, discrete examples where we can count explicitly; then move to larger discrete examples where we apply formulae we developed on the small examples; then tackle continuous examples.
An important general rule is how to combine probabilities. Let's try your second example, scaled down. Assume we have 10 balls labeled with the numbers 1 through 10, and that we will draw 3 balls in sequence without replacement. That is, after we draw the first ball we do not replace it in the pool of 10, so it cannot be drawn a second time.
How many different 3-ball sequences can we draw? We have 10 choices for the first ball, so there are exactly 10 "sequences" of 1 ball. Now there are 9 balls left in the pool, any one of which may be the second ball in a 2-ball sequence. Finally, we have 8 balls left to choose from for the third ball in a 3-ball sequence.
To sum up, we observe that for each different first ball chosen, we have a set of 2-ball sequences. Furthermore, for each different second ball we have a set of 8 final choices. Therefore the tally is 10×9×8 distinct sequences of 3 balls chosen from 10.
Explicitly, let's try choosing 2 balls from 3. The possible sequences are
(1,2)  (1,3)  (2,1)  (2,3)  (3,1)  (3,2)
Our reasoning says we have 3 choices for the first ball, and 2 for the second ball, giving 3×2 possibilities, which agrees with the 6 sequences listed.
So the key idea here is the multiplication of possibilities. This occurs so often we have a "descending product" notation, the factorial function n! = n×(n−1)×(n−2)×⋯×2×1. To chop this after the first k numbers we divide by (n−k)!. Thus we have the general result that in choosing sequences of k balls from n distinct balls without replacement, the total number of distinct sequences (or "permutations") is P(n,k) = n!/(n−k)!.
For the 3-of-10 example, we have P(10,3) = 720 distinct sequences. Your second example essentially asks the question, "What are the chances we will pick the right one at random?". The answer for 3-of-10 is clearly 1 out of 720, or approximately 0.00139, so you'd better be very lucky!
Your first example actually requires a more sophisticated computation. In the tiny case 2-of-3, the equivalent question is, "How many ways can we eliminate 1 ball?", which is, of course, 3. But that won't work for 3-of-10, or for your example. (In the 3-of-10 case the answer is that we can draw 120 different collections, ignoring order.) Anyway, you say you can handle this (though I wonder), so I'll stop here. --KSmrqT 12:54, 19 September 2006 (UTC)[reply]
No, you're quite right. I knew 2 should be less likely than 1 but I wasn't sure how. I thought I was underestimating the unlikelyhood of 2 when I was actually overestimating the unlikelyhood of 1. Thanks for your explanation. VermillionBird 22:34, 19 September 2006 (UTC)[reply]
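In Python 3.8+ the two counts discussed above are built in, so the two probabilities can be checked directly (a sketch, not part of the original answers):

from math import comb, perm

print(1 / comb(50, 5))   # question 1: 1 in 2,118,760 (order ignored)
print(1 / perm(50, 5))   # question 2: 1 in 254,251,200 (order must match too)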

Can the sine function be represented as an exponential function?

That is, sin(x) = N^x.

Solve for N.


--ĶĩřβȳŤįɱéØ 08:15, 19 September 2006 (UTC)[reply]

No, but you do have
sin(x) = (e^(ix) − e^(−ix)) / (2i).
-- Meni Rosenfeld (talk) 08:26, 19 September 2006 (UTC)[reply]
And from this the "unhelpful" answer
can be constructed. -- Fuzzyeric 18:37, 19 September 2006 (UTC)[reply]
The "unhelpful" answer evaluates to , which is the wrong thing. Here is a "better" unhelpful answer:
Joke 20:10, 19 September 2006 (UTC)[reply]
Nope, this evaluates to . It's going to be difficult to get the minus sign required for sin. -- Meni Rosenfeld (talk) 20:24, 19 September 2006 (UTC)[reply]
I can't believe I wrote that exponentiation distributes over addition. <hangs head in shame>. Teach me to make last-minute "simplifications" before hitting the Save button. However, if it somehow did, this would work...
(powers of a diagonal matrix)
but it doesn't, so... (scribbles offline, and will be back in a bit) -- Fuzzyeric 23:37, 19 September 2006 (UTC)[reply]

Oh, that was dumb. At least I got . Not that this exercise has much point. –Joke 20:29, 19 September 2006 (UTC)[reply]

Finally remembered... Wrong trick. Should have used -- Fuzzyeric 02:24, 20 September 2006 (UTC)[reply]

Ummm... anyway... there should be an infinite number of positive solutions for N, I think. Any N for which N^x slopes down: since N^0 starts at 1, and sin(x) continually returns to 1, there's going to be a solution for x as long as it moves downward. And... x will be negative if N > 1. So, how about N >= 0 for a solution? As for x, there's going to be tons of solutions for it, but eventually they will all approach positions where sin(x)=0. - Rainwarrior 04:51, 20 September 2006 (UTC)[reply]

Or are you actually hoping to create a sine with an exponential function? At best it would be a terrible approximation. With several terms having coefficients it could be improved... but, ah, I suppose even a broken clock is correct twice a day, similar with this one. - Rainwarrior 05:00, 20 September 2006 (UTC)[reply]

Finding coefficients of a series

Suppose I have a function which can be presented in the form:

f(x) = a_0 + a_1/x + a_2/x^2 + a_3/x^3 + ⋯

The coefficients a_k are unknown and need to be found. I can (using a different method) evaluate the function numerically for any integer x, but this is quite expensive computationally, especially if x is large. I also know that the a_k are rational numbers with reasonable denominators, so if I find that a coefficient is numerically close to a rational number, I can assume it is equal to it. Any ideas about an efficient method to find the coefficients? Thanks. -- Meni Rosenfeld (talk) 13:08, 19 September 2006 (UTC)[reply]

You can take
g(x) = f(1/x),
and basically you need to find the coefficients of the Taylor series with the restriction of not evaluating the function at small x. Maybe numerical differentiation will be helpful. Conscious 14:26, 19 September 2006 (UTC)[reply]
The formula you give is actually a Laurent series, which is similar to a Taylor series, but allows for negative powers as well. I have no idea if it makes any sense in your situation, but if you could extent your function to a holomorphic one, defined on the complex plane (or a subset thereof), perhaps you can use the standard procedure for finding the coefficients of Laurent series. —Bromskloss 15:14, 19 September 2006 (UTC)[reply]

Thank you for your response. Unfortunately, in practice this method seems to reduce to picking values x_0, ..., x_n, and solving the equations (for every 0 ≤ i ≤ n):

f(x_i) = a_0 + a_1/x_i + ⋯ + a_n/x_i^n,

and the numerical differentiation article does little to suggest which choice for the x_i will be optimal. Also, it seems to me that there should be a better method. Any additional ideas will be appreciated. -- Meni Rosenfeld (talk) 15:02, 19 September 2006 (UTC)[reply]

A regular grid is usually used for differentiation, so this would suggest x_i = N/(i+1) for i = 0, ..., n (N should be divisible by 1..n+1 if you can calculate f for integer arguments only). I'm not sure if it's optimal and if the method is efficient (though you get n coefficients from evaluating f in n points). As Fuzzyeric points out below, you can just use Lagrange polynomial (or any other form of interpolation polynomial) instead of solving the system. Conscious 15:45, 19 September 2006 (UTC)[reply]
Interpolation methods require that the function be smooth enough. If you know or assume this is the case, then...
If you can do some additional things to f(), then maybe:
  • If you can (repeatedly) integrate and compute the residue of f() at zero, then you can use what amounts to the Taylor series expansion for this Laurent series. (Evaluate at zero, get a0. Integrate. Evaluate at zero, multiply by a constant, get a1. Iterate.)
  • If you can analytically take the inverse Fourier transform of f() on a circle on the complex plane near enough to zero that no poles are contained in the circle, then you can do the above more or less all at once by application of Cauchy's integral formula.
  • If you know that f() has only a semi-infinite or finite domain of definition, then fitting by polynomials, orthogonal on the domain of definition, is a good idea.
If you are unable to augment the implementation of f(), then...
  • You can replace x -> 1/x and use your polynomial interpolation method of choice. E.g. Lagrange polynomial.
  • You can numerically sample f() on a circle around zero and, as above, apply Cauchy's theorem.
You may be able to bound the growth of the ak if you find that f() doesn't converge inside a circle centered at the origin. (Equivalent to f(1/x) converges inside such a circle.)
You may be able to constrain the denominators of the ak if you find that f() is simple p-adicly for some p. -- Fuzzyeric 15:16, 19 September 2006 (UTC)[reply]

Thanks. Regrettably, most of these suggestions won't work for me, as my ability to manipulate f is very limited - I can only evaluate it numerically for positive integers. Perhaps I was too optimistic about how efficient this process can be. I guess the most useful one will be the Lagrange polynomial, which can streamline the solution of my equations above. Any suggestions on how to pick the interpolation points optimally? -- Meni Rosenfeld (talk) 15:42, 19 September 2006 (UTC)[reply]

It sounds like you're limited to evaluating f(n). So to convert the Laurent series to a Taylor series, you'll be working with g(1/n) = f(n). You can only evaluate g() at a sequence of points between 0 and 1 (and accumulating at 0). If you actually had freedom to choose points, I'd recommend the Chebyshev roots, but you don't have this freedom. Since you don't really get to move your points around, I have two recommendations:
Rigorous: Polynomial interpolation refers to the Lagrange form of the interpolation polynomial (which I learned as a ratio of determinants). This is a direct method for acquiring the coefficients.
Less than rigorous: I'd recommend selecting evaluation points to be linearly independent in their prime power vectors. (By the Fundamental Theorem of Arithmetic, there's a bijection between integers and the sequence of exponents of the primes in their prime decomposition. For instance 15 = 2^0 3^1 5^1 <-> {0,1,1, 0,0,0,...}). Make sure these vectors are linearly independent. A way to do this is to select integers that are powers of an irrational. Historically, powers of the golden mean are popular. (Probably due to convenient convergence properties for subdividing root-finding intervals.) -- Fuzzyeric 16:36, 19 September 2006 (UTC)[reply]

Okay, I guess this is enough information to get me going. Thank you all for your help. -- Meni Rosenfeld (talk) 17:37, 19 September 2006 (UTC)[reply]

Just to chime in...I wonder if you could make use of the z-transform of the function somehow? --HappyCamper 03:05, 20 September 2006 (UTC)[reply]

This won't be practical for me, no. -- Meni Rosenfeld (talk) 05:33, 20 September 2006 (UTC)[reply]
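A small numpy sketch of the straightforward fitting approach discussed above (solve the linear system relating samples f(n) to the unknown a_k); the "true" coefficients below are made up purely so the fake f can be evaluated:

import numpy as np

true_a = np.array([1.0, -2.0, 0.5, 3.0])            # stand-in for the unknown coefficients
def f(x):
    return sum(a / x**k for k, a in enumerate(true_a))

n_coeffs = 4
xs = np.array([5, 7, 11, 13, 17, 19], dtype=float)  # integer sample points
V = np.vander(1.0 / xs, n_coeffs, increasing=True)  # columns (1/x)^0 ... (1/x)^3
values = np.array([f(x) for x in xs])
a_fit, *_ = np.linalg.lstsq(V, values, rcond=None)
print(a_fit)                                        # recovers [1, -2, 0.5, 3] up to rounding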

TI 83 Substitute

I am in need of a website that has functions similar to a TI 83 Calculator...or better, a TI 83 Emulator-type site? Pretty much, I will not have my TI 83 until it gets shipped out here, and I need a better calculator than the damn windows one. Thanks ChowderInopa 22:22, 19 September 2006 (UTC)[reply]

Not sure about TI-83 functionality, but G-calc is a nice graphing calculator: [3]. StuRat 22:54, 19 September 2006 (UTC)[reply]
There is a flash applet you can download of the TI-83 or 82 out there somewhere. — [Mac Davis](talk) (SUPERDESK|Help me improve)00:08, 20 September 2006 (UTC)[reply]
There's an emulator called VTI you can download here. You will need a ROM image for a TI83+/84+, but you can download them. I think they're of dubious legality unless you own one, so I won't link to it. 206.124.138.153 00:19, 20 September 2006 (UTC)[reply]

Statistics Q

I have the following table:

                       Unemployed (Y=0)  Employed (Y=1)   Tot
Non Graduate (x = 0)      0.045            0.709          0.754
Graduate (x = 1)          0.005            0.241          0.246
Tot                       0.050            0.950          1

And I am asked for the E(Y), which is the expected value of Y. Is it merely 0.95?

Yes, for the total population. In general, for a discrete random variable X, the expected value E(X) = Σ (v × Pr(X=v)), summing over all possible values v for X. So for a r.v. Y that has a range {0,1}, we find an expected value E(Y) = (0 × Pr(Y=0)) + (1 × Pr(Y=1)) = Pr(Y=1).  --LambiamTalk 11:19, 20 September 2006 (UTC)[reply]

Could use some tips for modelling a hot air balloon

I'm trying to make a simple hot air balloon simulator of sorts (actually, a Morrowind mod). I'd like to make it reasonably realistic. So, I need to find how hot the inside of the balloon is as a function of how long it has been heated by a flame. I've already found an equation online for calculating the buoyant force, but that requires the temperature. If any physicsy types could help out I'd really appreciate it; even a rough approximation would be great. BungaDunga 00:31, 20 September 2006 (UTC)[reply]

Well, as a rough model I suppose you could work with the following ideas: depending on how strong you've got the fire going, say heat is going into the balloon at a constant rate k1. Heat is also going out of the balloon at a rate proportional to the temperature difference between the air in the balloon and the surrounding air (Newton's law of cooling), and the temperature is roughly a linear function of the temperature, so essentially we have dT/dt = k1 + k2(T − Tsurr). You may have to vary Tsurr depending on the height, though. Confusing Manifestation 00:58, 20 September 2006 (UTC)[reply]
Here k2 would be negative. --LambiamTalk 04:22, 20 September 2006 (UTC)[reply]
Suppose you meant 'and the temperature is roughly a linear function of the heat', ConMan. --CiaPan 05:45, 20 September 2006 (UTC)[reply]
Is that function the derivative of f(T) or something similar? I've yet to learn calculus in school, so I don't have much knowledge of it (though I shall next year). I'm not sure how I would apply that equation. Something like this, maybe?
var temp = 5;      // current air temperature inside the balloon
var k1 = 5;        // heating rate from the burner
var k2 = .1;       // cooling constant (Newton's law of cooling)
var tSurr = 3;     // temperature of the surrounding air
var xV = 0;        // (unused for now)
onEnterFrame = function(){
    temp += k1 - k2*( temp - tSurr );   // one step per frame of dT/dt = k1 - k2*(T - Tsurr)
    trace(temp);
};

September 20

Words for the terms in an implication/entailment

What are the words for the terms in an entailment? For example, if A ⊨ B, is A the "implicand" or the "implicant" or the "antecedent" of this entailment? Similarly, what's the word for B? I'm looking for something similar to addend. Having this word on hand would really help me think about the mathematical logic work I'm doing. I checked the entailment article, and asked on the talk page there, but I haven't gotten any responses. Thanks! -- Creidieki 12:43, 20 September 2006 (UTC)

I'd call them antecedent (for A) and consequent (for B). You could also use premise and conclusion. "Implicant" means literally: "that which implies" and might be used for A, while "implicand" means literally: "what is to be implied" and might be used to refer to B, but not to A. The similarity of these two words in English is confusing, and as, furthermore, entailment is not the same as implication, it is better to avoid them for use in relation to entailment. --LambiamTalk 15:43, 20 September 2006 (UTC)[reply]
Thank you! On a related note, are "implicant" and "implicand" the best terms to use when talking about an implication (rather than an entailment)? -- Creidieki 16:44, 20 September 2006 (UTC)
I would still use "antecedent" and "consequent", if only because of the confusing similarity between "implicant" and "implicand"; moreover, these terms are not widely used. Our Wikipedia article on material implication also uses "antecedent" and "consequent". --LambiamTalk 20:19, 20 September 2006 (UTC)[reply]

San Juan

The lowest temperature ever recorded in San Juan, Puerto Rico, is 60 degrees Fahrenheit. Write an inequality for T representing San Juan's recorded temperatures.

Do your own homework: if you need help with a specific part or concept of your homework, feel free to ask, but please do not post entire homework questions and expect us to give you the answers. Letting someone else do your homework makes you learn nothing in the process, nor does it allow us Wikipedians to fulfill our mission of ensuring that every person on Earth, such as you, has access to the total sum of human knowledge. -- Meni Rosenfeld (talk) 16:36, 20 September 2006 (UTC)[reply]
In the problem statement, it is the intention that T stands for any temperature (in degrees Fahrenheit) that may have been recorded in San Juan, Puerto Rico. While we do not know which temperatures have been recorded, we do know that 14 °F is not one of them. But, for all we know – if we only use what is given – it may have been 451 °F on a particularly hot day in 1898. The inequality involving T must therefore be such that if you replace T in it by 451, it becomes a true statement. That already rules out many things, such as T > 666 and T ≤ 2. A possible solution is T ≥ −459.67. But that solution does not take account of everything that is given in the problem statement. The information from the first sentence is not used at all. You want not just any inequality that is true for all recorded temperatures: you want it to be as strong as possible. In particular, you want the inequality to be such that it rules out all temperatures that cannot have been recorded, namely those that are lower than 60 °F. So if you substitute any lower value, for example 59, for T, you want the inequality to become a false statement. I hope this helps. --LambiamTalk 20:40, 20 September 2006 (UTC)[reply]
This is a joke right? Or do you really have problems understanding the concept of lowest and the concept of greater than? 202.168.50.40 00:23, 21 September 2006 (UTC)[reply]
The first two responses are appropriate; this third one is absolutely not. We can reasonably expect someone to read and abide by the guidelines at the top of this page, and chide them when they do not. We can offer guidance in a subject area without providing homework answers.
We want people to ask questions here, and to feel comfortable revealing their areas of ignorance in order to learn. Mocking and insulting responses will not achieve these goals. Please do not do this again. --KSmrqT 13:16, 21 September 2006 (UTC)[reply]

Wythoff's Game

Does anyone know where I can find the solution to Wythoff's Game, with a proof? There are a few sites scattered around that claim staunchly that it's all about the Golden Mean and the Fibonacci numbers, and as far as I can tell they're right, but nobody bothers to really explain it. Black Carrot 17:13, 20 September 2006 (UTC)[reply]

Whoa! I just read the page. Rounding down to integers?! That sounds like things Ramanujan came up with. Such occult phenomena! Surprising number sequence, in any case. Do you have more link tips? —Bromskloss 20:00, 20 September 2006 (UTC)[reply]
I am sure I read a pretty full analysis of the game in John Conway's book "On Numbers and Games" - but that was nearly 30 years ago, so I might have mis-remembered ... but I still have the book somewhere, so if you are really desperate, I will try and find it :-) (unless you have access to a copy) Madmath789 20:16, 20 September 2006 (UTC)[reply]
Sure. Iirc, a complete proof is given in this book:
Csákány Béla, Diszkrét matematikai játékok. Polygon, Szeged, 2nd ed, 2005.
This is a nice easily understandable book that requires little prior knowledge. I recommend it heartily. – b_jonas 11:31, 21 September 2006 (UTC)[reply]

Searching for variations on -wythoff nim- gives quite a bit. If it helps, I can easily prove the claim at [4], which gives a vague link to Fibonacci-esque numbers. And the whole rounding-down thing comes pretty naturally (I thought of it too, just didn't have a number to plug in) from writing the numbers out in order. They aren't going by intervals of 1, or 2, or 1.5, or 1.6, but seem to be jittering around in that area. Yeah, I'd appreciate it if you could find that book. Black Carrot 13:34, 21 September 2006 (UTC)[reply]

Liquid Mixture Puzzle

A while ago I found this puzzle that I cannot solve. It states that:

  • There is a tank of water, with a capacity of 100 litres.
  • This tank contains 100 litres of LiquidA
  • This tank has 2 pipes connected to it, one at the top and one at the bottom
  • From the top pipe, LiquidB flows into the tank at 1 litre per minute
  • From the bottom pipe, the contents of the tank (both LiquidA and LiquidB - assume they mix perfectly throughout the tank) both flow out at 1 litre per minute

What is the ratio of LiquidA to LiquidB in the tank after one hour?

I do not seem to be able to solve this problem using algebra and after some searching, it seems like I need to know some calculus in order to solve it (which I don't). Can someone tell me the answer please, and how to work it out? If the answer requires knowledge of calculus, can someone please point me to a good reference that is easy to understand in order to learn it (if I wait for school to teach me it, I will have to wait around 3 to 4 years - I am currently 14)? Thanks for any help you can provide.

P.S. Side question: Does anyone know why in England, you have to wait until Year 13 to do logarithms? I have had a look at them and they don't seem particularly difficult. Also, they seem quite useful, but if you do not take Maths for A Level, you will not learn about them. Strange... -80.229.152.246 21:06, 20 September 2006 (UTC)[reply]

Solving the problem exactly means setting up and solving a differential equation. You can get an approximate numerical solution using repeated algebraic computations: see numerical analysis. --Serie 21:41, 20 September 2006 (UTC)[reply]

Here is how you solve it.

(1) Change the unit of time to minute. Having a consistent unit of time helps a lot.

(2) Have two variables VolA and VolB which represent the volume of each type of liquid within the Tank

(3) Write out the equation which represents the volume of each type of liquid in the Tank.

VolA(t) = VolA(0) + input_VolA(t) - output_VolA(t)
VolB(t) = VolB(0) + input_VolB(t) - output_VolB(t)
Then determine output_VolA(t) and output_VolB(t). Here VolA(0) = 100 and VolB(0) = 0.

(4) input_VolA(t) = 0

(5) input_VolB(t) = int(t=T0,t=T,1,dt)

(6) output_VolA(t) = int(t=T0,t=T,ratio_A(t),dt)

(7) output_VolB(t) = int(t=T0,t=T,ratio_B(t),dt)

(8) ratio_A(t) = VolA(t)/( VolA(t) + VolB(t) )

(9) ratio_B(t) = VolB(t)/( VolA(t) + VolB(t) )

(10) Solve the differential equation.

202.168.50.40 00:30, 21 September 2006 (UTC)[reply]
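If you only want a number without the calculus, the numerical route Serie mentions takes just a few lines. Here is a rough Python sketch (the step size and variable names are my own choices):

# Numerical approximation: LiquidA is never added, and the outflow removes the
# mixture at 1 litre per minute, so A leaves at the rate (volA / 100) litres per
# minute, since the tank always holds 100 litres in total.
dt = 0.001             # time step in minutes; smaller means more accurate
volA = 100.0           # litres of LiquidA currently in the tank
t = 0.0
while t < 60.0:
    volA -= (volA / 100.0) * dt
    t += dt
volB = 100.0 - volA
print(volA, volB, volA / volB)   # roughly 54.88, 45.12, ratio about 1.22

This agrees with the exact answer from the differential equation, VolA(t) = 100·e^(−t/100), which gives 100·e^(−0.6) ≈ 54.88 litres of LiquidA and about 45.12 litres of LiquidB after one hour.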

Math problem

I know you guys don't do homework, but I've got this question and our teacher just told us to use trial and error or the Pythagorean triplets to solve it because he said that the algebraic solution is too difficult. But I'm just curious, how do you solve this algebraically?:

a) Express the ratio of the volume of a cone to the surface area of the same cone (including the bottom)

b) Using the answer from part (a), make the ratio of volume to surface area equal to 1. What are the values of the radius, height and slant height?

The answer is radius=6, height=8 and slant height =10. But can someone post the solutions to part b? Jamesino 22:47, 20 September 2006 (UTC)[reply]

This requires that we work with dimensionless units. Equating the formulas for volume and surface area for a cone with radius r and height h gives us:
(1/3)πr²h = πr² + πr√(r² + h²).
Divide both sides by πr/3:
rh = 3r + 3√(r² + h²).
Subtract 3r from both sides, and square:
(rh − 3r)² = 9(r² + h²).
Multiply out and bring everything to one side:
r²h² − 6r²h − 9h² = 0.
Divide by h, and then solve for h:
h = 6r²/(r² − 9).
Any value of r > 3 will give you now a value for h such that together they are a solution. For example, r = 21 gives h = 49/8, which results in a slant height of 175/8 and a volume and surface area of 7203π/8. If you want both r and h to be whole numbers, note that h − 6 = 54 / (r² − 9) must also be an integer, so r² − 9 must be an integer divisor of 54, leaving for r² only the possibilities 10, 11, 12, 15, 18, 27, 36, and 63. The only square is 36, giving r = 6 and h = 8. --LambiamTalk 00:11, 21 September 2006 (UTC)[reply]
Niggling note: The step "Divide both sides by πr/3" should be accompanied by a test that r ≠ 0. This test would show that r=0 is a solution (but not an interesting one). -- Fuzzyeric 15:47, 21 September 2006 (UTC)[reply]
Good point. It would also be necessary, because of the fifth step, to test h=0, but that's not a solution. Black Carrot 16:56, 21 September 2006 (UTC)[reply]
There is no need to test for r ≠ 0. In fact, how could we "test" for that? – we are trying to find the value of r here. What I'm doing is deriving a sufficient condition for the equation to hold. By reducing something of the form r·X = 0 to X = 0, I discarded the solution r = 0. However, since a cone is constrained to have a positive radius, it is harmless to drop that case, which I did tacitly. Likewise, I did not explicitly mention my throwing overboard the algebraic solutions with negative h. --LambiamTalk 19:53, 21 September 2006 (UTC)[reply]
Thanks a lot =) Jamesino 21:04, 21 September 2006 (UTC)[reply]
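As a quick sanity check of the derivation above (not part of the algebra, just verification), a few lines of Python confirm both the integer solution and the general formula h = 6r²/(r² − 9):

import math

# Check that the cone's volume equals its total surface area (base plus lateral surface)
def equal_vol_and_area(r, h):
    slant = math.hypot(r, h)                      # slant height
    volume = math.pi * r**2 * h / 3
    area = math.pi * r**2 + math.pi * r * slant
    return math.isclose(volume, area)

print(equal_vol_and_area(6, 8))                        # True: the integer solution
r = 21
print(equal_vol_and_area(r, 6 * r**2 / (r**2 - 9)))    # True: the general formula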

September 21

Ok, so we have the trigonometric sine, and then the hyperbolic sine...

The trigonometric sine is related to the unit circle, while the hyperbolic sine is related to the unit hyperbola. So, can there be sines for other conic sections as well? The parabolic sine and elliptic sine, perhaps?

Thanks for satisfying my mathematical urges 0_0 --ĶĩřβȳŤįɱéØ 06:48, 21 September 2006 (UTC)[reply]

The unit circle is described by the parametric equation x = cos t, y = sin t. The unit hyperbola is described by x = cosh t, y = sinh t. More generally, the equation x = a cos t, y = b sin t describes an ellipse (so there is no need for separate "elliptic" sine and cosine) and the equation x = a cosh t, y = b sinh t describes a hyperbola. Since the equation x = at, y = bt² describes a parabola, the natural candidates for "parabolic" sine and cosine are cosp(t) = t, sinp(t) = t², which aren't at all interesting. So no. -- Meni Rosenfeld (talk) 09:36, 21 September 2006 (UTC)[reply]
Not quite — there's nothing "natural" about choosing x = t, y = t²; x = t³, y = t⁶ would do equally well.
If you use the hyperbolic functions to parametrise the hyperbola, by setting, say, x = cosh(t), y = sinh(t), the normalisation property is that the parameter t is twice the area of the sector swept out from the origin (and the equivalent holds for cos and sin).
I'm going to leave solving this for the parabolic curve as an exercise for the reader, but would like to caution about the choice of "focus" in a parabola. Using the origin might not be the right thing.
RandomP 14:47, 21 September 2006 (UTC)[reply]
Not directly relevant to your original question, but there are some elliptic functions: sn, cn, and dn covered in the article Jacobi's elliptic functions that you might find interesting. Madmath789 10:04, 21 September 2006 (UTC)[reply]
There is also an infinite family of "sines" and "cosines" based on q-exponentials. --HappyCamper 15:05, 21 September 2006 (UTC)[reply]
Parabolic trigonometry arises in the Kleinian geometry described in the book by I. M. Iaglom, A simple non-Euclidean geometry and its physical basis: an elementary account of Galilean geometry and the Galilean principle of relativity, which bears the same relationship to "Newtonian spacetimes" as Minkowski geometry bears to Lorentzian manifolds, for which see the book by Misner, Thorne, and Wheeler, Gravitation. More technically:
  • E1,1 is a plane geometry (defined by an indefinite but nondegenerate quadratic form) which can serve as the model for tangent spaces to a two dimensional Lorentzian manifold,
  • E1,0 is a plane geometry (defined by a degenerate quadratic form) which can serve as the model for tangent spaces to a two dimensional Newtonian spacetime,
  • E2 is a plane geometry (defined by a positive definite quadratic form) which can serve as the model for tangent spaces to a two dimensional Riemannian spacetime.
There are algebraic formulations in which these three geometries arise from three kinds of Cayley-Klein algebras (a generalization of the complex number field considered as a two-dimensional real algebra). I wrote a Wikipedia article about this stuff once but (sigh) some ignorant mathcrank munged it out of existence. Unfortunately the book by Iaglom (a translation from the Russian) is rather hard to find; the only remaining copy which I know of is in the collection of the research library of the Los Alamos National Laboratory. You might be able to get it on interlibrary loan, even through your public library system, if you happen to live in the Western U.S. It's a surprisingly elementary book (originally written I think for Russian high school students), and parabolic trigonometry is significantly easier than circular or hyperbolic trig. (If you've seen books by Jaglom or Yaglom, this is the same author; at different times his name has been transliterated in many ways, which leads to all kinds of confusion).---CH 21:44, 21 September 2006 (UTC)[reply]

The rule of 17

What is the rule of 17? It appears to be used in design considerations for equal temperament. --152.62.109.163 10:29, 21 September 2006 (UTC)[reply]

When placing frets on an equal tempered guitar (e.g.) the distance to the next fret (a minor second interval) can be calculated by multiplying the octave length (in inches, e.g.) by the 12th root of 2 minus 1 (2^(1/12) - 1, or approx. 0.059463). This is the same as dividing by 16.817, which is sort of close to 17. Maybe that's it.---Sluzzelin 13:40, 21 September 2006 (UTC)[reply]
Close, but no cigar. And two explanations I found on the web state that 17.817 is the 1/12th root of 2, which is nonsense.
The true explanation depends on a little physics, a little perceptual psychology, a little mathematics, and artistic taste.
A vibrating guitar string is not nearly as stiff as a metal bar, nor even a piano string; so half a string produces a frequency twice as high as a full string. Perceptually, a double frequency sounds similar to the original, and very harmonious. In terms of a musical scale, the higher note is said to be an "octave" above the lower one, from the eight steps in a Western major scale. (The notes of a C-major scale are C, D, E, F, G, A, B, c.) The scales we use in Western music are originally based on simple frequency relationships; for example, a "fifth" (C to G) was a ratio of 3:2, a "fourth" (C to F) was a ratio of 4:3, and a major "third" (C to E) was a ratio of 5:4. However, around the time of Johann Sebastian Bach, composers and performers on instruments like the harpsichord began to find this tuning decidedly inconvenient. The problem was that what sounded good for C-major sounded awful for G-major or F-major. Thus a transition was made to a chromatic scale where the ratio between any two adjacent notes was exactly the same, with twelve steps in an octave. If an "A" had a frequency of 440 Hz, then the ratio of "C" to "B", say, had to be the 12th root of 2, approximately 1.059463:1. The fifth became 1.498:1 instead of 1.5:1, the fourth became 1.335:1 instead of 1.333:1, and the major third became 1.2599:1 instead of 1.25:1. Every interval except the octave has been perturbed a little, but now the intervals work equally well in every key.
The guitar is a fretted string instrument, and (to a first approximation) the ratio of full string length to fretted string length gives the frequency ratio of shorter to longer. Thus a fret halfway along the string doubles the frequency, producing a pitch an octave above the unfretted pitch. Where do we place the other frets? We expect the fifth to have a fret at about 1/3, producing a ratio of 3:2, but we really need something systematic.
If the "E" string, say, has a length of L = 650 mm, what proportion p of its length should we remove (by fretting) to produce an "F" note, the next chromatic note above it? Well, if we remove L/p we are left with LL/p, and the ratio of L to this should be the twelfth root of 2, 21/12. The answer is easily found, and happens to be
21/12(21/12−1) ,
which is approximately 17.8172 — hence the "rule of 17". (Musicians are not necessarily the best mathematicians!) And because the ratios are all equal, the next fret should cut off the same proportion of the remaining length, and so on. --KSmrqT 15:22, 21 September 2006 (UTC)[reply]
KSmrq's answer is, of course, nearly equivalent to Sluzzelin's: 16/17 is approximately 2^(-1/12), and 18/17 is approximately 2^(1/12) (though Sluzzelin was closer, I wouldn't bet on many musicians realising that).
RandomP 15:43, 21 September 2006 (UTC)[reply]
Vincenzo Galilei was quite fond of 18/17 (and so were luthiers of his time), but there are even better rational approximations... 53/50, 71/67, etc... (I put a list up at Talk:Semitone recently.) - Rainwarrior 19:35, 21 September 2006 (UTC)[reply]
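To see the arithmetic in action, here is a small Python sketch comparing the exact divisor 2^(1/12)/(2^(1/12) − 1) ≈ 17.817 with a literal divisor of 17 over one octave (the 650 mm scale length is only an example):

# Fret positions: exact equal temperament versus a literal "rule of 17",
# where each fret is placed by dividing the remaining length by the divisor.
scale = 650.0
exact_divisor = 2 ** (1 / 12) / (2 ** (1 / 12) - 1)   # about 17.817
for divisor in (exact_divisor, 17.0):
    remaining = scale
    for fret in range(12):
        remaining -= remaining / divisor              # cut off the same proportion each time
    # frequency ratio after twelve frets; 2.0 would be a perfect octave
    print(round(divisor, 3), scale / remaining)

With the exact divisor the twelfth fret halves the string exactly (ratio 2); with 17 the "octave" comes out at about 2.07, noticeably sharp.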

Probability Q

I don't understand how the correlation coefficient relates to finding the mean and variance of a summation. For example, Men make 40,000 a year with SD of 12,000, women make 45,000 with SD of 18,000. Now assuming I am given an Rsquared (CC) of 0.7 between male and female earnings, how does this affect the mean and variance of total household income for dual earner families? I assume the mean would be 85,000, but then I am afraid of multiplying the variances and square rooting, because this doesn't involve the 0.7...help?

Edit: I attempted to use the equation: Var(X+Y) = VarX + VarY + 2 Covariance (XY) but to find the covariance using the Correlation Coefficient (CC) I need to use the eqn: Corr(XY) = Cov(XY)/(SDX*SDY) but that gave me a covariance of 151.2 million (because it becomes: 0.7 = Cov/(12000*18000))

...Help?

What's wrong with a covariance of 151.2 million? -- Meni Rosenfeld (talk) 19:17, 21 September 2006 (UTC)[reply]

Well...that would mean that the new variance of the total household income, using Var(X+Y) = VarX + VarY + 2 Covariance (XY), would be IMMENSE...and that would also mean that the Variances of the individual X and Y would barely factor in, which seems counterintuitive. Is the 151.2 million an accurate intermediate step? ChowderInopa 22:02, 21 September 2006 (UTC)[reply]
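A quick check of the scale of these numbers (treating the 0.7 as the correlation coefficient, as in the formula above):

# Var(X + Y) = Var(X) + Var(Y) + 2*Cov(X, Y), with Cov = corr * SD_X * SD_Y
sd_x, sd_y, corr = 12000, 18000, 0.7
cov = corr * sd_x * sd_y                 # 151.2 million, as computed above
var_sum = sd_x**2 + sd_y**2 + 2 * cov    # variance of total household income
print(cov, var_sum, var_sum ** 0.5)      # 151.2e6, 770.4e6, SD about 27,756

The variance only looks immense because it is measured in squared dollars. The standard deviation of the combined income comes out at about 27,800, versus about 21,600 if the two incomes were uncorrelated, so the individual variances still matter and nothing is out of scale.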

Histogram

Does any one know why a histogram is named such?--Willworkforicecream 17:51, 21 September 2006 (UTC)[reply]

The basic meaning of Greek histos is: "something that has been made to stand upright", with specific meanings of "mast", "beam" and "loom". The -gram part means: "something that has been written". Together: "something written with beams". --LambiamTalk 19:17, 21 September 2006 (UTC)[reply]

Displacement, Velocity, Acceleration, Jerk, ...

The first derivative (with respect to time) of displacement is velocity, the second derivative is acceleration, and the third derivative is jerk. Is there a word for the fourth derivative of displacement? —Mets501 (talk) 20:02, 21 September 2006 (UTC)[reply]

This is discussed in the Jerk article. Chuck 20:30, 21 September 2006 (UTC)[reply]
Wow I'm blind! I thought I read the whole article... Thanks anyway Chuck :-) —Mets501 (talk) 20:49, 21 September 2006 (UTC)[reply]


Bijection from Z^2 to Z

Sorry for the notation in the subsection header. I'm looking for a simple bijection Z² → Z. Since these two sets have the same cardinality, we know some bijection must exist, but I'm wondering if there's a fairly simple function that exhibits this property.

-- Braveorca 23:55, 21 September 2006 (UTC)[reply]
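One standard construction (just one of many possibilities, and not necessarily the simplest) is to fold each integer into a natural number, combine the two naturals with the Cantor pairing function, and fold the result back into an integer; each step is a bijection, so the composite is too. A Python sketch for concreteness:

def z_to_n(k):
    # bijection Z -> N: 0, -1, 1, -2, 2, ... map to 0, 1, 2, 3, 4, ...
    return 2 * k if k >= 0 else -2 * k - 1

def n_to_z(n):
    # inverse of z_to_n
    return n // 2 if n % 2 == 0 else -(n + 1) // 2

def cantor_pair(a, b):
    # Cantor pairing function, a bijection N x N -> N
    return (a + b) * (a + b + 1) // 2 + b

def bijection(m, n):
    # the composite bijection Z^2 -> Z
    return n_to_z(cantor_pair(z_to_n(m), z_to_n(n)))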