Wikipedia:Reference desk/Mathematics

September 20

Factoring a Binomial

Hey, I was studying together with an older friend and I was trying to factor this equation: ax^2 + bx + c = 0, right? You know, trying to set it to (x-p)(x-q) = 0 to find the roots.

So I go about trying to solve it the way they taught me in school, you know, find two numbers whose product = ac and their sum = b. I ask my friend how he does this problem, and he shows me that he uses this equation: "(-b ± √(b^2 - 4ac))/2a". (That means it's all divided by "2a".)

I look into it, and I find out that the 2 numbers he gets aren't the p and q roots I'm looking for, but r1 and r2: 0 = a[(x-r1)(x-r2)]. So I say to him he's wrong, but he just shows me this problem sheet he has titled "Signdiagrams" (which has the answers on the back), and it tells him I'm wrong. What's going on? --24.76.248.193 01:45, 20 September 2007 (UTC)[reply]

a*x^2 + b*x + c = 0
We also have
(x-p)*(x-q) = 0
x^2 - (p+q)*x + (p*q) = 0
Multiply both sides by A
A*x^2 - A*(p+q)*x + (A*p*q) = A*0
Now all we have to do is match the following
Find p and q such that
(1) A = a (really easy task)
(2) A*p*q = c
(3) -A * (p + q) = b

202.168.50.40 02:04, 20 September 2007 (UTC)[reply]

Umm, what's the difference? If 0=a(x-r1)(x-r2), then 0=(x-r1)(x-r2). i.e. r1 and r2 are p and q. --Spoon! 02:28, 20 September 2007 (UTC)[reply]
Well, try substituting your equations into his. I'll assume a = 1 for simplicity, but you can put it back in:
x^2 + bx + c = 0
and
(x - p)(x - q) = x^2 - (p + q)x + pq = 0
Therefore: b = -(p + q) and c = pq, and also
r = (-b ± √(b^2 - 4c))/2
So you have:
r = ((p + q) ± √((p + q)^2 - 4pq))/2
Looking at the square root term:
(p + q)^2 - 4pq = p^2 - 2pq + q^2 = (p - q)^2
This means:
r = ((p + q) ± (p - q))/2, i.e. r1 = p and r2 = q
So... what's the difference between your two formulae? - Rainwarrior 02:57, 20 September 2007 (UTC)[reply]
The two of you may be solving different problems. Consider the quadratic polynomial 5x^2 + 5x − 60. When x is three the polynomial is zero, which is what your friend finds using the quadratic formula,
x = (−5 ± √(5^2 − 4·5·(−60)))/(2·5) = (−5 ± 35)/10, giving x = 3 or x = −4.
But when we consider factors, we notice that every term of the polynomial is a multiple of five; in fact,
5x^2 + 5x − 60 = 5(x^2 + x − 12) = 5(x − 3)(x + 4).
We will have this same problem whenever the coefficient of the x^2 term is not one, as we can easily verify by multiplying out (x − p)(x − q). For his root formula a common nonzero constant factor, λ, cancels out.
Before we try to find p and q, we need to divide the polynomial by a. Thus
x^2 + (b/a)x + (c/a) = (x − p)(x − q).
In other words, we should have pq = c/a and p + q = −b/a. --KSmrqT 02:51, 20 September 2007 (UTC)[reply]

ahh.. thanks. I don't really understand what happened at

(x-p)*(x-q) = 0
x^2 - (p+q)*x + (p*q) = 0

but I'm sure I'll figure it out. --24.76.248.193 04:28, 20 September 2007 (UTC)[reply]


It's multiplication of polynomials using the law a(b+c)=ab+ac. Thus: (x-p)*(x-q) = (x-p)*x-(x-p)*q = x*x-p*x-x*q+p*q = x^2 - (p+q)x + (p*q) . Bo Jacoby 10:46, 20 September 2007 (UTC).[reply]
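A quick way to convince yourself of the pq = c/a and p + q = −b/a relations above is to check them with a computer algebra system. This is a minimal sketch added for illustration, using sympy (the numbers are the 5x^2 + 5x − 60 example from the thread):

 # Expand (x-p)(x-q), then check the root relations for 5x^2 + 5x - 60.
 from sympy import symbols, expand, solve

 x, p, q = symbols('x p q')
 print(expand((x - p) * (x - q)))      # x**2 - p*x - q*x + p*q

 a, b, c = 5, 5, -60
 r1, r2 = solve(a*x**2 + b*x + c, x)   # quadratic-formula roots: -4 and 3
 print(r1 + r2 == -b/a, r1*r2 == c/a)  # True True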

packing in S3

How much is known about sphere packing (or for that matter the Thomson problem) in S^3, the surface of the four-dimensional ball? If N>>120, are there regions of face-centered cubic packing, or what? What are the flaws like? —Tamfang 04:34, 20 September 2007 (UTC)[reply]

I think in four dimensions an optimal configuration is known. The article, I think, mentions that it's known exactly for dimensions 8 and 24, but for a lot of them it's up in the air, as far as I know. We have upper and lower bounds, and for some dimensions we know that the optimal packing is not a lattice, though most of the time the best known packing is a lattice. There's a really neat article linked in the references in Kissing number here: "Kissing numbers, sphere packings, and some unexpected proofs." PDF - Rainwarrior 16:26, 20 September 2007 (UTC)[reply]
Thanks but I think you misunderstood the question. —Tamfang 08:15, 21 September 2007 (UTC)[reply]
I'm sure I did, but I'm curious as to what it is exactly. Are you trying to pack 120 spheres onto a four dimensional sphere? - Rainwarrior 08:44, 21 September 2007 (UTC)[reply]
I'd bet that the best such packing for N=120 is to inscribe the spheres in the dodecahedra of the 120-cell. – In S^2, the surface of the 3-ball, when N>>12 you typically get regions of mildly distorted "kissing pennies" lattice, with 12 flaws where a disk has 5 neighbors rather than 6; in other words, the Voronoi cells are usually 12 pentagons and N-12 hexagons, increasingly regular as N increases. – Since face-centred cubic is the densest packing of balls in flat 3-space, I'm guessing that it shows up in curved 3-space, mildly distorted, when N is big enough. —Tamfang 19:19, 21 September 2007 (UTC)[reply]
Well, if your suggested arrangement is optimal, I like the comparison between your 120-cell cell-centres, and the known optimal dodecahedron's face-centres in S^2. I've never looked at the arrangement in terms of "flaws" before; that's an interesting idea. I notice you say "usually" with your 12 pentagon rule for N > 12 in S^2, for what N does it not hold up? - Rainwarrior 03:27, 22 September 2007 (UTC)[reply]
Sometimes the convex hull shows a square instead of two triangles; for example the best solution for N=24 is (i believe) a snub cube, 6 squares and 32 triangles, rather than the expected 44 triangles, and every node has 5 neighbors [1] – in other words the Voronoi cells are congruent irregular pentagons. —Tamfang 00:30, 24 September 2007 (UTC)[reply]
There is no way to form a (topological) sphere of 12 pentagons and 1 hexagon, so for N=13 the Voronoi regions include a square. [2]Tamfang 17:45, 25 September 2007 (UTC)[reply]

convergence/divergence problem

For what values of "a" is ∫0^∞ e^(ax)·cos(x) dx convergent?

My answer was none, because no matter what, the cosine factor's oscillation will prevent any convergence.

Yes, no? --Mostargue 08:03, 20 September 2007 (UTC)[reply]

Also, using integration by parts several times (and also verifying it with Mathematica):

I found the indefinite integral to be: e^(ax)·(a·cos(x) + sin(x))/(a^2 + 1)

If that is evaluated at infinity, the sin and the cosine will cause it to diverge, no matter what a is. But even if a=0, it is still divergent because then the integral just becomes sine, and that is also divergent.--Mostargue 08:14, 20 September 2007 (UTC) —Preceding unsigned comment added by Mostargue (talkcontribs) 08:14, 20 September 2007 (UTC)[reply]

I'm assuming here that "a" is a constant, but even if it was another function, I still can't think of how it could prevent divergence.--Mostargue 08:16, 20 September 2007 (UTC)[reply]

Are you assuming that a ≥ 0 ? I don't see that stated anywhere in the problem. What happens if a is negative ? Don't rely on your idea that oscillation prevents convergence - this is incorrect. For a counterexample, consider the series
1 − 1/2 + 1/3 − 1/4 + 1/5 − ⋯
- the partial sums of this series oscillate, but it still converges to ln(2). Gandalf61 08:28, 20 September 2007 (UTC)[reply]

Right, but that series is different because the individual terms approach zero. Looking at the graph of the original function, its terms get bigger and bigger. Not a good sign.--Mostargue 08:40, 20 September 2007 (UTC)[reply]

Only if a is positive. Think about what happens if a is negative. Gandalf61 08:48, 20 September 2007 (UTC)[reply]

If a is negative, then the terms will get smaller and smaller. But that doesn't prove convergence. Hmm.. But how would sin(b) be evaluated then?--Mostargue 08:55, 20 September 2007 (UTC)[reply]

Where b is what? If you mean b=infinity, remember you don't (strictly speaking) evaluate at infinity, but rather at finite values and take a limit. Algebraist 11:44, 20 September 2007 (UTC)[reply]
When a is negative, e^(ax) → 0 as x → ∞, so e^(ax)·(a·cos(x) + sin(x))/(a^2 + 1) → 0, since |a·cos(x) + sin(x)| ≤ |a| + 1, a finite constant, and a^2 + 1 is also constant. So ∫0^∞ e^(ax)·cos(x) dx converges, to −a/(a^2 + 1). --Spoon! 13:07, 20 September 2007 (UTC)[reply]
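For a numeric sanity check of that limit, here is a minimal sketch using scipy (the values of a are arbitrary illustrations):

 # For a < 0, the integral of e^(ax)·cos(x) over [0, ∞) should be -a/(a^2+1).
 import numpy as np
 from scipy.integrate import quad

 for a in (-0.5, -1.0, -3.0):
     value, _err = quad(lambda x, a=a: np.exp(a*x) * np.cos(x), 0, np.inf)
     print(a, value, -a / (a**2 + 1))  # numeric result matches the closed form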

standard deviation?

Why is standard deviation calculated with squaring?? Is it just a trick to convert negatives to positives? Wouldn't it amplify the extremely high values to even higher? I'm trying to learn stats and it seems like using absolute difference from the median would make more sense to me to give an indication of spread.

For the data: 2 4 6 7

How far from the median: 3 1 1 2 (reordered as 1 1 2 3)

Now the median of those numbers is 1.5.

Wouldn't that give more info about spread? Why is standard deviation more popular? It seems unintuitive to me.

--Sonjaaa 08:33, 20 September 2007 (UTC)[reply]

Yes, the squaring amplifies high values to even higher, but the square root de-amplifies it again. Yes, the definition of standard deviation is unintuitive. But there is a formula for the standard deviation! There is no formula for your intuitive measure of spread - it does not depend on the data in an analytical way. (See analytic function). Mathematicians want formulas. Bo Jacoby 10:37, 20 September 2007 (UTC).[reply]
Sonjaaa - the measure of dispersion that you describe is called the median absolute deviation or MAD. I can think of two problems with it. The first problem is that it is difficult to calculate for large samples - I think you have to keep track of all the separate sample observations to calculate the MAD, whereas the standard deviation can be calculated just from knowing the sum of the observations and the sum of their squares. The second problem is that the MAD is insensitive to the values of outliers. In your example, look at what happens if we increase the largest observation, 7. If we replace 7 with 8, the MAD increases from 1.5 to 2. But if we then replace 8 by 10 or 20 or even 100, the MAD stays at 2. Intuitively, you would expect 2 4 6 100 to be more dispersed than 2 4 6 8, but with the MAD measure they have the same dispersion. Gandalf61 10:56, 20 September 2007 (UTC)[reply]
This might be slightly technical, but the main reason the SD is computed with squaring is that in reality the SD is a derived statistic from a (function of a) more primitive one, the second sample moment, AKA the sample variance. I say primitive because the population variance is an expectation, while the population SD is not. And the variance is computed from squaring, by definition. The SD is, however, in the same units as the original data, so is easier to interpret.
Ironically, what G61 mentions as the second drawback of the MAD is actually a benefit in those circumstances when it is often used. Baccyak4H (Yak!) 13:53, 20 September 2007 (UTC)[reply]
A further alternative is mean absolute deviation (from the mean), also MAD. This resembles the SD in that it is sensitive to outliers, and in fact for a population reasonably Normal, MAD≈0.8SD. There is a link on the page for this MAD to an article identifying some advantages compared with the SD—86.132.239.39 17:14, 20 September 2007 (UTC)[reply]
Another reason: the sum of squares is additive, i.e. the sum of squares from error can be added to the sum of squares from the "real effect" to give the total sum of squares. I don't think that works if you don't square it but I'm too tired to work it out. Gzuckier 13:55, 21 September 2007 (UTC)[reply]
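To see the three measures discussed above side by side, here is a minimal sketch using numpy (the outlier value 100 is just for illustration):

 import numpy as np

 def spread(data):
     x = np.asarray(data, dtype=float)
     sd = x.std(ddof=1)                                # sample standard deviation
     mad_median = np.median(np.abs(x - np.median(x)))  # median absolute deviation
     mad_mean = np.mean(np.abs(x - x.mean()))          # mean absolute deviation
     return sd, mad_median, mad_mean

 print(spread([2, 4, 6, 7]))    # median-based MAD is 1.5, as computed above
 print(spread([2, 4, 6, 100]))  # SD explodes; the median-based MAD barely moves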

Statistics

I am trying to understand a relatively simple concept in statistics. It probably relates more to semantics and interpretation (than to mathematical calculations per se). Nonetheless, I would like to know what the normal / standard / conventional approach would be in the field of statistics. Thanks. Consider the data below, representing the "Student of the Month" at XYZ High School. What would be the proper / appropriate / mathematically correct methods to calculate and report the following statistics to describe this data set or this group of people? The question really boils down to (I think?) ... how does the mathematician / statistician deal with the duplication or repetition of certain group members.

 Month      Name           Gender  Race      Class      Age
 September  Ann            F       Hispanic  Sophomore  80
 October    Bob            M       White     Junior     22
 November   Carl           M       White     Senior     53
 December   Dave           M       Black     Junior     41
 January    Ellen          F       Hispanic  Senior     10
 February   Ellen (again)  F       Hispanic  Senior     10
 March      Ellen (again)  F       Hispanic  Senior     10
 April      Frank          M       Black     Freshman   39
 May        Ann (again)    F       Hispanic  Sophomore  80
 June       George         M       White     Senior     71

So, what are the correct Statistics for the XYZ High School Student of the Month Program?

  • (01) Number of Students: 7 or 10
  • (02) Number of Males (as a percent): 5 out of 10 ... or 5 out of 7
  • (03) Number of Females: 2 or 5
  • (04) Number of Females (as a percent): 2 out of 7 ... or 2 out of 10 ... or 5 out of 10 ... or what

etc. etc. etc. for:

  • (05) Number and Percent of Whites or Blacks or Hispanics
  • (06) Number and Percent of Freshmen or Sophomores or Juniors or Seniors
  • (07) Average Age of Students: Method A = (sum of all of the ages divided by 10) ... or Method B = (sum of all the distinct people's ages divided by 7)
    • Average Age of Students: Method A = 41.6 years old ... or Method B = 45.14286 years old

I guess the question boils down to this: does Ann (and all of her characteristics) "count" only one time or two times? Similarly, does Ellen (and all of her characteristics) "count" only one, or two, or three times? And why? And does the choice of single (versus multiple) counting depend on which statistic / characteristic is involved (race versus age versus gender, etc.)?

I can see arguments going both ways. Take Ellen, for example. Point: She (her data) should count only one time, because she is only one person / one human being. Counterpoint: She (her data) should count three times, because each of her three wins/awards/titles generically represents a different Student of the Month honoree (January honoree, February honoree, March honoree) (with the irrelevant coincidence that those three honorees just happen all to be Ellen). So, clearly, the distinction is: the number and characteristics of (a) honorees/winners versus (b) distinct honorees/winners. (I think?)

Of course, I want to understand the general concept -- so the above is just a fake / easy / hypothetical data set for illustration. That is, all the specifics and "details" are irrelevant -- that it is a High School, that it is a Student of the Month program, their ages, their races, etc. How does math / statistics properly handle this? Thanks. (Joseph A. Spadaro 13:19, 20 September 2007 (UTC))[reply]

This seems like a question about the basic assumptions behind Data modeling.
For example, consider your examination of Ellen. Both distinctions [(a) versus (b)] are relevant -- a statistician should not be forced to choose between them. In instance (a) we are examining attributes of the data set itself ... (i.e., how many "rows" in our table relate to Ellen [this is not precisely correct, but close enough]). In instance (b) we are examining attributes of the entities that our data set happens to be talking about (i.e., How many individual human beings are recorded in our data, and is Ellen one of those human beings).
In a properly normalized data set, you would not have entries such as "Ellen" and "Ellen (again)" [just like we don't have "Male" and "Male (again)"] ... they would all just be "Ellen". Moreover, there would be a separate mechanism to properly distinguish whether multiple instances of "Ellen" actually referred to the same physical human being, or whether there were multiple students who just happen to have the same name.
All of these lines of inquiry are valid, and the issue is just a matter of the way you structure your data. You should not be forced to "pick and choose" which interpretations are valid. dr.ef.tymac 15:52, 20 September 2007 (UTC)[reply]
In other words, what kind of an analysis you do depends on what questions you are seeking an answer to. If you are interested in the number of individuals, as opposed to the number of rewards, then you count the individual persons. Ultimately, it depends on what you want to do with the answer. It's not any more complicated than that. 84.239.133.38 19:43, 20 September 2007 (UTC)[reply]
(Edit Conflict) Yeah. I would say the main problem is that both the questions you give and their answers are too vague. They have, as you pointed out, more than one interpretation. For instance, on (02), number of males as a percent of what? 5 males out of 7 students? 5 distinct male winners out of 10 months? 5 months with a male winner out of 10 months? 5 male winners in 1 year? 0 males out of 2 females? 5 males out of 5 males? 1 male out of 2 genders? Each of these is a more specific and more useful question, and is much easier to answer. Each could be an answer to your original question, depending on the circumstances. This illustrates, as it happens, an important concept in statistics - numbers don't lie, but they don't have to. As you can see, you can get wildly different answers to the same question by interpreting it differently. For instance, given identical data regarding aircraft and highway fatalities, you can conclude that it is both more and less safe to take a plane. You could measure deaths per hour of travel, which might very reasonably indicate that flying is dangerous (on an hourly basis). On the other hand, taken on a per-mile basis, flying could be safer. Flying, see, is much much faster than driving, so it takes many fewer hours to travel the same number of miles. Which is the right answer, then? The right answer is that the world is complicated, and any attempt to simplify it will leave out information. If you want a simple answer, accept that it will be wrong. If you want the full answer, accept that it will have to be specific and detailed. Black Carrot 19:47, 20 September 2007 (UTC)[reply]
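Once the unit of analysis is fixed, either reading is straightforward to compute. A minimal sketch added for illustration (data transcribed from the table above):

 rows = [  # (month, name, gender)
     ("Sep", "Ann", "F"), ("Oct", "Bob", "M"), ("Nov", "Carl", "M"),
     ("Dec", "Dave", "M"), ("Jan", "Ellen", "F"), ("Feb", "Ellen", "F"),
     ("Mar", "Ellen", "F"), ("Apr", "Frank", "M"), ("May", "Ann", "F"),
     ("Jun", "George", "M"),
 ]

 # Unit = awards (rows): 5 of the 10 months had a male winner.
 males_by_award = sum(1 for _, _, g in rows if g == "M") / len(rows)

 # Unit = people (distinct names): 5 of the 7 honorees are male.
 people = {name: g for _, name, g in rows}
 males_by_person = sum(1 for g in people.values() if g == "M") / len(people)

 print(males_by_award, males_by_person)  # 0.5 and 0.714...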

Development of the decimal system

Hi there. Can anyone give me some basic information or point me towards a relevant website detailing the development of the decimal number system. I'm a teacher who wants to use this information at primary level. Nothing too advanced, just a general timeline of key milestones such as Roman numerals, Arabic digits, invention of the zero symbol, place value etc. Thanks in advance for any help. Kirk UK —Preceding unsigned comment added by 82.21.49.82 (talk) 14:09, 20 September 2007 (UTC)[reply]

History of the Hindu-Arabic numeral system contains some references / links. (Joseph A. Spadaro 14:18, 20 September 2007 (UTC))[reply]

domain, range, co-domain, image

Alphonse claims that, in defining functions, mathematicians used to use the term "range" instead of the term "co-domain" but they changed to the latter term because "range" is not precise enough. He also claims that the term "range" is no longer favored, and the preferred term is "image".

Is Alphonse correct? Please explain your answer with simple clarification, please also note this is not homework, just an attempt to get these nits picked. dr.ef.tymac 15:04, 20 September 2007 (UTC)[reply]

Um, did you look at our article on range? In short, the usual terminology is that an image of a set is the result of mapping all elements of that set through a function. The co-domain is the set that the outputs of a function are in (in most elementary mathematics, we're looking at functions with a domain and co-domain of the real numbers or, less frequently, the complex numbers). The range is the image of the domain, which is a subset of the co-domain, which depending on the function, may or may not be the entire co-domain. Who is this Alphonse? Donald Hosek 17:13, 20 September 2007 (UTC)[reply]


[Graph of example function]
Codomain, range, and image are all precise and in current use. Each has a different meaning, though with some overlap. Consider the example plot (shown right) that opens our function article. We have taken the domain to be the real interval from −1 to 1.5 (including both endpoints), and the codomain to be the same. We can see that all the function values do indeed lie between −1 and 1.5, as they must by definition of codomain. However, the values do not go below approximately −0.72322333715 nor above (1/3)√10 ≈ 1.05409255339, so the range is smaller than the full codomain. The image of the full domain is synonymous with the range, but we can also speak of the image of a portion of the domain; for example, the image of the interval of the domain between 1/2 and 1 is the interval of the codomain between −(1/2)√2 and 0. Another useful term is "inverse image"; for example, the inverse image of 0 consists of all the points in the domain that map to 0, here −1, 1/2, and (1/2)(1±√3). --KSmrqT 17:25, 20 September 2007 (UTC)[reply]
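The codomain/image distinction is easy to see in code, where a declared return type plays the role of the codomain. A toy sketch added for illustration:

 def f(x: int) -> int:  # declared codomain: all integers
     return x * x

 domain = range(-3, 4)
 image = {f(x) for x in domain}  # what f actually hits on this domain
 print(sorted(image))            # [0, 1, 4, 9] - a proper subset of the codomain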

calculating probability of two events occurring at same time

Hello:

I have a dataset that looks like:

 Event1 occurred at Time1
 Event2 occurred at Time2
 Event1 occurred at Time3
 Event3 occurred at Time4
 Event2 occurred at Time4
 etc.

I'd like to know how often (how predictably?) different Events occur at or near the same time. But I have no clue which algorithm or technique or analysis to use.

I think I'm looking for an analysis that will assess each pair of events and provide a probability of having them occur within a specific timeframe of each other.

Example Data:

 Doorbell rings at 2:00:00pm
 Dog barks at 2:00:03pm
 Dog barks at 2:10:12pm
 Fridge door opens at 2:11:12pm
 Dog barks at 2:11:45pm
 Doorbell rings at 2:15:00pm
 Dog barks at 2:15:07pm
 Fridge door opens at 2:22:11pm

Sample/Expected results - making up numbers, of course:

 If the doorbell rings, you can expect the dog to bark within one minute 97% of the time.
 If the doorbell rings, you can expect the fridge door to open within 20 minutes 22% of the time. 
 If the fridge door opens, you can expect the dog to bark within one minute 67% of the time.  
 etc. 

What is this kind of analysis called? Can I 'create' it by using 'canned' stuff in something like SQL Server? Other advice?

Many thanks!

JCade 18:27, 20 September 2007 (UTC)Jennifer[reply]

(NAE-Not an expert) One option, depending on the computing resources you have available, would be to store and update the entire distribution for each pair or collection of events you want to know about. Say you want to know how soon after a door opens the dog will bark, provided the dog does so within twenty minutes. You could store an array of twenty integers (starting them all at 0), and add one to the proper minute-marks (possibly several if the door keeps opening) every time the dog barks. Once you have the distribution, you can do anything you want with it. Black Carrot 19:29, 20 September 2007 (UTC)[reply]
Question: calculating probability of two events occurring at same time
Answer: Please define the term "the same time".
May I suggest that you pick an atomic time interval and stick to it. Say an atomic time interval of 5 seconds. If two events occur in the same time interval then they are simultaneous events. If two events occur in adjacent time intervals then they are near events. Now you can calculate the probability that event A and event B are simultaneous events and/or near events. 202.168.50.40 00:33, 21 September 2007 (UTC)[reply]

NICE! I can easily define my interval for "same time" - and the concept of having "same" and "near" times is valuable. Thank you! But I have to be sheepish here - I still don't know how to calculate the probability of each pair of events being "same" or "near." My dataset is in SQL Server....is there a SQL tool/utility/function I can harness for this? Is there a term I can google to help me get to the equation part? Thanks again! —Preceding unsigned comment added by JCade (talkcontribs) 03:20, 21 September 2007 (UTC) Ooops! The newbie didn't sign.... JCade 03:21, 21 September 2007 (UTC)[reply]

This is a maths reference desk. We can only help you with maths questions and not SQL questions. 202.168.50.40 04:03, 21 September 2007 (UTC)[reply]
Assuming a time interval of 5 seconds, in a 24-hour period there are 17280 time intervals. If event A occurs 172 times in a 24-hour period (i.e. 172 time intervals contain event A) then the probability of event A in a time interval is 172/17280 = 0.0099537.

If event B occurs 400 times (in a 24-hour period), the probability of event B in a time interval is 400/17280 = 0.023148.

Next, if event B is completely independent of event A, then you would expect the probability of both event A and event B occurring in the same time interval to be (172/17280)*(400/17280) = (43/186624) = 2.3E-4.

You would expect to find 17280 * 2.3E-4 = 3.98 such events (a time interval that contains both event A and event B) in a day.

I hope this helps. 202.168.50.40 04:18, 21 September 2007 (UTC)[reply]
Er? I'm assuming that event A (or event B) is NOT something that occurs more often in some parts of the day than others. 202.168.50.40 04:27, 21 September 2007 (UTC)[reply]
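A minimal sketch of the bucketing idea (the event names and timestamps are made up, loosely following the doorbell example above):

 from collections import defaultdict

 BUCKET = 5  # seconds: the atomic time interval

 events = [  # (seconds since midnight, event type)
     (50400, "doorbell"), (50403, "bark"), (51012, "bark"),
     (51072, "fridge"), (51105, "bark"), (51300, "doorbell"), (51307, "bark"),
 ]

 buckets = defaultdict(set)
 for t, kind in events:
     buckets[t // BUCKET].add(kind)  # which event types hit each interval

 both = sum(1 for kinds in buckets.values() if {"doorbell", "bark"} <= kinds)
 with_doorbell = sum(1 for kinds in buckets.values() if "doorbell" in kinds)
 print(both / with_doorbell)  # P(bark in same interval | doorbell in interval)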

Okay - I see where you're going but I'm not sure it's where I want to go. The calculations you've shown (THANK YOU!) tell me how to calculate probability for this kind of situation. The final result of (ProbA)*(ProbB) gives me the probability of both events occurring at the same time. I'm getting the basic equations - whew! That makes sense, and now..... Is there a way we can remove some of the stiffness of the timeframe? Right now, we've divided a day into 5-second intervals to give us our definition of "concurrent." Can we define "same time" as "within 5 seconds after an event" or "plus-or-minus 5 seconds of an event"? I am trying to see if certain events (or pairs, actually) are not truly independent of each other, or even when two events are duplicates. I want to seek patterns where we can say things like "these two events almost always occur within 5 minutes of each other." I was assuming (bad word, I know) that we would look at the duration of time between pairs of events and analyse those. Perhaps probability isn't the right thing to be calculating? Should I be using another term like "correlation" or "association" or "causality" or something? Or maybe your answer is what I need, and I'm just making this much more complex than it really needs to be....you can tell me if that's the case! Thanks for everything so far!! JCade 16:17, 21 September 2007 (UTC)[reply]

Yes, you can create a correlation function, which will tell you (a) whether there is any kind of correlation, and (b) when it happens. If we have the following definitions:

 a_i = 1 if event A occurs in time interval i, and 0 otherwise
 b_i = 1 if event B occurs in time interval i, and 0 otherwise

Then the correlation function between a and b would be

 R(k) = r(a_i, b_(i+k)), the sample correlation between a_i and b_(i+k), taken over all intervals i

(See Correlation#Sample correlation for the formula for r). Then, if there is a particular value of k for which the function is significantly large (I can't think of the right way to identify what "significantly large" would be right now, sorry) then you can surmise that, for example, every 5k seconds after the doorbell rings, the dog is likely to bark. Confusing Manifestation 13:26, 22 September 2007 (UTC)[reply]
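A minimal sketch of that lagged correlation function, with made-up indicator series in which B tends to fire one interval after A:

 import numpy as np

 def lagged_correlation(a, b, max_lag):
     # R(k) = sample correlation between a_i and b_(i+k), for k = 0..max_lag
     a, b = np.asarray(a, float), np.asarray(b, float)
     out = {}
     for k in range(max_lag + 1):
         out[k] = np.corrcoef(a[:len(a) - k], b[k:])[0, 1]
     return out

 a = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]
 b = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
 print(lagged_correlation(a, b, 3))  # peaks at k = 1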

Thank you so much! Some excellent concepts and good references -- now I'll go digest it all. Much appreciated! 199.64.0.252 14:44, 24 September 2007 (UTC)[reply]

pi

pi is 3.14 —Preceding unsigned comment added by 68.228.78.34 (talk) 21:58, 20 September 2007 (UTC)[reply]

If that's a question, then the answer is no, since pi is an irrational and transcendental number. Splintercellguy 22:48, 20 September 2007 (UTC)[reply]
More specifically, 3.14 < 22/7 – 1/630 < π. See Proof that 22 over 7 exceeds π#Quick upper and lower bounds.  --Lambiam 08:14, 21 September 2007 (UTC)[reply]
Here at the cafeteria, pie is only 1.75 (for one slice). Gzuckier 13:57, 21 September 2007 (UTC)[reply]
Your comment makes me wonder whether some form of "Blueberry π" appears on menus in Greece. In Brazil, an amusing thing you often see on menus is "X-Burger". This only makes sense when you know that the Portuguese pronunciation of "X" is "cheese", so they make a little cross-language joke. :) --Sean 16:33, 21 September 2007 (UTC)[reply]
Probably not, since the Greek pronunciation of π is "pee". Donald Hosek 17:20, 21 September 2007 (UTC)[reply]
This is parallel in a sense (but orthogonal in another) to the common English abbreviations "X-mas", "X-tian", etc., which all rely on the fact that the English letter X looks like the Greek letter χ (the first letter of Χριστός, Christ). Tesseran 19:38, 21 September 2007 (UTC)[reply]
Pi = circumference / diameter. --71.28.244.180 01:58, 26 September 2007 (UTC)[reply]


September 21

Sign Diagrams

Hi, semi-continuation of the Binomial question: I was looking at that sheet, and I'm wondering, how do I solve an inequality through sign diagrams?? Typing "sign diagrams" in wikipedia turns up nothing. --24.76.248.193 04:52, 21 September 2007 (UTC)[reply]

If you have to solve, for example, the inequation f(x) < g(x) for x, you can equivalently solve g(x) – f(x) > 0, or, putting h(x) := g(x) – f(x), the inequation h(x) > 0. So if you can find an argument for which function h returns a positive value, you're done. A web site explaining about sign diagrams can be found here.  --Lambiam 08:08, 21 September 2007 (UTC)[reply]
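A small sketch of the mechanics (the polynomial h here is just an example I made up): find the points where h can change sign, then test one point inside each interval.

 from sympy import symbols, solve

 x = symbols('x')
 h = (x - 1) * (x + 2)  # example: solve h(x) > 0

 roots = sorted(solve(h, x))  # sign can only change at -2 and 1
 test_points = ([roots[0] - 1]
                + [(a + b) / 2 for a, b in zip(roots, roots[1:])]
                + [roots[-1] + 1])
 for t in test_points:
     print(t, '+' if h.subs(x, t) > 0 else '-')
 # +, -, +  =>  h(x) > 0 exactly when x < -2 or x > 1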

Thanks Lambiam. Tell me, how did you find websites like that? --24.76.248.193 02:25, 22 September 2007 (UTC)[reply]

By entering "sign diagram" into the Google search box and hitting the button labelled "Google Search".  --Lambiam 11:33, 22 September 2007 (UTC)[reply]

Was it Google?? I went up to page 20 and I still couldn't find it! Major props for finding it! --24.76.248.193 05:16, 25 September 2007 (UTC)[reply]

For me it was entry #18 in the results of this search.  --Lambiam 21:32, 25 September 2007 (UTC)[reply]

mathematical fallacy

erm, really I've got this obvious non-truth here, and it appears to stem from taking logs of something to the power ni, but I wasn't aware that was a bad mathematical step to take, any explanations?

e^(πni) = 1

taking natural logs leaves

πni = ln(1) = 0

this obviously proves n = m for any integers n and m in the real number set, so whats wrong? ΦΙΛ Κ 20:53, 21 September 2007 (UTC)[reply]

Short explanation is that the exponential function maps multiple values in the complex plane to the same image, and so its inverse, the complex logarithm, is a multivalued function. Gandalf61 21:05, 21 September 2007 (UTC)[reply]
The even shorter explanation: When dealing with complex numbers, taking logs is a bad step. -- Meni Rosenfeld (talk) 18:21, 22 September 2007 (UTC)[reply]

Amongst other things - where's m? 87.102.17.252 15:16, 23 September 2007 (UTC)[reply]

The OP has spared us some mundane details. Since e^(2πin) = 1 for every integer n, for any two integers n and m we have e^(2πin) = e^(2πim), so "taking logs" gives 2πin = 2πim and n = m.
He did, however, have a different mistake; it should be e^(2πin), not e^(πin). -- Meni Rosenfeld (talk) 15:31, 23 September 2007 (UTC)[reply]
Such as when n=1, m=2, so therefore n=m. X
The error in reasoning could be explained in terms of failure to notice the oscillating value of e^(ix). 87.102.17.252 16:02, 23 September 2007 (UTC)[reply]
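You can watch the principal-value logarithm discard the 2πin term directly; a small illustrative sketch:

 import cmath, math

 z1 = cmath.exp(2j * math.pi * 1)  # n = 1: equals 1 (up to rounding)
 z2 = cmath.exp(2j * math.pi * 2)  # m = 2: also equals 1
 print(z1, z2)                     # equal exponentials...
 print(cmath.log(z1), cmath.log(z2))  # ...but both logs are ~0, not 2πi and 4πi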


September 22

Need Major Math Skills Advice

Ok so here is the deal. I am a high school sophomore and am only at a beginning Algebra 1 level. So basically as of now math is not my best subject. Now in 9 months I want to take a college placement test so that I can just go to college full time in my Junior and Senior years of high school, getting both high school and college credits. Now the college placement test I will be taking consists of mainly high school Algebra and Geometry. So to get to the point.. My question is do you guys have any strong advice for getting to the math level I need to be at in 9 months for that test? I basically need to know Algebra and Geometry for it and I need some major advice in what to do. How can I get ahead and learn the math very quickly? Thank you for your time. 06:07, 22 September 2007 (UTC)

I'd recommend getting one or more maths text books at the level you are at, and working through the proofs/problems until you can do the maths easily. Somebody else might be able to recommend books that you can get in your country - practice makes perfect. And you could ask your teacher to recommend some books. 87.102.89.127 16:02, 22 September 2007 (UTC)[reply]
Although most books I've seen include lots of silly stuff that the teachers skip. You might ask to see a syllabus for a geometry class from your current math teacher, with the page numbers assigned for each day. This will help you to avoid wasting time on the unnecessary parts. Geometry isn't all that dependent on algebra, so you can probably learn both at the same time. StuRat 03:10, 23 September 2007 (UTC)[reply]
Is there any way to find out what kind of questions will be on the test you're studying for? Black Carrot 23:52, 24 September 2007 (UTC)[reply]

Poisson Distribution

I have two questions about the Poisson Distribution. The probability function of the Poisson Distribution is:

P(k) = λ^k · e^(−λ) / k!
where:


k = the number of events in a given trial

lambda = the mean number of events over all trials

e = e


Question 1). Does lambda follow a Poisson Distribution?

In other words, suppose I conduct an experiment in which I have 1000 trials. In each trial I record the number of events, then I obtain the mean number of events over all 1000 trials.

Then I repeat the experiment 100 times, so I have 100 estimated means. Do those 100 estimated means follow a Poisson Distribution?


Question 2). Is there a formal test for whether a variable follows a Poisson Distribution? I know there are statistical tests that can be used to estimate, for example, whether errors in regression are normally distributed. I am hoping to find a similar test to estimate whether a variable follows a Poisson Distribution. I suppose such a test would be estimating whether the mean of the variable equalled the variance of the variable.

Thanks for any help with this.

Mark W. Miller 18:26, 22 September 2007 (UTC)[reply]

I can only answer about question 1; First, don't confuse λ, which is a "true" (possibly unknown) parameter of a variable's distribution, with estimates of it based on empirical data; Second, the answer is clearly no, since a variable distributed Poissonly can only take integer values, while λ and its estimates can take any nonnegative real value. -- Meni Rosenfeld (talk) 18:33, 22 September 2007 (UTC)[reply]
The parameter λ is equal to the expected value of the number of events. The quantity 1000m, where m is the observed mean over 1000 trials, is an integer and does follow a Poisson distribution, with parameter 1000λ. You can use Pearson's chi-square test to test the goodness-of-fit between the theoretical and the observed distributions. Make sure you group adjacent cells with low expected frequency and make sure there are enough cells with expected frequencies of 5 or more. For example, if λ = 4.78 and N = 65, you should lump 0 and 1 together, as well as 9 and higher, giving cells {0..1}, {2}, {3}, {4}, {5}, {6}, {7}, {8}, {9..}. Don't forget to subtract an extra 1 from the parameter for degrees of freedom, since λ was estimated from the observations.  --Lambiam 20:35, 22 September 2007 (UTC)[reply]
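A sketch of that procedure with scipy (the sample here is simulated, and the cell boundaries follow the example just given):

 import numpy as np
 from scipy import stats

 counts = np.random.default_rng(0).poisson(4.78, size=65)  # simulated sample
 lam = counts.mean()                                       # estimated lambda

 # Observed and expected frequencies for cells {0..1}, {2}, ..., {8}, {9..}
 edges = [2, 3, 4, 5, 6, 7, 8, 9]
 observed = np.histogram(counts, bins=[0] + edges + [np.inf])[0]
 cdf = stats.poisson(lam).cdf
 expected = np.diff([0] + [cdf(e - 1) for e in edges] + [1.0]) * len(counts)

 # ddof=1 accounts for the estimated parameter (on top of the usual k-1)
 chi2, p = stats.chisquare(observed, expected, ddof=1)
 print(chi2, p)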
Knowing λ you cannot deduce k exactly, but k assumes nonnegative integer values following a poisson distribution with mean value λ and standard deviation equal to the square root of λ. Knowing k you cannot induce λ exactly, but λ assumes nonnegative real values following a gamma distribution with mean value k+1 and standard deviation equal to the square root of (k+1). Bo Jacoby 21:30, 22 September 2007 (UTC).[reply]
Thank you for the replies.
Regarding Question #2 I've looked into the Pearson's chi-square test for estimating goodness-of-fit and I understand that now.
I'm still a little confused about Question #1. I am using empirical data to estimate λ (and would like to estimate its standard error). I know the ks, the number of events per trial or the counts for each of the 1000 trials (or 1000 samples) in an experiment. Therefore, if I understand correctly, λ follows a gamma distribution (and the observed mean count over all trials in the experiment multiplied by the number of trials in the experiment follows a Poisson Distribution).
What confuses me a little bit is that λ would have a mean value of k+1 with a standard deviation of the square root of (k+1). That kind of suggests to me that λ could have several different means within an experiment, since the range of observed counts (the ks) within the 1000 trials could encompass several different values. The counts within a trial might range from 0 to 10. It kind of sounds like you are saying λ can be estimated separately for each trial within an experiment and that the mean λ would range from 1 to 11 if the counts range from 0 to 10.
Sorry about my confusion, and thanks again for the replies. They have been very helpful.
Mark W. Miller 14:46, 23 September 2007 (UTC)[reply]
I too find the statement about a distribution followed by λ confusing. That would be a posterior distribution, based on conditional probability, but don't you need a prior distribution for that?  --Lambiam 15:15, 23 September 2007 (UTC)[reply]

Answering Mark W. Miller: If you observe totally k events in 1000 trials, then the intensity per 1000 trials is gamma distributed k+1 ± (k+1)^(1/2) (using the convenient but slightly nonstandard notational convention that λ≈μ±σ indicates that the mean value of λ is μ and the standard deviation of λ is σ). The intensity per trial is λ ≈ (k+1 ± (k+1)^(1/2))/1000 = (k+1)/1000 ± (k+1)^(1/2)/1000. So this is the estimate for λ. If the total count in 1000 trials is, say, 5000, then λ ≈ 5.001 ± 0.0707 counts per trial.

Answering Lambiam: Yes, the prior distribution is ambiguous when the parameter space is infinite, as it is in this case: λ can assume an infinite number of values. When the parameter space is finite, however, the prior distribution is trivial: each of the possible values of the parameter has a priori the same weight. The poisson distribution is the limiting case of (hypergeometric) distributions with finite parameter spaces. You must compute first and take the limit later, rather than take the limit first and compute afterwards. (For example: a limit for big values of n is lim((1/n)/(1/n)) = lim(1) = 1, while the ratio lim(1/n)/lim(1/n) = 0/0 is undefined). There is no distribution giving each positive real the same weight, but there are distributions giving a finite number of reals the same weight, and when such distributions are used as prior, you may compute the posterior, and then take the limit. Note that the poisson distribution and the gamma distribution have the same form: Considered as a function of nonnegative integer values of k it is the discrete poisson distribution function, while considered as a function of nonnegative real values of λ it is the continuous gamma distribution density function. If you sum over k you get 1, and if you integrate over λ you get 1. If you multiply by k and sum over k you get the mean value of the poisson distribution, which is λ, and if you multiply by λ and integrate over λ you get the mean value of the gamma distribution, which is k+1. Bo Jacoby 21:54, 23 September 2007 (UTC).[reply]

Thank you for the reply. It was very helpful.
Mark W. Miller 16:33, 24 September 2007 (UTC)[reply]
My pleasure! Bo Jacoby 21:38, 24 September 2007 (UTC).[reply]

Sum of factors

I know that the sum of the factors of 124 is 100 (1 + 2 + 4 + 31 + 62). How would I go about finding all of the other numbers (say up to 10,000 or any arbitrary number) whose factors add to a specific number such as 100? 68.231.151.161 23:08, 22 September 2007 (UTC)[reply]

Write a computer program to factor and check all of the possible numbers. Offhand I'm not sure where the upper bound for possible candidates would be, but I'd guess around (2n)^2? There is a Wikipedia article on Integer factorization, but depending on how big your numbers are, you might not need anything more complicated than trial division. (How come you left out 124 from the list of factors but kept 1?) - Rainwarrior 01:56, 23 September 2007 (UTC)[reply]
Actually, I have no idea if there is an upper bound, though under 80000 (which didn't take long to test with a computer program, actually, maybe a minute if I output all the factorizations to the screen) I only found one other number that matches your first example (194 -> 1 + 2 + 97 = 100). - Rainwarrior 02:24, 23 September 2007 (UTC)[reply]
Divisor function#Properties can be useful. If you are only interested in numbers with a specific sum then you can try working backwards from the sum formula to find possible prime factorizations. PrimeHunter 02:21, 23 September 2007 (UTC)[reply]
Actually, yeah, that's a good idea. For each set of monotonically increasing numbers that sums to 100, try to find a number with that list as a factorization (at worst you'd have to test every combination of numbers in the set multiplied with each other). That could make the search space a lot smaller. (And... my earlier upper bound estimate might be valid then... I was thinking prime factorization at first.) - Rainwarrior 02:56, 23 September 2007 (UTC)[reply]
I haven't proved this, but experimentally it appears to be the case that if the sum is s > 1, then the number is at most (s–1)^2, with equality when s = p+1, p a prime number.  --Lambiam 15:06, 23 September 2007 (UTC)[reply]
Well, if n > (s–1)^2 is prime, then its sum of factors is 1, and if not, then it must have a factor greater than s–1, so adding 1 to that gives a sum greater than s. -- Meni Rosenfeld (talk) 16:23, 23 September 2007 (UTC)[reply]
I agree with PrimeHunter. Do you know how the sum of factors of a number can be written in terms of its prime factorization? Write 100 as the product of integers (greater than 1) in all possible ways, then see how each of those factors can be written as (1+p) or (1+p+p^2) or (1+p+p^2+p^3) etc for some prime p, and work backwards from there. – b_jonas 17:28, 24 September 2007 (UTC)[reply]
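A brute-force sketch of the search (trial division, as suggested above), restricted by the (s−1)^2 bound:

 def proper_divisor_sum(n):
     total, d = 1, 2  # 1 divides everything; n itself is excluded
     while d * d <= n:
         if n % d == 0:
             total += d
             if d != n // d:
                 total += n // d  # the paired divisor n/d
         d += 1
     return total

 s = 100
 print([n for n in range(2, (s - 1)**2 + 1) if proper_divisor_sum(n) == s])
 # [124, 194], matching the examples above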
I don't understand the original assertion "...the factors of 124...(1 + 2 + 4 + 31 +62)". I can see including 2 & 62 in that list, and I can see including 4 & 31 in that list. However, the only reason that you can include 1 in that list is that it gives you 124 when you multiply it by 124. Thus, don't you have to include 124 in the list of factors, too? Or, what would make more sense to me, simply not include the identity factors in your list. Is this a definition thing, or an assumption that I don't see? It looks to me as though the list of factors is either (1,2,4,31,62,124) or (2,4,31,62). -SandyJax 18:41, 25 September 2007 (UTC)[reply]
Naturally, 1 is a factor of all numbers, because A*1 = A, no matter what A is. That might make it seem a trivial case, but it's still a factor that you can't validly exclude. But I take your point that 124 is also a factor of 124. We're told that factorization is "the decomposition of an object (for example, a number, a polynomial, or a matrix) into a product of other objects, or factors, which when multiplied together give the original. For example, the number 15 factors into primes as 3 × 5, and the polynomial x2 − 4 factors as (x − 2)(x + 2). In all cases, a product of simpler objects is obtained." I've highlighted the word "simpler". That suggests that a number is not a factor of itself. But that's a paradox if we accept that 1 is always a factor. Something's wrong somewhere. -- JackofOz 00:33, 26 September 2007 (UTC)[reply]
There's no deep mathematical issue here. It's just that "factor" isn't really a well-defined term, but more of a notion representing an idea. In the context of integers, we can take it to mean "divisor". Under this assumption, 124 is a factor of itself - but anon has, for whatever reason, chosen to consider the sum of only divisors smaller than the number. This is a pity, since the sum of all divisors is much easier to handle. -- Meni Rosenfeld (talk) 07:31, 26 September 2007 (UTC)[reply]

September 23

DIFFERENTIAL EQUATION

f(n) = Σ (x = 1 to ∞) f^(x)(n), where f^(x) is the xth derivative of f.

How do I start?--Mostargue 03:35, 23 September 2007 (UTC)[reply]

How do you take the derivative of a function that only has values at integers? - Rainwarrior 04:58, 23 September 2007 (UTC)[reply]
Is it possible that you mean this:
f(x) = Σ (n = 1 to ∞) f^(n)(x)
At least, that would make sense as a mathematical formula.  --Lambiam 07:26, 23 September 2007 (UTC)[reply]
One possibility IF f(x) is a/could be a polynomial would be to write f(x) = a0 + a1x + a2x^2 + a3x^3 ..etc, then calculate all the differentials - take the sum and collect all the terms in x^n... eg you'd have a0 = a1 + 2a2 + 3!a3 etc.. - no idea if that is what you meant (or how to proceed from there..) 87.102.17.252 15:13, 23 September 2007 (UTC)[reply]
Now if f(x) was expressible as a polynomial (single valued, continuous typically) - this gives an infinite number of linear equations with infinite unknowns - if I have n linear equations with n unknowns I can solve this and find each unknown (solutions exist) - therefore by extension I can say that if f(x) is a polynomial in x^n then there is one such polynomial that satisfies the above relationship.. what it is I can't currently say. 87.102.17.252 15:25, 23 September 2007 (UTC)[reply]
Is this supposed to be the Taylor series? — Daniel 23:51, 23 September 2007 (UTC)[reply]

YES YES THANK YOU LAMBIAM

f(x) = Σ (n = 1 to ∞) f^(n)(x) is what I meant. I knew something looked wrong...--Mostargue 01:51, 24 September 2007 (UTC)[reply]

To solve this, simplify f(x) − f'(x).  --Lambiam 08:09, 24 September 2007 (UTC)[reply]

huh? How do I simplify that? Is that supposed to be an equation equal to zero, in that case it would be the exponential function...--Mostargue 08:17, 24 September 2007 (UTC)[reply]

f(x) − f'(x) = (f'(x) + f''(x) + f'''(x) + ⋯) − (f''(x) + f'''(x) + ⋯) = f'(x) Gandalf61 09:55, 24 September 2007 (UTC)[reply]

so f(x) = 2f'(x)

... hmm... so.. f(x) = 0.5e^2x, f'(x) = e^2x

Is that right?--Mostargue 10:07, 24 September 2007 (UTC)[reply]

Not quite. If f(x) = 0.5e^2x then f'(x) = 2f(x). But you want f'(x) = (1/2)f(x). Gandalf61 10:34, 24 September 2007 (UTC)[reply]

f(x) = 2e^0.5x , f'(x) = e^0.5x

yes?--Mostargue 10:43, 24 September 2007 (UTC)[reply]

That is one solution. f(x) = 0 is another solution, and in fact there is a family of solutions. You have a separable first-order linear ordinary differential equation; see the first example in Examples of differential equations.  --Lambiam 11:23, 24 September 2007 (UTC)[reply]

How many unique families of solutions are there?--Mostargue 14:38, 24 September 2007 (UTC)[reply]

One, that is, in solving the differential equation you have one constant you can choose freely. If you fix the value of f(0) to some given value, there is exactly one solution.  --Lambiam 17:50, 24 September 2007 (UTC)[reply]

Okay. But how did you get to f(x) - f'(x)?--Mostargue 23:04, 24 September 2007 (UTC)[reply]

I think it's one of those things you just need to spot (also known as a 'brainwave'). I couldn't get that either but once you've noticed (or been shown) that using it gives solutions it's a good idea to try to remember for the future - it's not algebraically obvious..87.102.10.190 15:41, 25 September 2007 (UTC)[reply]
It's a fairly standard way to collapse a really long sum. It's how you simplify the geometric series, for instance. Black Carrot 23:54, 25 September 2007 (UTC)[reply]
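A sympy sketch confirming the solution found above: for f(x) = 2e^(x/2), the nth derivative is 2(1/2)^n·e^(x/2), and summing the resulting geometric series gives back f itself.

 import sympy as sp

 x, n = sp.symbols('x n')
 f = 2 * sp.exp(x / 2)

 nth_derivative = 2 * sp.Rational(1, 2)**n * sp.exp(x / 2)
 print(sp.summation(nth_derivative, (n, 1, sp.oo)))  # 2*exp(x/2) = f(x)
 print(sp.simplify(f - 2 * sp.diff(f, x)))           # 0, i.e. f = 2f'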

Mathematical programming

How do we solve a problem given by max z = c'x subject to Ax < b and x greater than or equal to 0? I believe this is a non linear problem but do not know the reason for this. Cheers--Shahab 07:42, 23 September 2007 (UTC)[reply]

Assuming I've correctly interpreted all your terms, this is just a basic linear programming problem (with slack variables), and thus the kind of thing the simplex algorithm was written for. Algebraist 13:18, 23 September 2007 (UTC)[reply]
But don't linear programming problems have Ax ≤ b, whereas my problem has Ax < b. At least that's what the definition in the wikipedia article says. Moreover the definition in my book also doesn't mention strict inequality. Cheers.--Shahab 16:33, 23 September 2007 (UTC)[reply]
If you consider that as a difference, you need to step back and tell us what you really want to do. Are you interested in a mathematical question or do you want to solve an actual problem? The mathematical linear programming problem is defined on the real numbers, and even storing a real number on a computer is impossible in general. If you want to solve an actual problem, you will probably use floating point numbers and the difference between equality and near equality will vanish in the face of other precision problems. If you have a purely mathematical question - I have no idea. —Preceding unsigned comment added by 84.187.32.213 (talk) 17:07, 23 September 2007 (UTC)[reply]
It's a purely mathematical question.--Shahab 17:53, 23 September 2007 (UTC)[reply]
(question) could you explain what is meant by y=c'x (surely not c' times x?)87.102.17.252 18:53, 23 September 2007 (UTC)[reply]
maybe c and x are vectors, and c' is the transpose of c? —Tamfang 00:33, 24 September 2007 (UTC)[reply]
In general, this problem has no solution, because "max" will not be defined on this set. Here's a one-dimensional example: find max x subject to 10x < 100 and x ≥ 0. It's clear that the set of x satisfying these inequalities is the half-open interval [0,10) = {x | 0 ≤ x < 10 }. But this set has no maximum. It does, however, have a supremum, which is always defined, and you can talk about sup cx subject to Ax < b -- but in this type of problem, this supremum will just be max cx subject to Ax ≤ b, reducing to the usual form for a linear programming problem. Tesseran 21:21, 23 September 2007 (UTC)[reply]
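For the ≤ version, a minimal sketch with scipy's linprog (which minimizes, so we negate c to maximize; the numbers are illustrative, not from the question):

 import numpy as np
 from scipy.optimize import linprog

 c = np.array([3.0, 2.0])                # maximize c'x = 3x1 + 2x2
 A = np.array([[1.0, 1.0], [2.0, 1.0]])  # subject to Ax <= b, x >= 0
 b = np.array([4.0, 6.0])

 res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
 print(res.x, -res.fun)  # optimum x = (2, 2), value 10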
You aren't interested in Integer programming, are you? In that case, there is a big difference between < and ≤. 130.88.79.43 12:43, 25 September 2007 (UTC)[reply]

September 24

Number of rows with at least one unique element

I'm looking for a way to calculate the number of rows with n elements, of which each element is chosen from the range 1..r, and in which there is at least one unique element. Order matters. So, if n = 3 and r = 2, we would have 2^3 = 8 rows, of which 211, 121, 112, 122, 212, 221 all have a unique element; only 111 and 222 do not have a unique element.

I tried to find a more or less direct formula, thinking along the lines: "if we stipulate the first element should be unique, we have r choices for this element, which leaves (r-1)^(n-1) choices for the other elements, i.e. r*(r-1)^(n-1) in total." But, this leaves out some of the rows in which the second element is the only unique element. But, on the other hand, it does include some of the rows in which the second element is a unique element too. For example, if n = 3 and r = 3, we would have 123 in which every element is a unique element -- I don't want to count those rows twice. (For n = 3, r = 3, there are 27 rows in total, 24 of which have at least one unique element; only 111, 222 and 333 do not have a unique element).

I can't come up with an exact formula, nor can I find a recursive equation. --Berteun 08:14, 24 September 2007 (UTC)[reply]

The basic problem that you're describing (if I'm reading you right) is how many distinct ways there are to choose n elements from a set R = {1, ..., r} with repetitions, without counting the options that choose the same element n times. According to the article on the binomial coefficient the first part is described by C(r+n-1, n), and the second part is just r, meaning the formula comes out as C(r+n-1, n) - r.
Of course, as I'm typing this I realise that you mean that a sequence like '3311' needs to be discounted as well, since it doesn't have any unique elements. I'll keep thinking on that one, but maybe this will be of some help.
risk 16:01, 24 September 2007 (UTC)[reply]
So your question is how many words there exist of length n with an alphabet of r letters where one of the letters appears exactly once. One way to get to the solution is: 1. Choose the letter that must appear once. 2. Choose the position for this letter. 3. Insert the letter in a word of length n-1 (where only the remaining r-1 letters may be used). Aenar 16:20, 24 September 2007 (UTC)[reply]
Problem with that is, though every word you create will have a unique letter, some words (such as 12, for instance) will be double-counted, as it contains 2 unique elements. Gscshoyru 16:21, 24 September 2007 (UTC)[reply]
Yeah, right, sorry. Aenar 16:25, 24 September 2007 (UTC)[reply]
I've calculated this for small numbers, and got this table.
         n= 0      1      2      3      4      5      6      7
                                                              
   r=0      0      0      0      0      0      0      0      0
     1      0      1      2      3      4      5      6      7
     2      0      0      2      6     12     20     30     42
     3      0      0      6     24     60    120    210    336
     4      0      0      8     60    216    560   1200   2268
     5      0      0     10    180    900   2920   7470  16380
     6      0      0     12    486   3432  14220  44100 113442
     7      0      0     14   1218  13188  70700 265650 799134
It's apparently not in Sloane's. – b_jonas 16:52, 24 September 2007 (UTC)[reply]
I would switch n and r, but the results are along the lines I got too. I mean, if the number of objects n = 1, there are exactly r solutions, whereas if r = 1, there is no solution except if n = 1. At least I'm relieved someone didn't find a solution rather easily, and the fact it's not the Sloane's surprises me. --Berteun 18:37, 24 September 2007 (UTC)[reply]
I did some manual calculations, and I found that for n > 3 and r = 2 the outcome is always 8. I may be wrong though, I'll check. risk 17:00, 24 September 2007 (UTC)[reply]
And wrong I was. I get the exact same table. risk 17:43, 24 September 2007 (UTC)[reply]
Sorry, it is in sloane, you just have to search for the complement: A131103 it is. – b_jonas 19:03, 24 September 2007 (UTC)[reply]
(See also the same in square array format). – b_jonas 19:05, 24 September 2007 (UTC)[reply]
Thank you! And also the other people who have thought about it! --Berteun 19:24, 24 September 2007 (UTC)[reply]
I think it might be possible to write a recurrence for this one. Let a(n,r) be the number of n-long words from r letters that don't have any letter exactly once. Then you can write a(n,r) from a(n,r-1); a(n-2,r-1), a(n-3,r-1), ..., a(1,r-1), a(0,r-1); because you get all the words by inserting 0, 2, 3, ..., n-1, or n copies of the r-th letter somewhere in words that only have the first r-1 letters. I haven't actually written this formula nor verified it, so I could be wrong. – b_jonas 21:15, 24 September 2007 (UTC)[reply]
I looked at Sloane's. The sequence you gave was linked to the associated Stirling numbers of the second kind. The 'normal' Stirling numbers of the second kind give the number of ways to partition a set of n values into exactly k subsets. The associated ones give the number of ways to partition a set of n values into exactly k subsets where every subset has at least two elements. A direct expression for this is provided at Sloane's (A008299). This is easily combined with the formula given at A131103. The associated numbers can be thought of as putting n labelled objects into k unlabelled boxes. For this problem we would like to put them in labelled boxes. The idea is to calculate the number of each j = 1..k (where k cannot be greater than floor(n/2) of course), multiply this by k! (because the boxes are labelled) and then divide by (k - j)! because switching empty boxes does not give a new solution. (At least, that's how I read the formula). The rows of letters from an alphabet can be seen as putting each element of the row into a box which represents a letter of the alphabet. This way the number of sequences with only non-unique elements is obtained. --Berteun 08:21, 25 September 2007 (UTC)[reply]
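For checking candidate formulas, a brute-force sketch that reproduces the small entries in the table above:

 from collections import Counter
 from itertools import product

 def rows_with_unique(n, r):
     # rows of length n over alphabet 1..r with some element appearing exactly once
     return sum(1 for row in product(range(1, r + 1), repeat=n)
                if 1 in Counter(row).values())

 print(rows_with_unique(3, 2))  # 6: only 111 and 222 fail
 print(rows_with_unique(3, 3))  # 24: only 111, 222, 333 fail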

Looking for job to support my study at Rostock University

—Preceding unsigned comment added by 82.116.148.238 (talk) 08:57, 24 September 2007 (UTC)[reply]

Try local newspapers and employment agencies. Also, the university may have a college work-study or coop program you could look into. StuRat 16:47, 24 September 2007 (UTC)[reply]
Do you have a work permit? Without one, it may be difficult to find a job, unless you have some really scarce talent. Anyway, this doesn't seem to be a mathematical topic.  --Lambiam 18:25, 24 September 2007 (UTC)[reply]

Contour lines for 3 fixed points

This all applies to a plane. For 2 fixed points, the contours of equal total distance from them are concentric ellipses with the fixed points as foci. What happens for 3 fixed points? To remove any obvious symmetry, take the triangle to be scalene, but it can be acute or obtuse. As the distance away gets larger, the contours will tend towards circles centred on a point somewhere within the triangle, but what is the pattern for smaller distances?→81.159.11.163 20:36, 24 September 2007 (UTC)[reply]

Very curious, it's not a problem I've seen addressed before. I'm not sure if there is an easy answer to this; experiments suggest you get closed curves, which might not always be smooth. There will be a critical value of the distance where the curve forms a single point, and I think this will be a point on the Voronoi diagram of the three points. --Salix alba (talk) 21:48, 24 September 2007 (UTC)[reply]
Re. the curve forming a single point, this will be the Fermat point of the triangle, where the total distance to the vertices is least. So the contours will be non-overlapping closed curves containing this point. To take a simpler case to begin with, if the triangle is equilateral, there must be a corresponding symmetry in the curves - surely not circles, but like "bulgy" equilateral triangles?→81.159.11.163 22:02, 24 September 2007 (UTC)[reply]
locus of constant distance from three points
Ah yes, looks like the same thing.
Some questions arise. Is it always convex? Is it always smooth? Brief experiments (see image) suggest yes to the first and possibly no to the second, in which case you will get quite an interesting singularity arising. It might be interesting to find where the vertices of the curve are. --Salix alba (talk) 22:27, 24 September 2007 (UTC)[reply]
The sum of distances to three or more points is a sum of convex functions, and therefore itself convex, so it has convex level sets. The distance to a single point is a smooth function except at that point, so the sum of distances is smooth except at the given points, and has smooth level sets except when the level set passes through one of the given points. See also geometric median for more than three points. —David Eppstein 17:57, 25 September 2007 (UTC)[reply]
Thanks for that. It looks like the distance function has simple A1 singularities (cones) at the given points and the contours are just what you expect from slicing through the cones (locally at least). Mystery solved. --Salix alba (talk) 20:35, 25 September 2007 (UTC)[reply]
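For anyone wanting to repeat the experiment, a minimal sketch (the three points are an arbitrary scalene choice; matplotlib simply slices the summed-distance surface into level curves):

import numpy as np
import matplotlib.pyplot as plt

# Three fixed points forming a scalene triangle (arbitrary choice).
pts = [(0.0, 0.0), (3.0, 0.5), (1.0, 2.5)]

x, y = np.meshgrid(np.linspace(-2, 5, 400), np.linspace(-2, 5, 400))
total = sum(np.hypot(x - px, y - py) for px, py in pts)

plt.contour(x, y, total, levels=20)  # level sets of the summed distance
plt.scatter(*zip(*pts), color='red')  # the fixed points themselves
plt.gca().set_aspect('equal')
plt.show()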

conversion of centimeters to inches

Please tell me how to convert centimeters to inches. All I need are the basics, then I can complete my homework. Thank you Samuelalexander 20:50, 24 September 2007 (UTC)[reply]

1 inch is exactly equal to 2.54 cm. To convert from inches to centimetres, you simply multiply by the conversion factor (2.54). The other way round is just division by the conversion factor. Richard B 20:55, 24 September 2007 (UTC)[reply]
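For example, 12 inches × 2.54 = 30.48 cm; going the other way, 100 cm ÷ 2.54 ≈ 39.37 inches.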

Conversion of unitsCronholm144 23:51, 24 September 2007 (UTC)[reply]

September 25

Why why why (functional value)

I was doing a functional value question:

and the question said: What is

So I was like:

So the "h"s cancel, right? But I flip to the answer key and it says h ≠ 0. Shouldn't it be h=0!

I'm confused. Yes, I've asked people, but I don't understand their English. Please explain to me simply. Thanks very much! —Preceding unsigned comment added by 24.76.248.193 (talkcontribs) 16:08, 25 September 2007

What happens when h = 0? In particular, you're dividing by h, so what would happen in that case? (See Division by zero if you need a refresher.) Confusing Manifestation 06:24, 25 September 2007 (UTC)[reply]

I'm confused. What does mean? 210.49.155.132 13:49, 25 September 2007 (UTC)[reply]
And does the asterisk have something to do with the imbalanced bracket? —Tamfang 17:37, 25 September 2007 (UTC)[reply]
I think the intended function is given by:
Then
If h ≠ 0, h/h can be simplified to 1, and be omitted as a factor.  --Lambiam 21:28, 25 September 2007 (UTC)[reply]
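Since the original formulas did not survive, here is the same cancellation with an assumed example, say f(x) = x^2: (f(x+h) − f(x))/h = ((x+h)^2 − x^2)/h = (2xh + h^2)/h = (h/h)(2x + h). The step h/h = 1 is only legitimate when h ≠ 0; at h = 0 the original quotient is 0/0, which is undefined, and that is what the answer key is flagging.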

Oh, whoops. I meant to say:

Sorry, I'm new at your whole math program thing. My question is: why does h ≠ 0? —Preceding unsigned comment added by 24.76.248.193 (talk) 03:52, 26 September 2007 (UTC)[reply]

Simple Math Question -- Need Help -- Leap Years (?)

Can someone please help me with this simple math calculation? I can't understand it and it's driving me crazy. Any insight is appreciated. Thanks.

  • Person A is born on 12/18/1946 and dies on 03/21/1994
  • Person B is born on 12/18/1904 and dies on 03/20/1952

Method One

According to Microsoft Excel: A lived 17,260 days and B lived 17,259 days.

That seems to make "sense" since ... although in different calendar years ... they were both born on the same "day" (December 18) but Person A lived an extra day in March (dying on March 21 instead of March 20) while Person B did not live for that extra day in March (dying on March 20 instead of March 21). So, it makes sense that the March 21 decedent (Person A) has lived one extra day more than the March 20 decedent (Person B) ... that is, Person A lived 17,260 days which is one day more than Person B who lived 17,259 days.

So, the only thing that is truly "different" between Person A and B is ... the actual calendar years that they lived through ... and thus "how many leap years / leap days did each person live through." (I think?)

Person A has lived through 12 leap days: in 1948, 1952, 1956, 1960, 1964, 1968, 1972, 1976, 1980, 1984, 1988, and 1992.

Person B has lived through 12 leap days: in 1908, 1912, 1916, 1920, 1924, 1928, 1932, 1936, 1940, 1944, 1948, and 1952.

Using Method One (above), Person A lived one extra day more than Person B.

Method Two

Person A: From December 18, 1946 to December 18, 1993 is exactly 47 years. So, A celebrates his 47th birthday. The date of death on March 21, 1994 is 93 days after the birthday. (using Excel or viewing a calendar)

Person B: From December 18, 1904 to December 18, 1951 is exactly 47 years. So, B celebrates his 47th birthday. The date of death on March 20, 1952 is 93 days after the birthday. (using Excel or viewing a calendar)

Using Method Two (above), Person A lives 47 years and 93 days. Person B also lives 47 years and 93 days. (There is no "one day" difference.)

Method Three

I tried to use the Wikipedia template located at: Template:age in years and days.

Typing in these dates and values yields the following results:

Person A:

{{age in years and days|1946|12|18|1994|03|21}}

yields:

47 years, 93 days

Person B:

{{age in years and days|1904|12|18|1952|03|20}}

yields:

47 years, 93 days

So, Method Three (above) agrees with Method Two (above) ... Person A and Person B died at exactly the same age.

Method Four

I also tried to use the Wikipedia template located at: Template:age in days.

Typing in these dates and values yields the following results:

Person A:

{{age in days|1946|12|18|1994|03|21}}

yields:

17260

Person B:

{{age in days|1904|12|18|1952|03|20}}

yields:

17259

So, Method Four (above) agrees with Method One (above) ... Person A and Person B did not die at exactly the same age, but one day off.

Question

Can anyone help me understand the difference / distinction / discrepancy between these four methods? I seem to be missing something, but I cannot figure out what. Thanks. Where is my reasoning flawed?

Methods One and Four agree that "A" lives one day longer than "B". (17,260 versus 17,259)

Methods Two and Three agree that "A" and "B" live exactly the same length of time. (47 years and 93 days)

So, perhaps the word "year" means a different thing for Person A than it does for Person B?

That is, the word "year" means 365 days in some cases ... but it means 366 days in some other (leap-year) cases.

That might seem to cause the discrepancy.

However, Person "A" has lived during 12 leap years/days ... and Person "B" has also lived during 12 leap year/days.

Thus, for both persons, the word "year" means 366 days in 12 years of their lives ... and the word "year" means 365 days in the other 35 years of their lives. They have both lived through 12 leap years and 35 normal years (thus, 47 full years in total) ... plus a fractional piece of yet another (i.e., their 48th) year.

Where is my thinking flawed? Thanks. (Joseph A. Spadaro 05:59, 25 September 2007 (UTC))[reply]

All the methods are correct, but methods 1 and 4 are more useful for comparing ages. The reason is that methods 2 and 3 each count "47 years", but those years have variable lengths, some being leap years and some not. As it works out, the 47 years between 12/18/1946 and 12/18/1993 contain 12 leap days (48, 52, 56, 60, 64, 68, 72, 76, 80, 84, 88, 92) while the 47 years between 12/18/1904 and 12/18/1951 contain 11 leap days (08, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48). Note that 1952 is not in the 47 year period in the second case. StuRat 07:01, 25 September 2007 (UTC)[reply]
Incidentally, had methods 2 and 3 counted from death back in time, the 47 years in each period both would have 12 leap years: 03/21/1947 to 03/21/1994 (48, 52, 56, 60, 64, 68, 72, 76, 80, 84, 88, 92) and 03/20/1905 to 03/20/1952 (08, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52). The number of additional days would be 93 from 12/18/1946 to 03/21/1947 but only 92 from 12/18/1904 to 03/20/1905. Thus, you would get ages of 47 years, 93 days and 47 years, 92 days, respectively. The lesson ? Don't use variable sized units if you want an accurate result. StuRat 07:19, 25 September 2007 (UTC)[reply]

Follow-up. The issue is that the 1952 leap day is not counted as part of a "year", but as a separate day, using methods 2 and 3. The period used for the final year is 12/18/1950 to 12/18/1951, which does not include February 29, 1952. Thus you have an extra leap day, not part of the "47 years". This doesn't happen with the other person because his year of death, 1994, was not a leap year. So, while both people had 12 leap days in their lives, methods 2 and 3 only count, for the person who died in 1952, 11 of those in the "years" and one as a separate day, while they count all 12 of those in the "years" and none as a separate day, for the person who died in 1994. StuRat 15:08, 25 September 2007 (UTC)[reply]

Here's a way we can simplify the problem: leave off the first 44 years, which contain 11 leap days in either case:

{{age in years and days|1904|12|18|1948|12|18}} = 44 years, 0 days

{{age in years and days|1946|12|18|1990|12|18}} = 44 years, 0 days


{{age in days|1904|12|18|1948|12|18}} = 16071

{{age in days|1946|12|18|1990|12|18}} = 16071

This leaves us with the portion that contains the "discrepancy":

{{age in years and days|1948|12|18|1952|03|20}} = 3 years, 93 days

{{age in years and days|1990|12|18|1994|03|21}} = 3 years, 93 days


{{age in days|1948|12|18|1952|03|20}} = 1188

{{age in days|1990|12|18|1994|03|21}} = 1189

Now, let's break down how those calcs are done:

{{age in days|1948|12|18|1949|12|18}} = 365

{{age in days|1949|12|18|1950|12|18}} = 365

{{age in days|1950|12|18|1951|12|18}} = 365

{{age in days|1951|12|18|1952|03|20}} = 93 <- Leap day included


{{age in days|1990|12|18|1991|12|18}} = 365

{{age in days|1991|12|18|1992|12|18}} = 366 <- Leap day included

{{age in days|1992|12|18|1993|12|18}} = 365

{{age in days|1993|12|18|1994|03|21}} = 93

So, by shifting the leap day out of one of the "years" and into the days counted separately, it appears that an equal length of time has passed, when, in fact, the 2nd interval is a day longer. Note that all ranges were assumed to be from noon on the starting day to noon on the ending day (or from the same time on both days, in any case). StuRat 16:26, 25 September 2007 (UTC)[reply]
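The whole discrepancy can be reproduced in a few lines; Python's date subtraction counts exact days, like methods 1 and 4 (variable names are mine):

from datetime import date

a = (date(1994, 3, 21) - date(1946, 12, 18)).days  # person A
b = (date(1952, 3, 20) - date(1904, 12, 18)).days  # person B
print(a, b)  # 17260 17259 -- person A lived one day longer

# The final "93 days" of methods 2 and 3 hide the difference:
print((date(1952, 3, 20) - date(1951, 12, 18)).days)  # 93, includes Feb 29, 1952
print((date(1994, 3, 21) - date(1993, 12, 18)).days)  # 93, contains no leap day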

PRIME NUMBERS

Is there any general way to generate prime numbers? 82.146.161.212 12:12, 25 September 2007 (UTC)MAT.[reply]

Yes, see Primality test. You can simply try numbers one by one and use one of these tests to check whether they are prime. However, these methods are very inefficient and a lot of computing power is needed to find very large prime numbers. 130.88.79.43 12:38, 25 September 2007 (UTC)[reply]
The most common way to generate all prime numbers in an interval below 10^18 (which has been reached from 0 with this method) is the Sieve of Eratosthenes. Other methods are better for finding a limited number of much larger prime numbers. If you are interested in general formulas with no computational value (because they are too slow) then see formula for primes. PrimeHunter 14:37, 25 September 2007 (UTC)[reply]
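For the record, here is the sieve in its simplest form (nothing like the segmented, optimized versions used to reach 10^18, but the same idea):

def primes_up_to(limit):
    """Return all primes <= limit by striking out multiples (Sieve of Eratosthenes)."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # p survived, so it is prime; strike out its multiples from p*p upward.
            for m in range(p * p, limit + 1, p):
                is_prime[m] = False
    return [n for n, flag in enumerate(is_prime) if flag]

print(primes_up_to(50))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]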
A more specific question would evoke a more specific answer. Do you want to generate, say, the first 200 prime numbers in order? Do you want a number within a certain range, guaranteed to be prime? Do you want a handful of numbers of arbitrary size, guaranteed to be prime? Do you really want to test primality?
On a large scale, prime numbers are distributed with some regularity (see the prime number theorem); on a small scale, they are too irregular to generate without testing. A sieve test strikes out all the higher multiples of 2, of 3, of 5, of 7, and so on; what remains at each stage are the primes, and their multiples are also struck. The Eratosthenes method can be improved for more efficiency, but cryptography currently depends on finding primes beyond the practical range of sieves. Happily, number theory has found other remarkably effective ways to decide whether a number is prime or composite, ways that do not attempt factoring. (That's important, because we do not know equally fast ways to factor.) One tool is Fermat's little theorem, which tells us that when p is a prime, then any positive integer a raised to the power p−1 is congruent to 1 modulo p. For example, we might test 51 by computing 2^50 (mod 51); we get 4 instead of 1, so we know 51 is not prime. --KSmrqT 20:33, 25 September 2007 (UTC)[reply]
To be precise, a must be coprime to p. And note that a Fermat test can prove compositeness but cannot prove primality by itself. For example, 35^50 (mod 51) is 1 although 51 is composite. If p is not of a special form (for example with known factorization of p−1 or p+1) then much slower methods are needed for primality proving of large numbers, for example the complicated elliptic curve primality proving. PrimeHunter 00:12, 26 September 2007 (UTC)[reply]
There are two variations of the theorem. One says a^p ≡ a (mod p); the other takes one step back and says a^(p−1) ≡ 1 (mod p). I was a little imprecise, but the first version is sleight of hand — it works even when a ≡ 0 (mod p) — to make the mathematics look neat at the expense of the essential insight (see Proofs of Fermat's little theorem). If p is prime then a and p are coprime for every positive integer a, so long as we make the natural assumption that 0 < a < p.
And, yes, we could go into Carmichael numbers and elliptic curve tests; but let's instead wait to see if we can get more clarification about what's wanted. My point in mentioning Fermat was not to give a full exposition, but simply to make credible the counterintuitive idea that we could test primality without involving factors. --KSmrqT 04:02, 26 September 2007 (UTC)[reply]
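Both worked examples above are one-line modular exponentiations; as noted, this can only prove compositeness, never primality by itself:

print(pow(2, 50, 51))    # 4, not 1: base 2 proves 51 composite
print(pow(35, 50, 51))   # 1: base 35 fails to witness that 51 is composite
print(pow(2, 100, 101))  # 1: consistent with 101 being prime (not a proof)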

Irrationality of the square root of 5

I'm trying to prove that √5 is irrational; however I've gotten stuck. I tried a basic proof by contradiction, which can be used to prove that √2 and √3 are irrational, but it did not work:

Assume √5 is rational; then √5 = p/q with p/q being an irreducible fraction. Basic algebra gives

5q^2 = p^2
Basic number theory states that an odd number times an odd number is odd and an odd number times an even number is an even number. Thus, whether p^2 is odd or even depends on whether p is odd or even. Since both sides are integers, they must both be odd or they must both be even. Therefore, either p^2 and 5q^2 are both even or they are both odd. More basically, either p and q are both even or they are both odd. However, if they are both even, then they both have a factor of 2, and thus p/q is not an irreducible fraction. Thus, they are both odd, which means p = 2m + 1 and q = 2n + 1.

Substituting this in for the equation gives:

5(2n + 1)^2 = (2m + 1)^2, i.e. 20n^2 + 20n + 5 = 4m^2 + 4m + 1, i.e. 20n^2 + 20n + 4 = 4m^2 + 4m
At this point in the proofs for the irrationality of √2 and √3, it could be shown that one side of the equation was even and the other side odd, a contradiction. However, that is not the case here:

20n^2 + 20n + 4 = 4m^2 + 4m

Both sides are even. Because of this I am stuck and out of ideas. Does anyone else have any hints or see some place that I messed up? Dlong 18:39, 25 September 2007 (UTC)[reply]

I think you almost had it, but you need an ever so slightly different trick. Factor out 4 from both sides and rewrite slightly: 5·n(n + 1) + 1 = m(m + 1).
Now note that both constant factors 5 (left side) and 1 (right side) are odd, so multiplying by them does not change the odd/even status of another integer. What can you say now? Baccyak4H (Yak!) 18:58, 25 September 2007 (UTC)[reply]
I see now, thank you. However, I am curious as to how one form of algebraic manipulation results in both sides being even while another results in one side being even and the other odd. Is this simply because we are making a bad assumption (that √5 is rational) or is it something else? Dlong 19:13, 25 September 2007 (UTC)[reply]
I am not sure I understand what it is you are curious about. If it is, "I had both sides even and you had one even, one odd", note I took your expression, both sides even, and factored out the even 4. We don't know in advance how this will turn out, but it does turn out that yes, now one side is even, one odd. But that is merely a consequence of the original assumption, that p and q are relatively prime integers whose squares are in a ratio of 5:1. I am not sure what you meant in parentheses: sqrt(5) is most certainly not even :-) oops, misread. That the implication is a contradiction simply means that yes, the assumption is wrong. I hope that helps. Baccyak4H (Yak!) 19:24, 25 September 2007 (UTC)[reply]
"Odd" and "even" are important concepts for the irrationality of the square root of two because "even" is the same as "divisible by two", and "odd" is the same as "not divisible by two". For the proof at hand, it's easier to use divisibility by five instead:
  1. Since 5q2 is a multiple of 5, p2 must be a multiple of 5.
  2. Since 5 is prime, it follows that p is a multiple of 5
  3. so p2 is divisible by 25
  4. so 5q2 is divisible by 25
  5. so q2 is divisible by 5
  6. so q is divisible by 5 (contradiction: p/q is not in lowest terms)
By the way, there are more straightforward proofs that square roots of integers (other than perfect squares) are irrational. Specifically, if you take the square of a non-integer fraction, the result is always a non-integer fraction. It follows that the square root of an integer is always either an integer or an irrational number. Jim 19:29, 25 September 2007 (UTC)[reply]
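One way to flesh out that last remark (a sketch, not the only route): suppose √5 = p/q in lowest terms with q > 1. Then p^2 = 5q^2, so any prime factor of q divides p^2 and hence p, contradicting lowest terms; thus q = 1 and √5 would have to be an integer. Since 2^2 = 4 < 5 < 9 = 3^2, no integer squares to 5, so √5 is irrational.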

Schwarzschild Radius

I need help with the equation for the Schwarzschild radius. The equation is:

r_s = 2GM/c^2

where

r_s is the Schwarzschild radius,
G is the gravitational constant,
M is the mass of the gravitating object, and
c is the speed of light.


Since I assume that all but the mass of the object are constants, can someone simplify this equation for me so there is only a single variable, letting me plug in a mass of X grams and get a radius? Thanks for any help :) -Icewedge 19:16, 25 September 2007 (UTC)[reply]

The equation is r_s = (1.48×10^−30 meters/gram) × mass. Jim 19:39, 25 September 2007 (UTC)[reply]
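In case it helps to see where that constant comes from: 2G/c^2 = 2 × (6.674×10^−11 m^3 kg^−1 s^−2) / (2.998×10^8 m/s)^2 ≈ 1.48×10^−27 m/kg = 1.48×10^−30 m/g; the exact last digit depends on which value of G you use.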
Ok, thanks for the help. I have used what you told me to try and solve my problem; I plugged in the estimated mass of the universe, 10^55 g, and got out a radius of something like 5×10^9 light years. Have I applied your help correctly? -Icewedge 00:17, 26 September 2007 (UTC)[reply]

"Hello"

Hi... I was wondering how you would model subtracting integers?

One of them was (-4) - (+3)

I first started with four negatives, then added three zero pairs, and took away three positives, leaving seven negatives. I don't that's right at all. Can someone help, please? —Preceding unsigned comment added by Writer Cartoonist (talkcontribs) 22:01, 25 September 2007 (UTC)[reply]

Negative_number#Arithmetic_involving_signed_numbers might help. If you're asking whether (-4)-(3)=(-7), the answer is yes. This can be imagined as jumping four spaces left on the number line, then jumping three spaces not-right, in other words left. I recommend that, in future questions, you write more clearly though. "I don't ____ that's right at all" is plain sloppy. From a fellow Houstonian, no less. Black Carrot 22:43, 25 September 2007 (UTC)[reply]

September 26

Square mean root?

The Root Mean Square is also known as the quadratic mean.
Is the inverse "Square Mean Root" recognized as a unique function (since it equals the mean of the "A + B" median and the geometric mean) and, if so, does it have an alternative name, equivalent to RMS's "quadratic mean"?  ~Kaimbridge~00:08, 26 September 2007 (UTC)[reply]

It's covered by Generalized mean but not given a name there; you could call it a power mean with power = 0.5:

M_{1/2}(A, B) = ((√A + √B)/2)^2

The Heronian mean is similar. 87.102.23.3 01:52, 26 September 2007 (UTC)[reply]
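For what it's worth, expanding that power mean confirms the identity mentioned in the question: ((√A + √B)/2)^2 = (A + 2√(AB) + B)/4 = (1/2)·((A + B)/2) + (1/2)·√(AB), the average of the arithmetic and geometric means.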

probability of a set containing a particular subset

I was watching a TV show and saw a lottery game that works as follows: 20 numbers are chosen randomly from the set 1 ≤ n ≤ 80. Players select 10 numbers from the same set of 80. If the set of 10 numbers is a subset of the 20, the player wins. What would be the probability of winning? I know (assuming both sets are chosen randomly) there are

C(80, 20)

ways of choosing the set of 20 and

C(80, 10)

ways of choosing the set of 10, as well as

C(20, 10)

possible subsets of size ten within the set of twenty. I'm still not sure how to determine the probability that a particular set of 10 (chosen from the set of 80) is in the set of 20. How do I determine this? - SigmaEpsilonΣΕ 03:55, 26 September 2007 (UTC)[reply]
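(In case it helps, and assuming I haven't slipped: of the C(80, 10) equally likely tickets, exactly C(20, 10) lie entirely inside the drawn 20-set, so the probability is C(20, 10)/C(80, 10) = 184756/1646492110120 ≈ 1.12×10^−7, about 1 in 8.9 million.)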