Wikipedia:Reference desk/Mathematics

Revision as of 05:12, 25 April 2007



April 19

More Hölder inequality fun

I have another question that relates to Hölder's inequality. This is from a qualifying exam a year ago (that I failed, unfortunately).

Suppose for some . We have two statements to prove:

  1. Show that for all x > 0.
  2. Show that .

Now, I can do number 1. Let 1/q = 1 - 1/p, and note that for any x > 0, the function is in . Use Hölder's inequality with the original f and this g, and you have the result. (Since g(x) is a constant function, the Lebesgue and Riemann integrals agree, so it's not hard to get the right hand side.)

For number 2, I'm just at a loss. I feel like I want to use number 1, and have some squeeze theorem result or some such, but I can't get anywhere. Any hints would be greatly appreciated. (Again, this is not homework, this is just a problem from a qual that I could never do.) Thanks in advance! –King Bee (τγ) 14:21, 19 April 2007 (UTC)[reply]

Ooh, I just got an idea. How about this: Number 1 gives you that . Note that for all x, . Now let be any sequence such that with n. Set . Then . Since and , we can apply the monotone convergence theorem to get . So this limit exists and is finite, and since , the limit in question is 0.
Commentary. I see no flaw here. Can someone step in and let me know if this looks good to them as well? Thanks! –King Bee (τγ) 14:39, 19 April 2007 (UTC)[reply]
(I suppose I should add that , since I didn't state that explicitly.) –King Bee (τγ) 14:42, 19 April 2007 (UTC)[reply]
Unfortunately, part 1 does not show that . (It does show that it is in but that is not good enough for the rest of the argument.) Stefán 18:20, 19 April 2007 (UTC)[reply]
You are correct. It shows for all x. Any hints on getting around this snafu? –King Bee (τγ) 19:55, 19 April 2007 (UTC)[reply]
Take the quantity to the p-th power and look at the integral . Then read the article on Hardy's inequality. Stefán 19:10, 20 April 2007 (UTC)[reply]
Ahh, thanks for the help. I was really hoping for something a little more obvious; I have never even heard of Hardy's inequality. Thanks! –King Bee (τγ) 22:05, 22 April 2007 (UTC)[reply]

I am calculating a percentage of work items cleared on any particular day. For example, if I had cleared 319 out of 405 work items then the ratio cleared would be 0.787654321. Now I know from the Significant_figures article and the Spurious Accuracy reference that I should round my result to 3 significant digits, giving 0.788, or 78.8%. My question is this: I know that there is a reporting error of around 2-3% for the number of work items and the number of cleared work items. Would this reduce my allowed significant digits to 2, or would I still be allowed to use 3 significant digits? How exactly does the margin of error in my initial measurements affect the significant digits? Thanks, -Czmtzc 14:34, 19 April 2007 (UTC)[reply]

You should give the number of significant digits that is supported by the accuracy of your input values - 3 significant digits is only a rough guide. In your case, you say you have uncertainties of up to 3% in each of your two input figures, so your total uncertainty could be up to 6%. In other words, 319 out of 405 could in fact be anywhere from 314 out of 411 (76.4%) to 324 out of 399 (81.2%). To correctly reflect this level of accuracy, I suggest you round your results to the nearest 5% - so 78.8% would be rounded up to 80%, whereas 76.2% would be rounded down to 75%. Gandalf61 14:54, 19 April 2007 (UTC)[reply]
I agree with what Gandalf said (although the two errors aren't strictly additive, but it's close enough for two low error rates around 2-3%). However, there was an error in what you said, Czmtzc. You said that, even if there was no error, you should round to three significant digits. While that would be true if dealing with real numbers given to 3 significant figures, integers are different. In the case where an exact count is known, this is infinite accuracy. So, you can report as many significant digits as you like. If 2 of 3 cars are running, I don't have to round that to 70%, I can say 67%, or 66.7%, or 66.66666666666667%, if I prefer. StuRat 19:40, 19 April 2007 (UTC)[reply]
I see what you are saying StuRat about exact counts. That is why I was careful to mention that I didn't have perfect confidence in the source data, so I also appreciate the answer you gave Gandalf61. Thanks to All. --Czmtzc 20:38, 19 April 2007 (UTC)[reply]
You're welcome. StuRat 04:36, 20 April 2007 (UTC)[reply]
Your question highlights the problem of using the number of figures to inform the reader about accuracy. As you might have realised, it works flawlessly only when the number that is the center of the interval has a finite decimal expansion (in the base you are using) and changing the last of those figures by one takes you the edges of the interval. In all other cases, you'll have to throw away some accuracy or pretend to have more of it than you actually do. :-( And, of course, it might break down even more if an interval isn't a good description of the uncertainty in the first place. —Bromskloss 12:13, 20 April 2007 (UTC)[reply]
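For what it's worth, here is a crude worst-case version of that bound calculation, as a throwaway Python sketch. The 3% relative error on each count is an assumption taken from the question, and combining the errors this way is deliberately pessimistic:

    # Worst-case bounds on the cleared ratio, assuming an independent relative
    # error of err on each count (err = 0.03 is the questioner's upper figure).
    def ratio_bounds(cleared, total, err=0.03):
        lo = cleared * (1 - err) / (total * (1 + err))
        hi = cleared * (1 + err) / (total * (1 - err))
        return lo, hi

    print(ratio_bounds(319, 405))  # roughly (0.74, 0.84), so quoting 78.8% overstates the precision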


April 20

The integral of cosz / z(z^2 +8) dz?

Let C denote the positively oriented boundary of the square whose sides lie on the lines x = ±2 and y = ±2.

Evaluate ∫_C cos z / (z(z^2 + 8)) dz.

My problem with this question is that up until now I could solve the questions by using the Cauchy integral formula f(z_0) = (1/(2πi)) ∫_C f(z)/(z − z_0) dz.

Because the denominator was just a "z minus something", it was easy to use. Now this doesn't have that, so I don't even know if this formula is correct. Which formula should I use?

This is just a question from the text and I have the answer, πi/4 but that hasn't helped me.. :) Thanks

Hint 1: z^2 + 8 can be factored into two linear terms, like (z − 2√2 i)(z + 2√2 i).
Hint 2: What goes in "the denominator" depends only on what you want to put there. You don't have to put everything that is in your other denominator in there at once. --Spoon! 05:47, 20 April 2007 (UTC)[reply]
Just to be clear, is the following the integral in question?
∮_C cos z / (z(z^2 + 8)) dz
If so, consider the partial fraction expansion of
1/(z(z^2 + 8)),
especially as it relates to poles and residues. --KSmrqT 06:42, 20 April 2007 (UTC)[reply]
KSmrq's suggestion regarding partial fractions is an excellent approach to problems like this one in general; in the particular example you have, however, only one of the integrand's poles lies inside the square, so the Cauchy integral formula you already know is enough.
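Spelling that out (a standard residue-theorem computation, added for concreteness): the integrand has poles at z = 0 and z = ±2√2 i ≈ ±2.83i, and only z = 0 lies inside the square |x| ≤ 2, |y| ≤ 2, so

    \oint_C \frac{\cos z}{z(z^2+8)}\,dz
      = 2\pi i \cdot \operatorname{Res}_{z=0} \frac{\cos z}{z(z^2+8)}
      = 2\pi i \cdot \frac{\cos 0}{0^2+8}
      = \frac{2\pi i}{8}
      = \frac{\pi i}{4},

which matches the textbook answer quoted above.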

Inter-rater agreement: Are there different alternatives for calculation of the confidence interval for kappa?

We have developed a new method for blood typing that is particularly useful in certain clinical situations where conventional blood typing is difficult. For 5 antigens we have found 100% concordance between our new method and the conventional method in 151 blood samples that have been typed by both methods. These results are part of a manuscript that we have recently submitted to an international hematology journal. One of the referees asked us which statistics we had used to assess concordance between the two methods. Initially we thought that this would be an easy problem to solve. We intended to calculate the kappa coefficient with 95% confidence interval (CI) for each antigen and then, for all 5 antigens, to calculate the pooled kappa value with 95% CI. Because there was 100% concordance it is obvious that kappa = 1. To our surprise we found that the standard error of kappa in our case was 0, but by scrutinizing the formula for the standard error of kappa (Altman, Practical Statistics for Medical Research, Chapman & Hall, London 1991) it is evident that the standard error of kappa always equals 0 when there is 100% concordance. Intuitively, we found this difficult to understand: we would be much more confident in our new blood typing method if we had shown 100% concordance with the conventional method in a very large number of samples (for instance 830) than in a small number of samples (for instance 12). Hence, we expected that the confidence interval would be narrower if we had tested a large number of samples rather than a small number. To us it seems as if the 95% CI is not very useful for kappa values close to 1. If we assess the inter-rater agreement using the MedCalc statistical package on a data set of 50 samples in which there is concordance for 49 samples and disagreement for 1 observation, we obtain the following results: kappa = 0.960, 95% CI from 0.882 to 1.038. This CI does not make sense because kappa cannot exceed 1.

Our question is as follows: Are there other ways of estimating the 95% confidence interval for kappa that are more suitable for kappa values of 1 or close to 1?

129.240.43.46 09:13, 20 April 2007 (UTC)[reply]

The usual standard error formulae for most statistics (including kappa) are appropriate for large samples, but are based on approximations that can be highly inaccurate when the sample size is low or the distribution of the data is extreme, and especially when both occur at once. A simple example is the usual formula for the standard error of a proportion, √(p(1 − p)/n), which is 0 for p = 0 or p = 1 regardless of whether the sample size n is 1 or 1000. There are algorithms that solve the small sample aspect of the problem, known as exact tests, which can also help mitigate extreme data distributions. Statistical packages such as SPSS or the SAS System can calculate exact tests and confidence intervals for kappa (see relevant SAS command here, and further background here), if you want to try that route. Another approach might be some form of Bayesian analysis; I recall that Alan Agresti has shown this can at least solve the problem of an extreme proportion I mentioned above. This would probably require some statistical expertise, though. -- Avenue 12:12, 20 April 2007 (UTC)[reply]
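As a concrete illustration of the exact-interval idea, here is a small Python sketch of a Clopper-Pearson interval. Note the hedges: this is for the underlying concordance proportion, not for kappa itself, and it is not the SAS/SPSS procedure mentioned above, just the same principle in miniature:

    # Exact (Clopper-Pearson) confidence interval for a binomial proportion.
    # With 151/151 concordant samples the lower bound is about 0.976; with
    # 12/12 it is only about 0.74 -- capturing the intuition that more samples
    # help even when agreement is perfect.
    from scipy.stats import beta

    def clopper_pearson(k, n, alpha=0.05):
        lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
        hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
        return lo, hi

    print(clopper_pearson(151, 151))
    print(clopper_pearson(12, 12))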

Freaky triangles - Where did those two squares come from???

I was reading this pop-science magazine at my dentist the other day, and they have these brain-teasers in the back. I can usually figure these things out pretty easily, but one of them stunned me, and I haven't been able to figure it out. It's a picture of a triangle composed of smaller shapes (four other triangles and two sort-of stubbed tetris pieces) that are rearranged in a different way to form the exact same triangle, but this time there are two additional squares. I drew the picture and uploaded it; this is what it looks like:

Please, for the sake of my sanity, can someone please explain where the hell those two blue squares come from? This is killing me! --Oskar 14:27, 20 April 2007 (UTC)[reply]

Neither of the 2 big figures is a triangle. The 2 "sides" pointing upwards are not straight lines. Got it now? Zain Ebrahim 14:38, 20 April 2007 (UTC)[reply]
Well, now I feel profoundly stupid. Of course they aren't straight lines! I can't believe I missed that! I see it now. Thank you! --Oskar 14:44, 20 April 2007 (UTC)[reply]
Longer explanation, in case anyone is still puzzled - if you subtract the first "triangle" from the second, you find that you are left with two long thin parallelograms. If we place the origin at the bottom left hand corner, the co-ordinates of the vertices of the left-hand parallelogram are (0,0), (3,7), (5,12), (2,5), and the second is symmetrically placed on the right hand side. The area of each of these parallelograms is 1 unit (i.e. (3x5)-(2x7)), so the area of the second "triangle" exceeds that of the first by 2 units, which accounts for the two squares. Gandalf61 15:06, 20 April 2007 (UTC)[reply]
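Another quick way to see it, using Gandalf61's coordinates: the two segments that are supposed to form one straight "side" have different slopes,

    slope from (0,0) to (3,7) = 7/3 ≈ 2.333,    slope from (3,7) to (5,12) = 5/2 = 2.500,

so the "hypotenuse" bends slightly, and that bend is where the extra area hides.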
Quite a classic puzzle, a nice explanation is here[1]. It seems that Fibonacci numbers come into play when selecting the lengths of sides. I'm sure this puzzle had a name, but it escapes me. --Salix alba (talk) 18:06, 20 April 2007 (UTC)[reply]

This is a variant of the Missing square puzzle. Root4(one) 18:16, 20 April 2007 (UTC)[reply]

I note the article also calls it Curry's paradox.Root4(one) 18:24, 20 April 2007 (UTC)[reply]
I seem to remember that this, or similar, was one of Sam Loyd's puzzles.…81.159.11.43 10:43, 21 April 2007 (UTC)[reply]

Well, I will never understand the numbers, but I sat a straight edge along both "upward pointing" sides of the triangles, and they look like straight lines to me. Do I just need new glasses, or a new straight edge? Bielle 10:17, 26 April 2007 (UTC)[reply]


April 21

How many different ways can I express 1,-1,1,-1, ... as a one-variable function?

Infinitely many, I suppose, but what I'm looking for is all the interesting variations. For example,

f(x) = (-1)^(x-1)

I guess works, but how could I represent this using only modular arithmetic, or trig, or boolean algebra, topology, fractal geometry, or some other area of mathematics that I have never even heard of yet. Basically I'm wondering if there is a list somewhere that shows all the (interestingly) different ways to do it. I know it's not a well-defined question, but I'm curious. NoClutter 00:16, 21 April 2007 (UTC)[reply]

Ah, but what is interesting? You have stuff like various periodic sine and cosine graphs, you can use f(n) = (−1)^(n+1), f(n) = −(i^2)^n = −i^(2n), or a function you think is interesting enough and when you put a value in you get −1, then power it to n. If you get bored, look around at this on OEIS. x42bn6 Talk 00:26, 21 April 2007 (UTC)[reply]
Is a function for that possible in modular arithmetic? Say f(n) = 2n + 1 (mod 4) which gives 1, 3, 1, 3, ... And 3 ≡ -1 (mod 4), so does this work? –Pomte 00:41, 21 April 2007 (UTC)[reply]
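A quick numeric check of a few of the candidates mentioned so far (a throwaway Python sketch; n starts at 1, matching f(x) = (-1)^(x-1) above, and the last entry is a modular-arithmetic variant in the spirit of Pomte's suggestion):

    import math

    candidates = {
        "(-1)**(n-1)":       lambda n: (-1) ** (n - 1),
        "cos(pi*(n-1))":     lambda n: round(math.cos(math.pi * (n - 1))),
        "1 - 2*((n-1) % 2)": lambda n: 1 - 2 * ((n - 1) % 2),
        "(2*n + 1) % 4 - 2": lambda n: (2 * n + 1) % 4 - 2,
    }
    for name, f in candidates.items():
        print(name, [f(n) for n in range(1, 7)])   # each prints [1, -1, 1, -1, 1, -1]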

How about some infinite series? =).

lol.--Kirbytime 01:24, 21 April 2007 (UTC)[reply]

Test question

Hi Reference desk. I got a question on a math test wrong, but I don't know why. I've tried to solve it, but don't understand. I was wondering if you could help. The question is as follows:

How many square meters is 7,000,000 square centimeters ?

[A] 70,000,000,000 m square [B] 7,000 m square [C] 700 m square [D] 0.007 m square [E] None of these

I chose E, thinking the answer is 70,000 (7,000,000 divided by 100 = 70,000). What did I do wrong? Thanks in advance, Nick.

A meter is 100 cm, so a square meter is not 100 square cm but 100^2 square cm.
Imagine a square with side length 1 meter. How many small centimeter squares fit into this large square? –Pomte 00:43, 21 April 2007 (UTC)[reply]

Ah, okay. So it would be 7,000 square meters, with 1,000 square centimeters per square meter. Thank you very much Pomte! Nick.

Not quite. 100^2 is 10,000, not 1,000, so there are 10,000 square centimeters per square meter. Gandalf61 14:32, 21 April 2007 (UTC)[reply]

Just remember it like this

1 metre = 100 cm

(1 meter)^2 = (100 cm)^2

1^2 meter^2 = 100^2 cm^2

1 meter^2 = 10,000 cm^2

202.168.50.40 01:46, 22 April 2007 (UTC)[reply]
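Putting that conversion factor back into the original question, just to close the loop:

    7,000,000 cm^2 ÷ 10,000 cm^2 per m^2 = 700 m^2,

which is choice [C], so the intended answer was not [E] after all.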

Prime Numbers

Is (10^n)+1 always prime for all positive whole numbers n? It seems to be as far as I can calculate. If so, whose law have I rediscovered?

You have demonstrated the law of laziness. We can, for example, factor 10^9 + 1 as 11 × 90909091.
Furthermore, that factor of 11 will show up whenever n is odd.
No simple formula is known to guarantee primes, but this one fails early and easily. --KSmrqT 03:36, 21 April 2007 (UTC)[reply]
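The failure is easy to see by machine as well; a throwaway Python sketch using plain trial division:

    # Smallest factor of 10**n + 1 for small n. The conjecture already fails
    # at n = 3, and 11 divides 10**n + 1 whenever n is odd.
    def smallest_factor(m):
        d = 2
        while d * d <= m:
            if m % d == 0:
                return d
            d += 1
        return m  # m itself is prime

    for n in range(1, 10):
        m = 10 ** n + 1
        f = smallest_factor(m)
        status = "prime" if f == m else f"composite (smallest factor {f})"
        print(n, m, status, "| divisible by 11" if m % 11 == 0 else "")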
The Fibonacci sequence has something to do with primes (not the actual sequence, but something that uses it). --TeckWiz is now R ParlateContribs@(Lets go Yankees!) 03:39, 21 April 2007 (UTC)[reply]
Actually, as formula for primes mentions, there is a simple formula for primes of the form floor(A^(3^n)), where A is Mills' constant. But perhaps what KSmrq meant is that there is no simple and easily computable formula... Mills' formula is not easily computable, because it's much easier to generate primes by other means than to calculate A sufficiently accurately to use the formula. —David Eppstein 04:06, 21 April 2007 (UTC)[reply]
I pondered what phrase to use, and decided "simple" was simplest. ;-)
Even so, I'm sure I spent more time in answering the question than was spent in generating it. Ah, well; primes seem to hold an endless fascination for some folks. We should mention Mersenne primes, which presently have no serious competition as the largest known primes. However, these numbers, of the form 2^p − 1 (where p is a prime) are not guaranteed to be prime; hence the Great Internet Mersenne Prime Search (GIMPS).
I mention the factor of 11 for three reasons. First, it demolishes the proposal by eliminating half of the candidates at one stroke. Second, it gives me another chance to illustrate the utility of ring homomorphisms from the integers to the integers modulo n, which I mentioned in response to a recent post. And third, it rewards those who know this supplemental variation of casting out nines.
For, casting out elevens is almost as easy, and another handy check on arithmetic. Where casting out nines sums all the digits, casting out elevens alternately adds and subtracts. Consider 267583; working from right to left, we have 3−8+5−7+6−2 = −3, which is congruent to 8 modulo 11. Thus any number of the form 10…01, with an even number of zeros, is immediately seen to be divisible by 11. (It's interesting to note that the designers of ISBN-10 knew that this check catches digit transpositions, a common mistake that ISBN-13 does not guard against.) --KSmrqT 07:54, 21 April 2007 (UTC)[reply]
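The alternating-sum check is equally short in code; a sketch:

    # Casting out elevens: alternately add and subtract digits from the right;
    # the result is congruent to the original number modulo 11.
    def cast_out_elevens(n):
        s, sign = 0, 1
        for digit in reversed(str(n)):
            s += sign * int(digit)
            sign = -sign
        return s

    print(cast_out_elevens(267583), 267583 % 11)   # -3 and 8, which agree mod 11
    print(cast_out_elevens(10**9 + 1) % 11)        # 0: 1000000001 is divisible by 11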
Kenneth Rosen's "Elementary Number Theory and its applications" states that no polynomial in n (n a positive integer) variables generates only primes and says the proof is beyond the scope of the book. StatisticsMan 05:41, 23 April 2007 (UTC)[reply]
Formula for primes gives a proof in three lines, but I suppose it's not quite elementary. Algebraist 16:21, 23 April 2007 (UTC)[reply]


April 22

Math Songs

I wasn't sure where to post this, and I decided the Math desk would be marginally the better choice. I know of, at the moment, three really good sources of math-based music: [2], [3], and [4]. Can anyone suggest anything else? —The preceding unsigned comment was added by Black Carrot (talkcontribs) 08:23, 22 April 2007 (UTC).[reply]

Sorry I didn't sign that. Here's one other I didn't have a link to. [5] Black Carrot 09:23, 22 April 2007 (UTC)[reply]
Tom Lehrer wrote a couple of math songs, but his math-related ones tend to be more rare than the rest. One of his most famous ones has its own article, New Math (song). Maelin (Talk | Contribs) 12:27, 23 April 2007 (UTC)[reply]

Cubic Functions

Is there anything analogous to conics for higher degree polynomials (e.g. conics are to second degree polynomials as ... are to third degree polynomials)? --AMorris (talk)(contribs) 09:07, 22 April 2007 (UTC)[reply]

Degree 4 = quartic; degree 5 = quintic; degree 6 = sextic. Gandalf61 09:33, 22 April 2007 (UTC)[reply]
It depends on what you expect. A degree 2 polynomial in one (or more) variables is called a "quadratic", as in the famous quadratic formula. A degree 3 polynomial is a cubic. But conics, or conic sections, are special. Every degree 2 polynomial in two variables, x and y, implicitly defines a curve in the projective plane that is a conic section. With one variable the graph of the function is always a parabola, and with three variables we speak of quadrics (or quadric surfaces).
[Image caption: Elliptic curves for j = 1728]
Curves of higher degree acquire complications. The typical cubic curve, which is nonsingular, has no parametric form, and is called an elliptic curve for historical reasons. (These curves are not ellipses!) Every real-valued elliptic curve is equivalent to either of
by a linear transformation. Among these, the pair of curves
have a unique status, having to do with the number 1728 (which is 12^3) and some very deep mathematics. (The final route to the proof of Fermat's Last Theorem involved elliptic curves in a subtle and surprisingly beautiful way.)
The cubic Bézier curves which are so popular in computer graphics do not fall into one of these two families of elliptic curves. In fact, simply knowing that they are parametric curves tells us that they must be singular as algebraic curves. Thus somewhere on the curve a magnified view does not look like a line.
Consider the curve with Bézier control points (−3,−9), (1,−3), (−1,−3), (3,−9). As an implicit curve, it is
The singular point does not appear on the parametric curve; it is at the origin, (0,0). Using lines through this acnode, we can find every other curve point's parameter
Thus we see a profound difference between degree 2 and everything higher. Bézout's theorem tells us a line will intersect a curve as many times as the degree, if we count multiplicity. For degree 2, we can fix any point on the curve and use the family of lines through it, each of which gives a second intersection point, and thus a parameterization. For degree three, we must have a "double point" — a singularity — to accomplish the same thing. And the nonsingular cubics, the elliptic curves, have extraordinary properties not shared by curves of higher degree. --KSmrqT 16:30, 22 April 2007 (UTC)[reply]

Polynomial sequence

The generating function for the nth powers of the positive integers has the form P_n(x)/(1−x)^(n+1), where P_n(x) is a polynomial of degree n. In other words,
∑_{k=1}^∞ k^n x^k = P_n(x)/(1−x)^(n+1).

The first few polynomials in the sequence P_n are:
P_1(x) = x,  P_2(x) = x + x^2,  P_3(x) = x + 4x^2 + x^3,  P_4(x) = x + 11x^2 + 11x^3 + x^4,

and the P_n polynomials satisfy the recurrence relation:
P_(n+1)(x) = x(1 − x) P_n′(x) + (n + 1) x P_n(x).

Does anyone know if there is a name for this polynomial sequence ? I have searched the polynomial sequence articles here and at Mathworld, but I haven't found this particular sequence. Gandalf61 09:29, 22 April 2007 (UTC)[reply]

This [6] comes up on OEIS. Just skimmed it. Eulerian polynomials? --87.194.21.177 10:52, 22 April 2007 (UTC)[reply]
Ah, yes, thank you - the coefficients of these polynomials are the Eulerian numbers as described here and here ... but not, apparently, currently in Wikipedia, because "Eulerian number" redirects to Euler number, which is not the same thing. I feel an article coming on ... Gandalf61 14:24, 22 April 2007 (UTC)[reply]
Another way to describe the g.f. of n^m is
∑_{n≥0} n^m x^n = (xD)^m [1/(1 − x)],
where the xD operator is the application of differentiation followed by multiplication with x, and (xD)^m means m-times application of it. A generalization would be
∑_{n≥0} f(n) x^n = f(xD) [1/(1 − x)] for a polynomial f.
--85.179.17.199 16:10, 22 April 2007 (UTC)[reply]
As promised, I have now created an article on Eulerian numbers. Gandalf61 16:05, 23 April 2007 (UTC)[reply]
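A quick symbolic check of the formulas above, as a SymPy sketch. The recurrence used here is the natural one that follows from the generating-function definition (applying x·d/dx raises the power of k by one):

    import sympy as sp

    x = sp.symbols('x')

    # P_1(x) = x; P_{n+1}(x) = x*(1 - x)*P_n'(x) + (n + 1)*x*P_n(x)
    def P(n):
        p = x
        for k in range(1, n):
            p = sp.expand(x * (1 - x) * sp.diff(p, x) + (k + 1) * x * p)
        return p

    for n in range(1, 5):
        # nonzero coefficients of P_n (lowest degree first) -- the Eulerian numbers
        print(n, sp.Poly(P(n), x).all_coeffs()[::-1][1:])
        # check P_n against the defining generating function, term by term
        lhs = sum(k**n * x**k for k in range(1, 12))
        rhs = sp.series(P(n) / (1 - x)**(n + 1), x, 0, 12).removeO()
        assert sp.expand(lhs - rhs) == 0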

USB symbol

Is the USB (Universal Serial Bus) symbol a mathematical symbol? Or does it at least resemble one? Thanks very much for your response. --Mayfare 15:27, 22 April 2007 (UTC)[reply]

It resembles loosely both lowercase epsilon (ε) and capital sigma (Σ), used respectively to denote an infinitesimal quantity and show the process of summation. But more or less any shape can be matched to a symbol in some alphabet.…217.43.211.203 15:40, 22 April 2007 (UTC)[reply]
Besides looking like an epsilon and a sigma, I don't think the symbol means anything other than that the device can connect with triangles, squares, and circles, meaning it's made to be universal. --Wirbelwindヴィルヴェルヴィント (talk) 19:08, 22 April 2007 (UTC)[reply]
In a certain (pointing upward) orientation, it also resembles the letter psi, which may be used for various mathematical things. It also resembles the Unicode Mathematical Operators #22D4 (points downward), #22F2 (points to the right), and #22FA (points to the left). --Spoon! 19:29, 22 April 2007 (UTC)[reply]


Aha! It's a metro map! —Bromskloss 22:56, 22 April 2007 (UTC)[reply]
It's obviously the symbol of... a data bus. Nothing to do with a math symbol. --Dacium 04:30, 27 April 2007 (UTC)[reply]

Quadratic Equations

If I want to find the roots of a quadratic equation with the quadratic formula, does C always need to be positive? I've noticed that if C is negative, the discriminant will be negative but we will not cover complex numbers (i) in this school unit.

For example, 2x^2 + 14x - 240 would have to be multiplied by -1 to -2x^2 - 14x + 240 = 0 to work in the quadratic formula.

Thank you very much!

Vertciel 18:48, 22 April 2007 (UTC)[reply]

The discriminant is given by b^2 − 4ac; this will always be greater than zero provided b^2 > 4ac, which can be achieved for any variety of numbers. In the case of a negative c, a positive a will ensure a positive discriminant. Try for a variety of signs and magnitudes of a, b and c, and you'll see what I mean. Icthyos 19:01, 22 April 2007 (UTC)[reply]
Indeed, there seems to be some confusion. For the given polynomial,
2x^2 + 14x − 240,
we have
a = 2, b = 14, c = −240.
Thus the discriminant is
b^2 − 4ac = 14^2 − 4·2·(−240) = 196 + 1920 = 2116,
which is positive; no complex numbers are needed.
If you think better with pictures than with equations, consider this: The graph of a quadratic equation is a parabola. Since x^2 is always positive, the arms rise up if a is positive, and they drop down if it is negative. The effect of adding the constant c is to shift the curve higher or lower. So what does b do? It shifts the curve diagonally (both horizontally and vertically). We are trying to find where the curve crosses zero (the height of the x axis), and we have three possibilities.
  • Perhaps the apex of the parabola just touches the x axis. (This is unlikely.)
  • The x axis slices through both arms.
  • The curve lies entirely above or entirely below the x axis.
The discriminant is zero, positive, or negative, depending on which case we have. In an example like x^2 + 1, the arms rise up, the curve is shifted up, and there is no diagonal shift; thus the parabola lies entirely above the x axis, consistent with a discriminant of −4.
Teaser: A zero of x^2 + 1 would have to be a quantity, i, that squares to −1. Since both positive and negative real numbers have positive squares, i must be something new and different. Thus we meet complex numbers, one of the most fruitful discoveries in the history of mathematics. --KSmrqT 21:20, 22 April 2007 (UTC)[reply]
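And a quick numerical check that the sign of c (or of the whole equation) is harmless, using the questioner's own example; this is just a throwaway Python sketch:

    import math

    def roots(a, b, c):
        disc = b * b - 4 * a * c
        return ((-b + math.sqrt(disc)) / (2 * a), (-b - math.sqrt(disc)) / (2 * a))

    print(14**2 - 4 * 2 * (-240))   # 2116 > 0, so no complex numbers are needed
    print(roots(2, 14, -240))       # 8.0 and -15.0
    print(roots(-2, -14, 240))      # the same two roots after multiplying through by -1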

uploading of math diagrams

How to upload simple mathematical diagrams of lines and curves, etc. on the wiki page from word documents with such diagrams?--Profvk 23:22, 22 April 2007 (UTC)[reply]

You need to export these as images (PNG or SVG; only photographs should be saved as JPEG) and Upload them before adding to the article. For any further help, just ask the guys at the Help Desk. — Kieff | Talk 23:46, 22 April 2007 (UTC)[reply]
See Help:Images and other uploaded files for more details. And to be more precise, JPEG is a lossy compression format best reserved for natural photographs. Sharp line drawings and other graphic art are best presented in SVG format, which is not only highly efficient, but also scales cleanly to any size. The function article contains a typical example, Image:Graph of example function.svg. But as Kieff suggests, questions like this about Wikipedia itself are more appropriate to the Help Desk. --KSmrqT 02:47, 23 April 2007 (UTC)[reply]


April 23

E(max{d20+x1,d20+x2,...,d20+xn})

If n 20-sided dice are rolled, and to them are added integer constants x1 through xn, is there a general function for the expected largest modified roll? Expected smallest? Does it help if n is fixed at 2? NeonMerlin 01:36, 23 April 2007 (UTC)[reply]

I don't understand your question. Are you talking about the maximum roll of any one die in a roll of, say, 5 dice? 202.168.50.40 04:58, 23 April 2007 (UTC)[reply]

Well, I guess the max is pretty obviously . The max roll for any normal die is 20 so the max roll for n dice is . You'd then add on all the x's. But, since you give absolutely no info on the x's, that is the best you can do. For all I know, . StatisticsMan 05:44, 23 April 2007 (UTC)[reply]
I think the question is actually "If you roll n dice, and to each die you apply the appropriate modifier, which total will be largest?" And I've looked at it, and there's no nice expression in general other than a few algebraic simplifications you can make. For the two dice case, it's still tricky but I think it's possible to express it in a relatively simple function of x_1 and x_2. Still working on it. Confusing Manifestation 06:18, 23 April 2007 (UTC)[reply]
ConMan is right: I mean the expected value of the largest of the n modified rolls. NeonMerlin 01:58, 24 April 2007 (UTC)[reply]

Okay, say you're rolling two six-sided dice for starters. Expected value is defined as E[X] = ∑_x x · P(X = x).

Still, it really all just depends on x_1 and x_2. If x_1 is 100 and x_2 is 5, then the expected value relies only on the first die and the second die will never come into account. Same goes if x_2 is way higher than x_1 as 1 and 2 are just labels. So, the only time it will matter is if the x_1 and x_2 are close to each other. If we have six-sided dice, then

If the difference were 5, then the lowest roll on one die would be 1 + x_1 and the highest on the other would be 6 + x_2. If x_1 is 5 higher (or more) than x_2, then all that matters is the one die.

Your question about n 20-sided die I say is pretty complicated unless you give more info. Any die which has x_i 19 lower than the max of the x_i can be taken away essentially as it would never come into play.

Then, for the die with the max value of x_i, and depending on how much higher, it'd be most likely to come up as the max die. If it's 10 higher than any other x_j, then half the time that one die is going to give the max roll and the other half the time, one of the others will.

Add to that how many dice end up being close enough. Does one die have the highest modifier, two others with a modifier 4 lower, another with a modifier 6 below the highest, six dice with modifier 10 below the highest, one hundred dice with modifier 11 below the highest, and another 18 below? It totally depends on how many of each and how close.

StatisticsMan 04:34, 24 April 2007 (UTC)[reply]

The probability P_1(k) that die number 1 has a value no larger than k is the cumulative distribution function for die 1. The value of this function is 0 for k <= x_1, 1/20 for k = x_1+1, ... 19/20 for k = x_1+19, and 1 for k >= x_1+20 (for 20-sided dice). To say that the maximum value of dice one and two is <= k is the same as saying that the value of die one is <= k AND that the value of die two is <= k. Because the two die outcomes are independent, the probability of this event is Q_2(k) = P_1(k)P_2(k). Similarly the cumulative distribution function for the maximum of n dice is Q_n(k) = P_1(k)P_2(k)...P_n(k). The probability that the maximum value of the n dice is exactly k is the probability density function P_max corresponding to the cumulative distribution function Q_n. This has the values P_max(k) = Q_n(k) − Q_n(k−1). From the probability density function you can calculate the expectation value in the usual way (E = Sum(x p(x))). Putting the above reasoning together into a closed expression gets a bit messy, but calculating the answer for particular n and {x_i} is straightforward. --mglg(talk) 22:25, 24 April 2007 (UTC)[reply]

Incidentally, the best way (that I can see) to simplify the expression is to make the following adjustments:

First, without loss of generality, assume that x_1 ≤ x_2 ≤ ... ≤ x_n, ie. the modifiers are arranged in increasing order. Then, let y_i = x_i − x_1. This allows you to pull x_1 out of the max() expression and the expectation/sum thanks to linearity. This is particularly helpful in the two dice case since the expectation then simplifies to x_1 + (1/400) ∑ max(a, b + y_2), where the sum is over all possible values a and b the two dice can have. Confusing Manifestation 23:43, 25 April 2007 (UTC)[reply]

The expected value of a real-valued random variable with cumulative distribution function F equals ∫ z dF(z), in which the bounds are from F(z) = 0 to F(z) = 1. If the distribution is discrete, the integral can be replaced by the sum Σ z ΔF(z), where ΔF(z) = F⁺(z) − F⁻(z), with F⁺(z) = lim_{ζ↓0} F(z+ζ) and F⁻(z) = lim_{ζ↑0} F(z+ζ), and z ranges over all points where the value of F(z) makes a jump.
Given n independent random variables v_1, ..., v_n whose respective cumulative distribution functions are F_1, ..., F_n, the cumulative distribution function F of the random variable v = max{v_1, ..., v_n} is given by F(z) = Π_{i=1,...,n} F_i(z).
In this case, where the value of v_i is observed by rolling an s-sided die with face values 1 to s and adding a given quantity x_i to the face value, the jump points have the form z = k + x_i for k = 1, ..., s and i = 1, ..., n. Disregarding for now the possibility of coinciding jump points for different variables, at each such jump point the value of F_i(z) jumps from F_i⁻(z) = (k−1)/s to F_i⁺(z) = k/s, so F⁻(z) = (k−1)/k × F⁺(z). Coinciding jump points can be treated as a sequence of jumps at the same value for z; the order is immaterial.
We can now give a simple algorithm for computing E(v):
J := ∪i=1,...,n, k=1,...,s {(k+xi, k)}
(ev, p) := (0, 1)
for (z, k) ← J in reverse order:
ev := ev + z × p/k
p := (k−1)/k × p
end for
The set of jump points J incorporates the values for k, which are needed in the calculation of the jump size. This set J must be traversed in order of decreasing z-values. The result for E(v) is the final value of the program variable ev. When, in the course of the computation, the value of p becomes 0, it will remain so and ev will no more change value, so the computation may then be terminated.  --LambiamTalk 20:35, 26 April 2007 (UTC)[reply]
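If a number is all that is wanted, the CDF-product reasoning above is only a few lines of code. A rough Python sketch, assuming ordinary d20s and integer modifiers:

    # Expected value of max(d20 + x_1, ..., d20 + x_n), via Q(k) = product of
    # the individual CDFs and P(max == k) = Q(k) - Q(k-1).
    def expected_max(mods, sides=20):
        def cdf(k, x):                       # P(die + x <= k), clipped to [0, 1]
            return min(max((k - x) / sides, 0.0), 1.0)
        lo, hi = 1 + min(mods), sides + max(mods)
        e, prev_q = 0.0, 0.0
        for k in range(lo, hi + 1):
            q = 1.0
            for x in mods:
                q *= cdf(k, x)
            e += k * (q - prev_q)
            prev_q = q
        return e

    print(expected_max([0]))     # 10.5, the mean of a single d20 (sanity check)
    print(expected_max([3, 5]))  # two d20s with modifiers +3 and +5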

what is?

B3 X 2 - B =?



--NÓRA70.109.78.147 13:08, 23 April 2007 (UTC)[reply]


B5?…Semiable 17:50, 23 April 2007 (UTC)[reply]
At a guess, a homework question. If B3 is meant to represent 3 times B, which is traditionally written as 3B, then think of having three lots of "B", then doubling that, then taking away one lot of "B" and see what you get. Confusing Manifestation 22:49, 23 April 2007 (UTC)[reply]
EXPAND{ B3 X 2 - B =? }
(B + B + B) + (B + B + B) - B = ?
Do the maths yourself. 202.168.50.40 23:25, 23 April 2007 (UTC)[reply]

Connections within a hypercube

Hello, helpful people of the Reference Desk:

I'm trying to understand the connections between the parts of a hypercube (checking that article, it seems that term is more general than I thought and I should specify that I'm talking about a 4-cube). If it helps, imagine that I'm trying to work out which rooms connect to which in a Crooked House of the Heinlein variety (that's not the exact problem, but it maps to the one I'm interested in). Please excuse the crude ASCII diagram, but if we consider it folded out into the form of a tesseract (each letter representing one three-dimensional cube):

    A
    | B
    |/
C---D---E
   /|
  F |
    G
    |
    |
    H

then, if I've understood it right, each cube is connected to six other cubes (the "obvious" example being D) and has one cube that is on the "opposite" side of the hypercube which it is not directly connected to (in the same way that each face of a cube is connected to four of the others, but not the one opposite). As I see it, this gives four pairs of mutually unconnected "opposite" cubes: AG, BF, CE, and DH. Is that right (both in the general idea, and the specific details of which ones are opposites) or have I wrapped myself up in (four-dimensional) knots? (Google found Peter Turney's Unfolding the Tesseract for me, and I think it supports my interpretation, but the discussion gets too advanced too quickly for me to be confident.) 86.143.46.128 16:07, 23 April 2007 (UTC)[reply]

Yes, you are right. Here is a simpler way to look at it: The points {0,0,0,0}, {0,0,0,1}, {0,0,1,0},... {1,1,1,1} (i.e., all 2^4 = 16 possible combinations of 0 and 1) define the 16 corners of a 4D hypercube. The 3D cube you get if you fix any one of the coordinates to be 0 is opposite to the 3D cube you get if you fix the same coordinate to be 1. Since there are four coordinates to choose from, there are indeed four such pairs. -- mglg(talk) 16:31, 23 April 2007 (UTC)[reply]
Thanks! And yes, that is a simpler way of looking at it ... 86.143.46.128 17:30, 23 April 2007 (UTC)[reply]
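A brute-force confirmation of that coordinate picture, as a throwaway Python sketch (cells are labelled (coordinate index, fixed value) here, not by the letters in the ASCII diagram):

    from itertools import product

    # the 8 cubical cells of a tesseract: cell (i, v) fixes coordinate i to value v
    cells = [(i, v) for i in range(4) for v in (0, 1)]

    def vertices(cell):
        i, v = cell
        return {p for p in product((0, 1), repeat=4) if p[i] == v}

    for a in cells:
        for b in cells:
            if a < b and not (vertices(a) & vertices(b)):
                print("opposite pair:", a, b)
    # exactly four pairs are printed, one per coordinate -- the same count and
    # structure as the questioner's four pairs AG, BF, CE, DH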
Perhaps if we work our way up from lower dimensions, the case of interest will be clear. Begin with a point (or vertex), all we have in 0 dimensions. Stepping up to 1 dimension, we sweep that vertex into a line segment, so that we have the original vertex, a new line segment (unit length), and a new vertex. Stepping to 2 dimensions, we sweep that line segment into a face, increasing the number of vertices and edges as we do so. We double the original edge and sweep an edge for each vertex. Is the pattern becoming clear? Stepping to 3 dimensions, we sweep the face into a solid. Tallying:
dim   vertices   edges   faces   solids   4-cell
 0        1
 1        2         1
 2        4         4       1
 3        8        12       6        1
 4       16        32      24        8        1
The entry in column n is a combination of those in the row before: column n−1 plus twice column n.
The development of this table parallels the hypercube article. However, it also helps us understand what is connected to what. When we sweep a vertex, we create a new edge in addition to the old connections. Each edge is bounded by two vertices. Therefore a vertex in dimension d is connected to d other vertices through an equal number of edges. When we sweep an edge, we create a new face. Each face is bounded by four edges. Therefore an edge in dimension d is connected to d−1 faces and 3(d−1) edges. When we sweep a face, we create a new solid. Each solid is bounded by six faces. Therefore a face in dimension d is connected to d−2 solids and 5(d−2) faces. Finally, when we sweep a solid, we create a 4-cell. Each 4-cell is bounded by eight solids. Therefore a solid in dimension d is connected to 7(d−3) solids. For the purposes of a tesseract, we are done.
It is convenient to place the vertices with each coordinate either 0 or 1. To get an n-cell, we freeze dn of the coordinates. For example, to get a face (2-cell) of a cube (3 dimensions), we freeze 1 coordinate; thus we get faces x = 0 and 1, y = 0 and 1, and z = 0 and 1. This also allows us to describe the bounds of a cell. For example, the edges bounding face (1,·,·) are those fixing either y or z, namely (1,0,·), (1,1,·), (1,·,0), and (1,·,1). Similarly, we deduce that edge (1,0,·) bounds faces (1,·,·) and (·,0,·).
You seem to want to distinguish n-cells that share an (n−1)-cell, such as adjacent sides of a square as distinct from opposite sides. You now have enough information to work this out for cubes of the tesseract, either by thinking of sweeps or by working with coordinates. Hope this helps. --KSmrqT 18:02, 23 April 2007 (UTC)[reply]

Dirichlet eta and Riemann Zeta

Hi, does anyone know the proof for η(s) = (1 − 2^(1−s))ζ(s), or where I can find it?

thanks 87.194.21.177 16:09, 23 April 2007 (UTC)[reply]

What definition are you using? Our Dirichlet eta function article gives that formula as one definition, which allows for a quick proof :-) Algebraist 16:15, 23 April 2007 (UTC)[reply]
You could start by proving the relationship in the region Re(s)>1, where the Dirichlet series definition for both η(s) and ζ(s) converges. Then (waving hands vaguely) use analytic continuation to extend the result. Gandalf61 16:23, 23 April 2007 (UTC)[reply]
If necessary, you could just prove it for real s>1 (and use analytic continuation again) to avoid thinking about complex exponentiation. Algebraist 09:19, 24 April 2007 (UTC)[reply]
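For reference, the Re(s) > 1 computation Gandalf61 suggests is just a rearrangement of absolutely convergent series:

    \eta(s) = \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n^s}
            = \sum_{n=1}^{\infty} \frac{1}{n^s} - 2\sum_{k=1}^{\infty} \frac{1}{(2k)^s}
            = \zeta(s) - 2^{1-s}\,\zeta(s)
            = \bigl(1 - 2^{1-s}\bigr)\,\zeta(s).

Analytic continuation then extends the identity to the rest of the complex plane.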

April 24

PrefixSpan Algorithm

Hello, I have a question regarding the PrefixSpan algorithm. This algorithm can be consulted in the following paper "Mining Sequential Patterns by Pattern-Growth: The PrefixSpan Approach" by Jian Pei, Jiawei Han, Behzad Mortazavi-Asl, Helen Pinto.

My problem is with this sentence: "To avoid checking every possible combination of a potential candidate sequence, one can first fix the order of items within each element. Since items within an element of a sequence can be listed in any order, without loss of generality, one can assume that they are always listed alphabetically."

Furthermore, in the definition of a prefix they say that, among other things, a sequence A is only a prefix of B if each item of the last set of items of A is alphabetically before every missing item of the corresponding set of items in B. Then they go on to define the suffix of a sequence given a prefix, and then that the projection of a sequence database with regard to A is the set of suffixes given A as prefix.

I don't understand how the assumption of alphabetical ordering does not result in loss of generality. For example given a database like:
a(abc)bd
a(ac)bdef

A frequent pattern with minimum support count 2 would be a(ac)bd (i.e. item a, followed by items a and c together, then item b, then item d). If we follow the lexicographical-ordering-within-itemsets assumption, won't we fail to find this pattern, since a(ac) is not a prefix of both sequences? Or am I misreading something in the paper?

Kind regards and thanks for your time.

62.48.159.19 09:27, 24 April 2007 (UTC)[reply]

Correlated random walks

Can you please include an article on correlated random walks, their definition, history and applications to date.

Non-random number generator

Someone who wishes to remain anonymous has a Klondike solitaire card game on their computer. Probability with random card ordering would predict that each ace would appear first, second, third or fourth with equal frequency, yet their observations (made over hundreds of games) show that the ace of hearts is the last of the four aces to appear in 75-80% of games. This includes a sequence of 8 consecutive games where the ace of hearts was the last ace. Why might this deviation from probability occur? What (if it was random) is the probability of the 8 consecutive games occurring? Can the error be corrected? Seans Potato Business 17:11, 24 April 2007 (UTC)[reply]

The probability of 8 games in a row is (1/4)^8 = 1/65,536. However, if you ran many series of 8 games, then the probability that one of those series would have that pattern is much higher. I'd say the program used a rather poor method for selecting pseudo-random numbers. I doubt if it can be fixed without rewriting the program. There is something you can sometimes do called "setting the random number seed", but that would likely need to be done in the program. You might want to post this on the Computer Ref Desk, too. StuRat 17:27, 24 April 2007 (UTC)[reply]
The chance that you'd observe more than 40 games out of 100 with AH the last of the four is about 1 in a thousand. It becomes 1 in a million for over 47. For 75, it is virtually zero. (See Binomial distribution.) Hence the random number generator of your Klondike game is definitely biased, assuming your observations are correct. --Hirak 99 17:45, 24 April 2007 (UTC)[reply]
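Those tail probabilities are easy to reproduce with any statistics package; a rough Python/SciPy sketch (the figures quoted above are order-of-magnitude):

    from scipy.stats import binom

    # P(ace of hearts is the last ace) = 1/4 per game if the shuffle is fair.
    # Probability of seeing MORE than k such games out of 100:
    for k in (40, 47, 74):
        print(k, binom.sf(k, 100, 0.25))   # sf(k) = P(X > k)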
Just a quick observation, but you probably want to look at the chance of any specific ace being the last ace in 75% of the games. The reason is that you have to consider that if you were paying attention to the order aces came out you probably also would be suspicious if you noticed the Ace of Spades being last most of the time, or the Ace of Clubs or Ace of Diamonds. Therefore there are actually four possible scenarios that would have set off a red flag in your head, and you just happened to witness one of those four. The question then is what are the odds of a randomized deck delivering one of the four scenarios you would notice (which is I believe four times the probability of the Ace of Hearts specific scenario). That probability is still really, really low, but technically not quite as low as the probabilities mentioned previously. Dugwiki 23:04, 24 April 2007 (UTC)[reply]

April 25

Conic Section

Can someone point me in the right direction with these problems:

  • How do I find the coordinates of the endpoints of the major and minor axis of the ellipse:
  • Knowing that the angle of rotation is 108.43°, how do I unrotate:

?

x = x′ cos θ − y′ sin θ
and
y = x′ sin θ + y′ cos θ.
I have plugged in the prime values for x and y, and substituted 108.43° for θ. How do I get x'y' to cancel out?

Thanks -- Sturgeonman 01:03, 25 April 2007 (UTC)[reply]

Inverse of NxN Matrices Mod M (Hill cipher)

I'm looking for an efficient method to find the inverse of arbitrarily sized matrices under an arbitrary modulo. The 2x2 case is trivial and I have already implemented it. Any ideas for bigger matrices? I need this in order to write a version of the Hill cipher. The easier the method the better (I'm not keen on coding Gaussian elimination). I'm coding in Ruby if that's any help (any libraries I don't know of?). BrokenSegue 04:46, 25 April 2007 (UTC)[reply]

A^(-1) = (det A)^(-1) × (adj A) (mod m), as long as (det A, m) = 1. StatisticsMan 05:12, 25 April 2007 (UTC)[reply]
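A minimal sketch of that adjugate formula, written in Python for illustration (I can't vouch for a Ruby library, but the approach ports directly). The example key is the standard 3x3 Hill-cipher demonstration key, used here only as a test case. Cofactor expansion is fine for the small matrices Hill ciphers use; for genuinely large matrices you would want Gaussian elimination mod m (or a library) after all:

    # Requires Python 3.8+ for pow(d, -1, m); raises ValueError if gcd(det A, m) != 1.

    def minor(a, i, j):
        return [row[:j] + row[j+1:] for k, row in enumerate(a) if k != i]

    def det(a):
        if len(a) == 1:
            return a[0][0]
        return sum((-1) ** j * a[0][j] * det(minor(a, 0, j)) for j in range(len(a)))

    def inverse_mod(a, m):
        n = len(a)
        d_inv = pow(det(a) % m, -1, m)                 # modular inverse of the determinant
        adj = [[(-1) ** (i + j) * det(minor(a, j, i))  # adjugate = transposed cofactor matrix
                for j in range(n)] for i in range(n)]
        return [[(d_inv * adj[i][j]) % m for j in range(n)] for i in range(n)]

    A = [[6, 24, 1], [13, 16, 10], [20, 17, 15]]       # textbook Hill-cipher key mod 26
    print(inverse_mod(A, 26))                          # [[8, 5, 10], [21, 8, 21], [21, 12, 8]]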