User talk:EmilJ/Archive 2

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by IJA (talk | contribs) at 13:32, 10 October 2008 (→‎First-order logic). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Thanks for the tip

And for cleaning up the hyphens, dashes, and minus signs in the quadratic residue article (I'd swear that somewhere I read that – was proper for formulas; I didn't even realize that − existed)

I can see the difference between – and − by juxtaposing them; –− the minus is raised a pixel or so higher. Subtle.

Thanks again.

Virginia-American (talk) 17:10, 1 April 2008 (UTC)

Probability of Solovay–Strassen primality test

The probability of the Solovay-Strassen primality test can be more specifically limited than 1/2. I will find the reference in the textbook that a math teacher of mine owns. Meanwhile, I have my own version (the math teacher made me re-calculate everything). It involves the use of Bayes' rule.

"To derive the probability of failure of the Solovay–Strassen primality test, Bayes' rule is used. In this problem, event A will be the event that n survives m trials and event B will be the event that n is composite. So, the probability that a number n is composite given that it survives m trials is what we are looking for. P(A|B) (the probability that a composite number n survives m trials) is less than or equal to 2^{-m}, because at most half of the numbers can be liars. P(A) is expanded and found below. P(B) is the probability of choosing a composite number. The probability of choosing a prime number is about 1/ln n; however, to increase our chances, we will discard the obviously non-prime, i.e. even, numbers, doubling our chance to 2/ln n. Thus the probability of choosing a composite number is 1 − 2/ln n. Since if n is prime the number of trials it can survive is infinite, a prime survives all m trials.

So

P(B|A) = P(A|B)·P(B)/P(A) = P(A|B)·P(B)/[P(A|B)·P(B) + P(A|Bᶜ)·P(Bᶜ)] ≤ 2^{-m}·(1 − 2/ln n)/[2^{-m}·(1 − 2/ln n) + 2/ln n].

Does my probability still seem dubious?

--Heero Kirashami (talk) 22:44, 27 April 2008 (UTC)
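The quoted derivation can be sketched numerically, under its own assumptions (prime density about 2/ln n among odd numbers, composite survival probability at most 2^{-m}, and primes always surviving); the function name here is mine, for illustration only:

```python
import math

def posterior_composite_bound(n: float, m: int) -> float:
    """Bayes-rule bound on P(composite | survives m rounds), as in the derivation above.

    Assumptions from the quoted text: P(survive | composite) <= 2**-m, a random
    odd number near n is prime with probability about 2/ln n, and a prime
    survives every round. This is a heuristic about random inputs, not the
    algorithm's worst-case error bound.
    """
    p_prime = 2 / math.log(n)      # chance a random odd number near n is prime
    p_comp = 1 - p_prime           # chance it is composite
    p_survive_comp = 2.0 ** -m     # at most half the bases are liars, per round
    # P(B|A) = P(A|B)P(B) / (P(A|B)P(B) + P(A|~B)P(~B)), with P(A|~B) = 1
    return (p_survive_comp * p_comp) / (p_survive_comp * p_comp + p_prime)

print(posterior_composite_bound(1e100, 10))
```

Note that this quantity describes a randomly chosen input rather than the algorithm's worst-case error, and for fixed m it grows toward 1 as n increases.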

You are seriously confused about what a probabilistic algorithm is, and how its probability of failure is defined. The statement "the probability of failure of the algorithm A is at most p" means "for every input n, the probability (over its internal coin tosses) that A fails to give the correct answer on input n is at most p". It does not involve any probability distribution on the inputs in any way. (One of the reasons is to rule out trivial "algorithms" like "ignore the input and return 'composite'", which according to your definition would correctly compute primality with the negligible error of 1/ln n.) So, while your computation might be correct (I did not check it), it is totally misguided; the number computed has nothing to do with the error probability of the algorithm.
Furthermore, the bound as you inserted it in the article uses a mysterious parameter m which does not appear anywhere in the article, and the bound is actually worse than the usual 1/2 (or 2^{-m} with more trials, in your notation) bound, as asymptotically it tends to 1 for fixed m as n grows. For the future, note also the correct formatting of such expressions in TeX. — EJ (talk) 11:39, 28 April 2008 (UTC)
However, note that the original calculations of an author much smarter and less confused than an eighth grader were correct. And he actually knew how to define the actual terms. I'm still somewhat confused about the terms, but I know how to do the calculations. If I can reference the book, then I will place it in the article. --Heero Kirashami (talk) 22:26, 28 April 2008 (UTC)
You see, the problem is not so much with the computation, but with your interpretation of the result. What you thus need to check is not the name of the book, but the actual formulation of the statement, and its meaning in the context. Given your own admission of being "somewhat confused about the terms", I will not hesitate to revert any addition which is at odds with standard and well-known facts about the algorithm. — EJ (talk) 11:03, 30 April 2008 (UTC)
However, I have read through it carefully and so I have found the source. If you would like to check it, fine. However, even additions that are "at odds with standard and well-known facts ab[o]ut the algorithm." may be correct. As Cryptography: Theory and Practice, by Douglas R. Stinson, states, (I'm quoting this directly but re-phrasing will be necessary for the article) "If we have run the algorithm m times, what is our confidence that n is prime? It is tempting to conclude that the probability that such an integer n is prime is . This conclusion is often stated in both textbooks and technical articles, but it cannot be inferred from the given data." Thus, I do believe it contradicts commonly referenced facts about the distribution. However, just because everyone else assumed the world was flat didn't mean it actually was.
Thus, the computation is correct (and, given that m will be defined as the number of trials, completely sensible; further, it is not n, but m which is going to infinity, so it is still approaching zero; and further, it does not matter which bound is better, it matters which one is correct); however, his formulation of the statement was essentially the same as mine. Thus, I think that my result is correct. I am willing to reformat it so it matches the article. However, I will wait for your "approval." If you need proof, then go find a copy of Cryptography: Theory and Practice, Second Edition, by Douglas R. Stinson (if you go to Books_on_cryptography you can find its ISBN, but you can probably find a friend or someone with it, or try a library), and go to page 178 (or "The RSA Cryptosystem and Factoring Integers"). --Heero Kirashami (talk) 00:01, 7 May 2008 (UTC)
Probabilistic algorithms and their probability of error are mathematical objects with an exact definition, which you can find in any standard textbook on computational complexity (e.g., Papadimitriou). It has nothing to do with anybody's confidence in anything. Confidence is a subject of psychology, not computer science. It is a proved mathematical fact that the probability of failure of the Solovay–Strassen algorithm is at most 2^{-m} for every n. It is likewise a hard mathematical fact that this bound is optimal; there are many inputs n which attain the bound: 1729, 2465, 15841, ... (in general, any Carmichael number n such that 2(p − 1) | n − 1 for every prime divisor p of n). You cannot argue with it any more than you can argue with 1 + 1 = 2, so what you say about the flat world is just babbling nonsense. Finally, it makes no sense to say that "it is not n, but m which is going to infinity". In theory, it is customary to compare randomized algorithms using just one round (i.e., m = 1). In practical applications, both parameters are of course bounded, but m behaves as a constant much more than n does (m is usually a small number like 5 to 10, whereas n has hundreds of bits).
Your derivation above makes it clear that you are not computing a bound on the failure probability of the algorithm, but the conditional probability of pronouncing prime a uniformly randomly chosen composite integer in [1, n]. Actually, that's still not quite correct, as you introduced for no good reason another complication by excluding even numbers. (Why? Why not exclude also multiples of 3? Or 5? etc?) So, the actual description is that it is supposed to be a bound on the probability of pronouncing prime a uniformly randomly chosen odd composite integer in [1, n].
Should it be mentioned in the article? No, I say. Neither the fact that it appeared as an example in some book, nor the fact that you were able to recompute it yourself, makes it notable for inclusion in an encyclopedia. For one thing, there is no explanation why anyone should be interested in a parameter with such a ridiculously complicated description. Much more importantly, the appearance of the bound in the article would give the impression that it is a realistic estimate, which is completely false. The actual probability is much, much smaller. This is due to the fact that the number is equal to the analogous expression in p, where p is not the usual (maximal) probability of error of the algorithm, but the average probability of error taken over uniformly random (odd) integers in [1, n]. This p is significantly smaller than 1/2. For a trivial bound, it is less than 3/π² (for m = 1), because φ(n)/n is on average 6/π² (see totient#Growth of the function, or any textbook on number theory). For a stronger bound, Damgård, Landrock, and Pomerance give several bounds (one for m = O(log n), another for larger m) on the average probability of error of the closely related Miller–Rabin algorithm (see that article for an exact reference). (The point to observe in the somewhat complicated expression is that the bound is exponentially small not only in m, but also in log n.) I am not aware of such a bound being published for Solovay–Strassen (presumably because nobody gives a damn about the Solovay–Strassen algorithm any more, as Miller–Rabin is better in all respects), but the similarity of the two algorithms and their analysis strongly suggests that a bound of similar growth rate should hold for Solovay–Strassen as well.
So, apart from your bound being hardly useful, it is also highly misleading, as it is badly suboptimal. I thus cannot agree with putting it in the article. If you want to make yourself useful, you can search a library to find out whether there isn't a published paper extending the Damgård, Landrock, and Pomerance results to Solovay–Strassen after all (though it does not look very promising), instead of keeping on pushing your bound. — EJ (talk) 10:45, 9 May 2008 (UTC)
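The worst-case inputs listed above can be checked directly. A sketch counting Euler liars for n = 1729 = 7·13·19 (the `jacobi` helper is the standard binary Jacobi-symbol algorithm; the function names are mine):

```python
from math import gcd

def jacobi(a: int, n: int) -> int:
    """Jacobi symbol (a/n) for odd n > 0, by the standard binary algorithm."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:          # pull out factors of two
            a //= 2
            if n % 8 in (3, 5):    # (2/n) = -1 when n ≡ 3, 5 (mod 8)
                result = -result
        a, n = n, a                # quadratic reciprocity swap
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def euler_liars(n: int) -> list:
    """Bases a in [1, n-1], coprime to n, with a^((n-1)/2) ≡ (a/n) (mod n)."""
    return [a for a in range(1, n)
            if gcd(a, n) == 1
            and pow(a, (n - 1) // 2, n) == jacobi(a, n) % n]

# 1729 = 7·13·19 satisfies 2(p-1) | n-1 for each prime divisor p,
# so exactly half of the phi(n) = 6*12*18 = 1296 units should be liars.
print(len(euler_liars(1729)))
```

Exactly half of the units mod 1729 turn out to be liars, so each round of the test fails on such an input with probability as large as the general at-most-half bound allows.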
Found it myself; it's actually in one of the references of the DLP paper: Erdős and Pomerance[1] show that the average probability of error of Solovay–Strassen (and even Fermat) is vanishingly small (for m = 1). I'll put it in the article. — EJ (talk) 13:49, 9 May 2008 (UTC)
Seeing as we have reached an agreement, you should put it in as I am still partly confused, and you can probably write it a lot better than I can. And it's definitely true, too, that no one gives a damn about Solovay-Strassen because Miller-Rabin is so much faster, with an equal or better probability of...working (that's the best I can say). And even though I have no idea what the heck your number is, I don't disagree because it's probably for the general case (I think) instead of just for odds and also I am not too good with algebraic manipulation when there's so many logs. It's great that we've both grown as mathematicians, then! Thanks! For me, it would probably take...15,000 years to look through papers! --Heero Kirashami (talk) 06:10, 13 May 2008 (UTC)
  1. ^ P. Erdős, C. Pomerance, On the number of false witnesses for a composite number, Mathematics of Computation 46 (1986), no. 173, pp. 259–279.

global account

I'm making a usurpation request for the account EmilJ on da. — Emil J (talk) 10:19, 4 June 2008 (UTC)

Sorry, I was just hoping someone more familiar with the IPA would make the IPA more accurate. CuteHappyBrute (talk) 06:17, 15 June 2008 (UTC)

Your Comment

"Serbia recalled ambassadors for consultations from states recognizing Kosovo. This diplomatic procedure is by definition a temporary measure, all the ambassadors will be back sooner or later. 100 days is actually quite a lot."

How is the diplomatic procedure a "temporary measure"? What "definition" are you referring to? How is 100 days "quite a lot"? I await your reply, sir. Ari 0384 (talk) 20:26, 23 July 2008 (UTC)
Why did you move the discussion here from the original talk page? It has nothing to do with me personally. Anyway, an envoy recalled for consultation returns to his or her home country, consults with foreign ministry officials who brief him or her on the latest development of the country's official foreign policy, and then goes back. That's what the phrase means; I can't help you if you don't know that. Of course, these days ambassadors can easily be advised by phone or email, so the supposed "consultation" is usually only an excuse for recalling the ambassador to his or her home country as a means of diplomatic pressure weaker than a full severing of diplomatic ties, but this does not change the basic principle that such a recall is temporary. — Emil J. (formerly EJ) 09:59, 24 July 2008 (UTC)

Heyting algebra

I have seen your change. But then, what about the sentence "Arend Heyting (1898-1980) was himself interested in clarifying the foundational status of intuitionistic logic, when he introduced this type of structure." (my boldface)? --Tillmo (talk) 17:31, 19 August 2008 (UTC)

As far as I know, Heyting did not introduce Heyting algebras. They are only named after him, or rather after the Heyting calculus (=intuitionistic logic). — Emil J. (formerly EJ) 09:25, 20 August 2008 (UTC)

My apologies. I had not noticed that the sub-bullets were moved down into a footnote now. You are correct that since the move, the sub-bulleting no longer applies. Rossami (talk) 14:25, 22 August 2008 (UTC)

No need to apologize. The page source looks confusing in this place. — Emil J. (formerly EJ) 14:48, 22 August 2008 (UTC)

functional completeness

Dear EmilJ,

Forgive me for reverting your recent edits to functional completeness.

I hope the discussion on the talk page helps you understand why I would do such a crazy thing. --68.0.124.33 (talk) 05:49, 26 August 2008 (UTC)

LaTeX logos

Given that the logos themselves are defined by code, why is it better to use images to represent them rather than have them constructed? Inline images are a bad idea for a number of reasons. Chris Cunningham (not at work) - talk 11:04, 11 September 2008 (UTC)

The template expands to a <math> expression which ends up being rendered as an image, so it is as bad as using an image directly. You know, our manual of style even explicitly tells you that inline <math> expressions are a bad idea for a number of reasons.
Now, in principle, it would be fine to have the logo constructed, but only if it is constructed exactly according to the specs. What the template does, however, is a crude approximation of the original shifts by various combinations of \!, subscripts, and such stuff. This is essentially dubious OR, and the result is very distorted. The "E" is too low; it does not touch either the "T" serif or the bottom of the "X" as it should. The "E" is also too close to the "T". The upper tip of the "A" is not on the same level as the other letters, and so on. It is simply unacceptable to change a logo arbitrarily in such a way. There are basically only two ways of constructing the logo: either <math>\hbox{\LaTeX}</math>, or to expand its definition, giving
<math>\hbox{L\kern-.36em\sbox0 T\vbox to\ht0{\hbox{\scriptsize A}\vss}\kern-.15emT\kern-.1667em\lower.5ex\hbox{E}\kern-.125emX}</math>
(I've omitted some NFSS cruft). Alas, neither is accepted by texvc. — Emil J. 12:12, 11 September 2008 (UTC)
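Outside texvc, the two constructions can be compared side by side in a standalone document (a sketch; the hand-expanded line reuses the kerns quoted above, with the NFSS font selection for the "A" simplified to \scriptsize):

```latex
\documentclass{article}
\begin{document}
% The kernel's own logo macro:
\LaTeX

% Hand-expanded approximation using the kerns quoted above:
L\kern-.36em\sbox0{T}\vbox to\ht0{\hbox{\scriptsize A}\vss}%
\kern-.15emT\kern-.1667em\lower.5ex\hbox{E}\kern-.125emX
\end{document}
```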
Fair enough. Chris Cunningham (not at work) - talk 12:16, 11 September 2008 (UTC)

trivia :)

Hi Emil. Concerning unlinking dates: "To avoid disruption, however, this deprecation should not be taken as license for wholesale removal of existing links from articles currently employing them extensively. Such removal from a given article should follow a consensus to do so among the editors of that page." ([1]). For us in the Kosovo recognition article, mentioning what you are doing on the talk page in order to get us to use consistent dates might be a good idea. Best wishes, --Mareklug talk 22:00, 22 September 2008 (UTC)

If you look closely, you will realize that the Kosovo recognition article does not extensively use linked dates, the reason being they were already deleted a month or so ago by a bot. I only removed a couple of new links which were inserted afterwards. — Emil J. 14:47, 23 September 2008 (UTC)

Kosovo map and CSS

Shoot, that makes things a lot easier. User:Zscout370 (Return Fire) 23:09, 2 October 2008 (UTC)

I'm sorry, but I can't make heads or tails of your message. What do you mean by shoot? Is it some kind of a joke? — Emil J. 10:02, 3 October 2008 (UTC)

First-order logic

Before you do any more work on the article, just a quick reminder that this article is read (and edited) a lot by philosophers. I think they tend to have serious problems with trivial special cases of definitions, and that's why I haven't done this earlier and even think it's a bad idea. It's not even the most typical way things are presented in mathematical books on the subject. --Hans Adler (talk) 13:03, 7 October 2008 (UTC)

Well, it simplifies the definitions and usage by cutting down the number of cases, which should make the material easier to understand and learn, and avoid a false impression of qualitative distinctions where there are none. It is thus an improvement benefiting the readers of the article. I see no reason to preemptively impede it because philosophers might not like it, not until they actually complain. — Emil J. 11:19, 8 October 2008 (UTC)
I totally agree with you about the substance. I am sure some mathematical text must have done it right (i.e. in this way), but at the moment I am only aware of more conservative texts, that all got it wrong. Do you know a source that we can cite?
One more real problem (in the sense that it's not just about different words but about slightly different mathematics) is, unfortunately, the question of 0-place predicates. I think most authors don't allow them – for reasons similar to those for not allowing an empty domain. As a result, for most authors propositional logic is not a proper subset of first-order logic. While I personally think that's plain wrong, we shouldn't mislead our readers by claiming otherwise. (If I am right.)
I would like to share your optimism that this edit is going to stick, but I have been surprised by very strange disputes in this area in the past that have made me very cautious when editing this article. This is one of the reasons I haven't worked much on it yet, although it is still in extremely bad shape. So, while I am happy with your changes (I haven't examined them in detail, for lack of time), I am just slightly concerned that they might get us into time-wasting fights with non-mathematicians, and I am a bit reluctant to build on them now before I know they will survive. --Hans Adler (talk) 12:12, 8 October 2008 (UTC)
It's been ages since the last time I looked for a book covering these basic things, so I can't give you a source off the top of my head.
I don't understand what the problem with 0-ary predicates is. What do they have to do with empty domains? I'm simply puzzled. As far as I can see, empty domains are disallowed because they change the logic in many unfortunate ways, invalidating such basic axioms as ∀x φ(x) → ∃x φ(x) (cf. free logic). No such thing happens with propositional variables: first-order logic with 0-ary predicates is a conservative extension of both propositional logic and first-order logic without 0-ary predicates, and all valid schemas of the latter continue to hold; one may simply ignore 0-ary predicates when one does not need them. Anyway, the article treated the question inconsistently: some parts assumed that propositional variables can be used, some (most) assumed that they can't. I've effectively unified it to the former convention, because it is nicer to allow more generality when it does not cost any additional effort, and it is inelegant to introduce arbitrary restrictions. However, I do not see it as an important point, as the need for propositional variables almost never occurs in practice in first-order logic.
As for your concerns, I think it is reasonably safe to expect (per WP:Integrate changes) that anybody who wanted to switch back to treating constants separately would take care of adjusting other parts of the article that may depend on this choice, so you should not hesitate to build on the article as it is. Anyway, I'm not willing to waste time fighting over the choice, if there arises significant opposition to it let them have it their way. Incidentally, thanks for your efforts to bring the article into a better shape. — Emil J. 13:35, 8 October 2008 (UTC)
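The empty-domain point above can also be checked mechanically. A Lean 4 sketch, taking `Empty` as the empty domain:

```lean
-- Over the empty domain, ∀ x, P x holds vacuously while ∃ x, P x fails,
-- so the classical schema ∀x φ(x) → ∃x φ(x) is invalidated, as described
-- above. Nullary predicates cause no analogous trouble.
example (P : Empty → Prop) : ∀ x, P x := fun x => x.elim
example (P : Empty → Prop) : ¬ ∃ x, P x := fun ⟨x, _⟩ => x.elim
```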
Thanks for yours :-). I don't remember what the issues with nullary predicates are, but it seems they were discarded by many since the slightly more general kind of logic that Gödel worked with. Possibly some results in proof theory that don't hold when you have them? I guess that the constants T and F for truth values are normally missing for the same reason, whatever it is. It's a pity that there seems to be no text that really explains its choices. I think Wikipedia has already started to play an important role for uniformisation of mathematical terminology, because here is where all the different conventions clash rather violently. --Hans Adler (talk) 15:04, 8 October 2008 (UTC)

Oops

Sorry about that, I won't do it again. I am dyslexic, so I'm not the best speller. But thanks anyway. Regards Ijanderson (talk) 13:32, 10 October 2008 (UTC)