Talk:Computational complexity of mathematical operations: Difference between revisions


Revision as of 17:32, 29 June 2008

I suppose that where you say:

"Here, complexity refers to the time complexity of performing computations on a Turing machine."

you should say:

"Here, complexity refers to the time complexity of performing computations on a RAM (random-access machine)."

Usually, when talking about polynomial-time operations, RAM models are used instead of Turing machines. This is because a RAM can be simulated on a TM, but only with an overhead of about n^2. The two models are equally powerful, but the minimum running time changes.

Thanks for pointing this out. Fredrik Johansson 19:52, 31 March 2007 (UTC)

Why isn't exponentiation listed? —Preceding unsigned comment added by 90.128.188.188 (talk) 12:00, 17 May 2008 (UTC)

Perhaps because the most commonplace clever exponentiation algorithm, at the heart of discrete-logarithm cryptographic techniques, does not actually calculate the full product of the repeated multiplications: a number occupying O(2^160) bits fits in neither RAM nor hard disk on conventional computers, and would take too long to calculate anyway. Instead, it calculates that product modulo a prime number, which permits converting the exponent into sums and products of even and odd numbers (or other combinations of additions and subtractions) that equal the value of the exponent. Thus, the two are not an apples-to-apples comparison. Likewise, exponentiation of the base b in a base-b number system is as cheap as L(1), where L is the complexity of your chosen left shift. For example, on O(n) base-2 hardware, a left shift appears to software as O(1) because the hardware provides O(n) logic gates to accomplish the operation. In a positional number system for very large numbers, shifting 1 to the left by one million bits might imply O(n) writes to memory to zero out a run of words: 1,000,000/32 = 31,250 writes to RAM on a 32-bit processor, or 1,000,000/64 = 15,625 writes on a 64-bit processor. Clearly, as the number of digits increases beyond the natural register size of the processor, left shift in a positional number system has a time complexity of O(n). But if someone is interested in efficient exponentiation in a base-b number system, then that person would likely also be interested in the time complexity of converting to and from base b. Knuth's Seminumerical Algorithms discusses base-b conversions; this is easier to find in the table of contents than in the index, by the way. —optikos (talk) 17:31, 29 June 2008 (UTC)
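The square-and-multiply idea described above can be sketched as follows (a minimal illustration in Python; the function name and choice of test values are mine, not from the article). Reducing modulo the prime after every multiplication keeps every intermediate value smaller than the modulus, so the astronomically large full product is never materialized.

```python
def mod_pow(base, exponent, modulus):
    """Modular exponentiation by repeated squaring (square-and-multiply).

    Each intermediate is reduced modulo `modulus`, so no value ever
    exceeds modulus^2 even when the unreduced power would need
    2^160 or more bits.
    """
    result = 1
    base %= modulus
    while exponent > 0:
        if exponent & 1:                      # this binary digit of the exponent is 1:
            result = (result * base) % modulus  # fold in one factor of base
        base = (base * base) % modulus        # square for the next binary digit
        exponent >>= 1
    return result
```

For example, `mod_pow(3, 5, 7)` gives `5`, matching Python's built-in three-argument `pow(3, 5, 7)`; the loop runs once per bit of the exponent, so the cost is O(log exponent) multiplications of modulus-sized numbers.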

What does lack of fastest algorithm mean?

I think that someone who is familiar with Schnorr's and Stumpf's work should explain this claim that no multiplication algorithm is fastest. Either this is trivially true for all algorithms, not merely for multiplication: there are always some small-size scenarios where one of the high-growth-rate algorithms outperforms the slower-growth-rate algorithms. Or this is misleadingly worded and presented. Certainly, the shallowest meaning cannot be what was intended: that no matter how many civilizations expend a seemingly infinite amount of effort on multiplication-algorithm research, there is 'always' a faster (i.e., slower-growth-rate) multiplication algorithm yet to be discovered. Even if such an odd reading is what is intended, isn't that trivially true for all algorithms, not merely multiplication? Please explain further, especially in the article. —optikos (talk) 17:31, 29 June 2008 (UTC)

Long division

Why is long division assumed to require only a single multiplication? Methinks that the O(n^2) of schoolbook long division, which is presented as M(1) when the chosen multiplication algorithm is schoolbook multiplication, is not based on the same foundations as Newton's method's M(n). I suspect that long division should be presented as O(n^3) here: for n digits there are n multiplication-subtraction pairs; multiplication dominates subtraction in complexity; thus there is a quantity n of O(n^2) schoolbook multiplications. O(n) times O(n^2) is O(n^3) for schoolbook long division. —optikos (talk) 17:31, 29 June 2008 (UTC)
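As a point of reference for the accounting question above, here is a minimal sketch of schoolbook long division over a digit list (the function name and representation are illustrative). Each of the O(n) quotient-digit steps performs one single-digit-times-divisor multiplication and one subtraction; how that per-step multiplication is costed is exactly what determines whether the total is written as n·M(1), O(n^2), or something else.

```python
def schoolbook_divide(dividend_digits, divisor, base=10):
    """Schoolbook long division: dividend as digits (most significant first),
    divisor as an integer. One quotient digit per step: bring down a digit,
    find the trial digit, multiply the divisor by that single digit, subtract.
    """
    quotient, remainder = 0, 0
    for digit in dividend_digits:
        remainder = remainder * base + digit  # bring down the next digit
        q = remainder // divisor              # trial quotient digit, 0..base-1
        remainder -= q * divisor              # single-digit multiply, then subtract
        quotient = quotient * base + q
    return quotient, remainder
```

For example, `schoolbook_divide([9, 8, 7, 6, 5], 43)` gives `(2296, 37)`, i.e. 98765 = 2296·43 + 37. Note that in this sketch each step multiplies the divisor by a *single* digit rather than performing a full n-digit-by-n-digit multiplication.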