{{short description|High-speed approximation of the square root of the sum of two squares}}
{{Distinguish|Minimax|Alpha–beta pruning}}


[[File:AlphaMaxBetaMin.png|thumb|The locus of points that give the same value in the algorithm, for different values of alpha and beta]]


The '''alpha max plus beta min algorithm'''<ref>{{Cite journal |last=Assim |first=Ara Abdulsatar Assim |date=2021 |title=ASIC implementation of high-speed vector magnitude & arctangent approximator |journal=Computing, Telecommunication and Control |volume=71 |issue=4 |pages=7–14 |url=https://infocom.spbstu.ru/en/article/2021.71.01/ |language=en |doi=10.18721/JCSTCS.14401}}</ref> is a high-speed approximation of the [[square root]] of the sum of two squares. The square root of the sum of two squares, also known as [[Pythagorean addition]], is a useful function, because it finds the [[hypotenuse]] of a right triangle given the two side lengths, the [[norm (mathematics)|norm]] of a 2-D [[vector (geometric)|vector]], or the [[magnitude (mathematics)|magnitude]] <math>|z| = \sqrt{a^2 + b^2}</math> of a [[complex number]] {{math|1=''z'' = ''a'' + ''bi''}} given the [[real number|real]] and [[imaginary number|imaginary]] parts.


The algorithm avoids performing the square and square-root operations, instead using simple operations such as comparison, multiplication, and addition. Some choices of the α and β parameters of the algorithm allow the multiplication operation to be reduced to a simple shift of binary digits that is particularly well suited to implementation in high-speed digital circuitry.


The approximation is expressed as
<math display="block">|z| = \alpha\,\mathbf{Max} + \beta\,\mathbf{Min},</math>
where <math>\mathbf{Max}</math> is the maximum absolute value of ''a'' and ''b'', and <math>\mathbf{Min}</math> is the minimum absolute value of ''a'' and ''b''.


For the closest approximation, the optimum values for <math>\alpha</math> and <math>\beta</math> are <math>\alpha_0 = \frac{2 \cos \frac{\pi}{8}}{1 + \cos \frac{\pi}{8}} = 0.960433870103...</math> and <math>\beta_0 = \frac{2 \sin \frac{\pi}{8}}{1 + \cos \frac{\pi}{8}} = 0.397824734759...</math>, giving a maximum error of 3.96%.


{| class="wikitable" style="text-align:right"
! <math>\alpha\,\!</math> || <math>\beta\,\!</math> || Largest error (%) || Mean error (%)
|-
| 1/1 || 1/2 || 11.80 || 8.68
|-
| 1/1 || 1/4 || 11.61 || 3.20
|-
| 1/1 || 3/8 || 6.80 || 4.25
|-
| 7/8 || 7/16 || 12.50 || 4.91
|-
| 15/16 || 15/32 || 6.25 || 3.08
|-
| <math>\alpha_0</math> || <math>\beta_0</math> || 3.96 || 2.41
|}
[[File:Alpha Max Beta Min approximation.png|800px|centre]]
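The table's <math>\alpha = 15/16</math>, <math>\beta = 15/32</math> row illustrates the shift-only property: since <math>\tfrac{15}{16}\mathbf{Max} = \mathbf{Max} - \mathbf{Max}/16</math> and <math>\tfrac{15}{32}\mathbf{Min} = \mathbf{Min}/2 - \mathbf{Min}/32</math>, both multiplications reduce to shifts and subtractions. A hypothetical integer-only sketch (the function name is not from the cited sources):

```c
/* Integer magnitude estimate with alpha = 15/16, beta = 15/32;
 * every coefficient is applied with shifts and subtractions only. */
#include <stdlib.h>

unsigned mag_est(int a, int b) {
    unsigned x = (unsigned)abs(a), y = (unsigned)abs(b);
    unsigned max = x > y ? x : y;
    unsigned min = x > y ? y : x;
    /* 15/16 * max = max - max>>4;  15/32 * min = min>>1 - min>>5 */
    return (max - (max >> 4)) + ((min >> 1) - (min >> 5));
}
```

The truncation in the right-shifts adds at most a few least-significant bits of error on top of the 6.25% bound from the table.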


==Improvements==
When <math>\alpha < 1</math>, <math>|z|</math> becomes smaller than <math>\mathbf{Max}</math> (which is geometrically impossible) near the axes where <math>\mathbf{Min}</math> is near 0.
This can be remedied by replacing the result with <math>\mathbf{Max}</math> whenever that is greater, essentially splitting the line into two different segments.


: <math>|z| = \max(\mathbf{Max}, \alpha\,\mathbf{Max} + \beta\,\mathbf{Min}).</math>
Depending on the hardware, this improvement can be almost free.

Using this improvement changes which parameter values are optimal, because they no longer need a close match for the entire interval. A lower <math>\alpha</math> and higher <math>\beta</math> can therefore increase precision further.

''Increasing precision:'' When splitting the line in two like this, one could improve precision even more by replacing the first segment with a better estimate than <math>\mathbf{Max}</math>, adjusting <math>\alpha</math> and <math>\beta</math> accordingly.

: <math>|z| = \max\big(|z_0|, |z_1|\big),</math>
: <math>|z_0| = \alpha_0\,\mathbf{Max} + \beta_0\,\mathbf{Min},</math>
: <math>|z_1| = \alpha_1\,\mathbf{Max} + \beta_1\,\mathbf{Min}.</math>

{| class="wikitable" style="text-align:right"
! <math>\alpha_0</math> || <math>\beta_0</math> || <math>\alpha_1</math> || <math>\beta_1</math> || Largest error (%)
|-
| 1 || 0 || 7/8 || 17/32 || −2.65
|-
| 1 || 0 || 29/32 || 61/128 || +2.4
|-
| 1 || 0 || 0.898204193266868 || 0.485968200201465 || ±2.12
|-
| 1 || 1/8 || 7/8 || 33/64 || −1.7
|-
| 1 || 5/32 || 27/32 || 71/128 || 1.22
|-
| 127/128 || 3/16 || 27/32 || 71/128 || −1.13
|}

Beware, however, that a non-zero <math>\beta_0</math> requires at least one extra addition and some bit-shifts (or a multiplication). This probably nearly doubles the cost and, depending on the hardware, may defeat the purpose of using an approximation in the first place.

==See also==
*[[Hypot]], a precise function or algorithm that is also safe against overflow and underflow.


==References==
{{Reflist}}
*[[Richard G. Lyons|Lyons, Richard G]]. ''Understanding Digital Signal Processing'', section 13.2. Prentice Hall, 2004. {{ISBN|0-13-108989-7}}
*Griffin, Grant. [http://www.dspguru.com/dsp/tricks/magnitude-estimator DSP Trick: Magnitude Estimator].


==External links==
*{{cite web |title=Extension to three dimensions |date=May 14, 2015 |work=[[Stack Exchange]] |url=https://math.stackexchange.com/q/1282435 }}


[[Category:Approximation algorithms]]
[[Category:Root-finding algorithms]]
[[Category:Pythagorean theorem]]
