# Edge change ratio

The Edge Change Ratio (ECR) is a measure of the difference between two consecutive frames of a digital video and is used by algorithms for cut detection. Such an algorithm is itself sometimes referred to as an edge change ratio.

## Development

The original idea for the ECR comes from researchers at Dublin City University in Ireland, who in 1998 introduced the measure Edge Change Fraction (ECF) for the detection and classification of scene transitions in digital video material, together with an associated algorithm. Work by other researchers followed with the aim of improving the original algorithm; for example, researchers at Cornell University in Ithaca (USA) presented an algorithm in 1999 that also compensates for camera movement and, by incorporating the Hausdorff distance, can detect scene transitions more reliably.

## Mathematical definition

The Edge Change Fraction is defined as:

$\mathrm{ECF} = \max(\rho_{\text{in}}, \rho_{\text{out}})$

where $\rho_{\text{in}}$ denotes the fraction of edge pixels that enter in the second frame and $\rho_{\text{out}}$ the fraction of edge pixels that exit from the first frame.

The ECR changes this definition slightly to:

$\mathrm{ECR}_i = \max\left(\frac{\rho_{\text{in}}}{s_{i+1}}, \frac{\rho_{\text{out}}}{s_i}\right)$

where $\rho_{\text{in}}$ and $\rho_{\text{out}}$ now denote the raw counts of entering and exiting edge pixels, $s_i$ the number of all edge pixels in the first frame, and $s_{i+1}$ the number of all edge pixels in the second frame.
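Under the definitions above, the two measures can be sketched as follows; the function and parameter names are illustrative, not taken from the original papers:

```python
def ecf(rho_in: float, rho_out: float) -> float:
    """Edge Change Fraction: the maximum of the entering and exiting edge fractions."""
    return max(rho_in, rho_out)


def ecr(e_in: int, e_out: int, s_first: int, s_second: int) -> float:
    """Edge Change Ratio: entering edge pixels are normalised by the edge count
    of the second frame, exiting edge pixels by that of the first frame."""
    if s_first == 0 or s_second == 0:
        return 0.0  # guard: a frame without any edges yields no meaningful ratio
    return max(e_in / s_second, e_out / s_first)
```

A value near 1 indicates that almost all edges changed between the two frames, which is typical for a hard cut; values near 0 indicate a largely static scene.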

## How the algorithms work

For every pair of consecutive frames, the edge change ratio determines the proportion of exiting (disappearing) and entering (newly appearing) edges.

1. Two consecutive frames A and B of the digital video are selected and prepared:
   1. The frames are converted to grayscale images; only the brightness of the pixels matters for the further processing. This reduces memory requirements and shortens processing time.
   2. Newer algorithms also compensate for camera movement: by calculating the average motion vector, the most likely camera motion is estimated, and the edge regions that leave or enter the frame because of this presumed motion are cropped away.
2. A and B are converted into (monochrome) edge images A′ and B′, usually by filtering with a high-pass filter.
   1. In addition to the high-pass filter, some algorithms apply further filtering intended to suppress possible noise.
3. The dilated images A* and B* are computed from A′ and B′. The dilation widens the visible outlines; the optimal value of the dilation parameter is determined by empirical test series.
4. B′ is combined with the inverted dilated image A* by a logical AND, as is A′ with the inverted B*. In the two resulting images, the set pixels are counted, yielding $E_{\text{in}}$, the number of entering edge pixels (edges of B′ not covered by A*), and $E_{\text{out}}$, the number of exiting edge pixels (edges of A′ not covered by B*).
5. $E_{\text{out}}$ is divided by the number of edge pixels in edge image A′, and $E_{\text{in}}$ by the number of edge pixels in edge image B′. The results are the edge exit ratio $\rho_{\text{out}}$ and the edge entry ratio $\rho_{\text{in}}$.
6. The maximum of $\rho_{\text{out}}$ and $\rho_{\text{in}}$ is the ECF.
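A minimal sketch of the steps above, assuming grayscale frames given as NumPy arrays; the simple gradient threshold stands in for a proper edge detector, and the dilation uses a square neighbourhood (names and parameter values are illustrative):

```python
import numpy as np


def edge_image(gray: np.ndarray, threshold: float = 30.0) -> np.ndarray:
    """Step 2: a crude edge detector that thresholds the gradient magnitude."""
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy) > threshold


def dilate(edges: np.ndarray, radius: int = 1) -> np.ndarray:
    """Step 3: binary dilation with a (2*radius+1)^2 square structuring element.
    np.roll wraps around at the image border, which is acceptable for a sketch."""
    out = edges.copy()
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out |= np.roll(np.roll(edges, dy, axis=0), dx, axis=1)
    return out


def ecr(frame_a: np.ndarray, frame_b: np.ndarray) -> float:
    """Steps 2-6: edge change ratio of two consecutive grayscale frames."""
    a_e, b_e = edge_image(frame_a), edge_image(frame_b)
    a_d, b_d = dilate(a_e), dilate(b_e)
    # Step 4: exiting pixels are edges of A' not covered by the dilated B,
    # entering pixels are edges of B' not covered by the dilated A.
    e_out = np.count_nonzero(a_e & ~b_d)
    e_in = np.count_nonzero(b_e & ~a_d)
    s_a, s_b = np.count_nonzero(a_e), np.count_nonzero(b_e)
    if s_a == 0 or s_b == 0:
        return 0.0  # a frame without edges yields no meaningful ratio
    # Steps 5-6: normalise and take the maximum.
    return max(e_in / s_b, e_out / s_a)
```

Camera-motion compensation (step 1.2) is omitted here; a production implementation would also replace the gradient threshold with a more robust edge detector such as Canny.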

### Edge images

The most important step in determining the ECR of two frames is the creation of the edge images. A "good" edge image is a binary image that contains nothing more and nothing less than the outlines of all objects in the original image. The starting point for creating edge images are the mathematical filters described under edge detection. In principle, these determine how strongly the color values of two neighboring pixels differ from one another and record the strength of this deviation in a grayscale image.
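The neighboring-pixel comparison described above can be sketched with a simple gradient filter; the function name and the 0-255 rescaling are illustrative choices, not prescribed by the ECR literature:

```python
import numpy as np


def gradient_magnitude(gray: np.ndarray) -> np.ndarray:
    """Difference between neighboring pixels, drawn as a grayscale image (0-255)."""
    gy, gx = np.gradient(gray.astype(float))  # central differences per axis
    mag = np.hypot(gx, gy)                    # strength of the deviation
    if mag.max() > 0:
        mag *= 255.0 / mag.max()              # rescale for display
    return np.round(mag).astype(np.uint8)
```

A binary edge image as required by the ECR is then obtained by thresholding the result, e.g. `gradient_magnitude(gray) > 60`.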

## References

1. Aidan Totterdell: An Algorithm for Detecting and Classifying Scene Breaks in MPEG Video Bit Streams. Technical Report 98-05, School of Computer Applications, Dublin City University, September 21, 1998.
2. Ramin Zabih, Justin Miller, Kevin Mai: A feature-based algorithm for detecting and classifying production effects. In: Multimedia Systems, Vol. 7, 1999, pp. 119-128.