Video Multi-Method Assessment Fusion

from Wikipedia, the free encyclopedia

Video Multi-Method Assessment Fusion (VMAF) is an objective metric developed by a research group at the University of Southern California together with Netflix developers for the algorithmic (automatic) assessment of image quality in videos. It evaluates a distorted video by comparing it with an undistorted reference and outputs an estimate of the differential mean opinion score (DMOS).

To form a rating, it fuses several elementary features by means of machine learning in the form of a support vector machine (SVM), which captures the dependence of human image-quality perception on the image content. The features considered can easily be exchanged. Version 0.3.1 is based on Anti-noise SNR (ANSNR), the Detail Loss Measure (DLM), Visual Information Fidelity (VIF) and, as a temporal feature, the mean difference of co-located pixels between a frame and its predecessor.
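The temporal feature just mentioned can be sketched in a few lines of Python. This is a minimal illustration, not the VDK code; frames are assumed to be two-dimensional lists of luma values:

```python
def temporal_feature(prev_frame, frame):
    """Mean absolute difference of co-located pixels between two frames.

    A large value indicates strong motion; a motion feature is useful
    because distortions are perceived differently in moving and in
    still image regions.
    """
    diff_sum = 0
    count = 0
    for row_prev, row_cur in zip(prev_frame, frame):
        for p, c in zip(row_prev, row_cur):
            diff_sum += abs(p - c)
            count += 1
    return diff_sum / count

# Toy example: a flat frame followed by a slightly changed one.
prev = [[16, 16], [16, 16]]
cur = [[20, 18], [16, 14]]
print(temporal_feature(prev, cur))  # → 2.0
```

In VMAF, a value like this is computed per frame and then passed, together with the other features, into the SVM regression that produces the final score.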

VMAF was published in 2016 (version 0.3.1) and is intended to approximate human judgment particularly closely. Compared to PSNR-HVS, it delivers significantly better results, comparable to those of the Video Quality Model with Variable Frame Delay (VQM_VFD) published in 2011. In particular, it is intended to improve the comparability of results across different types of video material and distortion.

A reference implementation programmed in C and Python ("VMAF Development Kit", VDK) was also published as free software in source code under the terms of version 2 of the Apache License.
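To illustrate how such a reference implementation is typically driven, the following sketch assembles a command line for a `vmaf` tool that compares a distorted file against a reference. The option names and file names here are assumptions for illustration and should be checked against the VDK documentation:

```python
import shutil
import subprocess

# Hypothetical invocation of a "vmaf" command-line tool built from the
# VDK; the option and file names are assumptions, not confirmed usage.
cmd = [
    "vmaf",
    "--reference", "reference.y4m",   # undistorted original
    "--distorted", "distorted.y4m",   # video to be rated
    "--output", "scores.xml",         # per-frame and pooled scores
]
print(" ".join(cmd))

# Only run the tool if it is actually installed on this machine.
if shutil.which("vmaf"):
    subprocess.run(cmd, check=True)
```

The tool compares the two inputs frame by frame and writes per-frame feature values and the fused VMAF score to the output file.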
