To analyze and optimize Video Analytics settings for a particular customer setup, we need some initial information and an export of example video.
This article describes the minimum information required to start analyzing and optimizing Video Analytics performance.
Video Analytics (VA) task performance optimization
Please provide the following details:
Symptom-specific information
Describe the expected result of the VA task.
It is important to get detailed information about the results the VA task should provide.
Define the customer's priority:
High sensitivity - miss as few true alarms as possible, but accept a higher likelihood of false alarms.
High precision - generate fewer false alarms, but accept the risk of missing true alarms.
Using VA always involves balancing sensitivity against precision; it is not always possible to achieve high sensitivity and high precision at the same time.
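As an illustration of this trade-off (using hypothetical alarm counts, not values from any real system), sensitivity and precision can be computed from the counts of detected, missed, and false alarms:

```python
# Illustrative only: hypothetical alarm counts for one day of footage.
true_alarms_detected = 18   # true events that raised an alarm (true positives)
true_alarms_missed = 2      # true events that raised no alarm (false negatives)
false_alarms = 10           # alarms with no real event (false positives)

# Sensitivity (recall): fraction of true events that were detected.
sensitivity = true_alarms_detected / (true_alarms_detected + true_alarms_missed)

# Precision: fraction of raised alarms that corresponded to real events.
precision = true_alarms_detected / (true_alarms_detected + false_alarms)

print(f"sensitivity = {sensitivity:.2f}")  # 0.90
print(f"precision   = {precision:.2f}")    # 0.64
```

Tuning a task to catch the two missed events (raising sensitivity toward 1.0) typically also raises the false-alarm count, which lowers precision; the customer's stated priority decides which way to tune.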
Provide an export of recorded video, including Video Analytics metadata:
For a basic understanding of metadata, please read the latest Software Manual Video Content Analytics.
For instructions on exporting video with metadata, please read the article: How to export video including meta data (Export video including Video Analytics meta data).
A couple of examples of false alarms (in case false alarms are the issue)
Provide the number of false alarms per time span (per day, per week, etc.)
State what number of false alarms per time span would be acceptable.
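If only raw event timestamps are available, the false-alarm rate per time span can be derived from them. A minimal sketch, using hypothetical timestamps:

```python
from datetime import datetime

# Hypothetical false-alarm timestamps collected from the customer's logs.
false_alarm_times = [
    datetime(2024, 5, 1, 3, 12),
    datetime(2024, 5, 1, 14, 40),
    datetime(2024, 5, 2, 9, 5),
    datetime(2024, 5, 4, 22, 51),
]

# Length of the observed span in whole days (first to last alarm, inclusive).
span_days = (max(false_alarm_times) - min(false_alarm_times)).days + 1

# Average false-alarm rate, expressed per day.
rate_per_day = len(false_alarm_times) / span_days
print(f"{len(false_alarm_times)} false alarms over {span_days} days "
      f"= {rate_per_day:.1f} per day")
```

Reporting both the observed rate and the acceptable rate makes it clear how far the current configuration is from the customer's target.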
A couple of examples of situations that should have triggered an alarm, but where VA did not generate the alarm as expected.
In this case, please provide the time stamp of each event and a short description.
Screenshot from the Camera Calibration Verification page - only if calibration is required for the particular VA functionality
Access the Calibration Verification page of the camera (via the Configuration Manager or the web interface).
Provide a screenshot of the verification of each parameter used for camera calibration.
Note: For more information on Camera Calibration and Calibration Verification, please check:
How To calibrate camera for Video Analytics?
The Traffic detector has been developed to cope with all traffic densities, up to high-density situations such as congestion and queues in front of traffic lights. It also detects non-moving vehicles and is thus applicable to smart parking. In addition to vehicles, it also detects pedestrians.
Tunnels & Highways:
- Collection of traffic statistics
- Detection of congestion for automatic speed signaling
- Detection of wrong-way driving
Intersections:
- Detection of presence and volume of vehicles
Smart parking:
- Detection of parked cars
When the camera's auto tracking setting (in the web interface) is set to "Auto", IVA detects an object but the camera does not follow it.
Auto tracking works and metadata is displayed only on a stored preposition.
A change was introduced starting with firmware 7.70: i-Track now only starts on a VCA alarm.
You can either program this on presets, e.g. with a detection field, or, if you want i-Track to be active regardless of the camera position, activate "Global VCA" and assign a detection task with the detection area "Whole screen".
Then activate i-Track via "AUXON78" so that the eye icon appears in the corner, and i-Track will always be armed.
Note: IVA must be active for at least one preset scene in the VCA page on the Settings tab.
If IVA is configured for one scene, then all other scenes have Intelligent Tracking enabled by default. If, however, a scene has Motion+ or IVA 5.5 Flow activated, Intelligent Tracking is disabled for that scene.
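The per-scene rule described above can be sketched as follows. This is a simplified model for illustration only, not actual camera firmware code; the type names are assumptions based on the article:

```python
def intelligent_tracking_enabled(scene_vca_type, any_scene_has_iva):
    """Simplified model of the per-scene Intelligent Tracking rule.

    scene_vca_type:     VCA type configured for this preset scene,
                        e.g. "IVA", "Motion+", "IVA 5.5 Flow", or None.
    any_scene_has_iva:  True if IVA is active for at least one preset scene.
    """
    # Motion+ and IVA 5.5 Flow disable Intelligent Tracking for that scene.
    if scene_vca_type in ("Motion+", "IVA 5.5 Flow"):
        return False
    # Otherwise tracking is enabled by default, provided IVA is active
    # on at least one preset scene.
    return any_scene_has_iva

print(intelligent_tracking_enabled("IVA", True))      # True
print(intelligent_tracking_enabled("Motion+", True))  # False
print(intelligent_tracking_enabled(None, True))       # True
print(intelligent_tracking_enabled(None, False))      # False
```

The last case reflects the note above: without IVA active on at least one preset scene, Intelligent Tracking is not available on any scene.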
Please refer also to the attached document.