Professional Intrusion Detection with IVA Pro Perimeter (FW 8.90 / 9.40)
📚Overview:
1 Introduction
1.1 Applications
1.2 Common product platform (CPP)
1.3 Why not just use IVA Pro Buildings?
1.4 Two generations of IVA Pro Perimeter
2 Configuration
2.1 Licensing
2.2 Activating IVA Pro Perimeter
2.3 Calibration
2.4 Configure alarm task
3 Technical Background, FAQ and Tips & Tricks
3.1 What is an intruder?
3.2 How can I distinguish intruders from animals?
3.3 What is a camera calibration and when do I need it?
3.4 How does it work?
3.5 How should I set up the camera view?
3.6 How far into the distance can IVA Pro Perimeter detect objects?
3.7 Why are objects lost when they don’t move anymore?
3.8 Why are objects detected so late?
3.9 How to get the longest possible detection distance?
3.10 Shaking / vibrating camera
3.11 Infrared illumination & insect swarms
3.12 Optimization via forensic search
3.13 General advice
1 Introduction
This article describes best practices and answers common questions regarding intrusion detection using IVA Pro Perimeter with FW 8.90 (CPP13) / 9.40 (CPP14).
1.1 Applications
Perimeter protection
Sterile zones
Warehouse after hours
Solar plants
Façade protection
… and wherever and whenever no one is supposed to be within an area during a certain time
1.2 Common product platform (CPP)
Bosch cameras can be grouped by their common product platform. As different platforms offer different amounts of processing power, this can lead to differences in performance. For an overview of the product platforms and the cameras belonging to them, see the tech note on Video Content Analysis (VCA) Capabilities per Device. IVA Pro Perimeter is available on most Bosch IP cameras of CPP13 and CPP14.
1.3 Why not just use IVA Pro Buildings?
There are two reasons to prefer IVA Pro Perimeter over IVA Pro Buildings for professional intrusion detection:
(1) Long distance surveillance. IVA Pro Buildings only covers short to medium ranges. For long distances, IVA Pro Perimeter is needed. (2) Protection against professional intruders. IVA Pro Buildings can only detect upright persons. Professional intruders, on the other hand, will do everything to avoid detection, including crawling, rolling, or hiding behind camouflage, cardboard or other cover. IVA Pro Buildings cannot detect them.
IVA Pro Perimeter, on the other hand, has been specifically designed to deal with such subterfuge.
1.4 Two generations of IVA Pro Perimeter
With FW 9.40, a new generation of IVA Pro Perimeter has been released for selected cameras. The new generation is AI-enhanced by adding AI-based object classification as one of the object verification criteria in the perimeter tracking modes. Thus, it improves sensitivity for persons and vehicles and significantly reduces false alarms. However, not all IVA Pro cameras have the processing power to support the AI-enhanced version; therefore neither the CPP13 IVA Pro cameras nor the FLEXIDOME multi, corner or panoramic can be upgraded. Note also that the 3000 series does not support IVA Pro Perimeter at all. In addition, the two generations have the following differences:
Step-by-step guide
2 Configuration
2.1 Licensing
IVA Pro Perimeter is available by default, without the need to activate any license, on all CPP13 and CPP14 cameras of the 7000+ range. It is also available by default on the FLEXIDOME panoramic and multi, even though they belong to the 5000 range. IVA Pro Perimeter licenses can be bought for all other CPP14 cameras of the 5000 range. For the license activation process, please see the dedicated whitepaper on Intelligent Analytics Licensing.
2.2 Activating IVA Pro Perimeter
Configuration of IVA Pro Perimeter requires the Bosch Configuration Manager 7.74 or higher. In the Configuration Manager, select the target camera, then go to VCA -> Main Operation. Set the Analysis Type to IVA Pro Perimeter. The Tracking parameters are automatically set to Perimeter tracking (2D). To activate Perimeter tracking (3D), the camera needs to be calibrated first, otherwise the entry will be inactive.
2.3 Calibration
Full information on calibration can be found in the IVA Pro whitepaper on calibration and geolocation. Calibration is needed whenever map coordinates of the objects should be calculated. It is also recommended to use calibration and Perimeter tracking (3D) for best performance in mid to long distance surveillance on flat ground as both detection sensitivity and false alarm resistance are boosted greatly by using perspective information. Do not use calibration when the ground is not flat, nor when looking for climbing or thrown objects.
2.4 Configure alarm task
By default, IVA Pro Perimeter will alarm on any moving object. In practice, a restriction to the target area is usually done. For that, go to VCA -> Tasks and either click on the line of the default task and then on the edit button, or delete the default task and set up a new “Object in field” task.
On the next page, you can draw the target area by clicking in the video image to the right for each node of the field, finishing with a double click. The field itself, each line and each node can be moved afterwards: hover with the mouse over the field to see which element is highlighted, then click and drag. The debounce time determines how long the object needs to be observed in the field in order to trigger an alarm. The intersection trigger determines which part of the object needs to be in the field. The default is the object base point, which means that e.g. the feet of a person need to be inside the marked area. There is usually no need to change this. Further object attributes can be used on the following pages to filter the alarm objects, e.g. by object size, if needed.
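To make the field logic more tangible, here is a minimal, hypothetical Python sketch (not Bosch code) of how a base-point-in-field check combined with a debounce time could work. The field coordinates, frame rate, debounce time and example track are all assumed values:

```python
# Hypothetical sketch of an "object in field" check with debounce time.
# Not Bosch code; field, frame rate, debounce time and track are assumed.

def point_in_polygon(point, polygon):
    """Ray-casting test: is the point inside the closed polygon?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

FIELD = [(100, 400), (500, 400), (500, 700), (100, 700)]  # assumed target area (pixels)
DEBOUNCE_S = 1.0                                          # assumed debounce time
FPS = 12.5                                                # assumed analysis frame rate

# Assumed example track of object base points (e.g. feet position), one per frame
object_track = [(80, 550), (120, 560), (160, 560), (200, 570), (240, 570),
                (280, 580), (320, 580), (360, 590), (400, 590), (440, 600),
                (460, 600), (470, 610), (480, 610), (490, 620), (495, 620)]

time_in_field = 0.0
for base_point in object_track:
    if point_in_polygon(base_point, FIELD):
        time_in_field += 1.0 / FPS
        if time_in_field >= DEBOUNCE_S:
            print(f"ALARM: base point {base_point} inside field for {time_in_field:.2f} s")
    else:
        time_in_field = 0.0  # object left the field, debounce starts over
```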
3 Technical Background, FAQ and Tips & Tricks
3.1 What is an intruder?
If we talk about intruders, we typically mean people entering areas which are off-limits to them. Depending on the application, however, the people may also sit in vehicles or on bikes. Furthermore, professional intruders typically do not walk into the area but crawl or roll, in order to present the camera with the smallest possible view of themselves.
3.2 How can I distinguish intruders from animals?
You can’t. It is possible to separate standing and walking people from smaller animals like dogs, foxes or rabbits by their size, but if we talk about professional intruders crawling or rolling into the scene, then most of the time the difference to the animal in question is not large enough for a robust classification. There is currently no video analytics solution for intrusion detection on the market that can really solve this problem. If you are only interested in walking / standing persons, the automatic object classification can be used. See the tech note on object classification for configuration details.
3.3 What is a camera calibration and when do I need it?
A camera calibration teaches the camera about the perspective in the scene. Due to perspective, persons in the rear of the video image appear smaller and cover fewer pixels, though their real size is the same. A calibration is thus needed whenever the real size and speed of objects are required, as well as for an automatic perspective correction of object sizes. Perspective knowledge also allows IVA Pro Perimeter to differentiate much more robustly between persons, vehicles and false detections. Calibration becomes more important the larger the area covered by a single camera is. For small areas (10-20m distance), the perspective effect is typically negligible; for larger areas, it becomes essential for robust performance. Note that the longer the distance, the less reliable object size and speed estimations become, as fewer pixels are available per meter.
To calibrate, the position of the camera in relation to a single, planar ground plane is described by the elevation of the camera, the angles (tilt, roll) towards the ground plane and the focal length of the lens. As calibration is only done in reference to a single, planar ground plane, scenes with stairs, escalators, several ground levels, facades or rising ground cannot be calibrated correctly. If the rising ground differs only a little from the planar ground plane, a best-effort calibration can be tried. In all other cases, please refrain completely from using a calibration and, if needed, set the object size filters for the different image regions by hand.
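To illustrate why perspective correction matters more at longer distances, here is a rough pinhole-camera sketch in Python. The sensor size, resolution and focal length are assumed example values, not a product specification:

```python
# Rough pinhole-camera sketch (assumed values, not a Bosch specification):
# apparent pixel height of a 1.80 m person at different distances.

SENSOR_HEIGHT_MM = 3.6   # assumed vertical sensor dimension
IMAGE_HEIGHT_PX = 1080   # assumed vertical resolution used for analysis
FOCAL_LENGTH_MM = 9.0    # assumed lens focal length
PERSON_HEIGHT_M = 1.80

for distance_m in (10, 20, 50, 100, 200):
    # Project the person's height onto the sensor, then convert mm to pixels
    projected_mm = FOCAL_LENGTH_MM * PERSON_HEIGHT_M / distance_m
    projected_px = projected_mm / SENSOR_HEIGHT_MM * IMAGE_HEIGHT_PX
    print(f"{distance_m:>4} m distance -> person is about {projected_px:6.1f} px tall")
```

In this assumed setup, the same person covers roughly twenty times fewer pixels at 200 m than at 10 m, which is exactly the perspective effect that a calibration describes.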
Further information on calibration can be found in the IVA Pro whitepaper on calibration and geolocation.
3.4 How does it work?
In the past, most intrusion detection algorithms have been based on optical flow and/or background subtraction. Nowadays, deep-learning based AI detectors are often used. All of these approaches have their advantages and disadvantages, which is why IVA Pro Perimeter combines them in an intelligent way to get the best performance.
Optical flow is a motion estimation. Taking two images a few frames apart, for every part within the first image, the corresponding best fit in the other image is determined. Areas without motion are then ignored, and areas with same motion are clustered. The advantage is that this algorithm helps to track objects over time, and it is able to detect any kind of moving object. The disadvantages are that motion in the background, e.g. vegetation waving in the wind, will cause false detections, and non-moving objects cannot be detected. Groups of objects cannot be separated, though this is of less relevance for intrusion detection.
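Purely as a conceptual illustration (not the IVA Pro implementation), dense optical flow between two frames a few frames apart could be computed with OpenCV as follows; the file names and the motion threshold are assumed:

```python
import cv2
import numpy as np

# Conceptual optical-flow sketch using OpenCV (not the IVA Pro implementation).
# 'frame_t0.png' and 'frame_t1.png' are assumed example images a few frames apart.
prev = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

# Dense flow: for every pixel of the first image, estimate its motion vector.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
moving = magnitude > 1.0   # ignore areas without motion (threshold assumed)
print("moving pixels:", int(np.count_nonzero(moving)))
```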
For background subtraction, one, multiple or stochastic background images are learned over time and updated continuously. Every difference to the learned background is then extracted as a moving foreground object and tracked over time. The advantage is again that this is able to detect any kind of moving object. The disadvantage is that motion in the background, e.g. vegetation waving in the wind, will cause false detections, and that this approach is sensitive to illumination changes. Also, groups of objects cannot be separated, though this is of less relevance for intrusion detection.
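Again purely as an illustration of the principle (not the IVA Pro implementation), a continuously updated background model and the extraction of foreground blobs can be sketched with OpenCV; the video file name and the size filter are assumed values:

```python
import cv2

# Conceptual background-subtraction sketch (not the IVA Pro implementation).
# 'perimeter.mp4' is an assumed example recording.
capture = cv2.VideoCapture("perimeter.mp4")

# MOG2 maintains a per-pixel mixture-of-Gaussians background model that is
# updated continuously to follow gradual illumination changes.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=False)

while True:
    ok, frame = capture.read()
    if not ok:
        break
    foreground = subtractor.apply(frame)   # pixels differing from the learned background
    contours, _ = cv2.findContours(foreground, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    blobs = [c for c in contours if cv2.contourArea(c) > 200]  # assumed size filter
    # 'blobs' would then be tracked over time as candidate moving objects.
capture.release()
```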
The first generation of IVA Pro Perimeter uses both optical flow and background subtraction together with statistical analysis and object track validation to be able to detect any kind of moving object, e.g. professional intruders crawling, rolling or being camouflaged, while suppressing as many false detections from background motion as possible.
Machine learning for object detection is the process of a computer determining a good separation between positive (target) samples and negative (background) samples. In order to do that, a machine learning algorithm builds a model of the target object based on a variety of possible features, and of thresholds where these features do/don’t describe the target object. This model building is also called the training phase or training process. Once the model is available, it’s used to search for the target object in the images later on. This search in the image together with the model is called a detector.
Neural networks are inspired by the visual cortex and are able to learn descriptive features on their own. They use a network of optimization parameters instead of handcrafted features. Typically, neural networks for image processing also learn edge features and combine them first into parts of the object and then into the full target object. Deep neural networks for image processing use roughly 20 million parameters.
The advantage is that object detectors based on deep neural networks can simultaneously detect and classify, and they will detect non-moving objects. They can also separate objects in groups or crowds. The disadvantage is that they can only detect what they have been trained for. Professional intruders hiding behind a mundane cardboard box, for example, will thus not be detected at all; see section 1.3. Objects similar in shape may be confused, adding a certain amount of false and missed detections as well. Because of both these disadvantages, the separation of professional intruders, who may roll, crawl and camouflage themselves, from animals causing false alarms is not fully possible. Also, AI-based detectors typically need more resolution, which translates into less detection distance and more computational power needed. IVA Pro Buildings and IVA Pro Traffic are solely based on deep neural networks. The new generation of IVA Pro Perimeter combines optical flow, background subtraction, statistical and track analysis as well as AI detectors trained with deep neural networks to bring the best performance to professional intrusion detection.
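The combination idea can be sketched in a strongly simplified, hypothetical form: motion-based candidate tracks are kept for any consistently moving object, while the AI classifier only adds class labels and confidence where it recognizes persons or vehicles, so camouflaged or crawling intruders are not discarded just because the classifier does not fire. All names and thresholds below are assumptions for illustration, not Bosch code:

```python
from dataclasses import dataclass

@dataclass
class Track:
    length_m: float     # distance travelled so far (from motion tracking)
    duration_s: float   # time the track has been observed

def verify_track(track: Track, ai_label: str, ai_score: float):
    """Hypothetical fusion of motion-based tracking and AI classification."""
    consistent_motion = track.length_m > 1.0 and track.duration_s > 0.5
    if ai_label in ("person", "vehicle") and ai_score > 0.5:
        # AI confirmation: report with a concrete class label
        return True, ai_label
    if consistent_motion:
        # No AI confirmation, but consistent motion: keep as a generic object,
        # so crawling, rolling or camouflaged intruders are not thrown away
        return True, "object"
    return False, None

# A crawling, camouflaged intruder the classifier does not recognize is still kept:
print(verify_track(Track(length_m=2.5, duration_s=4.0), "unknown", 0.1))
# Noise without consistent motion and without a class is dropped:
print(verify_track(Track(length_m=0.3, duration_s=0.2), "unknown", 0.05))
```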
3.5 How should I set up the camera view?
If possible, make sure that intruders cross the field of view instead of walking towards the camera. Due to perspective, a person walking toward the camera does not cross as many pixels in the image and does not have as much apparent motion as a person crossing the camera view, and is thus more difficult to detect and separate from noise. Higher elevation is preferred for the same reason: though higher poles are more expensive and prone to shaking, the lower a camera is mounted, the less apparent motion objects walking toward the camera have, and the harder they are to detect. Note also that the more area is covered by the selected lens, the farther an object must travel to cross the same number of pixels.
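The effect of viewing direction and mounting height on apparent motion can be estimated with a small, simplified pinhole sketch; the focal length, distance and camera heights are assumed example values:

```python
import math

# Simplified pinhole sketch (assumed values): apparent image motion for 1 m of
# lateral movement vs. 1 m of movement toward the camera, at two mounting heights.

FOCAL_PX = 2700.0    # assumed focal length expressed in pixels
DISTANCE_M = 50.0    # assumed object distance

for camera_height_m in (3.0, 8.0):
    # Lateral step: horizontal shift of the object in the image
    lateral_px = FOCAL_PX * 1.0 / DISTANCE_M
    # Step toward the camera: approximate vertical shift of the base (foot) point,
    # which depends strongly on how high the camera is mounted
    angle_far = math.atan(camera_height_m / DISTANCE_M)
    angle_near = math.atan(camera_height_m / (DISTANCE_M - 1.0))
    radial_px = FOCAL_PX * (angle_near - angle_far)
    print(f"camera at {camera_height_m:.0f} m: 1 m across ~ {lateral_px:.0f} px, "
          f"1 m toward camera ~ {radial_px:.1f} px")
```

In this assumed setup, crossing the view produces far more pixel motion than approaching the camera, and the higher mounting more than doubles the apparent motion of the approaching person.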
3.6 How far into the distance can IVA Pro Perimeter detect objects?
A general answer cannot be given, as this depends on the chosen camera, the video aspect ratio, the camera perspective, the focal length and the light and weather conditions. Furthermore, both Intelligent Video Analytics and Essential Video Analytics are not computed directly on the original camera resolution but on a reduced one, due to computational power limits. For an overview of which resolution is used on which camera, please see the tech note on VCA Capabilities per Device. Generally, a larger focal length means a larger zoom factor and a smaller width of the field of view: one can see farther into the distance but less far to the left and right than with a smaller focal length. Furthermore, with a larger focal length, the unobserved area in front of the camera is much larger as well. Another trade-off for the larger detection range of a larger focal length is that motion towards the camera takes longer to detect.
The detection distance also depends on the size of the object, with longer detection ranges for larger objects.
Distance resolution is best near the camera and degrades heavily into the distance, where a single pixel in the internal resolution often covers several meters of ground.
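As a rough, assumed example (not a Bosch specification) of how quickly the ground coverage per pixel degrades, a simple flat-ground pinhole model gives:

```python
# Rough flat-ground pinhole sketch (assumed values, not a Bosch specification):
# how many meters of ground a single pixel row covers at increasing distance.

FOCAL_PX = 2700.0        # assumed focal length expressed in pixels
CAMERA_HEIGHT_M = 4.0    # assumed camera mounting height

def ground_distance(pixels_below_horizon: float) -> float:
    # A pixel row this far below the horizon hits the flat ground at this distance
    return CAMERA_HEIGHT_M * FOCAL_PX / pixels_below_horizon

for px in (200, 100, 50, 25):
    covered = ground_distance(px) - ground_distance(px + 1)
    print(f"{px:>4} px below horizon (~{ground_distance(px):4.0f} m away): "
          f"one pixel row covers {covered:5.2f} m of ground")
```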
Use the Bosch Video Analytics and Lens Calculator at http://www.boschsecurity.com/LensCalculator/html/lens-calculator.html to determine video analytics detection distances for specific cameras, lenses and focal lengths.
3.7 Why are objects lost when they don’t move anymore?
The background subtraction needs a stable background image to work properly. As day changes to night and back, illumination changes, and there are weather effects and changes of season, the background visually changes. It therefore needs to be adapted continuously to keep up with the changing environment. For this reason, non-moving objects are declared part of the background in IVA Pro Perimeter after a configurable time. Go to VCA -> Metadata -> Stationary timeout to increase the time that an object will be kept even though it is no longer moving.
3.8 Why are objects detected so late?
Objects are actually detected as soon as they appear. However, to validate that they are interesting objects with consistent motion and not spurious detections caused by wind in trees or flags or by falling rain drops and snowflakes, IVA Pro Perimeter holds the detection back for a few frames. To get the objects as soon as they appear, go to Metadata Generation -> Tracking and raise the sensitivity to max. For the first generation of IVA Pro Perimeter, also set noise suppression to off. In the AI-enhanced IVA Pro Perimeter, changing the noise suppression does not influence the speed of object detection.
3.9 How to get the longest possible detection distance?
Always use the tracking parameter “Perimeter tracking (3D)”: adding the calibration greatly improves both sensitivity and false alarm resistance. In the original IVA Pro Perimeter, also ensure that noise suppression is off or medium, as the detection distance will otherwise be halved. In the newer generation, noise suppression no longer influences the detection distance.
3.10 Shaking / vibrating camera
When the camera shakes, the content of the whole image shakes with it. The effects are especially visible around edges, as these cause the most change. Thus, false alerts can occur and the tracking of existing objects can be disrupted. IVA Pro Perimeter automatically compensates for shaking and vibrating cameras, though detection distance will drop as objects smaller than the shaking cannot be detected.
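The exact stabilization inside IVA Pro Perimeter is not documented here, but the general principle of estimating and compensating a global image shift can be illustrated with a generic OpenCV sketch; the frame file names are assumed example images:

```python
import cv2
import numpy as np

# Generic illustration of global-motion estimation for camera shake
# (not the IVA Pro implementation; example frames are assumed).
prev = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
curr = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Phase correlation estimates one global (dx, dy) shift between the frames.
(dx, dy), response = cv2.phaseCorrelate(prev, curr)

# Shift the current frame back so that the background lines up again;
# the remaining differences are more likely to be real object motion.
h, w = curr.shape
warp = np.float32([[1, 0, -dx], [0, 1, -dy]])
stabilized = cv2.warpAffine(curr, warp, (w, h))
residual_motion = cv2.absdiff(stabilized, prev)
print(f"estimated shake: dx={dx:.2f} px, dy={dy:.2f} px")
```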
3.11 Infrared illumination & insect swarms
Insects are drawn to the light of infrared illuminators. If the infrared illuminator is integrated in the camera or positioned close by, this means that a myriad of insects will flutter through the video and cause false alerts. Therefore, always position the illuminator at least 80 cm away from the camera. Though false alerts due to insects cannot be suppressed completely, they are greatly reduced by IVA Pro Perimeter, and even further in the AI-based second generation.
3.12 Optimization via forensic search
There are two parts to the configuration of IVA Pro Perimeter. The first part defines the object detection and tracking, also called metadata generation. This includes camera calibration, selection of the tracking mode, masking areas from the processing, sensitivity, noise suppression and stationary timeouts. This first part needs to be set up correctly from the start, as it cannot be changed retroactively for video that has already been recorded.
The second part of the configuration evaluates the metadata and includes tasks like line crossing, object in field and more. This second part can be fully evaluated and optimized using forensic search. To do so, record video including the events to be detected, then use a forensic-search-capable viewing client. Define or adapt your alarm tasks, and evaluate whether all events are detected correctly and how many false alerts remain.
3.13 General advice
- Scheduling: Available on DINION & FLEXIDOME cameras. Configure one or both VCA profiles, then change the VCA configuration to “Scheduled” and define the times at which each VCA profile should run.
- Alarm-based recording / adjustment of recording frame rates: Can be triggered by any alarm. Configurable via Recording -> Profiles, or via the alarm task editor for full flexibility.
Nice to know:
Technical documentation for Video Analytics