Use the tool provided with the FSM server application installation to manage export and import of configurations.
Please note: This tool is only for configuration backup; it does not include logs and history.
Run the tool located as shown below; it is named "System Configuration Import and Export".
The tool has only two commands: Import and Export.
Using Export, you can back up all configurations (the alarm and event logs are not included in the export file).
Using Import, you can restore a saved or previous configuration. Note: It is not possible to upgrade a configuration backup directly. The only way is to install the old version, load the backup, and then update the software to the new version. During the software upgrade the database will be updated to the required version; otherwise you will get an error message similar to the one shown below:
This article describes how to configure the ANR feature in BVMS and how to set up iSCSI and direct replay correctly.
Configuration of ANR in BVMS system
1. Insert an SD card into the camera and power it on.
2. In Configuration Client, select the Devices page.
3. Device password and CHAP password. There are two possible configuration setups: configure a password for both the device and the iSCSI connection, or leave both without a password. Choose and configure one of them (3.1 or 3.2).
Note: Mixing the parameters between the two options 3.1 and 3.2 will lead to issues with direct replay from the SD card.
3.1. Set device user password and CHAP password
BVMS 11 and above
For BVMS 11 and above, set a Global CHAP password.
See detailed guide here:
How to set the Global CHAP password for an existing fully operational BVMS 11.x system?
Below BVMS 11
3.1.1 Set user password for the device
3.1.2 Assign the Global iSCSI connection password (CHAP password). This must be the same password as the "user" account password. This is configured in the Main tab, under Settings > Options.
3.2. No CHAP password for ANR
3.2.1 Leave the user password for the device empty
3.2.2 Do not assign the Global iSCSI connection password (CHAP password) in the Main tab, under Settings > Options.
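The rule behind the two options can be sketched as a small check (a hypothetical helper for illustration, not part of BVMS): either both the device "user" password and the Global CHAP password are set and identical, or both are left empty.

```python
def anr_passwords_consistent(device_password: str, chap_password: str) -> bool:
    """Return True if the device 'user' password and the Global iSCSI
    CHAP password form a valid ANR combination: both set and identical
    (option 3.1), or both empty (option 3.2)."""
    if device_password == "" and chap_password == "":
        return True  # option 3.2: no passwords at all
    # option 3.1: both set and identical
    return device_password == chap_password and device_password != ""
```

Any other combination (one set, one empty, or two different passwords) corresponds to the mixed setup that breaks direct replay from the SD card.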
4. On the Devices page
5. Expand VRM Devices node
6. Select the video device (camera) to use ANR.
7. Right-click on the camera and select “Edit encoder”
8. Uncheck the Secure connection checkbox.
Note: If the check box remains checked, the recording cannot be accessed at all and no timeline is displayed.
9. Click the OK button.
10. Click on Recording –> Recording Management.
11. Stop the Primary recording by clicking on the Stop button
12. Under Secondary Recording, for Preferred storage target type, from the drop-down menu, select SD card.
13. The Local target field will display the SD card properties
14. Clicking the Save icon will start formatting the SD card, if required.
15. Select Cameras and Recording page.
16. Select VRM option to see all recording details of VRM cameras.
17. Under the Recording column, check the ANR box of the camera.
*This option might be grayed out if the camera does not support ANR or if dual recording is enabled.
18. Click on the Save icon.
19. Click on the Activate icon.
20. Go back to the Devices page, to the camera's Recording Management tab.
21. Under the Secondary Recording section, hover the mouse over the HDD icon.
22. The Recording Status bubble will pop up to indicate that it is recording to the SD card.
23. Check the playback from iSCSI and from the ANR SD card in the Operator Client: on the playback cameo of the respective camera, click the Video source icon and choose the ANR replay. For playback issues from the SD card, please refer to the troubleshooting section below.
Troubleshooting of direct SD card replay
Issue: The SD card recording cannot be accessed at all and no timeline is displayed.
Disable the “Secure Connection” on the camera (Devices tab > right click on the camera > select "Edit encoder" > uncheck Secure connection > OK)
Recording encryption should be disabled: on the Devices page, select the appropriate VRM device, open the Service tab, then the Recording encryption tab, and verify that encrypted recording is disabled.
Issue: During replay the timeline is visible; however, the cameos remain black.
Check the user password of the camera and the Global iSCSI connection password (CHAP password). These two passwords must be identical, or if one of them is not set, the other must also be left empty.
Traffic Detector has been developed to cope with all traffic densities, up to high densities such as congestion and queues in front of traffic lights. It also detects non-moving vehicles and is thus applicable for smart parking. In addition to vehicles, it also detects pedestrians.
Tunnels & Highways:
o Collection of traffic statistics
o Detection of congestion for automatic speed signaling
o Detection of wrong-way driving
Intersections:
o Detecting presence and volume of vehicles
Smart parking:
o Detection of parked cars
1.2 Available object classes
The following object classes and object class filters are available:
- Person
- Vehicle
o Bicycle
o Motorbike
o Car
o Truck
o Bus
Object classes are hierarchical. That means e.g. a bicycle is also a bike is also a vehicle, and a bus is also a truck is also a vehicle. Object class filters fully support this hierarchy, while visual class labels will only show the deepest level of classification, that is they will show person, bicycle, motorbike, car, truck and bus labels.
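The hierarchy described above can be sketched as follows (an illustrative Python model, not the camera's implementation; placing motorbike under bike is an assumption based on the bicycle example):

```python
# Child -> parent relations from the text: a bicycle is also a bike is also
# a vehicle; a bus is also a truck is also a vehicle. The motorbike -> bike
# link is an assumption for illustration.
PARENT = {
    "bicycle": "bike",
    "motorbike": "bike",
    "bike": "vehicle",
    "car": "vehicle",
    "bus": "truck",
    "truck": "vehicle",
    "person": None,
    "vehicle": None,
}

def matches_filter(detected_class: str, filter_class: str) -> bool:
    """True if the detected class equals the filter class or the filter
    class is one of its ancestors, mirroring the hierarchical filter."""
    cls = detected_class
    while cls is not None:
        if cls == filter_class:
            return True
        cls = PARENT.get(cls)
    return False
```

For example, a "Vehicle" filter matches a detected bicycle, while visual class labels would still show only the deepest level, i.e. "bicycle".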
Setup is only possible via the Configuration Manager.
Detection of persons, bicycles, motorbikes, cars, trucks, buses.
o Persons, bicycles and motorbikes may be confused, especially when seen from the front.
o Buses and trucks may be confused.
Minimum object size: 16x16 pixels at 640x360 resolution
Maximum object size: 500x500 pixels at 640x360 resolution
Minimum object visibility: 50%. Objects occluded more than 50% may not be detected.
2D traffic tracking requires 50% overlap between two consecutive frames. Fast objects are only tracked well moving toward or away from the camera. Fast vehicles crossing the field of view will not be tracked properly.
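As an illustration, the size limits and the 50% overlap requirement above could be checked like this (a sketch only; interpreting "overlap" as intersection-over-union is an assumption, since the document does not define the measure):

```python
MIN_PX, MAX_PX = 16, 500  # detector limits at the 640x360 base resolution

def size_ok(width_px: int, height_px: int) -> bool:
    """Object must fall between 16x16 and 500x500 pixels (at 640x360)."""
    return MIN_PX <= width_px <= MAX_PX and MIN_PX <= height_px <= MAX_PX

def iou(a, b) -> float:
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def trackable_2d(prev_box, cur_box) -> bool:
    """2D traffic tracking needs at least 50% overlap between frames."""
    return iou(prev_box, cur_box) >= 0.5
```

A vehicle crossing the field of view fast enough that its box shifts by most of its own width between frames fails this check, which is why fast crossing objects are not tracked properly in 2D mode.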
Speed, geolocation and color are only available in 3D traffic mode.
Traffic Detector and Camera Trainer cannot run in parallel.
Top-down views (birds eye views) are not supported.
1.4 Supported cameras
Traffic Detector is available on the following cameras:
- AUTODOME inteox 7000i:
o NPD-7602-Z30-OC
o VG5-ITS1080P-30X7
- DINION inteox 7000i:
- FLEXIDOME inteox 7000i:
- MIC inteox 7000i:
o MIC-7602-Z30BR-OC
o MIC-7602-Z30WR-OC
o MIC-7602-Z30GR-OC
o MIC-7604-Z12BR-OC
o MIC-7604-Z12WR-OC
o MIC-7604-Z12GR-OC
o MIC-ITS1080P-GE30X7
o MIC-ITS1080P-WE30X7
o MIC-ITS1080P-BE30X7
o MIC-ITS1080P-B30X7
o MIC-ITS1080P-W30X7
o MIC-ITS1080P-G30X7
o MIC-ITS4K-BE12X7
o MIC-ITS4K-WE12X7
o MIC-ITS4K-GE12X7
2 Intelligent Video Analytics, Camera Trainer and Traffic Detector
Intelligent Video Analytics was designed for intrusion detection and works well in scenes where objects are visually well separated. For traffic, that translates to low- and medium-density traffic. In high-density traffic, where vehicles visually merge, Intelligent Video Analytics is no longer able to separate the vehicles. Furthermore, Intelligent Video Analytics only detects moving objects.
To separate vehicles in high traffic, or to detect parked vehicles, machine learning is needed. Camera Trainer was Bosch's initial solution for these applications. Camera Trainer’s low processing power requirements made it ideal for use on Bosch IP cameras based on the CPP6, 7 and 7.3 common product platforms. However, Camera Trainer was limited to a short distance and needed to be trained by the user for every scene, resulting in high training effort. (The advantage of Camera Trainer is that any kind of rigid object can be trained. For more information please refer to the Camera Trainer Tech Note.)
Traffic Detector is a pre-trained vehicle and person detector which also supports greater detection distances than Camera Trainer, though less than Intelligent Video Analytics. It separates persons, bikes, cars, trucks and buses even in dense congestion or traffic queues. Another benefit of Traffic Detector is that it is robust against shadows and headlight beams.
3 Technical Background
3.1 Machine learning: Finding threshold between target object and world
Machine learning for object detection is the process of a computer determining a good separation between positive (target) samples and negative (background) samples. In order to do that, a machine learning algorithm builds a model of the target object based on a variety of possible features, and of thresholds where these features do/don’t describe the target object. This model building is also called the training phase or training process. Once the model is available, it’s used to search for the target object in the images later on. This search in the image together with the model is called a detector.
Hand-crafted features typically describe edges and include:
Haar features: Binary edge descriptors via black/white tiles
Histogram of Oriented Gradients (HOG): Histogram of quantized edge directions and contrast
The resulting model will typically have around 2000 parameters. There are different methods of machine learning with hand-crafted features, including support vector machines (SVMs), AdaBoost, and decision trees. Each of these methods has certain advantages; however, all of them result in similar performance levels. A detector based on these features can typically run in real time on current network camera hardware. Camera Trainer is based on an SVM using Histograms of Oriented Gradients. For more details on Camera Trainer, see its own Tech Note.
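To make the HOG idea concrete, here is a toy single-cell gradient histogram in plain Python (an illustration of the principle only, not Camera Trainer's actual feature extraction):

```python
import math

def hog_cell(cell, bins=9):
    """Toy Histogram of Oriented Gradients for one cell (a 2D list of
    grayscale values): quantize the gradient direction into `bins` and
    accumulate the gradient magnitude per direction bin."""
    h, w = len(cell), len(cell[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]  # horizontal gradient
            gy = cell[y + 1][x] - cell[y - 1][x]  # vertical gradient
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180  # unsigned direction
            hist[int(ang / (180 / bins)) % bins] += mag
    return hist
```

A vertical edge produces purely horizontal gradients, so all its magnitude lands in the first direction bin; the SVM then learns thresholds over many such cell histograms.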
Neural networks are inspired by the visual cortex and are able to learn descriptive features on their own. They optimize the parameters of a neural network structure instead of using hand-crafted features. Typically, neural networks for image processing will also learn edge features and combine them first into parts of the object, and then into the full target object itself. Deep neural networks for image processing use roughly 20 million parameters and can deliver a performance boost of up to 30%. On the other hand, deep neural networks are a brute-force approach that requires hundreds of times more computational power.
Besides the model and training method, the samples of target objects and the background are very important. For a task like face or person detection, the positive samples need to show all possible variations including perspective and pose, while the background samples have to represent the full world. Therefore, machine learning needs tens of thousands of examples of the target object and billions of examples of what the rest of the world looks like. This is a huge effort both in collecting and preparing the sample images. For automated machine learning, either the target objects have to be marked in the image or the images restricted to show only the target object. Furthermore, modelling the complexity of the full world is one of the reasons machine learning is computationally expensive. The new Bosch Traffic Detector is trained using a deep neural network.
3.2 Will machine learning, especially deep learning, replace current video analytics?
No. It will extend the range of possible applications. Current intrusion detection has to cope with long detection distances, correspondingly few pixels on the target object, and a wide variety of poses (walking, running, crawling, rolling), and it needs to be able to deal with the unexpected. It can run on very low processing power. It is targeted towards moving objects and will ignore all stationary ones.

Machine learning, on the other hand, needs high resolution and will thus only work at near range. It also needs high processing power, which is one of the main reasons adoption is not more widespread yet. Furthermore, it can only detect trained and expected objects. But with machine learning, objects can be properly classified, and close objects can be separated well. Non-moving objects can still be detected, even after a very long time without motion. Therefore, machine learning in general is a good technology for applications like parking lot occupancy and traffic monitoring, as well as for counting people without false counts from shopping carts or baggage.

Furthermore, a detector itself only detects an object in a single frame. For most applications, the movement of the object over time is even more important, and a tracker is typically needed to combine the single detections, add robustness, and extract information like location, speed and direction. The best solutions arise when knowledge from several sources is combined: detectors, trackers, optical flow, perspective information and more.
4.1 Activating Traffic Detector
Activation of the Traffic Detector is available in VCA->Metadata Generation->Tracking Mode via 2D Traffic or 3D Traffic. Note that Camera Trainer will automatically be disabled when Traffic Detector is enabled and vice versa. Traffic Detector objects behave as all other video analytics objects, and the usual alarm and counting rules can be applied.
4.2 2D or 3D Traffic Mode?
2D Traffic is the choice for static applications like parking lot occupancy. A simple tracker combines the detector output over time, and objects need to overlap by 50% in two consecutive frames to be tracked properly. Thus, fast objects crossing the field of view may not be tracked properly.
- Only bounding boxes will be output
- No color, speed, geolocation, or direction filter
- No idle/removed object detection
3D Traffic is the choice whenever speed, map location / geolocation, and the best tracking performance are needed. It requires the camera to be calibrated properly in order to understand the perspective in the scene and convert pixels into real-world size, speed and location. Once an object has been detected with the Traffic Detector, the tracker learns the appearance of the object and is able to follow it on its own.
- Bounding boxes for static objects, closer-fitting shapes for moving objects
- Color, speed, geolocation, direction available
- Idle/Removed Object: only Stopped Object detection
Left: Bounding boxes are always used in 2D Traffic, and used for non-moving objects in 3D Traffic.
Right: Once an object starts moving in 3D Traffic mode, the outline switches to a flexible shape. The visualization switches back to a bounding box only for cars, buses and trucks, once the object stops for long enough and is considered parked. Use VCA->Metadata Generation->Idle/Removed->Stopped Object Debounce Time to determine when vehicles switch to static-object handling.
Recommendation: At intersections, set this longer than the typical red phase of the traffic lights. For persons and bikes, tracking will stop completely after this time.
In some cases Technical Support needs this logging to analyze what could be causing the trouble. This guide will help you through the process. This topic can also be found in the DICENTIS Wireless Configuration Manual, page 34.
You need at least a user with the access right: Configure.
Logging export is not available for tablets.
The log file can only store about 1200 lines. That means you need to extract the log soon after the error or symptoms occur, before the system overwrites the log lines. It may be too late if you extract the log after several days.
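The behavior described above can be pictured as a ring buffer. The sketch below is an illustration in Python (not the DCNM-WAP's actual implementation), showing how older lines are silently discarded once the ~1200-line limit is reached.

```python
from collections import deque

MAX_LINES = 1200  # approximate capacity stated in the manual

# A deque with maxlen drops the oldest entry on each append once full,
# which is exactly the "lines get overwritten" effect described above.
log = deque(maxlen=MAX_LINES)

for i in range(1500):  # write 1500 lines into a 1200-line log
    log.append(f"line {i}")

# The first 300 lines are now gone; the oldest surviving line is "line 300".
```

This is why the log must be exported promptly: evidence of an error from several days ago has likely been pushed out of the buffer.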
Extract the log file
Log in to your DCNM-WAP. (The default setting for a new or factory-reset system is username 'admin' with an empty password.)
Save it where you can find it again.
Please send this file directly to the Technical Support for analysis.
Is the VIDEOJET decoder 8000 firmware downgradable below the factory default version?
HD decoders cannot be downgraded below the factory default version.
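The downgrade rule can be expressed as a tiny check (a hypothetical helper for illustration, comparing firmware versions as tuples):

```python
def downgrade_allowed(target, factory_default):
    """True if flashing `target` is permitted: the target firmware version
    must not be below the factory default. Versions are (major, minor)
    tuples, e.g. 9.51 -> (9, 51)."""
    return target >= factory_default
```

For a unit shipped with 9.51 as factory default, any version from 9.51 upwards may be installed, while anything older is rejected.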
All Video Management Systems should have no issues with newer firmware; all implemented functions should continue to work.
NOTE: All new stock of VJD-8000 ships with version 9.51 due to security vulnerabilities.
Please also see:
Which Bosch encoders and decoders are compatible with BVMS?