Motion detection in Agent is used to trigger Alerts and AI processing. You can set Agent to record on motion or record on Alert: use the Recording menu option and see the Mode setting. You can also use Alerts to trigger Actions. Motion detection can sometimes raise false alerts, as the detectors can't easily tell the difference between a moving object, wind, rain and brightness changes. To cut down on false alerts you can integrate Agent with CodeProject.AI for more intelligent alert filtering.
Setting up motion detection

The motion detector area control is found by editing a camera and selecting Detector in the menu at the top right. You start setting up a detector by defining zones to monitor. Agent supports up to 9 zones, which you can choose between using the Zone dropdown. Each zone has a different color. To draw a zone, click on the pen tool and start drawing over the video preview. Use the left mouse button or touch to draw and, on desktop, the right mouse button to erase. Use the nib tool to toggle the size of the nib to fill more pixels. To erase an area, use the eraser tool. Use the reset tool to toggle filling the entire area with the selected zone. Agent will monitor the colored areas for motion.
- Enabled: Controls whether to use the detector.
- Detector: Select a motion detector type and click the "..." button to configure it. The various motion detectors are explained below.
- Color: These settings control the color of the motion detection overlay (not used by all detectors).
- Timeout: How long to keep the camera in a motion state after motion stops (seconds, 1-60, default 3).
Using Zones
Zones are used by AI detectors (Face/LPR/Object Recognition) and object tracking detectors (like trip wire, speed and object tracking). You can select which zones will trigger an alert in the detector configuration, or you can specify Actions to take when an alert is raised in specific zones.
The simple detector will just alert if enough motion is detected in the zones to trigger it.
Some detector types don't use the zone settings at all (like MQTT, ONVIF or motion triggered via API calls).
Using Motion Areas
Motion areas are collections of zones you can name and save to use whenever you like. To save the current motion zone configuration as a new area, click on the edit icon next to Area. Using these tools you can add, edit and delete areas.
To apply a motion area when you move your PTZ camera to a PTZ Preset position (using the Agent UI):
- Create a new motion zone configuration and save it with a name (for example "carpark")
- Add a new Action:
If: "PTZ Preset Applied"
Select the PTZ Preset Command (eg: "Go Preset 1") - your camera must support PTZ presets for this to work.
Click to Add a Task:
Task: "Set Motion Detection Area"
Select your new area ("carpark")
Click OK twice. Now whenever you select the preset, or if Agent sets the preset through scheduling or some other event, this motion area will be set automatically.
You can also change the motion detector area using the Scheduler, so you can have different motion zone configurations dependent on the time of day, day of the week or date.
Detector Types
Simple
The simple detector just looks for any movement. This also uses the least CPU of all the detector options (other than ONVIF). Detected movement is highlighted in red so you can easily tell what is causing movement in the scene.
- Advanced: See the advanced section below for more information.
- Sensitivity: This controls the amount of motion required to trigger movement detection. You can set minimum and maximum values. Setting a maximum of, say, 80 can help ignore whole-scene brightness changes. The numbers shown under the slider are the percentage of pixels changed.
- Gain: This is a multiplier for the pixels changed, which lets you make motion detection more or less sensitive.
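For illustration, here is a minimal sketch of the frame-differencing idea behind a simple detector, showing how the sensitivity range and gain interact. This is not Agent's actual code; the stream URL and pixel threshold are placeholders.

```python
import cv2

MIN_SENSITIVITY = 2.0    # lower bound: percent of pixels changed to trigger
MAX_SENSITIVITY = 80.0   # upper bound: ignore whole-scene brightness changes
GAIN = 1.0               # multiplier applied to the measured change

cap = cv2.VideoCapture("rtsp://camera/stream")  # placeholder source
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    changed = 100.0 * cv2.countNonZero(mask) / mask.size * GAIN
    if MIN_SENSITIVITY <= changed <= MAX_SENSITIVITY:
        print(f"motion detected: {changed:.1f}% of pixels changed")
    prev_gray = gray
```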
HAAR Objects
This uses files called Haar cascades to recognise objects in the video. You may get better results using the simple object detector and setting up an AI Server to filter alerts instead. A minimal sketch of how a cascade file drives detection appears after the settings list below.

- Frame Size: Choose the size of the frame to use for processing. Smaller frames use less CPU but are less accurate.
- Detect Interval: How often to process the frame. This is in milliseconds, so 200 = 5 times a second and 1000 = once a second.
- Width Limits and Height Limits: These are the upper and lower limits of the object size Agent will look for, as a percentage of width or height. Adjusting this slider shows an overlay on the video of the size range of object it is looking for.
- Use GPU: Whether to use the GPU for processing or not. This is only available if your GPU supports CUDA and drivers are installed.
- File: The Haar cascade file used to configure the object detector. Default ones are provided for face and cat face.
- Alert Condition and Alert Number: Agent will generate an alert based on how many objects are detected. So if you want an alert when Agent recognises a face, choose "More Than" and enter 0 in Alert Number.
- Alert Zones (v4.4.8.0+): Select the motion zones to include in the monitored area.
- Check Corners: See Checking Corners below.
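As a concrete illustration of how a cascade file drives detection, here is a minimal OpenCV sketch. The image path is a placeholder, and Agent's own pipeline layers zones, GPU support and alert handling on top of this.

```python
import cv2

# Load one of the cascade files bundled with OpenCV (face detection here)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("snapshot.jpg")             # placeholder frame source
frame = cv2.resize(frame, (640, 360))          # smaller frame = less CPU, less accuracy
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# minSize/maxSize play the same role as the Width/Height Limits sliders
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                 minSize=(30, 30), maxSize=(200, 200))

# Alert Condition "More Than" with Alert Number 0
if len(faces) > 0:
    print(f"alert: {len(faces)} face(s) detected")
```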
Checking Corners
Agent will check the center point of the detected object against your zone configuration to work out if it should process alerts/actions. It can also check the corners of the object's bounding box. Using this feature you can set a percentage of the distance from the center point to the corner of the bounding box at which to check the zone: 0 = center point only, 100 = check all corners and 50 = check the points halfway to each corner of the bounding rectangle. If you are getting lots of event notifications where the object doesn't look like it's in the zone, set Check Corners to 0.
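The geometry behind this is straightforward. Here is an illustrative sketch (not Agent's code) of how the test points can be derived from the bounding box and the Check Corners percentage:

```python
def check_points(box, percent):
    """box = (x, y, w, h); percent = Check Corners value (0-100).
    Returns the points to test against the zone mask."""
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2
    t = percent / 100.0                      # 0 = center only, 100 = full corners
    if t == 0:
        return [(cx, cy)]
    corners = [(x, y), (x + w, y), (x, y + h), (x + w, y + h)]
    # each test point sits a fraction t of the way from the center to a corner
    return [(cx + t * (px - cx), cy + t * (py - cy)) for px, py in corners]

# e.g. with percent=50 the points sit halfway between the center and each corner
print(check_points((100, 100, 40, 20), 50))
```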
MQTT
You can trigger motion detection from your MQTT server. Set up MQTT and pass the command shown in the detector configuration screen to the Agent/commands channel to trigger detection.
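For example, with the paho-mqtt Python library a trigger could be published like this. The broker address and credentials are placeholders, and the payload must be the exact command string shown in your detector configuration screen:

```python
import paho.mqtt.publish as publish

# Publish the command string from the detector config screen to Agent's
# command channel. Host and credentials below are placeholders.
publish.single(
    "Agent/commands",
    payload="<command shown in the detector configuration screen>",
    hostname="192.168.1.10",
    auth={"username": "mqttuser", "password": "mqttpass"},
)
```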
ONVIF
Some ONVIF devices have their own motion detection features built in. Selecting this mode with an ONVIF-capable camera (using the ONVIF connection type in Agent) will make Agent listen to the device itself for motion detection events and trigger based on those. Check the logs (at /logs.html on the local server) if this isn't working, as your camera may not support ONVIF detection. See Server ONVIF settings.
People
This uses a custom algorithm to look for pedestrians. You may get better results using the simple object detector and setting up DeepStack to filter alerts instead. A sketch of a common pedestrian detection approach appears after the settings list below.
- Use GPU: Whether to use the GPU for processing or not. This is only available if your GPU supports CUDA and drivers are installed.
- Frame Size: Choose the size of the frame to use for processing. Smaller frames use less CPU but are less accurate.
- Detect Interval: How often to process the frame. This is in milliseconds, so 200 = 5 times a second and 1000 = once a second.
- Alert Condition and Alert Number: Agent will generate an alert based on how many objects are detected. So if you want an alert when Agent detects a person, choose "More Than" and enter 0 in Alert Number.
- Alert Zones (v4.4.8.0+): Select the motion zones to include in the monitored area.
- Check Corners: See Checking Corners above.
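Agent's exact pedestrian algorithm isn't documented, but OpenCV's built-in HOG person detector is a common approach to the same task and illustrates the idea (the image path is a placeholder):

```python
import cv2

# Classic HOG + linear SVM pedestrian detector bundled with OpenCV
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("snapshot.jpg")        # placeholder frame source
frame = cv2.resize(frame, (640, 360))     # smaller frame = less CPU, less accuracy
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8))

# Alert Condition "More Than" with Alert Number 0
if len(rects) > 0:
    print(f"alert: {len(rects)} pedestrian(s) detected")
```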
Reolink
Some Reolink cameras come with an endpoint Agent can poll to get motion or AI alert states. You can use this detector if your camera supports it. To find out, try entering the URL: http://IP ADDRESS/api.cgi?cmd=GetMdState&channel=0&rs=Get&user=USERNAME&password=PASSWORD (where USERNAME and PASSWORD are your login details for the camera). You should get some JSON formatted text back (not an error page).
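For example, you can test the endpoint from Python with the requests library (the address and credentials are placeholders; the exact JSON layout varies by model and firmware):

```python
import requests

# Poll the Reolink motion-state endpoint described above
url = ("http://192.168.1.20/api.cgi?cmd=GetMdState"
       "&channel=0&rs=Get&user=USERNAME&password=PASSWORD")
resp = requests.get(url, timeout=5)

# A supported camera returns JSON (an HTML error page means no support)
print(resp.json())
```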
- Interval: Set how often to poll your camera for alerts/motion.
- Mode: Choose between Motion and AI. Either will trigger the motion detector event in Agent, which you can use to record (set record mode to Detect). The AI option will trigger motion if one of the object classes on the camera is found (dog_cat, face, people, vehicle).
You can add Actions to perform tasks when an object is found with tags dog_cat, face, people or vehicle (v4.6.6.0+).
Note: For the AI feature to work you may need to enable tracking in the camera's web UI and configure the minimum and maximum size of object to find. Agent will tag your recordings with the objects Reolink finds.
Speed
This uses information you enter about the scene to track moving objects, estimate their speed and generate alerts if objects are moving too fast or too slowly.
- Advanced: See the advanced section below for more information.
- Width Limits and Height Limits: These are the upper and lower limits of the object size Agent will look for, as a percentage of width or height. Adjusting this slider shows an overlay on the video of the size range of object it is looking for.
- Minimum Travel: This is the distance the object must travel to be tracked as a moving object, as a percentage of the scene width.
- Minimum Time: The amount of time the object must be tracked for to be classified as a moving object, in tenths of a second (1 = 0.1 seconds, 10 = 1 second).
- Speed Measurement: Choose the base unit of speed to use in the overlay.
- Speed Limits: Choose the lower and upper speed limit values to ignore. Speeds detected outside of this range will trigger a motion detection event.
- Horizontal and Vertical Distance: This is the total distance in meters covered by the scene. Agent uses this to estimate the speed an object is moving (see the worked example after this list).
- Alert Zones (v4.4.8.0+): Select the motion zones to include in the monitored area.
- Check Corners: See Checking Corners above.
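As a worked example of how the scene distance feeds into the estimate (a simplification for illustration; Agent's exact formula isn't documented): if the scene spans 20 meters horizontally and an object crosses half the frame in 1.2 seconds, it covered about 10 meters, or roughly 30 km/h.

```python
HORIZONTAL_DISTANCE_M = 20.0   # total horizontal distance of the scene in meters

def estimate_speed_kmh(fraction_of_width_moved, seconds):
    """fraction_of_width_moved: e.g. 0.5 = object crossed half the frame."""
    meters = fraction_of_width_moved * HORIZONTAL_DISTANCE_M
    return meters / seconds * 3.6          # m/s -> km/h

print(estimate_speed_kmh(0.5, 1.2))        # ~30 km/h
```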
Tracking
This detects and tracks moving objects and triggers motion detection events based on how long they are in the scene for and how far they move.
- Advanced: See the advanced section below for more information.
- Width Limits and Height Limits: These are the upper and lower limits of the object size Agent will look for, as a percentage of width or height. Adjusting this slider shows an overlay on the video of the size range of object it is looking for.
- Minimum Travel: This is the distance the object must travel to be tracked as a moving object, as a percentage of the scene width.
- Minimum Time: The amount of time the object must be tracked for to be classified as a moving object, in tenths of a second (1 = 0.1 seconds, 10 = 1 second).
- Display Total: This adds a counter to the live video.
- Heat Map: This adds lines to tracked objects which helps visualize movement patterns over time.
- Alert Zones (v4.4.8.0+): Select the motion zones to include in the monitored area.
- Check Corners: See Checking Corners above.
As Agent tracks movement in the scene it displays colored rectangles around what it's seeing. The colors mean:
- White: It's just been detected and is a "possible"
- Yellow: It's been detected for multiple frames
- Orange: It has continued moving for at least the Minimum Time setting in the tracking settings
- Red: It's fulfilled the tracker requirements to trigger a motion detection event
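The progression can be thought of as a small state machine. Here is an illustrative sketch of the logic, not Agent's source:

```python
from enum import Enum

class TrackState(Enum):
    POSSIBLE = "white"     # just detected
    DETECTED = "yellow"    # seen for multiple frames
    MOVING = "orange"      # moving for at least Minimum Time
    TRIGGERED = "red"      # met the time and travel requirements

def update(frames_seen, seconds_moving, travel_pct, min_time_s, min_travel_pct):
    if frames_seen <= 1:
        return TrackState.POSSIBLE
    if seconds_moving < min_time_s:
        return TrackState.DETECTED
    if travel_pct < min_travel_pct:
        return TrackState.MOVING
    return TrackState.TRIGGERED   # raise the motion detection event

print(update(frames_seen=30, seconds_moving=1.5, travel_pct=12,
             min_time_s=1.0, min_travel_pct=10))   # TrackState.TRIGGERED
```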
Trip Wires
This detects and tracks moving objects and triggers motion detection events when they cross trip wires you add to the scene. Click and drag on the live video to add as many trip wires as you like. To delete a trip wire, click and drag one of its grab points out of the scene. A sketch of the crossing test appears after the settings list below.
- Advanced: See the advanced section below for more information.
- Width Limits and Height Limits: These are the upper and lower limits of the object size Agent will look for, as a percentage of width or height. Adjusting this slider shows an overlay on the video of the size range of object it is looking for.
- Minimum Travel: This is the distance the object must travel to be tracked as a moving object, as a percentage of the scene width.
- Minimum Time: The amount of time the object must be tracked for to be classified as a moving object, in tenths of a second (1 = 0.1 seconds, 10 = 1 second).
- Repeat Trigger: By default an object can only trigger a trip wire once. Turn this on to enable multiple triggering.
- Count: Display a count of how many times an object has crossed the trip wire and in which direction. You can count left, right, both or total.
- Alert: Generate an alert if the trip wire is crossed in a specific direction or any direction.
- Alert Zones (v4.4.8.0+): Select the motion zones to include in the monitored area.
- Check Corners: See Checking Corners above.
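The crossing check itself is standard segment-intersection geometry. Here is an illustrative sketch (not Agent's implementation) that also shows where the crossing direction comes from:

```python
def side(ax, ay, bx, by, px, py):
    """Sign of the cross product: which side of line A->B point P lies on."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def crossed(wire, prev_pos, cur_pos):
    """True if an object moving prev_pos -> cur_pos crosses the wire.
    The sign flip of before/after also gives the crossing direction."""
    (x1, y1), (x2, y2) = wire
    before = side(x1, y1, x2, y2, *prev_pos)
    after = side(x1, y1, x2, y2, *cur_pos)
    if before * after >= 0:       # same side of the wire: no crossing
        return False
    # the movement segment must also straddle the wire's endpoints
    s1 = side(*prev_pos, *cur_pos, x1, y1)
    s2 = side(*prev_pos, *cur_pos, x2, y2)
    return s1 * s2 < 0

print(crossed(((0, 0), (10, 0)), (5, -2), (5, 2)))   # True: wire crossed
```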
API
To trigger motion detection via an API call, for a camera (ot=2) with ID 1 (oid=1 - the ID is displayed at the top of the edit control when you edit a device), call: http://localhost:8090/command.cgi?cmd=detect&ot=2&oid=1
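For example, from Python with the requests library (the host and port are whatever your local Agent server uses; 8090 here matches the URL above):

```python
import requests

# Trigger motion detection on camera (ot=2) with id 1 (oid=1)
resp = requests.get(
    "http://localhost:8090/command.cgi",
    params={"cmd": "detect", "ot": 2, "oid": 1},
    timeout=5,
)
print(resp.status_code, resp.text)
```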
Advanced
The default settings for the detectors are usually pretty good for most scenes but you can adjust them if you need to tune it for better performance.
- Analyzer: Currently the only analyzer available is the CNT background subtractor, which offers very good accuracy and low CPU usage.
- Frame Size: Choose the size of the frame to use for processing. Smaller frames use less CPU but are less accurate.
- Tracker: The OpenCV tracker to use for object tracking (see the sketch after this list):
  - MOSSE: the fastest tracker (uses the least CPU) but also the least accurate (default)
  - KCF: more accurate than MOSSE but uses slightly more CPU
  - CSRT: the most accurate but uses the most CPU. Use this if you are having problems tracking objects.
- Max Objects: This sets an upper limit on the number of objects to track at any one time. The more objects you track the more CPU it uses.
- Detect Interval: How often to process the frame for movement. This is in milliseconds, so 200 = 5 times a second and 1000 = once a second.
- Track Interval: How often to process the trackers. This is in milliseconds, so 200 = 5 times a second and 1000 = once a second. If this is set too high it may easily lose tracking of fast-moving objects.
- Pixel Stability: This sets the number of samples of a pixel to gather before considering it stable (the lower number) and the maximum credit a pixel can accumulate for staying the same color (the higher number). This is used to perform background subtraction to detect movement. It should usually be set to a low range, say 1 to 20.
- Use History: This enables learning about consistently moving objects in the scene in order to ignore them. Unless you have a need for this, it's best left disabled.
- Parallel Process: Enables parallel processing in the motion detection algorithm. We recommend leaving this enabled.
- Tracking Timeout: How long (in seconds) to wait for an object that has moved out of frame to reappear before giving up tracking it.
- Movement Timeout: How long (in seconds) to wait for a stopped object to start moving again before giving up tracking it.
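For reference, here is a minimal sketch of the two OpenCV pieces these settings map onto: the CNT background subtractor (Pixel Stability, Use History, Parallel Process) and the tracker choice (MOSSE/KCF/CSRT). It requires opencv-contrib-python; the stream URL is a placeholder.

```python
import cv2

# Pixel Stability low/high map to minPixelStability/maxPixelStability;
# Use History and Parallel Process map to useHistory and isParallel.
subtractor = cv2.bgsegm.createBackgroundSubtractorCNT(
    minPixelStability=1,
    useHistory=False,
    maxPixelStability=20,
    isParallel=True,
)

# Tracker choice trades CPU against accuracy
tracker = cv2.legacy.TrackerMOSSE_create()   # fastest, least accurate (default)
# tracker = cv2.TrackerKCF_create()          # more accurate, slightly more CPU
# tracker = cv2.TrackerCSRT_create()         # most accurate, most CPU

cap = cv2.VideoCapture("rtsp://camera/stream")  # placeholder source
ok, frame = cap.read()
mask = subtractor.apply(frame)                   # white pixels = movement
```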