Elitenect Technologies SLAM Software

  • SLAM (Simultaneous Localization and Mapping) refers to the process in which a mobile robot builds a model of its environment while estimating its own motion, without prior knowledge of the environment.
  • Using sensors such as cameras, LiDAR, GPS, IMUs, and encoders, the robot observes the external environment (including its geometry, appearance, and semantics) and its own motion (including speed, position, and orientation), estimates its own position in the environment through algorithms, and maintains a model that describes the external world.
  • By fusing multiple sensors such as LiDAR, cameras, IMU, and GPS, and by incorporating semantic information and dynamic environmental changes, robots can operate reliably and stably in complex environments.

LiDAR + Visual SLAM

Product Parameters

The Elitenect Technologies SLAM solution supports both LiDAR SLAM and visual SLAM. It also supports a LiDAR + vision fusion solution, in which both LiDAR and cameras are used for positioning and navigation.

LiDAR and cameras are different categories of sensors, each with its own advantages and disadvantages:

The advantage of LiDAR is that it provides accurate distance information, and because it emits its own light it can work in completely dark environments. Its disadvantage is that the sampling points are relatively sparse (some single-line LiDARs provide only 360 measurement points per scan), and LiDAR-based SLAM depends on aligning geometric point clouds, so it may fail in geometrically degenerate scenes (such as long, straight tunnels).

Cameras provide rich environmental information (RGB), and that information is relatively dense (an ordinary monocular camera, for example, delivers 640 × 480 pixel images). Their disadvantage is sensitivity to lighting conditions: they cannot work in dark environments, and vision-based SLAM also struggles in environments without distinctive features, such as overcast scenes, snow-covered ground, or all-white walls.

Product Function

  • Indoor and outdoor robot positioning and navigation.

Product Advantages

The fusion of LiDAR and visual SLAM compensates for the weaknesses of both sensors:

  • The camera's dense angular resolution compensates for the sparsity of LiDAR.
  • Accurate range information from LiDAR resolves the depth ambiguity of the camera (a sketch of this depth association follows this list).
  • Even if one sensor fails, the LiDAR + vision fusion SLAM system can continue to function normally.
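As an illustration of the second point, the sketch below projects LiDAR points into the camera image and assigns each tracked visual feature the depth of the nearest projected point. It is a minimal example under assumed pinhole intrinsics and LiDAR-to-camera extrinsics; the matrices, thresholds, and function names are illustrative, not part of the product.

```python
import numpy as np

# Assumed pinhole intrinsics and LiDAR-to-camera extrinsics (illustrative values).
K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                      # rotation LiDAR -> camera (assumed aligned)
t = np.array([0.0, -0.05, -0.10])  # translation LiDAR -> camera, metres

def project_lidar_to_image(points_lidar):
    """Project LiDAR points (N, 3) to pixels; keep only points in front of the camera."""
    pts_cam = points_lidar @ R.T + t
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]
    uvw = pts_cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3], pts_cam[:, 2]   # pixel coords, metric depth

def depth_for_features(features_uv, lidar_uv, lidar_depth, max_px=5.0):
    """Give each visual feature the depth of the nearest projected LiDAR point,
    or NaN if no projected point lies within max_px pixels."""
    depths = np.full(len(features_uv), np.nan)
    for i, f in enumerate(features_uv):
        d2 = np.sum((lidar_uv - f) ** 2, axis=1)
        j = np.argmin(d2)
        if d2[j] <= max_px ** 2:
            depths[i] = lidar_depth[j]
    return depths

# Toy usage: three LiDAR points (z forward) and two tracked feature pixels.
points = np.array([[0.1, 0.0, 2.0], [-0.2, 0.1, 2.0], [1.0, 0.2, 5.0]])
uv, depth = project_lidar_to_image(points)
features = np.array([uv[0] + [1.0, 1.0], [10.0, 10.0]])
print(depth_for_features(features, uv, depth))  # ~1.9 m for the first, NaN for the second
```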

Application Scenarios

The fusion of LiDAR and visual SLAM adapts to more complex conditions, enabling SLAM to work in a wider range of environments:

  • Night
  • Tunnel
  • Fog, snow
  • On a fast-moving vehicle
  • Other environments

Product Display Diagram

Integration of Machine Vision and SLAM

Product Introduction

As a leading visual tracking research and development unit in China, Beijing Moushi Technology Co., Ltd. enables a robot to follow a walking target (person) in unknown environments without building a map: the robot detects and tracks the designated target in real time, performs basic motions such as moving forward, moving backward, turning left, and turning right, and maintains a distance of 1.5 meters from the target.

Scope of Use

Intelligent travel suitcases, factory goods transport, intelligent home monitoring, trajectory tracking of specific pedestrians, etc.

Function Introduction

(1) The robot follows a pedestrian target (a control-loop sketch follows this list);
(2) The robot keeps the target within a set range while tracking it, and can avoid obstacles during tracking;
(3) Scene information in front of the robot is acquired in real time and pedestrians are recognized;
(4) A map of the surrounding environment is built while the robot moves.
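A minimal sketch of the following behaviour described in (1) and (2): given the target's range and bearing from the perception module, a proportional controller drives the robot so the gap returns to 1.5 meters while keeping the target centred. The gains, speed limits, and the Command type are illustrative assumptions, not the product's controller.

```python
from dataclasses import dataclass

FOLLOW_DISTANCE = 1.5   # desired gap to the target, metres (from the text above)
K_LINEAR = 0.8          # illustrative proportional gain for forward/backward speed
K_ANGULAR = 1.5         # illustrative proportional gain for turning

@dataclass
class Command:
    linear: float   # m/s, positive = forward, negative = backward
    angular: float  # rad/s, positive = turn left, negative = turn right

def follow_step(target_range, target_bearing):
    """One control step: close the gap to 1.5 m and keep the target centred.
    Returns a stop command if the target is lost (range is None)."""
    if target_range is None:
        return Command(0.0, 0.0)
    linear = K_LINEAR * (target_range - FOLLOW_DISTANCE)   # > 0: move forward
    angular = K_ANGULAR * target_bearing                   # > 0: turn left
    # Clamp to assumed platform speed limits.
    linear = max(-0.5, min(0.8, linear))
    angular = max(-1.0, min(1.0, angular))
    return Command(linear, angular)

# Target 2.3 m ahead and slightly to the left -> move forward and turn left.
print(follow_step(2.3, 0.2))
```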

Product Display Diagram

Visual SLAM + IMU

Product Introduction

Visual SLAM is a set of algorithms that uses visual sensors for positioning and navigation. Using visual sensors instead of LiDAR can significantly reduce the cost of a SLAM product while maintaining accuracy comparable to LiDAR. However, pure visual SLAM has difficulty localizing reliably in monotonous, repetitive scenes, and a vision + IMU solution can effectively solve this problem (a propagation sketch follows below):
  • The fusion of vision and IMU exploits the higher sampling frequency of the IMU to increase the pose output frequency of the system;
  • The fusion of vision and IMU improves the robustness of vision, for example when aggressive motion or difficult scenes cause visual SLAM to produce erroneous results;
  • The fusion of vision and IMU effectively eliminates the integration drift of the IMU;
  • The fusion of vision and IMU corrects the bias of the IMU;
  • The fusion of a monocular camera and IMU effectively solves the problem of unobservable scale in monocular SLAM.
The visual SLAM system supports three types of cameras: monocular, binocular (stereo), and depth cameras.
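The first point in the list above can be illustrated with a simple loosely coupled propagation loop: between camera frames the pose is predicted from high-rate IMU samples, and each visual SLAM update corrects the drift. The planar model, rates, and class names below are illustrative assumptions, not the product's estimator.

```python
import math

class ImuPropagatedPose:
    """Planar (x, y, yaw) pose propagated at IMU rate, corrected at camera rate."""

    def __init__(self):
        self.x = self.y = self.yaw = 0.0
        self.vx = self.vy = 0.0

    def propagate_imu(self, ax, ay, yaw_rate, dt):
        """High-rate prediction from body-frame acceleration and yaw rate."""
        c, s = math.cos(self.yaw), math.sin(self.yaw)
        self.vx += (c * ax - s * ay) * dt
        self.vy += (s * ax + c * ay) * dt
        self.x += self.vx * dt
        self.y += self.vy * dt
        self.yaw += yaw_rate * dt

    def correct_with_vision(self, x, y, yaw):
        """Low-rate correction: adopt the visual SLAM pose, keeping velocity."""
        self.x, self.y, self.yaw = x, y, yaw

pose = ImuPropagatedPose()
for _ in range(6):                            # several IMU samples (e.g. 200 Hz)
    pose.propagate_imu(0.5, 0.0, 0.1, 0.005)  # pose is available between camera frames
print(round(pose.x, 5), round(pose.yaw, 4))
pose.correct_with_vision(0.01, 0.0, 0.03)     # next visual SLAM update removes drift
```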

Monocular vision with IMU

  • Low cost. The fusion of a monocular camera and IMU effectively solves the problem of unobservable monocular scale.

Binocular vision with IMU

  • Obtaining object depth through binocular stereo, combined with an IMU, avoids tracking loss in feature-poor environments such as white walls and deserts.

Depth camera with IMU

  • Accurate mapping and not easily lost. Adding an IMU effectively mitigates the impact of sunlight, reflective walls, and other factors on depth-camera SLAM.

Product Display Diagram

Multi-sensor Fusion SLAM

Product Introduction

A tightly coupled LiDAR-visual-inertial odometry framework is implemented via smoothing and mapping for real-time state estimation and mapping.
  • Built on a factor graph, it consists of two subsystems: a visual-inertial system (VIS) and a LiDAR-inertial system (LIS);
  • When one subsystem detects a failure, the other can continue to work independently; when sufficient features are detected, the two work jointly;
  • The VIS performs visual feature tracking and can optionally use LiDAR frames to extract feature depth;
  • By optimizing the visual reprojection error and the IMU measurement error, the visual odometry serves as the initial value for LiDAR scan matching and introduces constraints into the factor graph;
  • After correcting the point cloud with IMU measurements, the LIS extracts LiDAR edge and planar features and matches them against a feature map maintained in a sliding window;
  • The system state estimated in the LIS can be sent to the VIS to facilitate its initialization;
  • For loop-closure detection, candidate matches are first identified by the VIS and then refined by the LIS;
  • In the factor graph, the constraints from visual odometry, LiDAR odometry, IMU preintegration, and loop closures are jointly optimized (a toy optimization sketch follows this list). Finally, the system outputs pose estimates at the IMU frequency.
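The joint optimization in the last point can be illustrated with a toy one-dimensional pose graph: relative constraints from visual odometry, LiDAR odometry, and a loop closure are stacked into a single weighted least-squares problem and solved together. The values and weights are illustrative; a real system optimizes 6-DoF poses with IMU preintegration.

```python
import numpy as np

# Four 1-D poses; each factor is (i, j, measured x_j - x_i, weight).
factors = [
    (0, 1, 1.05, 1.0),   # visual odometry (illustrative values)
    (1, 2, 0.95, 1.0),
    (2, 3, 1.10, 1.0),
    (0, 1, 1.00, 2.0),   # LiDAR odometry, assumed more accurate here
    (1, 2, 1.00, 2.0),
    (2, 3, 1.00, 2.0),
    (0, 3, 3.00, 3.0),   # loop closure: pose 3 should be 3.0 ahead of pose 0
]

n = 4
rows, rhs = [], []
prior = np.zeros(n)
prior[0] = 10.0                      # strong prior fixing pose 0 at the origin
rows.append(prior); rhs.append(0.0)
for i, j, meas, w in factors:
    row = np.zeros(n)
    row[i], row[j] = -w, w           # w * (x_j - x_i) = w * meas
    rows.append(row); rhs.append(w * meas)

x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print(np.round(x, 3))                # jointly optimised poses, close to [0, 1, 2, 3]
```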

Product Function

  • Positioning, mapping, navigation, obstacle avoidance

Product Advantages

Multi-sensor fusion combines the complementary strengths of LiDAR and vision to achieve a more robust SLAM, and the system can keep operating normally even if a single subsystem fails.

Product Display Diagram

Object detection and tracking based on deep learning

Product Introduction

Based on deep learning, convolutional neural networks such as Faster R-CNN or YOLO can quickly recognize pedestrians and objects, helping robots identify and track surrounding targets. This enables robots to perceive the environment more intelligently and to perform corresponding control, such as obstacle avoidance and more intelligent path planning. The company is the first manufacturer on the market to fuse two-dimensional LiDAR and three-dimensional vision for robot positioning and navigation as well as pedestrian recognition.
The fusion is reflected in two aspects:

  • A three-dimensional point cloud is formed from visual features and fused with the two-dimensional LiDAR point cloud;
  • Pedestrians and moving targets are detected and tracked with visual deep learning and then fused with depth information from the 2D LiDAR (a bearing-matching sketch follows this list).
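A minimal sketch of the second fusion point: the pixel centre of a pedestrian bounding box from the vision detector (for example, a YOLO output) is converted to a bearing, and the range is looked up in the 2D LiDAR scan at the matching beam. The intrinsics, sensor alignment, and scan layout are illustrative assumptions.

```python
import math

FX, CX = 525.0, 320.0                 # assumed pinhole focal length / principal point
ANGLE_MIN = -math.pi                  # assumed scan start angle
ANGLE_INC = math.pi / 180.0           # assumed 360-beam, 1-degree scan

def bbox_bearing(u_center):
    """Horizontal bearing (rad) of a bounding-box centre; positive = left of centre."""
    return -math.atan2(u_center - CX, FX)

def range_at_bearing(scan, bearing):
    """Pick the LiDAR range at the beam closest to the detection's bearing
    (assumes the LiDAR and camera look in the same direction)."""
    idx = int(round((bearing - ANGLE_MIN) / ANGLE_INC)) % len(scan)
    return scan[idx]

# Toy usage: a detection centred at pixel u = 400 and a flat 360-beam scan.
scan = [4.0] * 360
bearing = bbox_bearing(400.0)
print(round(math.degrees(bearing), 1), "deg,", range_at_bearing(scan, bearing), "m")
```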

Product Display Diagram

Dynamic Environment Mapping

Product Introduction

Most SLAM systems assume that the surrounding environment is static, which is currently their most obvious limitation. Objects in the environment can usually be divided into three categories:

  • Static objects: objects that remain stable and unchanged for a long time, such as walls and columns.
  • Semi-static objects: objects that are stationary most of the time but may change at some point, such as a door that is opened and closed intermittently, or a car parked at the roadside.
  • Dynamic objects: moving objects, such as walking pedestrians or moving vehicles.

Conventional SLAM and localization algorithms are built on the static-environment assumption, i.e., they assume that all objects in the environment belong to the first category. This is obviously not the case in real environments: dynamic environments are the reality in most application scenarios, so solving SLAM in dynamic environments is an important problem the industry needs to address.

  • During a single mapping run, highly dynamic targets are filtered out in real time so that they do not appear as spurious obstacles in the SLAM-generated map (a log-odds sketch follows this list). Application scenarios include pedestrians, moving vehicles, and animals passing through during mapping.
  • Across multiple mapping runs, the map is automatically updated to reflect the current state of low-dynamic targets as they appear, disappear, or move. Application scenarios include an obstacle being added within the inspection area, or an object in the original environment being removed or relocated.
  • Across multiple mapping runs, when the robot moves into an area that was not previously mapped, a new map of that area can be appended to the existing map. Application scenarios include extending the inspection area into regions that have not yet been mapped.
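A minimal sketch of the filtering idea in the first item, assuming a log-odds occupancy grid: repeated "free" observations erase the brief trace left by a passing pedestrian, and the same mechanism lets removed objects fade out on later mapping runs. All parameters are illustrative.

```python
L_HIT, L_MISS = 0.85, -0.4    # illustrative log-odds increments per observation
L_MIN, L_MAX = -4.0, 4.0      # clamping keeps cells able to change state later

class OccupancyCell:
    def __init__(self):
        self.log_odds = 0.0   # 0 = unknown

    def update(self, hit):
        """Push the cell toward occupied (hit) or free (miss), with clamping."""
        delta = L_HIT if hit else L_MISS
        self.log_odds = max(L_MIN, min(L_MAX, self.log_odds + delta))

    def is_occupied(self):
        return self.log_odds > 0.5

# A pedestrian occupies the cell for 2 scans, then 6 later scans see it free:
cell = OccupancyCell()
for hit in [True, True, False, False, False, False, False, False]:
    cell.update(hit)
print(cell.is_occupied())     # False: the transient obstacle does not persist in the map
```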

Product Display Diagram

Semantic SLAM

Product Introduction

Combining traditional SLAM with deep learning not only solves the robot localization problem but also recognizes the robot's surroundings at the same time: environmental information is semantically segmented and the segmented objects are used for localization, allowing the robot to perceive the world at two levels, location and content. At present, our company has successfully recognized multiple kinds of targets (text, pedestrians, electricity meters, etc.; see the machine vision section for details) and successfully embedded the recognized text into the map.

  • Text is important information that humans use to recognize their surroundings. Combining text detection and tracking with SLAM, and endowing robots with the ability to read and interpret text labels in the environment, is a big step toward intelligence (a sketch of a text-annotated map follows below); it is suitable for indoor navigation, intelligent service robots, logistics, and inspection robots.
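A minimal sketch of embedding recognized text into the map: each recognized label is stored together with the pose at which it was observed and can later be queried as a navigation goal. The class and field names are hypothetical, not the product's actual interface.

```python
from dataclasses import dataclass, field

@dataclass
class TextLandmark:
    text: str            # recognised string, e.g. "Room 203"
    x: float             # map coordinates where the label was observed, metres
    y: float
    confidence: float    # recogniser confidence in [0, 1]

@dataclass
class SemanticMap:
    landmarks: list = field(default_factory=list)

    def add_text(self, text, x, y, confidence):
        if confidence >= 0.6:                    # assumed acceptance threshold
            self.landmarks.append(TextLandmark(text, x, y, confidence))

    def find(self, query):
        """Return landmarks whose text contains the query, e.g. as navigation goals."""
        return [lm for lm in self.landmarks if query.lower() in lm.text.lower()]

smap = SemanticMap()
smap.add_text("Room 203", 12.4, 3.1, 0.92)
smap.add_text("Exit", 0.5, -2.0, 0.88)
print(smap.find("203"))   # -> the "Room 203" landmark and its map position
```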

Product Display Diagram

Security Obstacle Avoidance

Product Introduction

  • While the robot moves forward, sensing devices such as LiDAR, cameras, and obstacle-avoidance radar detect obstacles on its path and avoid them automatically, providing obstacle-avoidance control actions such as turning, backing up, veering left, and veering right, so that collisions with pedestrians, equipment, and other objects are avoided.

LiDAR Obstacle Avoidance

While driving, the LiDAR continuously detects obstacles and feeds the obstacle information back to the path-planning module. The path-planning module plans a path around the obstacles based on the known map and the real-time obstacle information, and directs the robot to avoid them.

Ultrasonic Obstacle Avoidance

While driving, the ultrasonic sensors continuously detect obstacles and feed the obstacle information back to the path-planning module. The path-planning module plans a path around the obstacles based on the known map and the real-time obstacle information, and directs the robot to avoid them.

Machine Vision Obstacle Avoidance

While driving, visual perception continuously detects obstacles and estimates their poses, then feeds the obstacle information back to the path-planning module. The path-planning module plans a path around the obstacles based on the known map and the real-time obstacle information, and directs the robot to avoid them (a shared detect-and-replan sketch follows below).
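A minimal sketch of the detect-and-replan loop shared by the three modes above: obstacle reports (bearing and range) from LiDAR, ultrasonic sensors, or vision feed a planning step that decides to continue, veer left or right, or back up. The decision rules and thresholds are illustrative, not the product's planner.

```python
SAFE_DISTANCE = 0.6   # metres; closer than this triggers avoidance (illustrative)

def plan_step(obstacles):
    """obstacles: list of (bearing_rad, range_m) from LiDAR, ultrasonic or vision.
    Returns 'forward', 'turn_left', 'turn_right', or 'back_up'."""
    ahead = [r for b, r in obstacles if abs(b) < 0.5 and r < SAFE_DISTANCE]
    if not ahead:
        return "forward"
    left_clear = all(r >= SAFE_DISTANCE for b, r in obstacles if 0.5 <= b <= 1.6)
    right_clear = all(r >= SAFE_DISTANCE for b, r in obstacles if -1.6 <= b <= -0.5)
    if left_clear:
        return "turn_left"
    if right_clear:
        return "turn_right"
    return "back_up"

# Obstacle 0.4 m straight ahead, left side clear -> the robot veers left.
print(plan_step([(0.0, 0.4), (1.0, 2.5), (-1.0, 0.3)]))
```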

Innovation, efficiency, and openness