Enhanced 3D vision
IDS simplifies working with 3D camera data
Today, environmental perception with 3D camera data enables many innovative applications that were previously reserved for humans. Robots can thus recognise objects in a human-like manner and react independently to different situations. In addition to spatial dimensions and the location on the shop floor, precise conclusions can also be drawn about deviations or defects by comparison with reference objects.
Working with 3D cameras and their data, however, is complex and requires considerable preparation and setup time when developing an application. Especially in multi-camera applications or in combination with robotics, complex calibrations of several coordinate systems are necessary before the data can be used effectively. Because of this strong system dependency, the application often has to be developed directly on the target system in order to generate usable data. In addition, the field of view and resolution of many 3D cameras are not sufficient for applications with larger workspaces.
These requirements were taken into account when developing the new Ensenso SDK 2.2 and the new 5 MP variant of the Ensenso X series. Many detail improvements make integration considerably easier.
Wide range of models optimised for speed and quality
Ensenso 3D cameras are well suited for both static and moving applications. In the camera models N35, X36 1.3 MP and X36 5 MP, FlexView technology in conjunction with sophisticated SC (sequence correlation) algorithms optimises the accuracy of results for bin picking or high-precision object comparisons. An adjustable, integrated high-performance projector projects a random pattern onto the test object, creating images with different surface structures. The SC algorithms use this output to calculate the 3D object data, and accuracy increases with each additional image pair (up to 16).
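Why additional image pairs increase accuracy can be illustrated with a small statistical sketch. This is not the actual sequence-correlation algorithm, which operates on the raw image data; it merely shows the general principle that averaging more independent, noisy measurements of the same quantity reduces the error:

```python
import random
import statistics

def simulated_depth_error(n_pairs, noise_sigma=0.5, seed=42):
    """Standard deviation of a depth estimate averaged from n noisy samples.

    Illustrative only: each image pair is modelled as one independent,
    noisy measurement of the same depth value; the error of the average
    shrinks roughly with the square root of the number of pairs.
    """
    rng = random.Random(seed)
    trials = []
    for _ in range(2000):
        samples = [rng.gauss(0.0, noise_sigma) for _ in range(n_pairs)]
        trials.append(sum(samples) / n_pairs)
    return statistics.pstdev(trials)

print(simulated_depth_error(1))   # close to the raw noise level of 0.5
print(simulated_depth_error(16))  # roughly a quarter of that
```

With 16 image pairs, the residual error in this toy model is about 1/4 of the single-pair error, which is the qualitative behaviour the text describes.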
With the camera models N30, X30 1.3 MP and X30 5 MP, Ensenso is also well prepared for applications with moving objects, such as on continuous conveyor belts, or cases where the camera itself moves. Optimised SGM (semi-global matching) algorithms achieve considerable depth accuracy from just a single image pair. The Ensenso camera selector on the IDS website helps to find the right camera for every application.
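As a simplified illustration of stereo matching from a single image pair, the sketch below runs plain SSD block matching on one synthetic scanline. Real semi-global matching additionally aggregates matching costs along several image paths to enforce smoothness; the function and data here are purely illustrative:

```python
import numpy as np

def block_match_row(left, right, block=5, max_disp=10):
    """Estimate per-pixel disparity along one scanline with SSD block
    matching -- a much simpler relative of semi-global matching."""
    half = block // 2
    disp = np.zeros(len(left), dtype=int)
    for x in range(half + max_disp, len(left) - half):
        patch = left[x - half:x + half + 1]
        # Cost of matching the left patch against right patches shifted by d.
        costs = [np.sum((patch - right[x - d - half:x - d + half + 1]) ** 2)
                 for d in range(max_disp + 1)]
        disp[x] = int(np.argmin(costs))
    return disp

# Synthetic scanline pair: the right view is the left one shifted by 3 pixels.
rng = np.random.default_rng(0)
left = rng.random(80)
right = np.roll(left, -3)

disp = block_match_row(left, right)
# Most estimates in the valid interior region recover the true shift of 3.
print(np.bincount(disp[12:78]).argmax())
```

The cost at the true disparity is exactly zero for this noise-free pair, so the estimate is recovered everywhere in the valid region; real images add noise and ambiguity, which is what the smoothness term in SGM addresses.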
Bigger eyes for a wider field of vision
With the integration of two Sony IMX264 5 MP image sensors, the 3D image resolution of the Ensenso 3D camera family increases by approximately 35% compared with the previous 1.3 MP version, while the field of view grows by approximately 20%. To completely capture a packed Euro pallet with a volume of 120 x 80 x 100 cm, the distance between camera and test object can be reduced from 1.5 m to only 1.25 m. As a result, the native sensor resolution is used much more effectively. Combined with the lower pixel noise of the Sony sensors, this improves the calculated depth information (Z accuracy) from 0.43 mm to an excellent 0.2 mm.
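The relationship between working distance and field of view is simple pinhole geometry. The opening angle below is a hypothetical value chosen only so the numbers reproduce the pallet example; it is not a published Ensenso specification:

```python
import math

def fov_width(distance_m, opening_angle_deg):
    """Horizontal field of view of an idealised pinhole camera at a distance."""
    return 2 * distance_m * math.tan(math.radians(opening_angle_deg) / 2)

def min_distance(object_width_m, opening_angle_deg):
    """Closest distance at which an object of the given width still fits."""
    return object_width_m / (2 * math.tan(math.radians(opening_angle_deg) / 2))

# Hypothetical opening angle chosen so a 1.2 m pallet edge fits at 1.25 m.
angle = 2 * math.degrees(math.atan(0.6 / 1.25))  # about 51.3 degrees
print(round(min_distance(1.2, angle), 2))        # 1.25
print(round(fov_width(1.25, angle), 2))          # 1.2
```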
Ready for embedded applications through accelerated calculation
More powerful sensors naturally produce larger amounts of data and thus potentially longer processing times until results are available. Reference measurements with a 5 MP model showed approximately four times longer processing times, both for matching the images of an image pair and for calculating a complete 3D image. Nevertheless, the complete 3D image calculation of a sequence correlation with 16 high-resolution 5 MP image pairs takes only about 2.5 seconds, which is entirely sufficient for most applications. For applications with higher speed requirements, semi-global matching with a single 5 MP image pair provides sufficient accuracy with a calculation time of only 1.1 seconds.
Important calculations have been optimised for CUDA to counteract the larger data volumes and the associated loss of time. The additional computing power of NVIDIA GPUs speeds up processing by about a factor of five, depending on the GPU used and the parameterisation of the corresponding algorithms.
With CUDA support, 3D applications also become interesting for the embedded environment. A suitable platform is, for example, the NVIDIA Jetson TX2 board: while the stereo calculations can access its 256 CUDA cores, subsequent image processing with HALCON for Embedded Devices runs on the available ARM CPUs.
Multi-camera capability
The Ensenso software libraries provide a number of useful functions that allow multiple cameras to work together in one application. Because each camera has its own view and position, the coordinate systems of several cameras must be aligned with each other, or with fixed points in the real world, and calibrated to a uniform object coordinate system. If cameras are to work with a robot whose movements must be coordinated with the camera data, a hand-eye calibration can also be performed. An integrated calibration wizard guides the user through the procedure.
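Conceptually, multi-camera and hand-eye calibration come down to estimating rigid transformations between coordinate frames, which can then be chained. A minimal numpy sketch with hypothetical frame names and values, not taken from the Ensenso SDK:

```python
import numpy as np

def transform(rotation_deg_z, translation):
    """4x4 homogeneous transform: a rotation about Z plus a translation."""
    a = np.radians(rotation_deg_z)
    t = np.eye(4)
    t[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    t[:3, 3] = translation
    return t

# Hypothetical frame chain: world -> robot base -> gripper -> camera.
base_in_world   = transform(0,  [2.0, 0.0, 0.0])
gripper_in_base = transform(90, [0.0, 0.5, 1.0])
cam_in_gripper  = transform(0,  [0.0, 0.0, 0.1])  # result of hand-eye calibration

cam_in_world = base_in_world @ gripper_in_base @ cam_in_gripper

# A point seen 0.4 m along the camera's optical axis, expressed in world coordinates:
point_cam = np.array([0.0, 0.0, 0.4, 1.0])
print(cam_in_world @ point_cam)  # [2.  0.5 1.5 1. ]
```

Once such a chain is calibrated, every camera measurement can be mapped into the common object or robot coordinate system.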
Apart from additional Ensenso stereo cameras, the SDK also allows very simple integration and calibration of monocular 2D uEye cameras in the same application. The capabilities of 2D cameras can significantly improve the quality of inspection and measurement results in 3D applications: where stereo cameras have difficulty identifying objects in border areas, 2D cameras assist with edge detection or colour recognition, and they also allow the acquisition of additional information such as barcode content. The Ensenso software therefore optimises the integration of both technologies. Specially developed calibration patterns also facilitate setup and adjustment in multi-camera systems. Using several calibration plates, each of which only needs to be partially visible to the cameras, the NxLib library determines object coordinate systems of any size and the camera positions relative to each other, and can then synchronise them.
Virtualisation for easier development and debugging
Application developers will particularly benefit from the "file cameras" and "virtual cameras" extensions. To improve algorithms and processes, identical data must be processed and debugged repeatedly. A file camera behaves on the system like a real camera, with the difference that its images come from a local folder of stored data sets. In this way, application sequences can be replayed again and again without access to the real system or the need to recreate situations. This also makes it an ideal debugging tool: users can save problematic data sets and pass them on to image processing specialists, so that errors can easily be reproduced.
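The replay idea behind a file camera can be sketched in a few lines. This is not the Ensenso SDK API, just an illustration of serving frames from a folder instead of live hardware:

```python
import os
import tempfile

class FileCamera:
    """Minimal illustration of the file-camera concept: frames come from a
    folder of stored data, so a recorded sequence replays deterministically."""

    def __init__(self, folder):
        self.frames = sorted(
            os.path.join(folder, name) for name in os.listdir(folder)
            if name.endswith(".raw"))
        self.index = 0

    def capture(self):
        """Return the next stored frame, looping at the end of the set."""
        path = self.frames[self.index % len(self.frames)]
        self.index += 1
        with open(path, "rb") as f:
            return f.read()

# Record three dummy "frames", then replay them like a live camera would.
with tempfile.TemporaryDirectory() as folder:
    for i in range(3):
        with open(os.path.join(folder, f"frame_{i}.raw"), "wb") as f:
            f.write(bytes([i]))
    cam = FileCamera(folder)
    replay = [cam.capture() for _ in range(6)]
    print(replay)  # the three frames, replayed twice
```

Because the sequence is deterministic, the same problematic data set can be fed to the processing pipeline as often as needed while debugging.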
With "virtual cameras", simulations can be carried out in an offline environment, e.g. to evaluate the data quality, completeness, resolution and noise of a scene with different camera models. Objects in STL or PLY format (common file formats for storing three-dimensional data) can be imported, rendered and positioned as desired. This enables performance evaluations for different variants of an inspection process without having to set up a real system. A scene editor is already integrated as an alternative to manual model creation, so bin picking applications with different parts and their orientation in boxes can easily be simulated. A random function provides an unlimited number of variants for testing, just as in reality with unsorted part feeding. Every Ensenso model can be used as a "virtual camera": each model available via the online camera selector can be selected and simulated before the system is set up. With these two tools, preliminary investigations and optimisations can easily be carried out even for 3D applications.
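The randomised scene generation used for such bin picking tests can be sketched as follows; the bin dimensions and pose fields are hypothetical, not part of the Ensenso scene editor:

```python
import random

def random_part_poses(n, bin_size=(0.4, 0.3, 0.2), seed=None):
    """Generate n random part poses (x, y, z in metres, yaw in degrees)
    inside a hypothetical bin -- the kind of randomised scene a virtual
    camera can render for bin picking simulations."""
    rng = random.Random(seed)
    return [{
        "x": rng.uniform(0, bin_size[0]),
        "y": rng.uniform(0, bin_size[1]),
        "z": rng.uniform(0, bin_size[2]),
        "yaw_deg": rng.uniform(0, 360),
    } for _ in range(n)]

# Each new seed yields a fresh, unsorted arrangement of parts to simulate.
scene = random_part_poses(5, seed=1)
print(len(scene))  # 5
```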
As 3D applications become more sophisticated, the complexity of working with camera hardware and its data inevitably increases. With the Ensenso SDK 2.2 and the new 5 MP camera models, IDS offers several solutions that simplify working with 3D data for system integrators and developers. With further improved hardware and software, IDS supports the development of more powerful 3D applications. Even in demanding tasks such as bin picking, the demands on quality, cycle rate and economy, as well as the fast availability of robot vision applications, can now be met.