Automatic 3D recognition and marking of wooden beams
In terms of volume and mass, wood is the world's most important raw material. This natural material is also one of the oldest building materials, and it is becoming increasingly popular: wood is ecological and healthy, and it conveys a feeling of comfort like no other building material. In Germany, for example, the market share of wooden houses has doubled over the last ten years to 15 per cent of all houses built. The focus of production is shifting from the construction site to the carpentry workshop, where automation plays an increasingly important role in keeping operations cost- and time-efficient and thus fit for the future. A recent example: the Zentrum für Telematik e.V. (ZfT), based in Germany, has developed a robotics solution for the automatic marking of wooden beams for the German company Georg Schumann GmbH & Co. KG.
Schumann operates a sawmill with an associated timber trade and offers a fully automatic joinery service. Wooden beams are manufactured there in many variations, all of which were previously labelled by hand. Labelling in this case means the numerical marking of a cutting list tailored to the customer's needs, usually for the carpenter. In addition, the system can also be used, for example, to mark certifications (sustainability, origin) or gradings (according to quality, strength, degree of dryness and intended use). To automate the marking process, a system was developed that integrates an industrial robot, a 3D camera and a compact inkjet printing system. The solution automatically determines possible print areas and selects the print position as well as the optimal alignment and size of the font.
To remain clearly visible in a beam stack, the print must be applied to the front face of the beam wherever possible. The camera must recognize the varying shape and position of the wooden beams: each beam, for example, has different bevels, pegs or recesses in the print area. Furthermore, the position of the beams in front of the robot is never exactly the same, due to variance and tolerances in the upstream production system. This calls for an image processing system that lets the robot detect the position and 3D surfaces of each beam quickly, reliably and precisely.
An Ensenso N35 camera captures the beam's position and geometry. As soon as the timber is in the printing position, the robot automatically positions the flange-mounted camera so that it can detect the surfaces of the beam. If necessary, the robot moves the camera around the wooden beam to capture views of its different sides.
The camera views the top of the wooden beams at a medium distance (approx. 40–90 cm) from various oblique perspectives. It is equipped with two monochrome CMOS sensors (global shutter, 1280 x 1024 pixels), a GigE interface, screwable GPIO connectors for trigger and flash, and a light pattern projector. The FlexView technology integrated in the N35 model is particularly suitable for the 3D detection of stationary objects and for working distances of up to 3,000 mm. A light pattern projected onto the beam adds texture to the image, and the position of the projector mask in the light beam can be shifted linearly in very small steps.
Consequently, the projected texture on the object surfaces of the scene also shifts, creating additional auxiliary structures. Acquiring multiple image pairs of the same scene with differently shifted textures yields significantly more image points, so the resolution increases. Along with the resolution, the robustness of the data on difficult surfaces also improves, as the shifted pattern structures contribute additional information. The Ensenso N35 thus meets the customer's requirement: highly precise, low-noise detection of wood surfaces.
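The gain from combining several pattern-shifted captures can be illustrated with a small sketch. This is generic NumPy code, not the Ensenso SDK, and the sample data are made up: each capture resolves depth at a different subset of pixels, and failed stereo matches are marked as NaN.

```python
import numpy as np

def merge_flexview_clouds(clouds):
    """Merge point clouds from several pattern-shifted captures of the
    same static scene. Each element of `clouds` is an (N, 3) array;
    rows containing NaN (failed stereo matches) are dropped, so the
    merged cloud is denser than any single capture."""
    merged = np.vstack(clouds)
    return merged[~np.isnan(merged).any(axis=1)]

# Illustration: three captures, each resolving different points
c1 = np.array([[0.0, 0.0, 0.5], [np.nan, np.nan, np.nan]])
c2 = np.array([[0.1, 0.0, 0.5], [0.2, 0.0, 0.5]])
c3 = np.array([[np.nan, np.nan, np.nan], [0.3, 0.0, 0.5]])
dense = merge_flexview_clouds([c1, c2, c3])
print(len(dense))  # -> 4 valid points instead of at most 2 per capture
```

In the real system the individual captures are registered to one another by construction (the camera does not move between pattern shifts), so a simple stack of the per-capture clouds is enough to densify the result.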
The system developed by ZfT converts the acquired raw data, determines the position of the point cloud in the robot coordinate system and extracts the planes on the beam that are potentially suitable for printing. This data is used to calculate the print position and the optimum print size and alignment for the given text. The robot then moves to the print positions determined by the camera system and precisely performs the actual printing with the ink printer.
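The processing chain described above can be sketched in a few lines. The function names, the hand-eye calibration matrix `T_cam_to_robot` and the least-squares plane fit are illustrative assumptions, not ZfT's actual implementation:

```python
import numpy as np

def to_robot_frame(points_cam, T_cam_to_robot):
    """Map an (N, 3) camera-frame cloud into the robot frame using a
    4x4 homogeneous hand-eye calibration matrix (assumed to be known)."""
    homog = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (homog @ T_cam_to_robot.T)[:, :3]

def fit_plane(points):
    """Least-squares plane through an (N, 3) cloud: the right singular
    vector belonging to the smallest singular value of the centred
    cloud is the plane normal."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

# Example: points sampled from a flat beam face at z = 0.5 m
rng = np.random.default_rng(0)
face = np.column_stack([rng.uniform(0.0, 1.2, 500),   # along the beam
                        rng.uniform(0.0, 0.2, 500),   # across the face
                        np.full(500, 0.5)])           # constant height
identity = np.eye(4)                                  # dummy calibration
centroid, normal = fit_plane(to_robot_frame(face, identity))
print(round(abs(normal[2]), 3))  # -> 1.0 (face normal along z)
```

Once a sufficiently large flat patch is found, its extent in the plane determines the maximum font size, and the patch's principal axis gives the text alignment.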
"The measuring accuracy of the camera in the acquisition volume, as well as the speed of the measurement, was decisive in our choice of camera," explains Florian Leutert, research assistant at ZfT. "Further assets are the compactness of the N35 and its dust and moisture protection." The protection class plays an important role for wood processing in the sawmill environment. The robust, compact aluminium housing of the Ensenso N35 3D camera is perfectly suited for this: it meets the requirements of protection classes IP65/67 and is therefore protected against dirt, dust, splash water and, in this case, ink.
"In robotics, not only rigid automation solutions are increasingly required, but also automatic processing systems that can deal flexibly with different workpieces and environments," comments Florian Leutert on future requirements. This calls for high-quality cameras, because the 3D acquisition of the working area must match the accuracy required by the robotic system, i.e. in the sub-millimetre range if possible. This is no problem for the Ensenso stereo 3D models from IDS: they make 3D vision not only robust and simple, but also fast and precise. They are a reliable and promising component for the automatic 3D acquisition of diverse parts, and not only for the woodworking industry, in which a great diversity of beams is "cut from the same cloth".
Ensenso N35 - 3D vision, fast and precise
- With GigE interface – versatile and flexible
- Compact, robust aluminium housing
- Global Shutter CMOS sensors and pattern projector, optionally with blue or infrared LEDs
- fps (3D): 10 (2× binning: 30) at 64 disparity levels
- fps (offline processing): 30 (2× binning: 70) at 64 disparity levels
- Designed for working distances of up to 3,000 mm (N35) and variable fields of view
- Output of a single 3D point cloud with data from all cameras used in multi-camera mode
- Live composition of the 3D point clouds from multiple viewing directions
- Integrated FlexView technology for higher detail accuracy of the point cloud and greater robustness of 3D data on difficult surfaces
- "Projected texture stereo vision" process for capturing untextured surfaces
- Capture of both stationary and moving objects