3D scan technology:
A comparison of the new Scoobe3D Technology
3D scanning has become indispensable in many areas, for example in automated optical inspection in industry. At the same time, it is attractive for applications in which existing technologies reach their limits. The range of possible applications is almost endless.
In general, 3D technologies can be divided into methods based on angle-based, geometric calculations (stereovision, laser triangulation, structured light) and the time-based time-of-flight technology. Developments in this exciting, trend-setting 3D field are progressing rapidly. Laser triangulation and its peers have recently been joined by a promising new technology that combines geometric and time-based techniques: the Scoobe3D Technology.
The fact is that all common technologies have strengths and weaknesses that give them an advantage or a disadvantage depending on the application. Here is an overview.
Scoobe3D Technology - New Entry with Potential
The Scoobe3D technology is built into a single compact device and combines the 3D image of a time-of-flight (ToF) camera with a polarized RGB image. First things first: how does a ToF camera work?
ToF cameras use the transit-time method to measure distances. For this purpose, the field of view of the camera is illuminated with a light pulse, usually from infrared illuminators. The time the light takes to travel to the object and back is directly proportional to the distance. The ToF camera captures this transit time for each pixel, so the distance of the imaged object is recorded pixel by pixel.
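The per-pixel distance calculation described above can be sketched in a few lines. The round-trip time and the resulting distance below are illustrative example values, not figures from any specific ToF camera:

```python
# Sketch of the ToF transit-time principle: the light pulse travels
# to the object and back, so the distance is half the round-trip path.
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance = (speed of light * round-trip time) / 2."""
    return C * round_trip_time_s / 2.0

# A pulse returning after roughly 26.7 ns corresponds to about 4 m:
d = tof_distance(26.7e-9)
```

In a real sensor this computation happens for every pixel of the image, yielding a dense depth map in one exposure.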
The result is a 3D image, e.g. with 640×480 pixels, which, depending on the sensor design, can be captured at distances from a few meters up to about 40 meters. The optimum resolution is between 5 and 10 mm, but it is often much worse in practice due to reflections. ToF cameras are used in logistics or traffic, but for other areas their accuracy is too low.
For this reason, the Scoobe3D technology combines the image of the ToF camera with a polarized RGB image and thus achieves an accuracy of 1/10 mm, i.e. 100 µm. This is how it works: the object can either be circled with the Scoobe device or rotated in front of the camera.
The light source integrated in the device projects light onto the object to be scanned, independently of the ambient lighting. The electric field of each light wave oscillates perpendicular to its direction of propagation. The angle at which the light strikes the target partially polarizes the reflected light beam, i.e. one oscillation direction becomes more prevalent in the light than the others.
This polarized light first hits one or more liquid crystal modules and then the polarized filter upstream of the RGB camera. The liquid crystals cause an electrically controllable rotation of the polarization of the light.
The polarizing filter acts like a grid of lattice bars that filters the light according to its polarization. A photon can either pass through, be reflected, or be absorbed. The latter happens when the polymers, i.e. the "lattice bars", are aligned with the photon's direction of oscillation in such a way that it cannot pass. If, however, the photon's direction of oscillation is at a sufficiently different angle to the orientation of the polymers, the photon passes through the polarizing filter and reaches the RGB camera.
Thus only light components with a certain polarization reach the RGB camera. In this way, images highlighting different polarization directions can be captured in quick succession. This gives the system information about the angle of each surface relative to the camera, from which an intelligent algorithm reconstructs a shape in 3D. The result, however, is only relative 3D information, i.e. without absolute distances.
Absolute distances are supplied by the image of the ToF camera, which on its own could not achieve such high accuracy. Combining the two methods thus yields highly accurate, absolute 3D information. After the system's essential components, the liquid crystal modules and the polarization filter, the process is called Liquid Crystal Polarizatography.
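One simple way to illustrate this fusion idea (purely a sketch, not Scoobe3D's actual algorithm) is to fit a scale and offset so that a relative, unscaled depth map agrees with sparse absolute ToF distances in a least-squares sense:

```python
# Illustrative only: anchor relative depths (from polarization) to
# absolute ToF distances by fitting z = a*r + b in least squares.
def fit_scale_offset(relative, absolute):
    """Solve min over (a, b) of sum((a*r + b - z)^2) for paired samples."""
    n = len(relative)
    mean_r = sum(relative) / n
    mean_z = sum(absolute) / n
    cov = sum((r - mean_r) * (z - mean_z) for r, z in zip(relative, absolute))
    var = sum((r - mean_r) ** 2 for r in relative)
    a = cov / var
    b = mean_z - a * mean_r
    return a, b

rel = [0.0, 1.0, 2.0, 3.0]        # relative depths (arbitrary units)
tof = [1.50, 1.75, 2.00, 2.25]    # absolute ToF distances in metres
a, b = fit_scale_offset(rel, tof)
```

The relative map supplies the fine shape; the coarse but absolute ToF values pin down the scale. The real system will be far more sophisticated, but the division of labour between the two sensors follows this pattern.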
The only requirement for measuring with the Scoobe3D technology is the absence of direct sunlight as backlight. The software can largely compensate for unsteady movements of the hand or the object, since a large image section - not just a dot or a stripe - is captured at high recording speeds. The Scoobe thus avoids holes in the 3D scan, laser spots, or missing data sets.
The result is a 3D scan created in a few seconds: a true-to-scale, accurate representation of the real object. Since this technology is very new, no established areas of application are known yet. The first test reports will be worth watching for.
Stereovision - a low-cost procedure with limitations
Stereovision is a process for 3D scanning inspired by nature, or more precisely by the human eye. Two normal RGB cameras take - like our eyes - two 2D images of an object from different angles. During this process, the object may in principle be moved. The 2D images are then combined to form a three-dimensional image by triangulation.
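For a rectified camera pair, the triangulation step reduces to a simple relation between depth and the pixel disparity of corresponding points: Z = f·B/d. The focal length and baseline below are made-up example values, not parameters of any specific stereo rig:

```python
# Sketch of stereo triangulation for a rectified camera pair:
# the farther an object, the smaller the disparity between the
# left and right image.
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Z = f * B / d, with f in pixels, baseline B in metres,
    and disparity d in pixels."""
    return focal_px * baseline_m / disparity_px

# With f = 800 px and a 6 cm baseline, a 32 px disparity means 1.5 m:
z = stereo_depth(800.0, 0.06, 32.0)
```

This also shows why the correspondence problem mentioned below is so central: the disparity d can only be computed once each object point has been matched between the two images.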
It is important for the success of this method that each object point can clearly be assigned to a pixel in both 2D images. This requires reference marks, patterns or textures on the surface of the object. Consequently, the method is not suitable for detecting reflective or transparent surfaces.
The finished 3D model contains no information about absolute scale - just as an airplane on the horizon looks like a model airplane. The method is still used for the 3D visualization of rooms that are dangerous or inaccessible to humans, but for those who cannot or do not want to do without true-to-scale results, further options are available, such as laser triangulation.
Laser triangulation - the technology for specific cases
Laser triangulation is one of the simplest measurement methods. A line laser projects a sharply contoured line onto the object to be scanned, which is moved through the field of view of the laser. The light line is deformed by the surface geometry of the object. The deviations of the line are continuously recorded and measured by a camera positioned at a known angle to the laser; the reference is the undeformed laser line. In this way, successive height profiles are generated, which are combined into a three-dimensional image.
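In a simplified pinhole geometry, the lateral shift of the laser line in the camera image maps to a height difference on the object via the known camera angle. The shift and angle below are illustrative values, not taken from a real sensor:

```python
import math

# Sketch of the triangulation geometry: a camera tilted by theta
# relative to the laser plane sees a height step as a lateral
# shift of the laser line.
def height_from_shift(shift_mm: float, camera_angle_deg: float) -> float:
    """h = shift / tan(theta), simplified triangulation geometry."""
    return shift_mm / math.tan(math.radians(camera_angle_deg))

# At a 45-degree camera angle, the observed shift equals the height:
h = height_from_shift(2.0, 45.0)
```

Repeating this for every point along the line yields one height profile; sweeping the object through the laser produces the stack of profiles described above.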
The height information is first displayed as raw data in a range map, coded as gray values. These data are not calibrated; the range map is pixel-based. It can thus make height differences visible in relative terms, but it offers no concrete metric values. Calibration with the help of software converts the height information of the range map into a 3D point cloud.
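The conversion from a calibrated range map to a point cloud can be sketched with a pinhole back-projection; other systems use an orthographic model, and the intrinsics (fx, fy, cx, cy) below are example values, not a real calibration:

```python
# Sketch: back-project a small calibrated range map (per-pixel depth z,
# in metres) into 3D points using pinhole intrinsics.
def range_map_to_points(zmap, fx, fy, cx, cy):
    """Return a list of (x, y, z) points, one per pixel."""
    points = []
    for v, row in enumerate(zmap):
        for u, z in enumerate(row):
            x = (u - cx) * z / fx   # pixel column -> metric x
            y = (v - cy) * z / fy   # pixel row    -> metric y
            points.append((x, y, z))
    return points

# Tiny 2x2 example map, all pixels at 1 m depth:
cloud = range_map_to_points([[1.0, 1.0], [1.0, 1.0]],
                            fx=500.0, fy=500.0, cx=0.5, cy=0.5)
```

The essential point is the calibration step: only with known camera parameters do the pixel-based gray values of the range map become metric 3D coordinates.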
If the speed of movement of the object is constant during measurement, a time-based sampling method can be used. If this is not the case, an encoder that tracks the object's movement can be used to smooth out rotation and position deviations - in all six degrees of freedom. There are already devices that perform this calculation step internally and output finished 3D images, which saves the computation on the host computer. Other 3D cameras leave this step to a connected computer.
The prerequisite for laser triangulation is that the object to be scanned moves relative to the laser illumination and the camera. For each measurement sequence, the object must be returned to its starting position, though thanks to the subsequent correction and smoothing of errors it does not have to be placed or moved with great precision. This procedure also enables the detection of very large objects.
Another aspect to consider is ambient light (e.g. daylight coming through a nearby window). This can vary greatly depending on the time of day and season and is beyond human control. It can significantly impair or falsify the measurement results of 3D sensors, or even prevent results entirely.
Especially for applications in direct sunlight, this should be an important consideration when choosing the measuring method. Laser triangulation is only conditionally suitable under such circumstances, e.g. using high-power laser triangulation sensors. These can be fitted with an optical filter that blocks the ambient light. Such high-performance sensors thus improve image quality under strong ambient light while reducing acquisition time. On the other hand, the increased laser power raises the risk of eye damage.
If one has gone through all these steps successfully, one should have a finished 3D scan in front of them. For completeness, it should be mentioned that the possible problems of laser triangulation include so-called shading: object features concealed by other surface shapes and therefore not detected. Since the height information at the affected location was never captured, it does not show up as an error in the correction step either. Shading can manifest itself as a hole in the model.
If this problem is to be solved while continuing to use laser triangulation, several cameras can be used to record the laser line from different angles. The varying data sets of the individual cameras are then combined as usual into a height profile image. Another possibility is to use two differently positioned lasers. Both approaches reduce the risk of missing object data.
What is unfortunately unavoidable is the speckle effect produced by laser triangulation sensors, a result of coherent light reflecting off rough surfaces. These speckles make the method unusable for measurements in the µm range. For such applications, structured light, for example, is ideally suited.
Structured light or "structured lighting" - the surgeon of the 3D scan
The results of structured light are very precise. The procedure is likewise based on triangulation, but is more complex than laser triangulation.
Structured light uses coded light, which is emitted in strips onto the object in various patterns, for example through high-resolution micro-mirror displays. The height structure of the object to be scanned influences the light pattern, which is recorded by a camera at a known angle. This results in a sequence of stripe images in 2D, which are converted into a 3D point cloud and then into a 3D image with the help of a matrix.
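One common way to code the stripe patterns (used here purely as an illustration, not necessarily the coding of any particular scanner) is a binary Gray code: each projected frame contributes one bit, and a pixel's on/off sequence across the frames identifies which projector stripe illuminated it:

```python
# Sketch: decode a reflected Gray code back to a plain stripe index.
# Each camera pixel observes one bit per projected pattern; the
# resulting bit sequence is the Gray-coded projector column.
def gray_to_binary(gray: int) -> int:
    """Standard reflected-Gray-code decoding via cascaded XOR."""
    binary = gray
    while gray:
        gray >>= 1
        binary ^= gray
    return binary

# A pixel that saw the bit sequence 1, 0, 0 (Gray code 0b100)
# lies in projector stripe 7 (binary 0b111):
col = gray_to_binary(0b100)
```

Once every pixel is assigned a stripe index this way, the known projector geometry and the camera angle allow the same triangulation step as before, but now for the whole image at once rather than a single laser line.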
A prerequisite for the 3D scan is that the scan object measures no more than 2 m and remains stationary. If the object moves during the scan, errors may occur in the data set. Once the object is in place, parameters such as exposure, filtering and analysis settings still need to be optimized. LED lighting is usually used for exposure, which poses no risk to the eyesight.
If the measuring procedure is carried out correctly, data acquisition takes only a few seconds. The result is extremely accurate, down to 5 µm, which is achieved by evaluating the light intensity in each individual camera pixel, as opposed to the laser scanner, where this happens across several camera pixels. This measuring accuracy, combined with the absence of laser spots, makes the structured light process ideal for applications in the micrometer range. It is used, for example, in industrial inspection.
Many roads lead to Rome: those who want to capture 3D data can choose from a variety of methods. As so often, the key to success is knowing what matters in one's own use case and which method delivers it.
The 3D industry is a fast-growing, highly innovative field from which new developments will surely keep finding their way to us. It is an exciting time to get involved with 3D, a topic that will shape our future.
Carolin is responsible for marketing at Scoobe3D. She likes to present difficult, technical topics in a simple way and has gained experience here over several years. She likes ice cream and chocolate.