The new Scoobe3D technology in comparison

3D scanning has become indispensable in many areas, for example in automated optical inspection in industry. More than that: it is predestined for applications where previous technologies have sometimes reached their limits. The possible applications are almost endless.

In general, 3D technologies can be divided into methods based on angle-based geometric calculations (stereovision, laser triangulation, structured light) and the time-based time-of-flight technology. Developments in this exciting, future-oriented 3D field are advancing rapidly. Laser triangulation & Co. have recently been joined by a new, promising technology that combines geometric and time-based methods: the Scoobe3D technology.

The fact is: all current technologies have strengths and weaknesses that give them an advantage or a disadvantage depending on the application. Here is an overview.

Scoobe3D Technology - New entry with potential

The Scoobe3D technology is built into a single compact device and combines the 3D image of a time-of-flight (ToF) camera with a polarized RGB image. First things first: how does a ToF camera work?

ToF cameras use the time-of-flight method to measure distances: a pulsed illumination is sent to an object, and the reflected light is measured and processed by the ToF camera.

ToF cameras use the time-of-flight method to measure distances. For this purpose, the camera's field of view is illuminated with a light pulse, usually in the infrared range. The time the light needs to travel to the object and back is directly proportional to the distance and is recorded by the ToF camera for each pixel. The distance of the imaged object is thus captured pixel by pixel.
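
Expressed as a formula, the distance per pixel follows directly from the measured round-trip time: d = c · t / 2, since the light covers the path to the object twice. A minimal Python sketch of this conversion (illustrative only, not vendor code; the example values are made up):

    import numpy as np

    C = 299_792_458.0  # speed of light in m/s

    def tof_to_distance(round_trip_time_s):
        """Convert per-pixel round-trip times (in seconds) into distances (in meters).

        The light travels to the object and back, so the one-way distance
        is half of the measured path: d = c * t / 2.
        """
        return C * np.asarray(round_trip_time_s) / 2.0

    # Example: a pulse that returns after 20 nanoseconds corresponds to an
    # object roughly 3 m away.
    print(tof_to_distance(20e-9))  # ~2.998 m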

The result is a 3D image, e.g. with 640×480 pixels, which can be captured at distances from a few meters up to about 40 m, depending on the sensor design. The resolution is 5 to 10 mm in the best case, but is often significantly worse due to reflections. ToF cameras are used in logistics and transport, but their accuracy is too low for many other areas.

For this reason, the Scoobe3D technology combines the image of the ToF camera with a polarized RGB image and thus achieves an accuracy of 1/10 mm, i.e. 100 µm. This is how it works: the object can either be walked around with the Scoobe device or rotated in front of the camera.

A pulsed, unpolarized illumination is sent to an object. The reflected light is filtered by liquid crystal cells and a polarizer before it reaches the camera.

The angle of incidence on the target leads to a partial polarization of the light beam.

The light source integrated in the device projects light onto the object to be scanned, independently of the ambient lighting. Each light wave oscillates perpendicular to the direction in which it propagates. The angle at which the light strikes the object to be measured leads to a partial polarization of the corresponding light beam, i.e. the light contains more of one polarization direction than of another (cf. the changed dashing of the arrows after hitting the object in the illustration).

This polarized light first hits one or more liquid crystal modules and then the polarizing filter that sits in front of the RGB camera. The liquid crystals cause an electrically controllable rotation of the polarization of the light.

The polarizing filter acts like a wall of bars that filters the light according to its polarization. A photon can either pass through, or be reflected or absorbed. The latter happens when the polymers, i.e. the "lattice bars", are aligned with the photon's direction of oscillation in such a way that it cannot pass. If, however, the photon's direction of oscillation is at a sufficiently large angle to the orientation of the polymers of the polarizing filter, the photon passes through the filter and hits the RGB camera.
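
This filter behaviour can also be described quantitatively. For an ideal linear polarizer, Malus's law gives the transmitted intensity as I = I0 · cos²(θ), where θ is the angle between the light's polarization direction and the filter's transmission axis (which runs perpendicular to the "lattice bars"). A minimal sketch, with made-up values:

    import numpy as np

    def transmitted_intensity(i0, polarization_deg, transmission_axis_deg):
        """Malus's law for an ideal linear polarizer.

        i0                    : intensity of the incoming, linearly polarized light
        polarization_deg      : polarization direction of the light (degrees)
        transmission_axis_deg : orientation of the filter's transmission axis,
                                perpendicular to the "lattice bars" (degrees)
        """
        theta = np.deg2rad(polarization_deg - transmission_axis_deg)
        return i0 * np.cos(theta) ** 2

    # Light polarized along the transmission axis passes completely ...
    print(transmitted_intensity(1.0, 45.0, 45.0))   # 1.0
    # ... light polarized along the "bars" (90° off) is blocked.
    print(transmitted_intensity(1.0, 45.0, 135.0))  # ~0.0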

The polarizing filter acts like a wall of bars that filters the light according to polarization.

This means that only light components with a certain polarization reach the RGB camera. In this way, images in which different polarization directions are highlighted can be taken in quick succession. This gives the system information about the angle at which each part of the object faces the camera, from which an intelligent algorithm can reconstruct its shape in 3D. However, the result is relative 3D information, i.e. without absolute distances.

Absolute distances are added by the image of the ToF camera, which on its own could not provide such high accuracy. The combination of the two methods can thus generate highly accurate, absolute 3D information. After the system's essential components, the liquid crystal modules and the polarizing filter (polarizer), the process is called Liquid Crystal Polarizatography.
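
How the two data sources can complement each other is shown below in a deliberately simplified sketch. The actual Scoobe3D reconstruction algorithm is not public; the sketch only illustrates the general idea that a finely resolved but scale-free depth map can be anchored to the coarse absolute ToF distances, here by fitting a scale and an offset:

    import numpy as np

    def fuse_depth(relative_depth, tof_depth):
        """Anchor a relative (scale-free) depth map to absolute ToF distances.

        relative_depth : finely resolved depth from the polarization images,
                         known only up to an unknown scale and offset
        tof_depth      : coarse but absolute distances from the ToF camera

        A least-squares fit of scale * relative + offset ~ tof recovers the
        missing absolute information while keeping the fine relative detail.
        """
        rel = np.asarray(relative_depth, dtype=float).ravel()
        tof = np.asarray(tof_depth, dtype=float).ravel()
        A = np.column_stack([rel, np.ones_like(rel)])
        (scale, offset), *_ = np.linalg.lstsq(A, tof, rcond=None)
        return scale * np.asarray(relative_depth) + offset

    # Tiny example: relative values 0..1 mapped onto absolute distances of 1.0-1.2 m
    rel = np.array([[0.0, 0.5], [0.5, 1.0]])
    tof = np.array([[1.0, 1.1], [1.1, 1.2]])
    print(fuse_depth(rel, tof))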

The only requirement for measuring with Scoobe3D technology is that there is no direct sunlight as backlight. Unsteady movements of the hand or the object can be largely compensated by the software, since a large image section - not just a point or a strip - is captured at high recording speed. In this way the Scoobe avoids holes in the 3D scan, laser spots or missing data sets.

The result is a 3D scan in a few seconds that reproduces the real object exactly, true to scale and proportion. As this technology is very new, no established areas of application are known yet; the first test reports will be worth watching for.

Stereovision - inexpensive procedure with limitations

Stereovision is a 3D scanning method inspired by nature, more precisely by the human eyes. Two ordinary RGB cameras take - like our eyes - two 2D images of an object from different angles. In principle, the object may move during this process. The 2D images are then combined into a three-dimensional image by triangulation.

A spatial image is created from two images of an object taken from different angles - a method based on the model of the human eyes.

For this method to succeed, each object point must be clearly assignable to a pixel in both 2D images. This requires reference marks, patterns or texture on the surface of the object. Consequently, the method is not suitable for capturing reflective or transparent surfaces.
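
Once corresponding pixels have been found, the depth of each point follows from triangulation. For a rectified stereo pair the relation is Z = f · B / d, where f is the focal length, B the distance between the cameras (baseline) and d the disparity, i.e. the pixel shift of the point between the two images. A minimal sketch with made-up camera parameters:

    def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
        """Depth of a point from a rectified stereo pair: Z = f * B / d.

        disparity_px    : horizontal pixel shift of the point between both images
        focal_length_px : focal length of the cameras, expressed in pixels
        baseline_m      : distance between the two cameras in meters
        """
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a visible point")
        return focal_length_px * baseline_m / disparity_px

    # Example: f = 800 px, baseline = 10 cm, disparity = 40 px  ->  2 m away
    print(depth_from_disparity(40, 800, 0.10))  # 2.0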

The finished 3D model does not contain any information about absolute size - just as, to the human eye, an aeroplane on the horizon looks like a model aeroplane in terms of scale. For the 3D visualisation of spaces that are dangerous or inaccessible to humans, the method is nevertheless popular. However, for those who cannot or do not want to do without scale accuracy, other options are available, such as laser triangulation.

Laser triangulation - the technology for specific cases

Laser triangulation is one of the simplest surveying methods. A line laser projects a sharply contoured line onto the object to be scanned, which is moved through the laser's field of view. The laser line is deformed by the surface geometry of the object. The deviations of the light line are continuously recorded and measured by a camera positioned at a known angle to the laser; the reference is the undeformed laser line. In this way, a series of height profiles is generated, which are combined into a three-dimensional image.

The height information is first displayed as raw data in a range map, as gray values coding the height. This data is not calibrated; the range map is purely pixel-based. It can therefore make height differences visible relative to one another, but does not provide concrete metric values. Calibration with the help of software converts the height information of the range map into a 3D point cloud.
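
What such a calibration does can be sketched in a few lines. In a strongly simplified pinhole model (assumed here purely for illustration), a lateral shift of the laser line in the image by Δx pixels corresponds roughly to a height change of h ≈ Δx · p · Z / (f · sin θ), with pixel pitch p, working distance Z, focal length f and the angle θ between camera and laser plane:

    import numpy as np

    def range_map_to_height(delta_px, pixel_pitch_m, working_dist_m,
                            focal_length_m, cam_angle_deg):
        """Convert lateral shifts of the laser line (in pixels) into heights (in meters).

        Simplified first-order pinhole model: a height change h shifts the laser
        line in the image by  delta = h * sin(theta) * f / (Z * p),
        hence  h = delta * p * Z / (f * sin(theta)).
        """
        theta = np.deg2rad(cam_angle_deg)
        return (np.asarray(delta_px) * pixel_pitch_m * working_dist_m
                / (focal_length_m * np.sin(theta)))

    # One row of a range map: line offsets of 0, 12 and 25 pixels
    offsets = np.array([0.0, 12.0, 25.0])
    heights = range_map_to_height(offsets, pixel_pitch_m=5e-6, working_dist_m=0.30,
                                  focal_length_m=0.016, cam_angle_deg=30.0)
    print(heights * 1000)  # heights in millimeters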

If the object moves at a constant speed during the measurement, a time-based sampling method can be used. If this is not the case, an encoder that tracks the object's movement can be used to compensate rotation and position deviations - in all six degrees of freedom. There are already devices that perform this calculation step internally and output finished 3D images, which saves the computation on the host computer. Other 3D cameras leave this step to a connected computer.
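
For the simple case of a purely linear transport movement, combining the individual height profiles into a point cloud is straightforward: the encoder only has to supply the transport position at which each profile was taken. The following sketch ignores the rotations and remaining degrees of freedom that real systems also correct:

    import numpy as np

    def profiles_to_point_cloud(profiles, encoder_positions_m, pixel_pitch_m):
        """Stack laser-line height profiles into an (N, 3) point cloud.

        profiles            : array of shape (num_profiles, points_per_line),
                              heights in meters
        encoder_positions_m : transport position of the object for each profile
        pixel_pitch_m       : lateral spacing of the measurement points
        """
        points = []
        for y, profile in zip(encoder_positions_m, np.asarray(profiles, dtype=float)):
            x = np.arange(profile.size) * pixel_pitch_m
            points.append(np.column_stack([x, np.full_like(x, y), profile]))
        return np.vstack(points)

    # Two profiles taken 1 mm apart along the transport direction
    cloud = profiles_to_point_cloud([[0.001, 0.002], [0.001, 0.003]],
                                    encoder_positions_m=[0.000, 0.001],
                                    pixel_pitch_m=0.0005)
    print(cloud.shape)  # (4, 3)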

Sharply contoured laser lines scan the object. Photo: Stefan Ladda, pixelio.

A prerequisite for the feasibility of laser triangulation is that the object to be scanned moves relative to the laser illumination and the camera. For each measuring sequence, the object must be returned to its initial position, but it does not have to be placed or moved very precisely thanks to the subsequent correction and smoothing of errors. This method also allows the detection of very large objects.

Another aspect that needs to be considered is the ambient light (for example, daylight falling through a nearby window). This can vary greatly depending on the time of day and the season and cannot always be controlled. It can significantly impair or falsify the measurement results of 3D sensors, or even cause the measurement to fail entirely.

Especially for applications in direct sunlight, this should be an essential point when choosing the measuring method. Under such circumstances, laser triangulation is only suitable to a limited extent, e.g. using high-power laser triangulation sensors. These can be combined with an upstream optical filter that blocks out the ambient light. Such high-performance sensors thus increase the quality of recordings under strong ambient light and at the same time reduce the recording time. On the other hand, the increased laser power also increases the risk of eye damage.

If all these steps are completed successfully, the result is a finished 3D scan. For the sake of completeness, it should be mentioned that the possible problems of laser triangulation include so-called shadowing, i.e. object features that are obscured by other surface shapes and therefore not detected. Because the height information at the affected location was never captured, it does not show up as an error in the correction step; shadowing instead manifests itself as a hole in the model.

If you want to solve this problem - with further use of the laser triangulation - you can use several cameras, which record the laser line from different angles. These varying data sets from the individual cameras are then combined as usual to form a height profile image. Another possibility is to use two differently positioned lasers. Both procedures reduce the risk of missing object data.

What unfortunately cannot be avoided is the speckle effect caused by laser triangulation sensors (a result of the reflection of coherent light from rough surfaces). These speckles make the method useless for measurements in the µm range. Structured light, for example, is ideally suited for such applications.

Structured light - the surgeon of 3D scanning

The results of structured light are very precise. The method is also based on triangulation, but the procedure is more complex than laser triangulation.

A car seat is 3D-scanned with structured light technology: coded light is projected onto the object in different stripe patterns.

Structured light uses coded light, which is projected onto the object in different patterns in the form of stripes, for example via high-resolution micromirror displays. The height structure of the object to be scanned deforms the light pattern, which is recorded by a camera at a known angle. The result is a sequence of 2D stripe images, which are converted into a 3D point cloud and then into a 3D image by means of a matrix.
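
A common way of coding the light is a sequence of binary Gray-code stripe patterns: each projected pattern contributes one bit, and the bit sequence seen at a camera pixel identifies the projector stripe that illuminated it, which can then be triangulated against the known projector geometry. A minimal decoding sketch (thresholding and calibration are heavily simplified here):

    import numpy as np

    def decode_gray_code(images, threshold=0.5):
        """Decode a stack of Gray-code stripe images into projector stripe indices.

        images : array of shape (num_patterns, H, W) with normalized intensities;
                 pattern 0 carries the most significant bit.
        Returns an (H, W) array with the stripe index seen at each camera pixel.
        """
        bits = (np.asarray(images, dtype=float) > threshold).astype(np.uint32)
        # Gray code -> binary: b[0] = g[0], b[i] = b[i-1] XOR g[i]
        binary = np.zeros_like(bits)
        binary[0] = bits[0]
        for i in range(1, bits.shape[0]):
            binary[i] = binary[i - 1] ^ bits[i]
        # Combine the bit planes into one integer index per pixel
        index = np.zeros(bits.shape[1:], dtype=np.uint32)
        for plane in binary:
            index = (index << 1) | plane
        return index

    # Tiny example: 2 patterns on a 1x4 image yield stripe indices 0..3
    imgs = np.array([[[0, 0, 1, 1]],   # most significant Gray bit
                     [[0, 1, 1, 0]]])  # least significant Gray bit
    print(decode_gray_code(imgs))      # [[0 1 2 3]]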

A prerequisite for the 3D scan is that the object measures no more than about 2 m and remains at rest. If the object moves during the scan, errors may occur in the data set. Once the object is in place, parameters such as exposure, filtering and analysis settings still need to be optimized. For the exposure, LED lighting is usually used, which does not pose any risk to eyesight.

If the measuring process is carried out correctly, data acquisition is completed within a few seconds. The result is extremely accurate, down to 5 μm, which is achieved by evaluating the light intensity in each individual camera pixel, whereas a laser scanner spreads this evaluation across several camera pixels. This measuring accuracy, combined with the absence of laser speckle, predestines the structured light method for applications in the micrometer range. It is used, for example, in industrial inspection.

Conclusion

Many roads lead to Rome: if you want to capture 3D data, you can do so using a variety of methods. As is so often the case, the key to success is knowing what matters most in one's own application.

The 3D industry is a fast-growing, highly innovative sector, from which innovations will certainly find their way to us again and again. It is an exciting time to be involved in 3D, a topic that will shape our future significantly.

The most popular 3D scanner technologies

Would you like to know more details about the 3D scan technologies laser triangulation, structured light, time-of-flight and the Scoobe3D technology?

We have summarised detailed information on these scan technologies, with advantages and disadvantages and a 3D scan example for each.

Download the free overview of 3D scanner technologies.
