Roberto Scopigno on digitization of Cultural Heritage

Roberto Scopigno and his team gave a very complete introduction to 3D digitisation techniques. Scopigno started with an overview of the different acquisition methodologies, explaining that they include image-based rendering (panoramic images, RTI images), standard CAD modelling (a manual process) and approaches based on sampling (3D scanning, which is active, and 3D from images, which is passive).


Some image-based rendering techniques are interesting for specific applications. For instance, if you do not need to move within your model, panoramic images can be enough. RTI (Reflectance Transformation Imaging) is done by putting your camera on a tripod, taking several images and changing the light position between shots; sometimes a light dome is used. With such images you cannot change the viewpoint, but you can change the way the image reflects light: for instance, you can move a virtual light and see how it affects the image. A technical implementation overview can be found here: http://culturalheritageimaging.org/Technologies/RTI/
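As a rough sketch of how RTI relighting works, here is the common Polynomial Texture Map (PTM) formulation with six fitted coefficients per pixel (my assumption for illustration, not necessarily the exact formulation used by the CHI tools; the array shapes and numbers below are hypothetical):

```python
import numpy as np

def relight_ptm(coeffs, lu, lv):
    """Evaluate a Polynomial Texture Map for one light direction.

    coeffs: (H, W, 6) per-pixel polynomial coefficients, fitted beforehand
            from the stack of photos taken under known light positions.
    lu, lv: projection of the chosen light direction onto the image plane.
    Returns the relit luminance image of shape (H, W).
    """
    a0, a1, a2, a3, a4, a5 = np.moveaxis(coeffs, -1, 0)
    return a0 * lu**2 + a1 * lv**2 + a2 * lu * lv + a3 * lu + a4 * lv + a5

# Hypothetical example: random coefficients, virtual light from the upper left.
coeffs = np.random.rand(480, 640, 6)
image = relight_ptm(coeffs, lu=-0.5, lv=0.5)
```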

Modelling vs. sampling: there are big differences between modelling approaches and sampling approaches. Modelling implies redrawing: for instance, before photography, painters made drawings of other painters’ paintings; with the arrival of photography they could start “sampling” the painting. The same holds for 3D models. Wonderful technology developed for the movie market permits the production of great 3D models, and such models are usually complete. On the contrary, sampled/scanned 3D models are usually incomplete, with several unsampled regions. If you want to communicate, modelled 3D models are great; if you want to study a building, sampling is interesting. In this lesson, Scopigno only talked about scanning/sampling techniques.

Triangulation and time-of-flight techniques: there are different 3D scanning devices. They all use active optical technologies, such as a laser and a camera; the regions that are not seen by both devices cause problems in the restitution. Some techniques use laser or structured light with triangulation. Triangulation is an old and simple approach (going back to Thales). Such systems are good for small/medium scale artefacts (e.g. statues); they reach high accuracy (around 0.05 mm) and a very dense sampling. Time-of-flight techniques measure the time a light pulse needs to travel from the emitter to the target point: a source emits a light pulse and starts a nanosecond clock. With time-of-flight techniques one can build large scale models (architecture); they work in a wide workspace, but accuracy is lower. As Scopigno explained, this is because sound is too slow and light is too fast.
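To make the difference between the two measurement principles concrete, here is a back-of-the-envelope sketch in Python (my own illustration with made-up numbers, not from the talk):

```python
import math

# Time of flight: distance = (speed of light * round-trip time) / 2.
C = 299_792_458.0  # m/s

def tof_distance(round_trip_seconds):
    return C * round_trip_seconds / 2.0

# A 1 ns timing error already corresponds to ~15 cm of depth error,
# which is why ToF scanners suit buildings better than small statues.
print(tof_distance(1e-9))  # ~0.15 m

# Triangulation: the emitter, the camera and the target point form a
# triangle; knowing the baseline and the two angles fixes the depth.
def triangulation_depth(baseline_m, angle_emitter_rad, angle_camera_rad):
    # Depth of the target point measured perpendicular to the baseline.
    return baseline_m / (1.0 / math.tan(angle_emitter_rad) +
                         1.0 / math.tan(angle_camera_rad))

print(triangulation_depth(0.2, math.radians(60), math.radians(70)))
```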

Remark: Kinect acquisitions are of lower quality than these techniques, but they have a better frame rate. If you have an application where dynamic acquisition is important, Kinect can be a good choice.

3D scanning pipeline: the 3D scanning pipeline includes the following steps: planning (where you are going to put your scanners), acquisition, editing (removing people, etc.), registration (aligning the coordinate systems of the different scans; four common points are enough), merging (based on a set of range maps, a single surface is computed, typically by another piece of software using, for instance, Poisson surface reconstruction), simplification (for use on a web page or for 3D printing) and texturing.
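For the registration step, aligning two scans from a handful of corresponding points boils down to estimating a rigid transform. A minimal sketch of that computation using the standard SVD-based (Kabsch) solution, assuming the correspondences have already been picked by hand (this is the general technique, not the specific software used in the course):

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t such that dst ≈ R @ src + t.

    src, dst: (N, 3) arrays of corresponding points picked on two scans
    (N >= 3 non-collinear points; the course mentions four are enough).
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```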

Remarks:

Merging: in some sense, merging destroys the data, creating an average shape; indeed, some architects prefer to work directly with the point clouds obtained from sampling. But merging is also a way of improving the accuracy of the model, removing noise through this smoothing process.
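As an illustration of what the merging software does, Poisson surface reconstruction is available, for instance, in the Open3D library (used here only as a stand-in for whichever tool Scopigno’s lab actually uses; the file names are hypothetical):

```python
import open3d as o3d

# Load the registered point samples (hypothetical file name).
pcd = o3d.io.read_point_cloud("scans.ply")
pcd.estimate_normals()  # Poisson reconstruction needs oriented normals

# Fit a single watertight surface through the samples; a higher depth keeps
# more detail (and more noise), a lower depth smooths more aggressively.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)
o3d.io.write_triangle_mesh("merged.ply", mesh)
```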

Simplification: 3D scanning tools produce huge meshes (from 4 million faces up to billions of faces). Data simplification is a must for managing these data on common computers. A standard simplification approach is edge collapse with quadric-based error control (QEM).
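A minimal sketch of QEM-style simplification, again using Open3D as a stand-in and a hypothetical target face count:

```python
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("merged.ply")   # hypothetical file name
print(len(mesh.triangles))                       # e.g. several million faces

# Quadric error metric (QEM) decimation: collapse edges, preferring the
# collapses that change the surface least according to the quadric error.
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=100_000)
o3d.io.write_triangle_mesh("web_version.ply", simplified)
```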

– Multiresolution encoding: multiresolution encoding can be built on top of simplification technology. The goal is to structure the data so that an optimal representation for the current view can be extracted from the model in real time (view-dependent models produced on the fly). This is particularly interesting when rendering terrains: the mesh gets coarser and coarser as it gets farther from the viewpoint, and zones outside the view frustum are very coarse. For multiresolution encoding, you have to keep all the intermediate levels of simplification. Some de facto standards exist for terrain (used in Google Earth). For objects there is currently no de facto standard, but Scopigno’s lab has developed a format called Nexus.
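A toy sketch of the view-dependent selection idea (my own simplification, not how Nexus actually structures its patch hierarchy): keep several levels of detail and pick, for each frame, the coarsest one whose geometric error still projects below a pixel threshold.

```python
def pick_level(distance_to_viewer, levels, pixel_error=1.0, px_per_metre_at_1m=800):
    """Choose the coarsest level whose geometric error still projects to
    less than `pixel_error` pixels on screen.

    levels: list of (mesh, geometric_error_in_metres) pairs, finest first.
    """
    for mesh, error in reversed(levels):           # try the coarsest first
        projected = error * px_per_metre_at_1m / max(distance_to_viewer, 1e-6)
        if projected <= pixel_error:
            return mesh
    return levels[0][0]                            # fall back to the finest level
```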

3d from images

The next lesson focused on image-based acquisition of 3D models. The principle of stereo acquisition was explained. If you want to extract geometric information from images, you can use assisted modelling or automatic stereo matching.

With assisted modelling you take different pictures of an object and start modelling.

SketchUp: you can even start with a single image and use SketchUp. SketchUp is a very strange modelling tool, but efficient in some contexts. If your image allows you to extract the main lines of perspective, you can rapidly model the 3D shape of an object. You need to find some features of the object that give two axes (ideally orthogonal) and draw the two vanishing lines. Partial calibration from a single photo is sufficient, provided the axes can be recovered.

Photogrammetry: for photogrammetry you need several images and to click on points that are common to the different images. These points permit estimating the camera position for each image. ImageModeler, for instance, is a commercial photogrammetry tool; another tool is PhotoModeler. For simple geometry you can get a very good reconstruction in a very short time.

In recent years a new class of algorithms has tried to completely automate these processes: multi-view stereo matching algorithms. In some sense the approach is the inverse of assisted modelling: you obtain a very large number of points (with a lot of errors), then remove the errors and keep a good number of points. You need to take pictures close to one another, so that the computer can match them easily.
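The first step of these automatic pipelines, finding and matching features across pairs of photos, can be sketched with OpenCV (used here only as an illustration, not as the actual code behind the tools mentioned below; the image file names are hypothetical):

```python
import cv2

img1 = cv2.imread("view_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_02.jpg", cv2.IMREAD_GRAYSCALE)

# Detect local features and compute their descriptors in both photos.
orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors and keep only the mutually best ("cross-checked") pairs;
# the remaining wrong matches are rejected later by geometric checks (e.g. RANSAC).
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} tentative correspondences")
```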

One interesting example of the use of such algorithms is the Photo Tourism project, which uses photos taken by tourists (http://phototour.cs.washington.edu/). A nice thing is that most of the programmers who produced these algorithms released their code: one can have a look at the PhotoSynth toolkit or the Python Photogrammetry Toolbox, and you don’t need to be a computer scientist to use them. Another example is Autodesk 123D Catch, which works on remote servers. Another solution is PhotoScan: very fast, works on local machines, directly produces a textured model, and is very robust and reliable.

To summarise, for these automatic approaches to succeed you need many features on the objects. On the contrary, assisted modelling approaches using, for instance, SketchUp can work for non-textured objects (e.g. big white buildings).

Color and appearance information on 3d models

The next part of the course focused on color and appearance information. Color is difficult: a general reflectance scattering function includes 12 variables (light and view directions, incident and outgoing surface points, wavelength, time), which makes it extremely complex to acquire. The most impressive renderings are done with 3D models that are built rather than acquired.
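Written out explicitly (in my own notation, not Scopigno’s slides), the general scattering function and the count of its variables look like this:

```latex
% 2 + 2 dimensions for the incident/outgoing surface points x_i, x_o,
% 2 + 2 for the light and view directions \omega_i, \omega_o,
% 1 + 1 for the wavelengths, 1 + 1 for time: 12 variables in total.
S(x_i, \omega_i, \lambda_i, t_i;\; x_o, \omega_o, \lambda_o, t_o)
```

In practice, appearance acquisition usually collapses this to something far simpler, such as a per-point BRDF $f_r(\omega_i, \omega_o)$ or even a plain diffuse texture.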

Usages of such technologies

To summarise, in this lesson we have seen an overview of three groups of techniques: 3D scanning (different methods), geometry processing (including measurements) and rendering (transforming bits into digital images). In addition there is a need for (semantic) repositories: the whole technical pipeline must be documented (which kind of scanner was used, which data processing, etc.).

Beyond rendering, you can use 3D printing to produce physical models. This is much safer than a “calco” (plaster cast) and causes no harm to the original, and costs are now becoming affordable. With a cast you can only reproduce at 1:1 scale, while with a digital model you can reproduce at any scale; moreover, casting may degrade the original, whereas digital acquisition is non-contact and safe.

You can also use 3D models for studying artworks.
