
Grant Agreement




Project full title

Planetary mapping




D 5.2

Deliverable Name

3D products of the merged GIS and

maps of Gale crater

Nature of deliverable


Dissemination level


Scheduled delivery date

31st August 2019





Prepared by:

Gwénaël Caravaca, Stéphane Le Mouélic, Nicolas Mangold

Table of contents

Executive summary

The scope of this deliverable is to provide integrated multiscale 3D models of the Kimberley outcrop and its local vicinity (Gale crater, Mars, traversed by the MSL rover Curiosity in 2014), according to the 2700 m x 2800 m working area defined in deliverable 5.1 (PlanMap consortium, 2019).
Data provided with this deliverable consist of three 3D models constructed using orbital-derived and ground-based data: a regional 3D model based on a HiRISE DEM, a photogrammetric DOM of the Kimberley outcrop itself based on Curiosity's Navcam and Mastcam imagery, and a photogrammetric micro-model of the Windjana drill hole based on Curiosity's MAHLI micro-imagery.
All base data used to compute the 3D models provided with this deliverable were retrieved from publicly available and accessible repositories. These models serve the PlanMap effort as 3D depictions of the Kimberley outcrop for geological mapping and characterization, and as a basis for the future implementation of measurement tools in Virtual Reality.

List of acronyms

Acronym: Signification

2D: 2 Dimensions
3D: 3 Dimensions
ChemCam: CHEMistry CAMera
CNRS: Centre National de la Recherche Scientifique
DEM: Digital Elevation Model
DOM: Digital Outcrop Model
DRT: Dust Removal Tool
GIS: Geographic Information System
GUI: Graphical User Interface
HiRISE: High Resolution Imaging Science Experiment
JPEG: Joint Photographic Experts Group
LA: Grey Level, Alpha
LIBS: Laser-Induced Breakdown Spectroscopy
LPG: Laboratoire de Planétologie et Géodynamique
MAHLI: MArs Hand Lens Imager
Mastcam: Mast Camera
MSL: Mars Science Laboratory
NASA: National Aeronautics and Space Administration
Navcam: Navigation Camera
PDS: Planetary Data System
PlanMap: Planetary Mapping
RGB: Red Green Blue
SfM: Structure-from-Motion
TIFF: Tagged Image File Format
VR: Virtual Reality
WP5: Work Package 5


Deliverable 5.2, led by CNRS-LPG, is focused on the release of 3D models of part of the Gale Crater (Mars), where the MSL rover Curiosity landed in August 2012. These products support the PlanMap project by providing an accurate, multiscale and realistic 3D depiction of the Kimberley outcrop area, traversed by the rover in 2014.
This multiscale (outcrop to regional) approach is important to better reconstruct and understand the Kimberley outcrop within its local geological context. The use of data at different scales (from orbital to ground-based) implies that several 3D meshes are constructed and then integrated together to produce the Kimberley geomodels provided with this deliverable.
Base data used to produce the 3D models provided with this deliverable (cf. Annex A) are presented in, and extracted from, the GIS project of deliverable 5.1 (PlanMap consortium, 2019). These models give useful insights for the characterization of geological features and for stratigraphic correlations.

Localization and Area of interest 

Since its landing in 2012 in the Gale Crater on Mars, the MSL rover Curiosity has explored the diverse geologic formations of Aeolis Palus, the northern plain of the crater (Fig. 1). During its mission, the rover has studied the diversity of rocks encountered at Gale, bringing new insights into the past environments of the planet, but also raising new scientific questions (e.g. Grotzinger et al., 2014; Le Deit et al., 2016; Stack et al., 2016).
As specified in PlanMap deliverable 5.1 (PlanMap consortium, 2019), WP5's work focuses on an area of the Gale crater encompassing Curiosity's traverse between sols 431 and 960, notably including the Cooperstown and Pahrump Hills outcrops. This area corresponds to the ~2700 x 2800 m red box on Figure 1, centred on the Kimberley outcrop, which Curiosity traversed between sols 603 and 630 (Fig. 1; PlanMap consortium, 2019).
The Kimberley outcrop consists mainly of a butte about 5 meters high (informally named Mt. Remarkable; Fig. 2) standing above flatter terrain. These features are easily recognized in orbital imagery, as seen on Figure 2. The outcrop shows siliciclastic series recording fluvial settings, like many outcrops observed since Bradbury Landing (Fig. 1; Le Deit et al., 2016; Stack et al., 2016; Rice et al., 2017). Yet this specific outcrop stands out for the unusually high abundance of potassium (2-5 wt.% of K2O) analysed in these clastic sediments (Le Deit et al., 2016), in contrast with other sedimentary rocks of basaltic composition lacking such potassium enrichment (e.g., Mangold et al., 2017).
While the high potassium abundance is likely detrital in origin (Treiman et al., 2016), the question of the local-to-regional stratigraphic relationship of this outcrop with the rest of the Gale sedimentary series remains open to further investigation, in the absence of outcrop continuity and of clear contacts with other units. In order to better constrain the structure, geometry and facies of this outcrop, we focused the work of this deliverable on the photorealistic reconstruction of a multiscale DOM of the Kimberley outcrop area, using all the available ground-based imagery (Curiosity rover) and a DEM from orbital imagery for the broader regional context, as described in deliverable 5.1 (PlanMap consortium, 2019).

Figure 1: View of Aeolis Palus and Aeolis Mons, Gale Crater, Mars, and the traverse of the MSL rover Curiosity (white line). The red box highlights the working area considered in this deliverable, which includes the Cooperstown, Kimberley and Pahrump Hills outcrops.


Figure 2: Curiosity rover at Kimberley during sol 595, as seen from orbit by Mars Reconnaissance Orbiter's HiRISE camera. The informally named "Mt. Remarkable" butte is indicated, as well as the approximate limit of the area reconstructed in the DOM provided with this deliverable (red dashed line). Base image: HiRISE ESP_036128_1755 full colour tile, NASA/JPL/University of Arizona.

3D models from orbital data 

These 3D models correspond to the regional scale of observation of the Kimberley area. Their spatial extent corresponds to the total area included in the red box on Figure 1, as delimited and described above and in deliverable 5.1 (PlanMap consortium, 2019). The 3D products are constructed using orbital-derived data to give a good view of the local to regional relations between the Kimberley outcrop and its vicinity, which is important for stratigraphic correlations. To that purpose, we produced a mesh from the orbital DEM, textured with three different visible layers to bring specific information to the end-user, in a GIS-like way.

Base data 

All the base data used in this deliverable are presented in, and available from, the GIS project provided with deliverable 5.1 (PlanMap consortium, 2019). To produce the 3D orbital models, a selected set of orbital data is extracted from the GIS project: the DEM elevation data used to construct the 3D mesh, and three visible layers to be draped onto the mesh as textures (a greyscale orthoimage, a colour orthoimage, and a geomorphological map).


HiRISE DEM

To construct the mesh of the orbital data model, we use an extract of the HiRISE DEM (Fig. 3; Parker & Calef III, 2016). Accuracy in the southeasternmost part of the map is reduced due to the presence of dunes in this area, which prevent a good reconstruction of the topography with the classical photogrammetric methods used by the original authors to produce the DEM. This part of the region is, however, outside our region of interest.
The DEM is extracted from the GIS project (deliverable 5.1; PlanMap consortium, 2019) as a 16-bit greyscale image file (cf. Annex A), in which each pixel value represents an elevation. These values are then used by the 3D modelling software to produce a relief corresponding to the elevation encoded in the DEM.


Figure 3: Digital Elevation Model of the working area (horizontal resolution of 1 m/px), obtained by photogrammetry from HiRISE imagery data.

HiRISE visible imagery 

The orbital data model is draped with different textures in the form of extracts of the HiRISE visible basemap (Figs. 4 and 5; Calef III & Parker, 2016). The first texture proposed here (Fig. 4) is a greyscale image displaying the highest resolution available for this area (25 cm/pixel), allowing the distinction of metre-scale geological features on the Martian surface. It is currently the most advanced imagery product available and serves as a base for geomorphological observations and interpretations (e.g. Grotzinger et al., 2014; see Geomorphological map and deliverable 5.1, PlanMap consortium, 2019). The second texture proposed here is the same greyscale basemap with an additional layer featuring an extract of the full-colour (processed using IDL/ENVI) HiRISE image ESP_036128_1755 (Fig. 5; the complete tile is available online). This image shows the position of Curiosity during sol 597, while arriving at the Kimberley outcrop (as seen in Fig. 2).
These image data are extracted from the GIS project (deliverable 5.1; PlanMap consortium, 2019) as both greyscale and colour image files (cf. Annex A), and are used as-is as textures for the DEM-derived 3D mesh.


Figure 4: High-resolution (up to 25 cm/pixel) greyscale image of the working area from the HiRISE instrument.

Figure 5: High-resolution (up to 25 cm/pixel) greyscale image of the working area from the HiRISE instrument, with a co-registered extract of the ESP_036128_1755 full colour tile.

Geomorphological map 

The final texture draped on the 3D mesh to produce the orbital data models is an extract of the regional (sub-basin)-scale geomorphological map of Aeolis Palus and the foothills of Aeolis Mons (Mt. Sharp) constructed by Grotzinger et al. (2014; Fig. 6). This map, largely based on orbital imagery observations discriminating several types of geological features and/or (apparent) formations, shows "orbital facies" corresponding to specific sets of morphological properties and/or observed geological structures on the surface of the northern part of the Gale crater.
This part of the larger geomorphological map of the basin is extracted from the GIS project (deliverable 5.1; PlanMap consortium, 2019) as a colour image file (cf. Annex A), and is used as-is as a texture for the DEM-derived 3D mesh.

Figure 6: Regional geomorphological map (after Grotzinger et al., 2014) of the working area, displaying 7 different "orbital facies". The Bradbury/Murray contact separates terrains of the Bradbury Group from the stratigraphically higher Murray Fm. Basemap is high-resolution HiRISE imagery (see Fig. 4).

Construction of the mesh 

The construction of the mesh for the orbital data 3D model is done using 3D modelling software. Several solutions, both commercial and open source, are available for this purpose, such as Autodesk 3DS Max or Maya (commercial) or Blender (open source). In this deliverable, the mesh is constructed using Autodesk 3DS Max, but the procedure can be achieved with any equivalent high-end modelling software.
The construction of the mesh itself starts with the creation of a surface whose dimensions correspond to the resolution of the DEM image file (x and y in Fig. 7). This surface is divided into a grid containing a certain number of independent polygons. The higher the polygon count, the finer the resolution of the mesh, allowing better accuracy in the reconstruction of the relief according to the DEM elevation data. Then, the DEM image file is attached to the surface (Fig. 7). Using the elevation data encoded in this attached file, the polygons of the surface are "displaced" along the vertical axis. This "displacement" (or "extrusion") of the surface thus creates a three-dimensional representation of the relief of the basemap (Fig. 7).
Once the mesh is constructed, a texture is draped onto the model to create an intelligible 3D map of the area. These textures have a shape and dimensions matching those of the mesh, to ease their mapping onto it (mapping being the process of displaying a texture over non-flat surfaces; Fig. 7).
Three different orbital data models are provided with this deliverable (cf. Annex A), but all three share the same base mesh created from the HiRISE DEM; only their textures differ, each bringing specific information (visible imagery and a geomorphological map in the case of this deliverable). The base model can therefore be reused in future products with new textures to convey new information, much as a user would add or modify layers in a GIS project.
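The displacement step described above can be sketched programmatically. The following minimal Python example illustrates the principle only (it is not the actual 3DS Max workflow; the function name, scale factors and tiny synthetic DEM are ours): one vertex per DEM pixel, displaced vertically by its grey level, and two triangles per grid cell.

```python
import numpy as np

def dem_to_mesh(dem, xy_scale=1.0, z_scale=1.0):
    """Turn a 2D DEM array into a displaced surface mesh: one vertex per
    pixel, two triangular faces per grid cell (the 'displacement' step)."""
    h, w = dem.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Vertices: grid positions in x/y, elevation-driven displacement in z.
    vertices = np.column_stack([xs.ravel() * xy_scale,
                                ys.ravel() * xy_scale,
                                dem.ravel() * z_scale])
    # Faces: split each grid cell into two triangles.
    faces = []
    for r in range(h - 1):
        for c in range(w - 1):
            i = r * w + c
            faces.append((i, i + 1, i + w))
            faces.append((i + 1, i + w + 1, i + w))
    return vertices, np.asarray(faces)

# Synthetic 4 x 4 "16-bit greyscale" DEM standing in for the HiRISE extract.
dem = np.arange(16, dtype=np.uint16).reshape(4, 4)
vertices, faces = dem_to_mesh(dem, xy_scale=1.0, z_scale=0.01)
print(vertices.shape, faces.shape)  # (16, 3) (18, 3)
```

A real DEM would of course be far larger, and the horizontal and vertical scale factors would come from the DEM product metadata rather than the placeholder values used here.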


Figure 7: Schematic depiction of the process used to create a 3D model based on orbital-derived DEM and imagery. x and y are the dimensions of the source files (DEM and textures) used to construct this model.

Orbital data models 

Model with HiRISE orthoimage 

This 3D model, provided with this deliverable (cf. Annex A), shows the relief derived from the HiRISE DEM. The texture is the HiRISE orbital visible greyscale orthoimage, with a maximum resolution of 25 cm/pixel (Fig. 8), allowing observation of the geological and geomorphological structures of the Martian surface at a relatively fine scale.

Figure 8: 3D model of the working area, with relief reconstructed using the HiRISE DEM and textured with the HiRISE greyscale orthoimage. Detail shows the Kimberley outcrop.

Model with HiRISE colour orthoimage 

This 3D model, provided with this deliverable (cf. Annex A), shows relief derived from the HiRISE DEM. The texture represents the HiRISE orbital visible greyscale orthoimage, with a maximum resolution of 25 cm/pixel, with an additional extract of the HiRISE colour image ESP_036128_1755, giving a more realistic rendering of this part of the modelled terrain (Fig. 9).

Figure 9: 3D model of the working area, with relief reconstructed using the HiRISE DEM and textured with the HiRISE greyscale orthoimage and a co-registered extract of the ESP_036128_1755 full colour tile. Detail shows the Kimberley outcrop.

Model with geomorphological map

This 3D model, provided with this deliverable (cf. Annex A), shows the relief derived from the HiRISE DEM. The texture is the geomorphological map of the area by Grotzinger et al. (2014), showing the 3D spatial distribution of the various orbital facies described in this part of the Gale crater (Fig. 10).


Figure 10: 3D model of the working area, with relief reconstructed using the HiRISE DEM and textured with an excerpt of the regional geomorphological map (after Grotzinger et al., 2014). Detail shows the Kimberley outcrop.

3D models from ground-based data 

These 3D models correspond to the local scale of observation of the Kimberley area (the outcrop itself). The spatial extent of the DOM roughly corresponds to the eastern flank of Mt. Remarkable, the low reliefs in the easternmost part of the outcrop, and the flat terrain in between, as delimited by the red dashed line in Figure 2. An additional detailed micro-model is also constructed to represent the Windjana drill hole, made by Curiosity on the lower flank of Mt. Remarkable for geochemical experiments. The 3D products are constructed by photogrammetry using ground-based data to reproduce the actual shape and geometry of the Kimberley outcrop at real scale. Image data were gathered by different imaging instruments on board Curiosity, with varying optical parameters and resolutions. Nevertheless, the quality of these data is high, with resolutions down to the sub-cm scale (even locally sub-mm scale for MAHLI images), allowing close observation of the various geological structures photographed on the ground by the rover.

Base data 

To produce the ground-based 3D DOM, we use a set of imagery data gathered by the MSL rover Curiosity, presented in deliverable 5.1 (PlanMap consortium, 2019). These images are publicly accessible in two main forms:

A formula for easy sol-by-sol access to these data in each repository is provided as a clickable layer in the GIS project delivered with deliverable 5.1 (PlanMap consortium, 2019). It can be used for direct access to the image data used in this work.
While Curiosity is equipped with 17 cameras (e.g. Alexander & Deen, 2015), for the DOM reconstruction effort of this deliverable we use imagery data only from the pair of navigation cameras (Navcam), the mast cameras (Mastcam), and the MArs Hand Lens Imager (MAHLI). The Navcam and Mastcam are mounted atop the 2-meter-high remote sensing mast (Fig. 11a), while the MAHLI sits on the mobile robotic arm of the rover. These cameras have been designed for different objectives. Their respective optical parameters (e.g. resolution, focal length, field of view) are summarized in Table 1, and Figure 11b compares the respective fields of view of the remote sensing mast cameras. These differences in optical parameters complicate the photogrammetric processing.
The Navcam greyscale wide-angle stereo pair images the vicinity of the rover to determine the trafficability of the area (Maki et al., 2012) and to identify scientific targets. Both full-frame images (1024 x 1024 pixels, "NAV") and degraded versions (256 x 256 pixels, "TRAV") taken by the Navcam are considered, the "TRAV" versions being acquired during each displacement of the rover. The latter are very useful as they contain significant overlap and often serve to connect the whole set of images. Straightforward photogrammetric reconstruction is generally obtained with Navcam images, as they are systematically acquired as a stereoscopic pair using the left and right cameras. Navcam greyscale images are useful to reconstruct terrains, but generally show only a low level of fine-scale detail (Table 1, Fig. 11b), limiting their use for science purposes.
The full-colour high-resolution Mastcam34 wide-angle and Mastcam100 telephoto cameras are a pair designed to retrieve more precise scientific information from the surroundings. Unlike the Navcam pair, however, their different focal lengths (34 mm and 100 mm; Table 1) result in poorly overlapping images (Fig. 11b). These dissimilar images thus hinder straightforward photogrammetric reconstruction, despite their obvious interest given their resolution.
Finally, the MAHLI is a high-resolution colour micro-imager. It can be operated locally to take images with good overlap and from different viewing angles, allowing an accurate reconstruction of very small-scale geological features (e.g. Edgett et al., 2015; Ostwald & Hurtado, 2017; Table 1).


Common designation | Abbreviation | Image size | Focal length | Field of view | Reference
Navigation Cameras | Navcam | 1024 x 1024 px (256 x 256 px for "TRAV") | 14.67 mm | 45° x 45° | Maki et al., 2012
Mast Camera (left) | MastCam left (ML or M-34) | 1600 x 1200 px | 34 mm | 18.4° x 15° | Malin et al., 2010; Bell et al., 2013
Mast Camera (right) | MastCam right (MR or M-100) | 1600 x 1200 px | 99.9 mm | 6.3° x 5.1° | Malin et al., 2010; Bell et al., 2013
MArs Hand Lens Imager | MAHLI | 1600 x 1200 px | 18.4 mm | 26.8° x 20.1° to 31.1° x 23.3° | Edgett et al., 2015

Table 1: Optical parameters of the Navcam, Mastcam and MAHLI imagers on board Curiosity used to take the images for the photogrammetric reconstruction of the Kimberley DOM.
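As a sanity check on Table 1, the angular field of view along a detector axis can be recomputed from the focal length and the detector geometry. The short sketch below assumes nominal pixel pitches (12 µm for the Navcam CCD, 7.4 µm for the Mastcam CCDs) that are not stated in this deliverable; small differences from the tabulated horizontal values reflect rounding and the exact active detector area.

```python
import math

def fov_deg(n_pixels, pixel_pitch_mm, focal_mm):
    """Angular field of view along one detector axis: 2*atan(half_width / f)."""
    half_width_mm = n_pixels * pixel_pitch_mm / 2.0
    return 2.0 * math.degrees(math.atan(half_width_mm / focal_mm))

# Assumed pixel pitches (not given in this document): Navcam 12 um, Mastcam 7.4 um.
print(fov_deg(1024, 0.012, 14.67))   # Navcam: ~45 deg (Table 1: 45° x 45°)
print(fov_deg(1200, 0.0074, 34.0))   # Mastcam-34, vertical: ~15 deg (Table 1: 15°)
print(fov_deg(1200, 0.0074, 99.9))   # Mastcam-100, vertical: ~5.1 deg (Table 1: 5.1°)
```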


Figure 11: a) Detail of the "selfie" picture of Curiosity taken on sol 1943 by the MAHLI imager, showing the mast of the rover and the positions of the different Navcam and Mastcam imagers. ML: left Mastcam, focal length = 34 mm; MR: right Mastcam, focal length = 100 mm (base image PIA22207, NASA/JPL-Caltech/MSSS). b) Composite image showing the variations in resolution, colorization and field of view of the different Navcam and Mastcam (MR and ML) imagers on board Curiosity for a given target between sols 603 and 604 (images 0604MR0025500120400908E01_DRCL, 0603ML0025450060301382E01_DRCL and NLB_451026778RADLF0311094NCAM00271M1).

The imagery dataset used in this work consists of 2005 images from the MSL archive, taken during sols 597 to 630. We use 638 Navcam images, 1505 Mastcam images (890 Mastcam left, 615 Mastcam right) and 32 MAHLI images. These images were downloaded from the PDS servers rather than the MSL raw image gallery, in order to obtain the original source files for both the image data (.IMG file) and the associated metadata (stored in a .LBL file).

Construction of the mesh using photogrammetry 

To reconstruct the 3D DOM of the Kimberley outcrop, we use photogrammetry, a technique that recreates the three-dimensional shape of an object from 2D photos. DEMs obtained from orbital imagery are also produced by photogrammetry, more specifically stereophotogrammetry (working in a way similar to human vision). In this deliverable, we use Structure-from-Motion photogrammetry (e.g. Ullman, 1979; Westoby et al., 2012) to obtain 3D meshes from the set of overlapping photos gathered by the Curiosity rover during its traverse of the Kimberley outcrop in 2014.

Basics of Structure-from-Motion photogrammetry 

SfM photogrammetry relies on a suite of algorithms designed to transform a set of multiple overlapping images into geometric 3D meshes (e.g. Ullman, 1979; Westoby et al., 2012). The workflow for SfM reconstruction starts with the alignment of the input 2D images to detect tie-points across at least two of the images, taken from multiple points of view (e.g. red points in Fig. 12). The positions of these tie-points and their apparent displacement across several photos allow the SfM algorithms to calculate their X, Y, Z positions within a 3D space, given the optical parameters and positions of the cameras. From there, the projection of each tie-point into the 3D space creates a sparse point cloud of a few to several thousand points (Fig. 12), representing the positions of these tie-points, and thus the overall 3D geometry of the targeted object.
Using the sparse point cloud as a base, the algorithm then projects additional points present on the input images into the 3D space to create a finer dense point cloud of up to several million points (Fig. 12). The dense cloud contains a large amount of 3D information (e.g. coordinates, colour, normal vectors of the points). It could already be used as-is for geological analysis (e.g. Tavani et al., 2014; Triantafyllou et al., 2019). It also serves as the frame to compute the 3D mesh: all points in the dense cloud are linked by a complex grid of vertices delimiting the triangular polygons that compose the actual resulting 3D mesh of the targeted object (Fig. 12). While the vertices and polygons are coloured according to the original input images (Fig. 12), a texture can also be built (or UV-mapped), allowing the 3D mesh to be draped using parts of the original input images to obtain a final photorealistic 3D model (e.g. Tavani et al., 2014; Fig. 12).
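The core triangulation step can be illustrated with a toy two-view example. The sketch below uses the generic linear (DLT) triangulation found in photogrammetry textbooks, not necessarily the exact algorithm of any particular SfM package; the camera matrices and the 3D point are synthetic.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one tie-point seen in two images.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image coordinates.
    Returns the 3D point in the common world frame."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The 3D point is the (homogeneous) null vector of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two idealized cameras separated by a 1 m baseline along x, and one tie-point.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate_point(P1, P2, x1, x2))  # recovers ~[0.5, 0.2, 4.0]
```

SfM packages repeat this operation for every tie-point while simultaneously refining the camera parameters themselves (bundle adjustment), which is what turns thousands of 2D detections into the sparse point cloud of Figure 12.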


Figure 12: Schematic depiction of the Structure-from-Motion photogrammetric process to reconstruct a 3D textured mesh of a targeted object based on a set of overlapping images taken from varied points of view. Software detects similar tie-points across the images, and then projects them into a 3D space as point clouds (sparse then dense). These points then serve as a base to generate a 3D mesh by triangulation of the points into vertices-delimited polygons. A texture based on the original images can be generated and mapped over the 3D mesh to obtain a photorealistic rendition of the object. 

Choice of relevant solutions for Martian DOM reconstruction 

To ensure the most accurate reconstruction of the Kimberley DOM, we set up a comprehensive methodology, selecting the most capable software given the different characteristics of our input images from Curiosity, and the most suitable formatting of these images.


Several commercial and open-source SfM solutions are available (e.g. Martell et al., 2018): VisualSFM (open source), MicMac (open source), Capturing Reality RealityCapture (commercial) or Agisoft PhotoScan/Metashape (commercial). However, while these software packages rely on essentially the same suite of SfM algorithms to construct 3D meshes, their properties, interfaces, performance and input parameters may differ widely from one another. Moreover, the specificities of planetary data restrict the use of some of them.
Several tests were thus conducted by the WP5 team to select the best solution and pipeline for the DOM reconstruction, according to three main criteria: performance of the software, ability to reconstruct a geologically and geomorphologically accurate 3D mesh, and ease of configuration. Using the same set of data (642 Navcam images of the Kimberley outcrop), we tried to reconstruct a first approximation of the expected DOM with VisualSFM (v0.5.2x), RealityCapture (v1.0.3) and PhotoScan Professional (v1.4.4) to assess these criteria. The results of these tests are as follows:

  • VisualSFM: The GUI of the software did not allow us to fine-tune specific parameters such as georeferencing or the alpha layering used to produce masks for the Navcam images. Notwithstanding these configuration problems, the suite of algorithms used by this software was not powerful enough to construct a reasonably correct model of the Kimberley outcrop.
  • RealityCapture: This software offers an easy-to-handle GUI, with the ability to fine-tune some critical parameters. However, it is not possible to manually add a missing georeferencing system such as that of Mars. The software is quite powerful in reconstructing a first DOM of Kimberley based on Navcam imagery (Fig. 13a). The reconstruction is mostly accurate, down to the rover's tracks (green arrow on Fig. 13a). The Mt. Remarkable butte is mostly reconstructed, except for its northern part, as shown by the red arrow on Figure 13a. Also, while the quality of the model is high, several drawbacks are present: a major hole in the middle of the model where images could not be aligned together (yellow arrow, Fig. 13a), and "exotic" blocky polygons in the mesh in the western part of the outcrop (blue arrow, Fig. 13a). These misshapen polygons are due to the "depth map" processing of the mesh by this software (mesh generation by reprojection of the points using a map of their spatial coordinates, rather than from the point cloud as illustrated in Figure 12) and therefore cannot be controlled or constrained.
  • PhotoScan: This software also offers a very easy-to-handle GUI, with the ability to fine-tune all critical parameters. It features powerful georeferencing tools, with the possibility of manually adding missing systems, such as that of Mars. The software is also powerful in reconstructing a first DOM of Kimberley based on Navcam imagery (Fig. 13b). Here too, the reconstruction is mostly accurate, down to the rover's tracks (green arrow, Fig. 13b). The Mt. Remarkable butte is reconstructed to a greater extent than in RealityCapture (Fig. 13), notably in the northern part pointed to by the red arrow in Figure 13b. A hole is also present in the middle of the central flat terrains (yellow arrow, Fig. 13b), but it is much smaller, restricted to roughly the size of the MSL rover (whose body "shadowed" the terrain underneath), as opposed to the one in the model produced by RealityCapture (Figs 13a and 13b). Finally, the blocky polygons seen in the previous model are absent with PhotoScan, owing to a different processing of the mesh, which uses the dense cloud (as illustrated in Figure 12). However, the borders of the mesh appear rough and need some cleaning (blue arrow, Fig. 13b).

Using this testbed, VisualSFM was rapidly discarded, as it was not adapted to our dataset and did not produce the expected results. Of the remaining two solutions, we finally chose Agisoft PhotoScan Professional (v1.4.4, Agisoft LLC, 2018), which appeared to provide a good trade-off between performance and efficiency in handling Curiosity's images. It is also one of the most widely used solutions in the community for similar geological applications (e.g. Verhoeven, 2011; Arbués et al., 2012; Tavani et al., 2014; Ostwald & Hurtado, 2017; Triantafyllou et al., 2019).

Figure 13: Comparative results of the reconstruction (in first approximation) of the Kimberley DOM using Navcam imagery only, by a) RealityCapture v1.0.3 and b) PhotoScan Professional v1.4.4. Green arrow: reconstruction of the rover's tracks on the DOM. Red arrow: reconstructed northern part of the Mt. Remarkable butte. Yellow arrow: "hole" in the DOM mesh due to errors in the alignment of the base images. Blue arrow: western edge of the mesh, showing "exotic" blocky polygons in a) and rough borders in b), due to the different processing of the data (depth map for RealityCapture, dense point cloud triangulation for PhotoScan).

Input images: formats and metadata 

Going forward with PhotoScan, we had to test several different parameters to ensure the maximum efficiency of the photogrammetric processing, given that images gathered by Curiosity are not optimally designed for that purpose (see more on these limits below). We made several iterative tests using the same set of Navcam images as in the software test, this time changing image format parameters and metadata as follows: image format, image colorization and georeferencing.
To test the image format, we tried the standardized JPEG and TIFF formats:

  • JPEG offers lightweight and universal encoding of the image; however, it is a lossy compressed format and cannot store any information beyond the colorization of each pixel (no alpha channel layering).
  • TIFF offers lossless encoding of the image, enabling high-fidelity storage of the data at the expense of a larger file size. TIFF also offers the possibility of storing additional levels of information per pixel, thus allowing multi-layering (for example an alpha channel, which will be useful to mask rover hardware in Navcam images). It natively supports georeferencing too (via the GeoTIFF norm).

For our purpose, the size of the data not being an issue, we then selected the TIFF format.
To test the image colorization, and in order to reconcile the Navcam and Mastcam data sets, we tried the RGB and LA colorization parameters:

  • RGB: Red Green Blue colorization is the standard full-colour colorization, used for the great majority of image data. This colour encoding transcribes a large part of the visible spectrum as a numerical combination of three base channels (red, green and blue). Both polychromatic and monochromatic images can be encoded in RGB and rendered in their original form.
  • LA: Grey Level and Alpha is a standard colorization for monochromatic (greyscale) data, with an additional alpha layer to support transparency. This encoding transcribes the intensity of the received light (from "full light" white to "no light" black) as a numerical value stored in a single channel. Both polychromatic and monochromatic images can be encoded in LA; however, polychromatic images will not be rendered in their original full colours.

In our case, the Navcam imagery is obtained by monochromatic cameras, producing greyscale images. However, the formatting of these images for further use transforms them into RGB images. We wanted to assess whether using the original simplified encoding of the images (single-channel LA vs. multi-channel RGB) would improve the photogrammetric processing. For the Kimberley outcrop, we obtained the following results:

  • RGB (original multi-channel “greyscale” images): 491/642 aligned photos (76%)
  • LA (simplified mono-channel greyscale images): 431/642 aligned photos (67%)

It appears that even though the RGB format carries more information than necessary, better results are obtained with this multi-channel encoding. This might be because Agisoft PhotoScan is designed and optimized to process full-colour images rather than greyscale pictures. Moreover, RGB is also the more pertinent choice since Mastcam images are full-colour images.
We finally tested the impact of georeferencing the images to improve image alignment. As the precise localization of Curiosity at the time each image was taken is known, this information can be entered into the image metadata. PhotoScan is then able to read the position of each image and place it directly within its internal workspace (Fig. 14a). This saves processing time and resources when the software searches for tie-points, and adds geographic control on the position and scale of the processed model. It also makes it possible to check the placement of the images before photogrammetric processing, and to remove misplaced images if needed.
To summarize, the input images used to reconstruct the DOM with PhotoScan Professional are georeferenced RGB images in TIFF format.

Kimberley Digital Outcrop Model 

Pre-processing of the imagery dataset 

Before any further processing, the raw data downloaded from the PDS archive need to be converted into files matching the properties tested above and readable by PhotoScan. The raw .IMG image data are first converted into .TIFF files using the IMG2PNG command-line software. Then, EXIF metadata extracted from the .LBL label files, such as focal length, pixel size and geographic coordinates (projected into the IAU Mars 2000 spherical projection; Seidelmann et al., 2002), are incorporated into the .TIFF images using the ExifTool software (Harvey, 2013). Lastly, an alpha-layer mask is added to the .TIFF images to hide rover elements present in the field of view: automatically for Navcam images, as masks are procedurally generated and already available on the PDS (Alexander & Deen, 2015), or manually drawn in the case of Mastcam images. The metadata and alpha-layer masks are then taken into account by the SfM software during the alignment process to improve processing efficiency.
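The masking step can be sketched as follows, assuming (for illustration only) that an image and its rover-hardware mask are held as flat Python lists; the actual pipeline operates on the TIFF files and the PDS-provided mask products:

```python
# Minimal sketch of the alpha-layer masking step. The pixel-list
# representation is a simplification for illustration purposes.

def apply_alpha_mask(pixels, mask):
    """Attach an alpha channel to RGB pixels: alpha = 0 (transparent)
    where the mask flags rover hardware, 255 (opaque) elsewhere."""
    assert len(pixels) == len(mask)
    return [(r, g, b, 0 if flagged else 255)
            for (r, g, b), flagged in zip(pixels, mask)]

image = [(120, 120, 120), (80, 80, 80), (200, 200, 200)]
rover_mask = [False, True, False]   # second pixel covers rover hardware
masked = apply_alpha_mask(image, rover_mask)
assert masked[1] == (80, 80, 80, 0)       # hidden from the SfM matcher
assert masked[0] == (120, 120, 120, 255)  # kept for tie-point search
```

Transparent pixels are ignored during the alignment, which prevents the static rover hardware from generating spurious tie-points between images taken at different locations.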

Camera alignment and sparse point cloud generation 

The variations in optical parameters between the different MSL cameras prevent any automated optimal alignment of the whole set of images during the first step of the photogrammetric process. Several other important limitations are also inherent to rover images: non-optimal or inadequate points of view, non-overlapping images, non-consecutive views and/or changing lighting. To handle this problem, we first took advantage of the software’s embedded advanced geospatial features to preposition the images in its workspace (Fig. 14a), using the geographic coordinates retrieved from the PDS data. By doing so, PhotoScan is also able to produce a geographically constrained and scaled model. Figure 14a shows the positions of the georeferenced Navcam images within PhotoScan’s workspace (Fig. 14a, right), perfectly mirroring the traverse of Curiosity as obtained from the PDS data (Fig. 14a, left). 

Figure 14: a) Georeferencing of the input images within PhotoScan constrains the position of the cameras in the software’s workspace. Left: actual path of Curiosity’s traverse between sols 597 and 630 (white line; base image: HiRISE ESP_036128_1755_COLOR full-colour tile, NASA/JPL/University of Arizona). Right: position of 530 aligned Navcam images (blue frames) within PhotoScan’s workspace, mirroring the rover’s real traverse. b) Sparse point cloud obtained after alignment of 530 Navcam greyscale images. c) Dense cloud obtained from the aligned Navcam greyscale images. d) Dense cloud obtained using 530 Navcam greyscale images and 1443 Mastcam colour images, based on the positions of the Navcam images from the initial alignment. e) Low-resolution greyscale mesh triangulated from the Navcam dense cloud. The detail shows that resolution is limited to the dm scale, yielding a blurry and inaccurate mesh below that level. f) High-resolution full-colour mesh triangulated from the Navcam plus Mastcam dense cloud. The detail shows a higher resolution (down to the sub-cm scale), providing a sharp and accurate model even at very close range. 

A first alignment is computed using only the 638 Navcam images. After several iterations of the alignment process, first with the degraded “TRAV” Navcam products only and then with all the Navcam images (“TRAV” and “NAV”), we obtain a first sparse point cloud (Fig. 14b) composed of 565 271 points, with 530 cameras aligned out of 638 (~83 %). The general shape of the Kimberley outcrop is recognizable in this sparse cloud, with the highest density of points located in the central part of the outcrop, where a larger number of higher-quality images was acquired (Fig. 14b). At this stage, outliers and misaligned cameras can be identified by direct comparison with the source 2D images, and are removed. The positions of the aligned Navcam cameras are saved for the next steps. 

Dense clouds generation 

From this alignment and sparse point cloud, a dense point cloud is computed (Fig. 14c). This cloud is composed of 30 317 710 points providing a precise spatial coverage of the Kimberley outcrop area (Fig. 2). The level of detail varies with the density of images covering a given target and with the distance from which the images were taken. Furthermore, this dense cloud is greyscale only, following the Navcam colorization. Its resolution is not sufficient to fulfil our scientific needs. In order to reconstruct the most accurate and precise DOM possible for the Kimberley outcrop, we also need to integrate the Mastcam wide-angle and telelens colour images.
Mastcam images cannot be correctly aligned if treated simultaneously with the set of Navcam images, due to the lack of overlap, the significant difference in focal length (14.67 mm vs 34 mm vs 100 mm; Table 1 and Fig. 11b) and the differing illumination conditions. Still, the Mastcam imagers are located very close to the Navcam imager on Curiosity’s mast, resulting in very close camera positions and a relatively similar viewpoint when imaging the same targets (see Fig. 11a). Therefore, to align the Mastcam images, we reimported into our workspace the previously saved positions of the 530 aligned Navcam cameras. PhotoScan was then able to align and project ~96 % of the Mastcam images (856/890 and 587/615 for the ML and MR, respectively), resulting in a highly detailed dense point cloud (Fig. 14d). This dense cloud differs from the Navcam-only cloud by its number of points (100 416 052) and by a much higher level of detail, while the spatial extent covered by the model remains similar (~2780 m²). “Traces” of colorization due to the use of RGB images are also characteristic of the points derived from the projection of the Mastcam images.
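A quick back-of-the-envelope check of the figures above (point counts over a ~2780 m² footprint) illustrates the gain in average point density:

```python
import math

# Rough consistency check using the dense-cloud figures stated above:
# both clouds cover roughly the same ~2780 m² area.
area_m2 = 2780.0
navcam_pts = 30_317_710
navcam_mastcam_pts = 100_416_052

for label, n in [("Navcam only", navcam_pts),
                 ("Navcam + Mastcam", navcam_mastcam_pts)]:
    density = n / area_m2                         # points per m²
    spacing_cm = 100.0 * math.sqrt(area_m2 / n)   # mean point spacing
    print(f"{label}: ~{density:,.0f} pts/m², ~{spacing_cm:.1f} cm mean spacing")
```

The Navcam-only cloud thus averages on the order of one point per centimetre, while the combined cloud roughly halves that mean spacing, consistent with the higher level of detail reported for the merged dense cloud.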

Mesh generation, texturing and export 

The next step in the photogrammetric reconstruction process is to generate a 3D mesh from each of the dense point clouds. To compare the variation in resolution and quality of the resulting meshes, similar parameters are set: a count of ~3.5 million polygons and a texture composed of 8 tiles of 8192 x 8192 pixels each.
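These parameters imply the following texture budget (simple arithmetic on the values just stated):

```python
# Texture budget implied by the meshing parameters above:
# 8 tiles of 8192 x 8192 pixels for a ~3.5-million-polygon mesh.
tiles, tile_px = 8, 8192
polygons = 3_500_000

total_px = tiles * tile_px ** 2
print(f"{total_px:,} texture pixels (~{total_px / 1e9:.2f} gigapixels)")
print(f"~{total_px // polygons} texture pixels per polygon on average")
```

Holding this budget fixed for both meshes ensures that the comparison in the next paragraphs reflects the quality of the underlying dense clouds rather than differing export settings.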
The mesh in Figure 14e has been triangulated from the Navcam-only dense cloud (Fig. 14c) and textured accordingly using Navcam imagery only. Like the dense cloud, the DOM shows a good spatial coverage: geological structures of the outcrop (e.g. sandstone beds) are identifiable, as well as the tracks made by Curiosity in the Martian dust. However, several holes are present in the model, due to gaps in the projection of the original images that the software could not interpolate correctly (Fig. 14e); they were already present in the dense cloud (Fig. 14c). The resolution of the model and its associated textures allows us to observe a level of detail down to the 10 cm scale (Fig. 14e). Below that limit, textures become too blurry to distinguish any significant feature (e.g. pebbles), and the mesh itself is not resolved enough to accurately reproduce the shape of objects (e.g. the sandstone bed detailed in Fig. 14e, which is identifiable at midrange but becomes indistinguishable in a close-up view).
The mesh in Figure 14f has been triangulated from the Navcam plus Mastcam dense cloud (Fig. 14d) and textured using Mastcam colour imagery only. The full-colour texturing of this DOM is a significant improvement over the other mesh (Fig. 14e), as it allows both the observation of real colours and of much finer details of this Martian outcrop. Like its lower-resolution counterpart (Fig. 14e), this DOM provides a good spatial coverage of the outcrop area; moreover, fewer holes are observed, enhancing the overall quality of the mesh (Fig. 14f). This DOM shows a much higher resolution in both mesh and textures, allowing the visualization of details at the cm scale and revealing key features for the scientific investigation. For example, the sandstone bed that appeared blurry and roundish in the Navcam DOM (detail in Fig. 14e) now shows sharp edges on this highly resolved mesh (detail in Fig. 14f). This DOM allows us to observe geological features at the cm scale over the whole reconstructed area, and below the cm scale at specific points where the rover obtained close-range images. Nevertheless, its resolution varies and decreases toward the edges of the mesh, as expected given the increased distance to the targeted objects.
Before exporting the DOM, the mesh was manually cleaned to suppress rough edges and “floating” polygons generated on the margins of the model. After this step, the overall area of the model decreased from ~2780 m² to a useful area of 1625.8 m². The mesh was then exported in Wavefront .OBJ format for use in other software packages, along with its 8 textures as .JPG images and an associated .MTL library file (cf. Annex A).
A textured low-resolution version of the Kimberley DOM can be seen on the Sketchfab public repository, and is also provided with this deliverable (cf. Annex A). 

Validation of scaling and georeferencing 

Even though PhotoScan Professional is able to scale the 3D model using the absolute spatial positions of the georeferenced cameras, a verification of the scaling is needed to ensure the correct dimensions of the model. On Earth, it would be easy to constrain the scaling using markers of a given size and/or Ground Control Points (GCP) with fixed, predetermined GPS-controlled coordinates (e.g. Tavani et al., 2014; Triantafyllou et al., 2019). In our case, no such tools are available; we therefore rely on indirect observation and cross-comparison with orbital imagery to validate our scaling. This is achieved using Curiosity’s wheel tracks (the wheels are 40 cm wide; Heverly et al., 2013), which are present in several images and therefore in the final texture (Figs. 14 and 15).
The scaling and orientation of the Kimberley DOM were validated in the CloudCompare software (v.2.9, Girardeau-Montaut, 2015) using georeferenced orbital HiRISE products as a reference. We were able to match our DOM to the HiRISE DEM for most of the mesh, the only noticeable exception being a sandstone bed on the northern flank of Mt. Remarkable (Fig. 15). This step allowed us to validate both the scaling and the georeferencing of our DOM within the Mars 2000 system.
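The wheel-track check amounts to a simple ratio; the measured width used below is hypothetical, for illustration only:

```python
# Sketch of the indirect scale check described above: Curiosity's
# wheels are 40 cm wide (Heverly et al., 2013), so a wheel track
# measured on the model should match that known width.

KNOWN_TRACK_WIDTH_M = 0.40

def scale_factor(measured_width_m):
    """Correction factor to apply to the model so that the measured
    wheel track matches its known physical width."""
    return KNOWN_TRACK_WIDTH_M / measured_width_m

# A hypothetical track measured at 39.2 cm on the DOM would imply the
# model is about 2 % too small:
assert abs(scale_factor(0.392) - 1.0204) < 1e-3
assert scale_factor(0.40) == 1.0   # perfectly scaled model
```

A factor close to 1.0 indicates the camera-based georeferencing already produced a correctly scaled model.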


Figure 15: High-resolution full-colour DOM of the Kimberley outcrop, scaled and fitted on the georeferenced and scaled HiRISE DEM of Gale Crater (with the HiRISE colour texture draped onto it), showing nearly perfect alignment between the two meshes (except for the northernmost part of Mt. Remarkable). A 1:1 model of the Curiosity rover is placed for scale.

Windjana drill DOM 

The drill holes made by Curiosity are a major part of the geological and geochemical experiments in Gale. Since such an experiment was performed at the Kimberley outcrop, with the drilling of the Windjana hole, it was important to integrate an accurate reconstruction of this “rover-made” structure into our DOM.
To that end, an additional photogrammetric mesh of the Windjana drill hole was computed using a standalone set of MAHLI microscopic images (Fig. 16). This model adds a sub-mm-scale level of detail to our outcrop-wide model.
Due to both their different optical parameters (cf. Table 1) and camera position (at the tip of the robotic arm vs on top of the remote sensing mast), the MAHLI images have to be processed separately from all other images. Moreover, their field of view is usually too small to be correlated with Navcam and Mastcam images. Therefore, a complete run of the photogrammetric process is performed on 32 MAHLI images of the Windjana drill hole. First, the photos are aligned, resulting in a sparse cloud of 101 494 points with all 32 cameras aligned. From this sparse cloud, a dense cloud of 3 620 962 points is computed. Finally, a 3D mesh is generated from the dense point cloud, resulting in a ~5 x 7 cm model (Fig. 16) with 724 421 polygons and one texture with a size of 8192 x 8192 pixels.
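A rough upper bound on the texture resolution follows from these numbers, assuming (simplistically) that the ~7 cm long side of the model maps onto the 8192-pixel texture width; real UV mapping is not that linear, so this is only an order-of-magnitude estimate:

```python
# Order-of-magnitude estimate of the Windjana micro-model's texture
# resolution, from the ~5 x 7 cm model size and its 8192-pixel texture.
model_long_side_mm = 70.0   # ~7 cm long side of the model
texture_px = 8192

mm_per_px = model_long_side_mm / texture_px
print(f"~{mm_per_px * 1000:.1f} µm per texture pixel")
assert mm_per_px < 0.01   # comfortably sub-mm, consistent with the text
```

Even under this crude assumption, the per-pixel footprint is on the order of ten micrometres, which is consistent with the sub-mm features (LIBS burn marks, drill powder) visible on the model.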
Figure 16 shows the exceptional resolution of this micro-model, with details such as the powder generated by the drilling process around both the test and definitive drill holes. The area brushed off by the DRT instrument is also visible on the model, as well as the sub-mm-scale burn marks (“laser shots”) made by the LIBS measurements of the ChemCam instrument (Maurice et al., 2012).
A textured version of the Windjana drill DOM can be seen on the Sketchfab public repository, and is also provided with this deliverable (cf. Annex A).


Figure 16: High-resolution full-colour DOM of the Windjana drill performed at Kimberley by Curiosity. The sub-mm scale and resolution of the model make it possible to distinguish very small details, such as the sub-mm burn marks left by the LIBS experiments on the bedrock.

3D models of the study area 

All the 3D models presented in this deliverable provide different but complementary information for the study of the Kimberley outcrop and its vicinity at various scales: from the regional scale, using orbital-derived data, to the scale of individual structures, using ground-based data.
Taken independently, these models allow the observation of characteristic geological and geomorphological features. Their integration into a single scaled and georeferenced model, however, lets the user experience a multiscale reconstruction of the outcrop, enabling observation and characterization of the geology at this site with greater accuracy and precision, especially using Virtual Reality. 

Integration of the meshes into a multiscale 3D model 

All the 3D models presented in this deliverable are scaled according to their input data. However, due to discrepancies in the georeferencing and sources of these input data, some variations in the relative scaling of one model to another may be present. The same issue affects georeferencing, which can vary with the original reference system used for the processed input data (even if it was standardized to the Mars 2000 system).
To correct these errors and ensure maximum accuracy, all three models are imported into the CloudCompare software. Within this software, we use the embedded alignment tools (similar to georeferencing tools in GIS software) to finely align the three models to one another. This results in a perfectly aligned, scaled and georeferenced multiscale model of the entire Kimberley outcrop (Figs. 15 and 17), featuring the orbital-derived HiRISE data as regional and contextual background (Figs. 15 and 17a), the Kimberley photogrammetric DOM as main object and foreground (Figs. 15 and 17b), and the Windjana photogrammetric micro-model of the drill hole as a specific detail within the main object (Figs. 17b and 17c). The new, updated spatial coordinates of the models can then be saved to allow future realignment of the independent models, and/or the integrated scene can be saved as a new single mesh incorporating them. 
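Conceptually, this fine alignment is a rigid-registration problem. The toy sketch below estimates only a uniform scale and a translation between two point sets; CloudCompare's actual tools are far more sophisticated (handling rotation and iterative closest-point refinement), so this is purely illustrative:

```python
# Toy illustration of aligning one model onto another: estimate a
# uniform scale from the RMS spread of each point set, then translate
# the source centroid onto the reference centroid. No rotation here.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def rms_spread(points, c):
    return (sum(sum((p[i] - c[i]) ** 2 for i in range(3))
                for p in points) / len(points)) ** 0.5

def align(source, reference):
    """Scale 'source' to match the spread of 'reference', then move
    its centroid onto the reference centroid."""
    cs, cr = centroid(source), centroid(reference)
    s = rms_spread(reference, cr) / rms_spread(source, cs)
    return [tuple(cr[i] + s * (p[i] - cs[i]) for i in range(3))
            for p in source]

# A copy of the reference that is offset and at half scale is
# recovered exactly (no rotation involved):
ref = [(0, 0, 0), (2, 0, 0), (0, 2, 0), (0, 0, 2)]
src = [(10 + 0.5 * x, 5 + 0.5 * y, 0.5 * z) for x, y, z in ref]
assert all(abs(a - b) < 1e-9
           for p, q in zip(align(src, ref), ref)
           for a, b in zip(p, q))
```

In practice the reference here is the georeferenced HiRISE product, so aligning the ground-based meshes onto it also transfers its georeferencing to them.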

Uses of the 3D models of the Kimberley area 

The scaled and integrated regional 3D models of the Kimberley area can be used for different purposes: observation of the outcrop at various scales, short-/mid-range correlations between stratigraphically concordant units, characterization of structures, section logging, or contextualization of sampling (e.g. the Windjana drill hole). The interest of such geomodels for these activities, compared to “flat” 2D images, lies in the accurate and realistic reconstruction of the shapes, volumes and geometries of the real outcrop in 3 dimensions, helping geologists understand the spatial organization of the rocks, which is important for deciphering the past processes that led to the current state of the outcrop (deposition, lithification, erosion, etc.).
Moreover, while the use of new visualization technologies such as virtual reality (VR) in planetary sciences is not a recent idea (McGreevy, 1993), hardware and software developments, as well as the democratization of powerful computing units over the past three years, have made it more accessible to our community. Recently, several solutions have become available to produce VR experiences using dedicated game engines (e.g., Mat et al., 2014; Murray, 2017). While full VR integration, with the development of dedicated measurement tools and networking features, will be addressed in deliverable 5.3, a first validation test of our computed models has been performed to ensure compatibility with these VR technologies. The multiscale model has been integrated into a basic VR scene to simulate the local environment of the real outcrop and its surroundings (Fig. 17). Within this environment, a user can experience and “walk” the Kimberley outcrop at real scale, as if they were in the field.


Figure 17: a) General view of the simulated Kimberley outcrop within the VR environment. The foreground high-resolution mesh is this work’s DOM; the lower-resolution middle-ground model is the HiRISE DEM. The Curiosity rover mesh is placed for scale. b) Close-up view of the simulated outcrop toward the Windjana drill at the base of Mt. Remarkable. In this view, two users are exploring the outcrop in VR. c) Detailed view of the very high-resolution MAHLI mesh of the Windjana drill. The sub-mm-scale resolution makes it possible to see the very small marks left by the LIBS shots on the 3D mesh (<1 mm). 


References

Agisoft LLC, 2018. PhotoScan Professional. Accessed: 10 October 2018.

Alexander, D. & Deen, R., 2015. Mars Science Laboratory Project Software Interface Specification (SIS); Camera & LIBS Experiment Data Record (EDR) and Reduced Data Record (RDR) Data Products, version 3.0. NASA Planetary Data System.

Arbués, P., García-Sellés, D., Granado, P., López-Blanco, M., Muñoz, J., 2012. A method for producing photorealistic digital outcrop models. In 74th EAGE Conference and Exhibition incorporating EUROPEC 2012, Abstract #D029. DOI: 10.3997/2214-4609.20148218

Bell, J. F., Godber, A., Rice, M. S., Fraeman, A. A., Ehlmann, B. L., Goetz, W., Hardgrove, C. J., Harker, D. E., Johnson, J. R., Kinch, K. M., Lemmon, M. T., McNair, S., Le Mouélic, S., Madsen, M. B., Malin, M. C., MSL Science Team, 2013. Initial multispectral imaging results from the Mars Science Laboratory Mastcam investigation at the Gale crater field site. Lunar and Planetary Sciences Conference 44, Abstract #1719, 1417.

Calef III, F.J., Parker, T., 2016. MSL Gale Merged Orthophoto Mosaic. PDS Annex, U.S. Geological Survey.

Edgett, K. S., Caplinger, M. A., Maki, J. N., Ravine, M. A., Ghaemi, F. T., McNair, S., Herkenhoff, K. E., Duston, B. M., Willson, R. G., Yingst, R. A., Kennedy, M. R., Minitti, M. E., Sengstacken, A. J., Supulver, K. D., Lipkaman, L. J., Krezoski, G. M., McBride, M. J., Jones, T. L., Nixon, B. E., Van Beek, J. K., Krysak, D. J., Kirk, R. L., 2015. Curiosity’s robotic arm-mounted Mars Hand Lens Imager (MAHLI): Characterization and calibration status. MSL MAHLI Technical Report 0001 (19 June 2015). DOI: 10.13140/RG.2.1.3798.5447

Girardeau-Montaut, D., 2015. CloudCompare: 3D point cloud and mesh processing software. Open Source Project. EDF R&D, Telecom ParisTech.

Grotzinger, J. P. , Sumner, D. Y., Kah, L. C., Stack, K., Gupta, S., Edgar, L., Rubin, D., Lewis, K., Schieber, J, Mangold, N., Milliken, R., Conrad, P. G., DesMarais, D., Farmer, J., Siebach, K., Calef III, F., Hurowitz, J., McLennan, S. M., Ming, D., Vaniman, D., Crisp, J., Vasavada, A., Edgett, K. S., Malin, M., Blake, D., Gellert, R., Mahaffy, P., Wiens, R. C., Maurice, S., Grant, J. A., Wilson, S., Anderson, R. C., Beegle, L., Arvidson, R., Hallet, B., Sletten, R. S., Rice, M., Bell III, J., Griffes, J., Ehlmann, B., Anderson, R. B., Bristow, T. F., Dietrich, W.E., Dromart, G., Eigenbrode, J., Fraeman, A., Hardgrove, C., Herkenhoff, K., Jandura, L., Kocurek, G., Lee, S., Leshin, L. A., Leveille, R., Limonadi, D., Maki, J., McCloskey, S., Meyer, M., Minitti, M., Newsom, H., Oehler, D., Okon, A., Palucis, M., Parker, T., Rowland, S., Schmidt, M., Squyres, S., Steele,A., Stolper, E., Summons, R., Treiman, A., Williams, R., Yingst, A., MSL Science Team, 2014. A Habitable Fluvio-Lacustrine Environment at Yellowknife Bay, Gale Crater, Mars. Science 343(6169), 1242777

Heverly, M., Matthews, J., Lin, J., Fuller, D., Maimone, M., Biesiadecki, J., Leichty, J., 2013. Traverse performance characterization for the Mars Science Laboratory rover. Journal of Field Robotics 30(6), 835-46. DOI: 10.1002/rob.21481

Le Deit, L., Mangold, N., Forni, O., Cousin, A., Lasue, J., Schröder, S., Wiens, R. C., Sumner, D., Fabre, C., Stack, K. M., Anderson, R. B., Blaney, D., Clegg, S., Dromart, G., Fisk, M., Gasnault, O., Grotzinger, J. P., Gupta, S., Lanza, N., Le Mouélic, S., Maurice, S., McLennan, S., Meslin, P.-Y., Nachon, M., Newsom, H., Payré, V., Rapin, W., Rice, M., Sautter, V., Treiman, A. H., 2016. The potassic sedimentary rocks in Gale Crater, Mars, as seen by ChemCam on board Curiosity. Journal of Geophysical Research: Planets 121. DOI: 10.1002/2015JE004987

Maki, J., Thiessen, D., Pourangi, A., Kobzeff, P., Litwin, T., Scherr, L., Elliott, S., Dingizian, A., Maimone, M., 2012. The Mars Science Laboratory Engineering Cameras. Space Science Reviews 170(1-4), 77-93. DOI: 10.1007/s11214-012-9882-4

Malin, M. C., Caplinger, M. A., Edgett, K. S., Ghaemi, F. T., Ravine, M. A., Schaffner, J. A., Baker, J. M., Bardis, J. D., Dibiase, D. R., Maki, J. N., Willson, R. G., Bell, J. F., Dietrich, W. E., Edwards, L. J., Hallet, B., Herkenhoff, K. E., Heydari, E., Kah, L. C., Lemmon, M. T., Minitti, M. E., Olson, T. S., Parker, T. J., Rowland, S. K., Schieber, J., Sullivan, R. J., Sumner, D. Y., Thomas, P. C., Yingst, R. A., 2010. The Mars Science Laboratory (MSL) Mast-mounted Cameras (Mastcams) Flight Instruments. Lunar and Planetary Sciences Conference 41, Abstract #1533, 1123

Mangold, N., Schmidt, M.E., Fisk, M.R., Forni, O., McLennan, S.M., Ming, D.W., Sautter, V., Sumner, D., Williams, A.J., Clegg, S.M., Cousin, A., Gasnault, O., Gellert, R., Grotzinger, J.P., Wiens, R.C., 2017. Classification scheme for sedimentary and igneous rocks in Gale Crater, Mars. Icarus 284, 1-17. DOI: 10.1016/j.icarus.2016.11.005

Martell, A., Lauterbach, H. A., Schilling, K., Nüchter, A., 2018. Benchmarking structure from motion algorithms of urban environments with applications to reconnaissance in search and rescue scenarios. In 2018 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR) 1-7. DOI: 10.1109/SSRR.2018.8468612

Mat, R. C., Shariff, A. R. M., Zulkifli, A. N., Rahim, M. S. M., Mahayudin, M. H., 2014. Using game engine for 3D terrain visualisation of GIS data: A review. In IOP Conference Series: Earth and Environmental Science 20, 012037.

Maurice, S., Wiens, R. C., Saccoccio, M., Barraclough, B., Gasnault, O., Forni, O., Mangold, N., Baratoux, D., Bender, S., Berger, G., Bernardin, J., Berthé, M., Bridges, N., Blaney, D., Bouyé, M., Caïs, P., Clark, B., Clegg, S., Cousin, A., Cremers, D., Cros, A., DeFlores, L., Derycke, C., Dingler, B., Dromart, G., Dubois, B., Dupieux, M., Durand, E., d’Uston, L., Fabre, C., Faure, B., Gaboriaud, A., Gharsa, T., Herkenhoff, K. E., Kan, E., Kirkland, L., Kouach, D., Lacour, J.-L., Langevin, Y., Lasue, J., Le Mouélic, S., Lescure, M., Lewin, E., Limonadi, D., Manhès, G., Mauchien, P., McKay, C., Meslin, P.-Y., Michel, Y., Miller, E., Newsom, H. E., Orttner, G., Paillet, A., Parès, L., Parot, Y., Pérez, R., Pinet, P., Poitrasson, F., Quertier, B., Sallé, B., Sotin, C., Sautter, V., Séran, H., Simmonds, J. J., Sirven, J.-B., Stiglich, R., Striebig, N., Thocaven, J.-J., Toplis, M. J., Vaniman, D., 2012. The ChemCam Instrument Suite on the Mars Science Laboratory (MSL) Rover: Science Objectives and Mast Unit Description. Space Science Reviews 170(1-4), 95-166. DOI: 10.1007/s11214-012-9912-2.

McGreevy, M. W., 1993. Virtual reality and planetary exploration. In Virtual Reality pp. 163-97. Elsevier. DOI: 10.1016/B978-0-12-745045-2.50018-0

Murray, J. W., 2017. Building virtual reality with Unity and Steam VR. AK Peters/CRC Press. DOI: 10.1201/b21862

Ostwald, A. & Hurtado, J., 2017. 3D Models from Structure-from-Motion Photogrammetry Using Mars Science Laboratory Images: Methods and Implications. In Lunar and Planetary Science Conference 48, Abstract #1964, 1787.

Parker, T., Calef III, F.J., 2016. MSL Gale Merged Digital Elevation Model. PDS Annex, U.S. Geological Survey.

PlanMap consortium, 2019. Deliverable 5.1: Merged products (in GIS and as maps) of orbital and in situ data of Gale crater.

Rice, M. S., Gupta, S., Treiman, A. H., Stack, K. M., Calef, F., Edgar, L. A., Grotzinger, J., Lanza, N., Le Deit, L., Lasue, J., Siebach, K. L., Vasavada, A., Wiens, R. C., Williams, J., 2017. Geologic overview of the Mars Science Laboratory rover mission at the Kimberley, Gale crater, Mars. Journal of Geophysical Research: Planets 122(1), 2-20. DOI: 10.1002/2016JE005200

Seidelmann PK, Abalakin VK, Bursa M, Davies ME, de Bergh C, Lieske JH, Oberst J, Simon JL, Standish EM, Stooke P, Thomas P, 2002. Report of the IAU/IAG working group on cartographic coordinates and rotational elements of the planets and satellites: 2000. Celest Mech Dyn Astron 82(1):83–111. DOI: 10.1023/A:1013939327465

Stack, K. M., Edwards, C. S., Grotzinger, J. P., Gupta, S., Sumner, D. Y., Calef III, F. J., Edgar, L. A., Edgett, K. S., Fraeman, A. A., Jacob, S. R., Le Deit, L., Lewis, K. W., Rice, M. S., Rubin, D., Williams, R. M. E., Williford, K. H., 2016. Comparing orbiter and rover image-based mapping of an ancient sedimentary environment, Aeolis Palus, Gale Crater, Mars. Icarus 280, 3-21. DOI: 10.1016/j.icarus.2016.02.024

Tavani, S., Granado, P., Corradetti, A., Girundo, M., Iannace, A., Arbués, P., Muñoz, J. A., Mazzoli, S., 2014. Building a virtual outcrop, extracting geological information from it, and sharing the results in Google Earth via OpenPlot and Photoscan: An example from the Khaviz Anticline (Iran). Computers & Geosciences 63, 44-53. DOI: 10.1016/j.cageo.2013.10.013

Treiman, A. H., Bish, D.L., Vaniman, D.T., Chipera, S.J., Blake, D.F., Ming, D.W., Morris, R.V., Bristow, T.F., Morrison, S.M., Baker, M.B., Rampe, E.B., Downs, R.T., Filiberto, J., Glazner, A.F., Gellert, R., Thompson, L.M., Schmidt, M.E., Le Deit, L., Wiens, R.C., McAdam, A.C., Achilles, C.N., Edgett, K.S., Farmer, J.D., Fendrich, K.V., Grotzinger, J.P., Gupta, S., Morookian, J.M., Newcombe, M.E., Rice, M.S., Spray, J.G., Stolper, E.M., Sumner, D.Y., Vasavada, A.R., Yen, A.S., 2016. Mineralogy, provenance, and diagenesis of a potassic basaltic sandstone on Mars: CheMin X-ray diffraction of the Windjana sample (Kimberley area, Gale crater). Journal of Geophysical Research: Planets 121, 75-106. DOI: 10.1002/2015JE004932

Triantafyllou, A., Watlet, A., Le Mouélic, S., Camelbeeck, T., Civet, F., Kaufmann, O., Quinif, Y., Vandycke, S., 2019. 3-D digital outcrop model for analysis of brittle deformation and lithological mapping (Lorette cave, Belgium). Journal of Structural Geology 120, 55-66. DOI: 10.1016/j.jsg.2019.01.001

Ullman, S., 1979. The interpretation of structure from motion. Proceedings of the Royal Society of London. Series B. Biological Sciences 203(1153), 405-26. DOI: 10.1098/rspb.1979.0006

Verhoeven, G., 2011. Taking computer vision aloft–archaeological three‐dimensional reconstructions from aerial photographs with photoscan. Archaeological prospection 18(1), 67-73. DOI: 10.1002/arp.399

Westoby, M. J., Brasington, J., Glasser, N. F., Hambrey, M. J., Reynolds, J., 2012. ‘Structure-from-Motion’ photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 179, 300-14. DOI: 10.1016/j.geomorph.2012.08.021 

Annex A: List of files (meshes, textures and supporting files) of the independent 3D models provided by this deliverable 

Table A1 lists all 3D model files (mesh, texture, supporting files) provided with this deliverable: 

Ground Data
  • Kimberley DOM
  • Windjana drill DOM

Orbital Data
  • HiRISE orthoimagery
  • HiRISE orthoimagery colour
  • Geomorphological map