How to…

Get full stereo products

Pléiades / SPOT 6-7 products (DINAMIS)

DINAMIS is a platform that acquires and distributes satellite Earth imagery (Pléiades and SPOT 6-7) to French and foreign institutional users under specific subscription conditions.

AIRBUS Pleiades NEO example files

Example files are available here: https://intelligence.airbus.com/imagery/sample-imagery/pleiades-neo-tristereo-marseille/ (A form must be filled out to access the data).

Maxar WorldView example files

Example files are available on AWS S3 through the SpaceNet challenge here: s3://spacenet-dataset/Hosted-Datasets/MVS_dataset/WV3/PAN/
You need to install aws-cli:
python -m venv venv-aws-cli # create a virtual environment
source ./venv-aws-cli/bin/activate # activate it
pip install --upgrade pip # upgrade pip
pip install awscli

And download a stereo pair:

aws s3 cp --no-sign-request s3://spacenet-dataset/Hosted-Datasets/MVS_dataset/WV3/PAN/18DEC15WV031000015DEC18140522-P1BS-500515572020_01_P001_________AAE_0AAAAABPABJ0.NTF .
aws s3 cp --no-sign-request s3://spacenet-dataset/Hosted-Datasets/MVS_dataset/WV3/PAN/18DEC15WV031000015DEC18140554-P1BS-500515572030_01_P001_________AAE_0AAAAABPABJ0.NTF  .

Prepare input images

Make input ROI images

The cars-extractroi script allows you to extract a region of interest from your image product.

usage: cars-extractroi [-h] -il [IL [IL ...]] -out OUT -bbx x1 y1 x2 y2

Helper to extract roi from bounding box

optional arguments:
  -h, --help         show this help message and exit
  -il [IL [IL ...]]  Image products
  -out OUT           Extracts directory
  -bbx x1 y1 x2 y2   Bounding box from two points (x1, y1) and (x2, y2)

How to find the coordinates of the bounding box?

For example, if you have downloaded the Maxar WorldView example files above, you are working in an area near San Fernando, Argentina. Go to the website geojson.io to select your ROI:

[Figure: ROI selection near San Fernando on geojson.io]

You can either select the upper-left and lower-right corners (in red in the previous image):

cars-extractroi -il *.NTF -out ext_dir -bbx -58.5809 -34.4934 -58.5942 -34.4869
cars-starter -il ext_dir/*.tif -out out_dir > config.json
cars config.json

or the lower-left and upper-right corners (in purple in the previous image):

cars-extractroi -il *.NTF -out ext_dir -bbx -58.5809 -34.4869 -58.5942 -34.4934
cars-starter -il ext_dir/*.tif -out out_dir > config.json
cars config.json
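As the two command variants above suggest, cars-extractroi accepts any two opposite corners of the bounding box. A minimal Python sketch of the equivalent normalization (the function name is illustrative, not part of CARS):

```python
def normalize_bbox(x1, y1, x2, y2):
    """Return (xmin, ymin, xmax, ymax) from any two opposite corners."""
    return (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))

# Both corner orders from the examples above describe the same ROI
a = normalize_bbox(-58.5809, -34.4934, -58.5942, -34.4869)
b = normalize_bbox(-58.5809, -34.4869, -58.5942, -34.4934)
```

Here `a` and `b` are identical, which is why both commands extract the same region.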

N.B.: Instead of using cars-extractroi, you can directly give the GeoJSON dictionary in the configuration file (see Configuration for details). In this case, the sparse steps (geometric corrections) are processed on the entire image and not only on the ROI.

Monitor tiles progression

The cars-dashboard script allows you to monitor the progress of tile computation in a web browser.

usage: cars-dashboard [-h] -out OUT

Helper to monitor tiles progress

optional arguments:
-h, --help  show this help message and exit
-out OUT    CARS output folder to monitor

For example, if you want to monitor the computation of a CARS run:

cars-dashboard -out output_cars

Make a simple pan sharpening

In the case of Pléiades sensors, the XS color image is not superimposable on the panchromatic image.

It is recommended to apply a P+XS pansharpening with OTB:

otbcli_BundleToPerfectSensor -inp image.tif -inxs color.tif -out color_pxs.tif

or, through Docker:

docker run -w /data -v "$(pwd)"/data:/data --entrypoint=/bin/bash cnes/cars otbcli_BundleToPerfectSensor -inp /data/image.tif -inxs /data/color.tif -out /data/color_pxs.tif

Convert RGB image to panchromatic image

CARS only uses panchromatic images for processing.

If you have a multi-spectral image, you’ll need to extract a single band to use, or convert it to a panchromatic image before using it with CARS.

The line below uses the “Grayscale Using Luminance” expression with OTB BandMath:

otbcli_BandMath -il image.tif -out image_panchromatic.tif -exp "(0.2126 * im1b1 + 0.7152 * im1b2 + 0.0722 * im1b3)"
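The same luminance weighting can be sketched in Python with NumPy (a hedged equivalent of the BandMath expression above, not a CARS API):

```python
import numpy as np

def rgb_to_panchromatic(rgb):
    """Apply the 'Grayscale Using Luminance' weights to an
    array shaped (bands, rows, cols), bands ordered R, G, B."""
    r, g, b = (band.astype(float) for band in rgb[:3])
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# The weights sum to 1, so a uniform white image stays white
white = np.full((3, 2, 2), 255.0)
pan = rgb_to_panchromatic(white)
```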

Make a water mask

To produce a water mask from R, G, B, NIR images, it is recommended to compute a Normalized Difference Water Index (NDWI) and threshold the result.

Pixels whose NDWI exceeds the threshold (here 0.3) are considered water:

gdal_calc.py -G input.tif --G_band=2 -N input.tif --N_band=4 --outfile=mask.tif --calc="((1.0*G-1.0*N)/(1.0*G+1.0*N))>0.3" --NoDataValue=0
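The same NDWI thresholding can be sketched with NumPy (illustrative only; the band roles and the 0.3 threshold follow the gdal_calc.py call above):

```python
import numpy as np

def water_mask(green, nir, threshold=0.3):
    """NDWI = (G - NIR) / (G + NIR); pixels above the threshold are water."""
    green = green.astype(float)
    nir = nir.astype(float)
    ndwi = (green - nir) / (green + nir)
    return (ndwi > threshold).astype(np.uint8)

# Water reflects green and absorbs near-infrared; vegetation does the opposite
mask = water_mask(np.array([[200, 50]]), np.array([[20, 200]]))
```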

It is also possible to produce a water mask with SLURP.

See the next section to apply gdal_translate and convert the mask to a 1-bit image structure.

Convert image to binary image

To convert a single-band or multiband image with several bits per band to 1 bit per band, it is recommended to use gdal_translate as follows:

gdal_translate -ot Byte -co NBITS=1 mask.tif mask_1nbit.tif

Add band name / description in TIF files metadata

To add a band name / description (“water”, “cloud”, etc.) to the metadata of TIF files, so that classification or color files can be used:

from osgeo import gdal

# infile: path to the TIF file (opened in update mode)
# inband: band index (1-based); band_description: name to set, e.g. "water"
data_in = gdal.Open(infile, gdal.GA_Update)
band_in = data_in.GetRasterBand(inband)
band_in.SetDescription(band_description)
data_in = None  # close the dataset and flush the metadata to disk

Get low resolution DEM

SRTM 90m DEM

It is possible to download a low-resolution DEM (90 m SRTM) corresponding to your area. To get an SRTM tile, run the following Python script with the latitude and longitude of your area:

import numpy as np

def get_srtm_tif_name(lat, lon):
    """Download srtm tiles"""
    # longitude: [1, 72] == [-180, +180]
    tlon = (1+np.floor((lon+180)/5)) % 72
    tlon = 72 if tlon == 0 else tlon

    # latitude: [1, 24] == [60, -60]
    tlat = 1+np.floor((60-lat)/5)
    tlat = 24 if tlat == 25 else tlat

    srtm = "https://srtm.csi.cgiar.org/wp-content/uploads/files/srtm_5x5/TIFF/srtm_%02d_%02d.zip" % (tlon, tlat)
    return srtm

if __name__ == "__main__":
    print("Get SRTM tile corresponding to latitude and longitude couple")
    while 1:
        print(">> Latitude? ", end="")
        lat = input()
        print(">> Longitude? ", end="")
        lon = input()
        print(">> SRTM filename:", get_srtm_tif_name(int(lat), int(lon)))
        input()
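As a quick sanity check of the tiling arithmetic, the same formulas map the San Fernando area from the Maxar example (lat ≈ -34.49, lon ≈ -58.59) to tile srtm_25_19:

```python
import math

def srtm_tile_indices(lat, lon):
    """Same 5x5-degree tiling arithmetic as the script above."""
    tlon = int((1 + math.floor((lon + 180) / 5)) % 72) or 72
    tlat = int(1 + math.floor((60 - lat) / 5))
    tlat = 24 if tlat == 25 else tlat
    return tlon, tlat

tile = srtm_tile_indices(-34.49, -58.59)  # (25, 19) -> srtm_25_19.zip
```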

If your area intersects multiple latitudes and longitudes, get all the SRTM tiles and create a VRT from them:

gdalbuildvrt srtm.vrt srtm_tile1.tif srtm_tile2.tif

Post process output

Merge Laz files

CARS generates several LAZ files, one per processed tile.

To merge them:

laszip -i data\*.laz -merged -o merged.laz

Docker

A Docker image is available to run CARS and OTB applications. CARS is the Docker entrypoint. To use OTB, the entrypoint must be overridden.

Use CARS in docker

docker run -w /data -v "$(pwd)"/data_gizeh_small:/data cnes/cars /data/configfile.json

Use OTB in docker

Any OTB application can be run in Docker:

docker run  --entrypoint=/bin/bash  cnes/cars otbcli_BandMath -help

Resample an image

If you want to upscale or downscale the resolution of your input data, use rasterio:

import rasterio
from rasterio.enums import Resampling
# Read the data at the target resolution
# (a factor below 1 downsamples the image, above 1 upsamples it)
upscale_factor = 0.5
with rasterio.open("example.tif") as dataset:
    # resample data to target shape
    data = dataset.read(
        out_shape=(
            dataset.count,
            int(dataset.height * upscale_factor),
            int(dataset.width * upscale_factor)
        ),
        resampling=Resampling.bilinear
    )
    # scale image transform
    transform = dataset.transform * dataset.transform.scale(
        (dataset.width / data.shape[-1]),
        (dataset.height / data.shape[-2])
    )
    profile = dataset.profile
    # Save data
    profile.update(
        width=data.shape[-1],
        height=data.shape[-2],
        transform=transform
    )
    with rasterio.open('resampled_example.tif', 'w', **profile) as dst:
        dst.write(data)

Use CARS with Pleiades images …

… with raw data

If you want to generate a 3D model with the following pair:

IMG_PHR1B_MS_003
IMG_PHR1B_MS_004
IMG_PHR1B_P_001
IMG_PHR1B_P_002

You should find in each folder the following data:

...
DIM_PHR1B_***.XML
IMG_PHR1B_***.TIF
RPC_PHR1B_***.XML

For each product, the user must provide the path to the panchromatic data (P.TIF) with its geomodel, both referenced by the DIMAP file (DIMAP*P*.XML):

{
"inputs": {
    "sensors" : {
        "one": {
            "image": "IMG_PHR1B_P_001/DIM_PHR1B_***.XML"
        },
        "two": {
            "image": "IMG_PHR1B_P_002/DIM_PHR1B_***.XML",
        }
    },
    "pairing": [["one", "two"]]
    }
}
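If you script many runs, the configuration above can also be generated with Python's json module (the DIMAP paths are the truncated placeholders from the listing, to be replaced with your own products):

```python
import json

# Placeholder paths: substitute your own DIMAP products
config = {
    "inputs": {
        "sensors": {
            "one": {"image": "IMG_PHR1B_P_001/DIM_PHR1B_***.XML"},
            "two": {"image": "IMG_PHR1B_P_002/DIM_PHR1B_***.XML"},
        },
        "pairing": [["one", "two"]],
    }
}

# Write a well-formed config.json for cars
with open("config.json", "w") as f:
    json.dump(config, f, indent=4)
```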

If you want to add colors, a P+XS fusion must be done to produce a color.tif with the same shape and resolution as the panchromatic data. It can be performed with otbcli_BundleToPerfectSensor as explained in Make a simple pan sharpening.

{
"inputs": {
    "sensors" : {
        "one": {
            "image": "IMG_PHR1B_P_001/DIM_PHR1B_***.XML",
            "color": "color_one.tif"
        },
        "two": {
            "image": "IMG_PHR1B_P_002/DIM_PHR1B_***.XML",
            "color": "color_two.tif"

        }
    },
    "pairing": [["one", "two"]]
    }
}

… with a region of interest

There are two different uses of an ROI in CARS:

  • Crop input images: the whole pipeline runs on the cropped images.

  • Use the input roi parameter: the whole images are used to compute grid correction and terrain + epipolar a priori; the rest of the pipeline then uses the given ROI. This allows better correction of the epipolar rectification grids.

If you want to only work with a region of interest for the whole pipeline, use cars-extractroi:

cars-extractroi -il DIM_PHR1B_***.XML -out ext_dir -bbx -58.5896 -34.4872 -58.5818 -34.4943

It generates a .tif and .geom to be used as:

{
"inputs": {
    "sensors" : {
        "one": {
            "image": "ext_dir/***.tif",
            "geomodel": "ext_dir/***.geom",
            "color": "color_one.tif"
        }
    }
    }
}

And use generated data as previously explained with raw data.

If you want to compute grid correction and the epipolar / terrain a priori on the whole image, keep the same input images, but specify the terrain ROI to use:

{
    "inputs":
    {
        "roi" : {
            "type": "FeatureCollection",
            "features": [
                {
                "type": "Feature",
                "properties": {},
                "geometry": {
                    "coordinates": [
                    [
                        [5.194, 44.2064],
                        [5.194, 44.2059],
                        [5.195, 44.2059],
                        [5.195, 44.2064],
                        [5.194, 44.2064]
                    ]
                    ],
                    "type": "Polygon"
                }
                }
            ]
        }
    }
}
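A GeoJSON polygon ring must be closed (its first and last coordinates equal). A small pure-Python sketch to validate an ROI before passing it to CARS (the helper name is illustrative):

```python
def ring_is_closed(ring):
    """A GeoJSON polygon ring is valid only if it ends where it starts."""
    return len(ring) >= 4 and ring[0] == ring[-1]

# Same ring as in the configuration above
ring = [
    [5.194, 44.2064],
    [5.194, 44.2059],
    [5.195, 44.2059],
    [5.195, 44.2064],
    [5.194, 44.2064],
]
```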

See Usage Sensors Images Inputs configuration for more information.