Warning: the code works for tarballs containing both the data and the split files. For instance, NYU comes as a zip file; we suggest re-compressing it, together with the split files, as a tarball.
Before training, the data needs to be extracted into `<BASE-PATH>/datasets`.
If your folder structure is different, you may need to change the corresponding paths in the config files.
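As a concrete sketch of the extraction step (the archive and scene names below are made up for illustration; replace `BASE_PATH` with your actual root directory):

```shell
set -e
# Stand-in for <BASE-PATH>; adjust to your setup
BASE_PATH=/tmp/idisc-demo
mkdir -p "$BASE_PATH/datasets"

# Create a dummy tarball standing in for a downloaded dataset archive
mkdir -p /tmp/demo-src/scene_1
touch /tmp/demo-src/scene_1/0000.png
tar -czf /tmp/dataset.tar.gz -C /tmp/demo-src scene_1

# Extract the archive into <BASE-PATH>/datasets, where training expects it
tar -xzf /tmp/dataset.tar.gz -C "$BASE_PATH/datasets"
```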
You can look into the `../splits` folder for the pre-processed split lists.
Download the official dataset from here, including the raw data (about 200GB) and the fine-grained ground-truth depth maps. Unzip the files and copy the split files into the kitti folder. Files not contained in the splits can be deleted to save disk space. Remember to organize the directory structure to match the file-system tree encoded in the split files (which is the original raw files' structure). Finally, compress everything into a single tarball.
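A rough sketch of the pruning and re-packing steps above (directory, file, and split names here are dummies for illustration, not the real KITTI layout):

```shell
set -e
ROOT=/tmp/kitti-demo
# Dummy raw-data tree standing in for the extracted KITTI download
mkdir -p "$ROOT/kitti/2011_09_26/drive_0001" "$ROOT/kitti/2011_09_26/drive_0002"
touch "$ROOT/kitti/2011_09_26/drive_0001/0000000000.png"
touch "$ROOT/kitti/2011_09_26/drive_0002/0000000000.png"

# Dummy split file copied into the kitti folder; it references only drive_0001
printf '2011_09_26/drive_0001/0000000000.png\n' > "$ROOT/kitti/train.txt"

# Delete sequences not referenced by any split file to save disk space
cd "$ROOT/kitti"
for d in 2011_09_26/*/; do
  grep -q "^${d}" *.txt || rm -rf "$d"
done

# Finally, compress everything into a single tarball
tar -czf "$ROOT/kitti.tar.gz" -C "$ROOT" kitti
```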
You can download the dataset from Google Drive Link. Splits are provided here. For compatibility, we suggest re-compressing with the split files as a tarball.
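For example, assuming the zip has already been extracted (the folder and file names below are hypothetical), bundling the data together with its split files into one tarball could look like:

```shell
set -e
WORK=/tmp/nyu-demo
# Stand-in for the extracted dataset folder
mkdir -p "$WORK/nyu"
touch "$WORK/nyu/0001.h5"

# Dummy split file; in practice, copy the provided split lists here
printf 'basement_001a/rgb_0001.jpg\n' > "$WORK/nyu/nyu_train.txt"

# Re-compress data and split files together as a single tarball
tar -czf "$WORK/nyu.tar.gz" -C "$WORK" nyu
```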
You can download the dataset from the link in the GeoNet repo. Splits are provided here.
The dataset can be downloaded from here. Splits are provided here. For compatibility, we suggest re-compressing with the split files as a tarball.
The dataset can be downloaded from here. Splits are provided here. For compatibility, we suggest re-compressing with the split files as a tarball.
Clone the argoverse repo and export it to your `PYTHONPATH`.
```shell
cd ..
git clone https://github.com/argoverse/argoverse-api
export PYTHONPATH="$PWD/argoverse-api:$PYTHONPATH"
```
Then run the code in `../splits/argo` to download and process the dataset. You can then use the splits and info files in `../splits/argo`.
The option `--split` selects which Argoverse split to download and process (namely: train1, train2, train3, train4, val); see the original website for more details.
`<BASE-PATH>` in this example corresponds to the root directory where data is downloaded, extracted, processed, and written.
```shell
cd ./idisc
python ./splits/argo/get_argoverse.py --base-path <BASE-PATH> --split <split-chosen>
```
Clone the DDAD repo and export it to your `PYTHONPATH`.
```shell
cd ..
git clone https://github.com/TRI-ML/DDAD
export PYTHONPATH="$PWD/dgp:$PYTHONPATH"
```
Then run the code in `../splits/ddad` to download and process the dataset. You can then use the splits and info files in `../splits/ddad`.
The option `--split` selects which DDAD split to download and process (namely: train or val); see the original website for more details.
`<BASE-PATH>` in this example corresponds to the root directory where data is downloaded, extracted, processed, and written.
```shell
cd ./idisc
python ./splits/ddad/get_ddad.py --base-path <BASE-PATH> --split <split-chosen>
```