nuScenes lidarseg and panoptic tutorial

Welcome to the nuScenes lidarseg and panoptic tutorial. Since lidarseg and panoptic share many functions, we have combined them into a single tutorial. You can, however, opt to set up only the lidarseg or the panoptic dataset, and run just the portions relevant to that task.

This demo assumes that nuScenes is installed at /data/sets/nuscenes. The mini version (i.e. v1.0-mini) of the full dataset will be used throughout.

Setup

To install the nuScenes-lidarseg and/or nuScenes-panoptic expansion, download the dataset from https://www.nuscenes.org/download. Unpack the compressed file(s) into /data/sets/nuscenes and your folder structure should end up looking like this:

└── nuscenes  
    ├── Usual nuscenes folders (i.e. samples, sweeps)
    │
    ├── lidarseg
    │   └── v1.0-{mini, test, trainval} <- Contains the .bin files; a .bin file 
    │                                      contains the labels of the points in a 
    │                                      point cloud (note that v1.0-test does not 
    │                                      have any .bin files associated with it)
    │
    ├── panoptic
    │   └── v1.0-{mini, test, trainval} <- Contains the *_panoptic.npz files; a .npz file 
    │                                      contains the panoptic labels of the points in a 
    │                                      point cloud (note that v1.0-test does not 
    │                                      have any .npz files associated with it) 
    └── v1.0-{mini, test, trainval}
        ├── Usual files (e.g. attribute.json, calibrated_sensor.json etc.)
        ├── lidarseg.json  <- contains the mapping of each .bin file to the sample_data token
        ├── panoptic.json  <- contains the mapping of each .npz file to the sample_data token
        └── category.json  <- contains the categories of the labels (note that the 
                              category.json from nuScenes v1.0 is overwritten)

Google Colab (optional)



If you are running this notebook in Google Colab, you can uncomment the cells below and run them; everything will be set up for you. Otherwise, follow the Setup section above to set everything up manually.

Download and set up the nuScenes-devkit and the nuScenes-lidarseg dataset.
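
A sketch of the corresponding Colab cell, assuming the mini split (the archive names and URLs are illustrative; verify them on https://www.nuscenes.org/download):

    # !mkdir -p /data/sets/nuscenes  # Make the directory to store the nuScenes dataset in.
    # !wget https://www.nuscenes.org/data/v1.0-mini.tgz  # Download the nuScenes mini split.
    # !wget https://www.nuscenes.org/data/nuScenes-lidarseg-mini-v1.0.tar.bz2  # Download the nuScenes-lidarseg mini split.
    # !tar -xf v1.0-mini.tgz -C /data/sets/nuscenes  # Uncompress the nuScenes mini split.
    # !tar -xf nuScenes-lidarseg-mini-v1.0.tar.bz2 -C /data/sets/nuscenes  # Uncompress the nuScenes-lidarseg mini split.
    # !pip install nuscenes-devkit &> /dev/null  # Install the nuScenes devkit.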

Download and set up the nuScenes-panoptic dataset.
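
And similarly for the panoptic labels (again, the archive name is illustrative; verify it on the download page):

    # !wget https://www.nuscenes.org/data/nuScenes-panoptic-v1.0-mini.tar.gz  # Download the nuScenes-panoptic mini split.
    # !tar -xf nuScenes-panoptic-v1.0-mini.tar.gz -C /data/sets/nuscenes  # Uncompress the nuScenes-panoptic mini split.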

Initialization

Let's start by importing the necessary libraries:
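
The following is a minimal setup, assuming the dataset lives at /data/sets/nuscenes as above:

    %matplotlib inline
    from nuscenes import NuScenes

    # Load the mini split together with the lidarseg/panoptic annotations.
    nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes', verbose=True)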

As you can see, you do not need any extra libraries to use nuScenes-lidarseg and nuScenes-panoptic. The original nuScenes devkit which you are familiar with has been extended so that you can use it seamlessly with nuScenes-lidarseg and nuScenes-panoptic.

Point statistics of lidarseg/panoptic dataset for the v1.0-mini split

Let's get a quick feel of the lidarseg dataset by looking at what classes are in it and the number of points belonging to each class. The classes will be sorted in ascending order based on the number of points (since sort_by='count' below); you can also sort the classes by class name or class index by setting sort_by='name' or sort_by='index' respectively.
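
For example:

    # List the lidarseg classes, sorted by the number of points in each class.
    nusc.list_lidarseg_categories(sort_by='count')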

With list_lidarseg_categories, you can get the index to which each class name belongs by looking at the leftmost column. You can also get a mapping of the indices to the class names from the lidarseg_idx2name_mapping attribute of the NuScenes class.

Conversely, you can get the mapping of the class names to the indices from the lidarseg_name2idx_mapping attribute of the NuScenes class.
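
For example (the class index shown is illustrative; look it up via the statistics above):

    # Index -> name, and name -> index.
    print(nusc.lidarseg_idx2name_mapping[17])             # 'vehicle.car' in v1.0
    print(nusc.lidarseg_name2idx_mapping['vehicle.car'])  # 17 in v1.0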

nuScenes-panoptic shares the member variables lidarseg_idx2name_mapping and lidarseg_name2idx_mapping with nuScenes-lidarseg. Similarly, you can check the number of points for each semantic category in the nuScenes-panoptic dataset; the only change needed is the gt_from='panoptic' argument (by default, gt_from='lidarseg').
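
For example:

    nusc.list_lidarseg_categories(sort_by='count', gt_from='panoptic')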

You might have noticed that the point counts for certain categories vary slightly between the lidarseg and panoptic datasets. The reason is that points where instances overlap are set to noise (category 0) in nuScenes-panoptic. Correspondingly, the noise category has more points in nuScenes-panoptic, while the total number of points remains the same.

Instance statistics of panoptic dataset for the v1.0-mini split

Instance statistics are specific to the panoptic dataset. We provide the list_panoptic_instances() function for this purpose. You can set sort_by to one of ['count', 'index', 'name']. The function calculates the number of instances per frame, the total number of instances (unique object IDs) and the number of instance states (an instance may have multiple states, i.e. it forms a track). It also calculates per-category statistics, including the mean and standard deviation of the number of frames an instance spans, and the mean and standard deviation of the number of points per instance.
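
For example:

    # Per-category instance statistics, sorted by the number of instances.
    nusc.list_panoptic_instances(sort_by='count')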

Note that only thing categories have instances. For point statistics, refer to the point statistics section above.

Pick a sample token

Let's pick a sample to use for this tutorial.
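
Any sample works; the index below is an arbitrary choice:

    my_sample = nusc.sample[87]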

Get statistics of a lidarseg/panoptic sample token

Now let's take a look at what classes are present in the pointcloud of this particular sample.
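
A sketch, assuming the devkit's per-sample statistics method is get_sample_lidarseg_stats, which takes the sample token:

    nusc.get_sample_lidarseg_stats(my_sample['token'], sort_by='count')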

By doing sort_by='count', the classes and their respective frequency counts are printed in ascending order; you can also pass sort_by='name' or sort_by='index' here.

Similarly, we can use the same function to get the category frequency counts from the panoptic dataset by adding gt_from='panoptic'. As mentioned under list_lidarseg_categories(), the point counts might differ slightly from lidarseg, because points where multiple instances overlap are set to noise in nuScenes-panoptic.
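
For example:

    nusc.get_sample_lidarseg_stats(my_sample['token'], sort_by='count', gt_from='panoptic')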

Render the lidarseg labels in the bird's eye view of a pointcloud

In the original nuScenes devkit, you would pass a sample data token into render_sample_data to render a bird's eye view of the pointcloud. However, the points would be colored according to the distance from the ego vehicle. Now with the extended nuScenes devkit, all you need to do is set show_lidarseg=True to visualize the class labels of the pointcloud.
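
For example (with_anns=False hides the annotation boxes so that the segmentation colors stand out):

    sample_data_token = my_sample['data']['LIDAR_TOP']
    nusc.render_sample_data(sample_data_token,
                            with_anns=False,
                            show_lidarseg=True)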

But what if you wanted to focus on only certain classes? Given the statistics of the pointcloud printed out previously, let's say you are only interested in trucks and trailers. You could see the class indices belonging to those classes from the statistics and then pass an array of those indices into filter_lidarseg_labels like so:
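
(In v1.0, vehicle.trailer and vehicle.truck are indices 22 and 23; check the statistics printed earlier for your version.)

    nusc.render_sample_data(sample_data_token,
                            with_anns=False,
                            show_lidarseg=True,
                            filter_lidarseg_labels=[22, 23])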

Now only the points in the pointcloud belonging to trucks and trailers are shown, for your viewing pleasure.

In addition, you can display a legend which indicates the color for each class by using show_lidarseg_legend.
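
For example:

    nusc.render_sample_data(sample_data_token,
                            with_anns=False,
                            show_lidarseg=True,
                            filter_lidarseg_labels=[22, 23],
                            show_lidarseg_legend=True)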

Render the panoptic labels in the bird's eye view of a pointcloud

Similar to lidarseg, the same function is used to render the panoptic labels; the only difference in arguments is show_panoptic=True. By default, both show_lidarseg and show_panoptic are set to False. If both are set to True, lidarseg takes priority and is rendered.
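
For example:

    nusc.render_sample_data(sample_data_token,
                            with_anns=False,
                            show_panoptic=True)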

You can see that different vehicle instances of the same category are displayed in unique colors. Similarly, you can play with filter_lidarseg_labels and show_lidarseg_legend=True to show panoptic labels for certain thing and stuff categories together with the category legend; note that these two arguments are shared between the lidarseg and panoptic datasets. Only legends for stuff categories will be displayed, since thing instances of the same category have different colors.
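
A sketch (the class indices are illustrative; in v1.0, 17, 22 and 23 are thing classes and 24 is a stuff class):

    nusc.render_sample_data(sample_data_token,
                            with_anns=False,
                            show_panoptic=True,
                            filter_lidarseg_labels=[17, 22, 23, 24],
                            show_lidarseg_legend=True)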

Render lidarseg/panoptic labels in image

If you wanted to superimpose the pointcloud onto the corresponding image from a camera, you can use render_pointcloud_in_image just as you would with the original nuScenes devkit, but set show_lidarseg=True (remember to set render_intensity=False). Similar to render_sample_data, you can filter to see only certain classes using filter_lidarseg_labels. And you can use show_lidarseg_legend to display a legend in the rendering.
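
For example (the channels chosen are illustrative):

    nusc.render_pointcloud_in_image(my_sample['token'],
                                    pointsensor_channel='LIDAR_TOP',
                                    camera_channel='CAM_BACK',
                                    render_intensity=False,
                                    show_lidarseg=True,
                                    filter_lidarseg_labels=[22, 23],
                                    show_lidarseg_legend=True)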

Again, this function supports show_panoptic=True, in which case panoptic labels are displayed rather than semantic labels. Only legends for stuff categories will be displayed.
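
For example:

    nusc.render_pointcloud_in_image(my_sample['token'],
                                    pointsensor_channel='LIDAR_TOP',
                                    camera_channel='CAM_BACK',
                                    render_intensity=False,
                                    show_panoptic=True,
                                    show_lidarseg_legend=True)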

Render sample (i.e. lidar, radar and all cameras)

Of course, like in the original nuScenes devkit, you can render all the sensors at once with render_sample. In this extended nuScenes devkit, you can set show_lidarseg=True to see the lidarseg labels. Similar to the above methods, you can use filter_lidarseg_labels to display only the classes you wish to see.
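
For example:

    nusc.render_sample(my_sample['token'],
                       show_lidarseg=True,
                       filter_lidarseg_labels=[22, 23])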

To show panoptic labels with render_sample, set show_panoptic=True:
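
    nusc.render_sample(my_sample['token'],
                       show_panoptic=True)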

Render a scene for a given camera sensor with lidarseg/panoptic labels

You can also render an entire scene with the lidarseg labels for a camera of your choosing (the filter_lidarseg_labels argument can be used here as well).

Let's pick a scene first:
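
    # Any scene works; here we simply take the first one.
    my_scene = nusc.scene[0]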

We then pass the scene token into render_scene_channel_lidarseg, indicating that we are only interested in construction vehicles and man-made objects (here, we set verbose=True to produce a window which allows us to see the frames as they are being rendered).

In addition, you can use dpi (to adjust the size of the lidar points) and imsize (to adjust the size of the rendered image) to tune the aesthetics of the renderings to your liking.

(Note: the following code is commented out as it crashes in Jupyter notebooks.)
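
A sketch of such a call (the camera channel and class indices are illustrative; in v1.0, vehicle.construction is 18 and static.manmade is 28):

    # nusc.render_scene_channel_lidarseg(my_scene['token'],
    #                                    'CAM_BACK',
    #                                    filter_lidarseg_labels=[18, 28],
    #                                    verbose=True,
    #                                    dpi=100,
    #                                    imsize=(1280, 720))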

This function also works for panoptic labels; simply add show_panoptic=True:
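
    # nusc.render_scene_channel_lidarseg(my_scene['token'],
    #                                    'CAM_BACK',
    #                                    filter_lidarseg_labels=[18, 28],
    #                                    verbose=True,
    #                                    dpi=100,
    #                                    imsize=(1280, 720),
    #                                    show_panoptic=True)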

To save the renderings, you can pass a path to the folder you want to save the images to via the out_folder argument, and pass either 'video' or 'image' to render_mode.

(Note: the following code is commented out as it crashes in Jupyter notebooks.)
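
A sketch (the output folder is illustrative):

    # import os
    # nusc.render_scene_channel_lidarseg(my_scene['token'],
    #                                    'CAM_BACK',
    #                                    filter_lidarseg_labels=[18, 28],
    #                                    verbose=True,
    #                                    dpi=100,
    #                                    imsize=(1280, 720),
    #                                    render_mode='video',
    #                                    out_folder=os.path.expanduser('~/Desktop/my_folder'))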

When render_mode='image', only frames which contain points (after the filter has been applied) will be saved as images.

The same function can also be used to render a scene channel with panoptic labels.

Render a scene for all cameras with lidarseg/panoptic labels

You can also render the entire scene for all cameras at once with the lidarseg labels as a video. Let's say in this case, we are interested in points belonging to driveable surfaces and cars.

(Note: the following code is commented out as it crashes in Jupyter notebooks.)
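
A sketch, assuming the scene-level method is render_scene_lidarseg (in v1.0, vehicle.car is 17 and flat.driveable_surface is 24; the output path is illustrative):

    # import os
    # nusc.render_scene_lidarseg(my_scene['token'],
    #                            filter_lidarseg_labels=[17, 24],
    #                            verbose=True,
    #                            dpi=100,
    #                            out_path=os.path.expanduser('~/Desktop/my_rendered_scene.avi'))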

Again, we can render the scene with panoptic labels; a sketch under the same assumptions as above:
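
    # nusc.render_scene_lidarseg(my_scene['token'],
    #                            filter_lidarseg_labels=[17, 24],
    #                            verbose=True,
    #                            dpi=100,
    #                            show_panoptic=True,  # assumed to be shared with the other render methods
    #                            out_path=os.path.expanduser('~/Desktop/my_rendered_scene.avi'))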

Visualizing LiDAR segmentation predictions

In all the above functions, the labels of the LiDAR pointcloud which have been rendered are the ground truth. If you have trained a model to segment LiDAR pointclouds and have run it on the nuScenes-lidarseg dataset, you can visualize your model's predictions with nuScenes-lidarseg as well!

Each of your .bin files should be a numpy.uint8 array; as a tip, you can save your predictions as follows:

    np.array(predictions).astype(np.uint8).tofile(bin_file_out)

Then you simply need to pass the path to the .bin file containing your predictions for the given sample to lidarseg_preds_bin_path of the single-sample functions above (render_sample_data, render_pointcloud_in_image and render_sample):

For example, let's assume the predictions for my_sample are stored at /data/sets/nuscenes/lidarseg/v1.0-mini with the filename format <lidar_sample_data_token>_lidarseg.bin:
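
A sketch:

    import os

    sample_data_token = my_sample['data']['LIDAR_TOP']
    my_predictions_bin_file = os.path.join('/data/sets/nuscenes/lidarseg/v1.0-mini',
                                           sample_data_token + '_lidarseg.bin')

    nusc.render_pointcloud_in_image(my_sample['token'],
                                    pointsensor_channel='LIDAR_TOP',
                                    camera_channel='CAM_BACK',
                                    render_intensity=False,
                                    show_lidarseg=True,
                                    filter_lidarseg_labels=[22, 23],
                                    show_lidarseg_legend=True,
                                    lidarseg_preds_bin_path=my_predictions_bin_file)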

For the functions that render an entire scene, such as render_scene_channel_lidarseg, you will need to pass the path to the folder which contains the .bin files for each sample in a scene to lidarseg_preds_folder:

Pay special attention: each set of predictions in the folder must be a .bin file named <lidar_sample_data_token>_lidarseg.bin.

(Note: the following code is commented out as it crashes in Jupyter notebooks.)
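
A sketch:

    # my_folder_of_predictions = '/data/sets/nuscenes/lidarseg/v1.0-mini'
    #
    # nusc.render_scene_channel_lidarseg(my_scene['token'],
    #                                    'CAM_BACK',
    #                                    filter_lidarseg_labels=[17, 24],
    #                                    verbose=True,
    #                                    imsize=(1280, 720),
    #                                    lidarseg_preds_folder=my_folder_of_predictions)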

Visualize LiDAR panoptic predictions

Similarly, panoptic prediction results can be rendered as well! Each of your .npz files should be a compressed numpy.uint16 array; you can save your predictions as follows:

    np.savez_compressed(npz_file_out, data=predictions.astype(np.uint16))

Then you simply need to pass the path to the .npz file containing your predictions for the given sample to lidarseg_preds_bin_path (note that the argument name is unchanged, as these arguments are shared with the nuScenes-lidarseg predictions) for the same functions:

For example, let's assume the predictions for my_sample are stored at /data/sets/nuscenes/panoptic/v1.0-mini with the filename format <lidar_sample_data_token>_panoptic.npz:
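
A sketch:

    import os

    sample_data_token = my_sample['data']['LIDAR_TOP']
    my_predictions_npz_file = os.path.join('/data/sets/nuscenes/panoptic/v1.0-mini',
                                           sample_data_token + '_panoptic.npz')

    nusc.render_pointcloud_in_image(my_sample['token'],
                                    pointsensor_channel='LIDAR_TOP',
                                    camera_channel='CAM_BACK',
                                    render_intensity=False,
                                    show_panoptic=True,
                                    show_lidarseg_legend=True,
                                    lidarseg_preds_bin_path=my_predictions_npz_file)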

For the functions that render an entire scene, such as render_scene_channel_lidarseg, you will need to pass the path to the folder which contains the .npz files for each sample in a scene to lidarseg_preds_folder:

Pay special attention: each set of predictions in the folder must be a .npz file named <lidar_sample_data_token>_panoptic.npz.

(Note: the following code is commented out as it crashes in Jupyter notebooks.)
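
A sketch (show_panoptic is assumed to be accepted here as in the other render methods):

    # my_folder_of_predictions = '/data/sets/nuscenes/panoptic/v1.0-mini'
    #
    # nusc.render_scene_channel_lidarseg(my_scene['token'],
    #                                    'CAM_BACK',
    #                                    filter_lidarseg_labels=[17, 24],
    #                                    verbose=True,
    #                                    imsize=(1280, 720),
    #                                    show_panoptic=True,
    #                                    lidarseg_preds_folder=my_folder_of_predictions)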

Conclusion

And this brings us to the end of the tutorial for nuScenes-lidarseg and nuScenes-panoptic. Enjoy!