Formula Student Driverless Simulator on Docker

Aditya NG
2 min read · Feb 19, 2021

[Video: a clip from the live stream]

With autonomy and robotics attracting an increasing number of students, the demand for suitable simulation software, and for hardware that can run it, has also gone up. Many talented students took part in FSOnline 2020: the driverless event was streamed online, and teams ran their software to drive a car around a track marked out by cones.

Recently, our team began working on a driverless software and hardware stack designed to be inexpensive to manufacture and maintain, by making vision the primary input and optimizing the compute requirements. To test our ideas before manufacturing anything, we turned to simulation, and we chose the Formula Student Driverless Simulator (FSDS), which provides realistic vision input and was used in the FSOnline driverless event.

We needed to run our code on a server where we were not guaranteed a persistent environment, so we put FSDS inside a Docker container. This gives us a single command that quickly brings the project up on any Linux machine with an Nvidia GPU. To view our source code and a Getting Started guide, refer to:

Project : https://github.com/AdityaNG/Formula-Student-Driverless-Simulator/tree/docker/docker
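
If you’d rather build the image yourself than pull our prebuilt one from Docker Hub, something like the following should work (a sketch; we’re assuming the Dockerfile sits in the docker/ directory of the docker branch, which is where the link above points):

# Clone the docker branch and build the image locally
git clone -b docker https://github.com/AdityaNG/Formula-Student-Driverless-Simulator.git
cd Formula-Student-Driverless-Simulator/docker
docker build -t fsds:local .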

If you’d like to run our project, use the following command:

docker run --runtime=nvidia -it -d --gpus all --net=host \
    -e DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e NVIDIA_DRIVER_CAPABILITIES=all \
    --env DISPLAY_COOKIE="(DISPLAY_COOKIE)" \
    adityang5/fsds:v1 \
    /bin/sh /fsds/run.sh
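
The -d flag starts the container detached, so nothing shows up in your terminal. A quick way to confirm the container is up and to follow its output:

docker ps    # the adityang5/fsds:v1 container should be listed
docker logs -f $(docker ps -q --filter ancestor=adityang5/fsds:v1)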

You’ll need to replace (DISPLAY_COOKIE) with your display cookie, which you can get from xauth; it should look like the following:

xauth list
username/machine:0  MIT-MAGIC-COOKIE-1  [32-character hex string]
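
To avoid copying that line by hand, you can capture the first entry for your display in a shell variable (a sketch; we’re assuming run.sh passes the full line to xauth add inside the container):

# Capture the first xauth entry for the current display
DISPLAY_COOKIE="$(xauth list "$DISPLAY" | head -n 1)"

and then pass --env DISPLAY_COOKIE="$DISPLAY_COOKIE" in the docker run command above.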

You will also have to install the Nvidia container runtime for Docker; refer to Nvidia’s Container Toolkit user guide for more information.
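
For reference, on Ubuntu the setup at the time of writing looked roughly like the following; treat it as a sketch and follow Nvidia’s current guide, since the package names and repositories have changed over time:

# Add Nvidia's package repository and install the runtime
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-docker2
sudo systemctl restart docker

# Sanity check: the GPU should be visible from inside a container
docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi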

To read more about the hardware acceleration work that made it possible to run the simulation in real time, do check out our article “Go Fast or Go Home ! — Thinking Parallelly”.

To read more about how we used hardware acceleration to compare LiDAR and vision for the task of autonomous driving, do check out our article “Vision Based Depth for Autonomous Machines”.
