Before you launch Smilei, write a namelist file containing all the information describing your simulation (grid shape, particles, lasers, diagnostics, etc.).

You can also start from an example provided in the benchmarks directory.
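For instance, copying one of the provided benchmarks gives a working starting point (the benchmark file name below is hypothetical; list the directory to see which examples are actually available):

```shell
# List the examples shipped with the code:
ls ~/Smilei/benchmarks/
# Copy one of them as a starting point for your own namelist
# (the file name here is only an illustration):
cp ~/Smilei/benchmarks/tst1d_00_em_propagation.py ~/my_namelist.py
```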

The smilei executable

Compiling Smilei creates an executable file smilei in the source directory.

smilei arg1 arg2 arg3 ...

The command-line arguments arg1, arg2, arg3 (etc.) can be:

  • the path to a namelist

  • any Python instruction that you want to execute during the reading of the namelist.

The simplest example, to run your namelist, is

./smilei my_namelist.py

You may also add an instruction to be appended at the end of the namelist:

./smilei my_namelist.py "Main.print_every=10"

In addition, you will generally use the mpirun or mpiexec command to run Smilei on several MPI processes:

mpirun -n 4 ./smilei my_namelist.py "Main.print_every=10"

If you want to run several OpenMP threads per MPI process, you usually have to set the OMP_NUM_THREADS environment variable to the desired number of threads before calling mpirun:

export OMP_NUM_THREADS=8
mpirun -n 4 ./smilei my_namelist.py

When running Smilei, the output log reminds you how many MPI processes and OpenMP threads your simulation is using.
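As a sketch of the launch sequence above, you can set and sanity-check the intended parallel layout before calling mpirun (the process and thread counts here are only example values):

```shell
# Intended parallel layout (example values):
NPROCS=4                    # MPI processes, passed to mpirun -n
export OMP_NUM_THREADS=2    # OpenMP threads per MPI process
echo "Requesting $NPROCS MPI processes x $OMP_NUM_THREADS threads = $(( NPROCS * OMP_NUM_THREADS )) cores"
# Then launch, for instance:
#   mpirun -n $NPROCS ./smilei my_namelist.py
```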

Running in test mode

A second executable, smilei_test, is available (after the usual compilation) to run in test mode:

./smilei_test my_namelist.py
This test mode does the same initialization as the normal mode, except it only loads the first patch of the full simulation. After initialization, the test mode exits so that the PIC loop is not computed.

This mode may be used to check the consistency of the namelist, and to make sure simple errors will not occur. It does not check all possible errors, but it runs fast.

Running in test mode requires running on a single MPI process. However, it is possible to indicate the partition of MPI processes and OpenMP threads intended for the actual simulation. For instance, to test a namelist intended to run on 1024 MPI processes, each hosting 12 OpenMP threads, use the following syntax:

./smilei_test 1024 12 my_namelist.py

Directory management

Let us assume you have written your namelist and placed it in your home directory. We also assume that the Smilei directory is there too, so that the smilei executable is located in ~/Smilei/.

Since Smilei generally writes all its results to the current directory, it is recommended to create a new directory to store them. For instance:

$ mkdir ~/my_simulation                          # New directory to store results
$ cp ~/my_namelist.py ~/my_simulation            # Copies the namelist there
$ cd ~/my_simulation                             # Goes there
$ mpirun -n 4 ~/Smilei/smilei my_namelist.py     # Runs with 4 MPI processes

Using the provided script

For simple cases such as the previous one, you may use the smilei.sh script provided in the Smilei directory. You only have to run

$ ./smilei.sh 4 my_namelist.py

where the number 4 means that the code will run with 4 MPI processes. A directory containing all the results is automatically created next to your namelist.

Running on large clusters

We do not provide instructions for running on supercomputers yet. Please refer to your system administrators.

Running on GPU-equipped nodes

On a supercomputer equipped with GPUs, it is necessary to use a binding script. Here are two examples:

With Nvidia GPUs: srun  ./smilei

With AMD GPUs using cray on Adastra: srun --cpu-bind=none --mem-bind=none --mpi=cray_shasta --kill-on-bad-exit=1 -- ./bind ./smilei

Since the binding scripts themselves depend entirely on the node architecture, please contact your admin support team.

A binding script for Adastra, together with an example Slurm script, can be found here; it can be used as a template for other AMD-GPU-based supercomputers and clusters.

Be aware that GPU support is still in development and not all features are currently available. Please refer to the list of currently supported features.


Debugging

In case of problems, the code can be compiled with additional debugging flags (the usual -g and -O0) and internal checks, by compiling it with

make config=debug

Compiling the whole code with this command makes it very slow to run. To check only a particular file for errors, first compile the code normally with make, then modify the file, and recompile in debug mode: only the modified file is then rebuilt with the debugging flags.
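The single-file workflow described above can be sketched as follows (the file path is only an example):

```shell
make                     # 1) full build in normal (optimized) mode
# ... edit the file you want to check, e.g. src/Species/Species.cpp ...
make config=debug        # 2) only the modified file is recompiled
                         #    with -g -O0 and internal checks
```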

In debug mode, these C++ macros are activated:

  • DEBUG("some text" [<< other streamable])

  • HEREIAM("some text" [<< other streamable])

Known issues

  • OpenMPI 2.* often causes unstable behavior in Smilei. For instance, with OpenMPI 2.1, the vader protocol seems to interfere with Smilei’s memory management and communications. We therefore recommend disabling this protocol when running mpirun, as follows:

    $ mpirun --mca btl ^vader -n 4 ~/Smilei/smilei my_namelist   # Disable vader