Best Visualization:

Ab-initio Simulation of Electron Flow through a Si Nanowire

Mauro Calderara, Sascha Brueck, Andreas Pedersen, Mohammad Hossein Bani-Hashemian, Joost VandeVondele, Mathieu Luisier and Jean M. Favre

To continue Moore’s scaling law beyond 2020, new classes of devices will have to emerge that exhibit improved switching performance compared to the currently manufactured FinFET technology. Horizontal wrap-gate nanowire transistors belong to this category of promising future logic switches. The aggressive scaling of the past decades has pushed the dimensions of transistors down to the nanoscale, where each individual atom has a strong influence on the “current vs. voltage” characteristics of the underlying logic components. When an external voltage is applied between the two extremities of the nanowire, an electrical current starts to flow through it. The electron trajectories depend on the atomic configuration and on the bias-induced potential profile inside the device structure. While the flow of electrons remains homogeneous close to the source region, it splits into four branches when reaching the opposite side. This unexpected effect was revealed thanks to accurate quantum transport simulations [1] and a proper visualization of the current routes.

A visualization application was built on top of VTK's latest release, which includes the new GPU-based rendering techniques we needed for an efficient and interactive exploration of the nano-devices. Electric field lines were constructed within a Delaunay tetrahedralization of the volume containing all atoms: the electric field vector is integrated and static field lines are built. To portray electrons moving along these lines, we used a new feature of VTK's point sprites (the Point Gaussian Mapper): an opacity array can modulate the intensity of the sprites, and we use forward-moving sinusoidal functions to create the impression of motion. The point sprites, coupled with geometric impostors for the atoms and bonds, provide an efficient, artifact-free rendering.
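
The sketch below (plain VTK C++) illustrates the pipeline described above; it is not the authors' application, and the input names atomCloud, seeds and linePoints are hypothetical. It only shows the idea of tracing field lines through a Delaunay tetrahedralization and modulating point-sprite opacity per frame.

    #include <vtkSmartPointer.h>
    #include <vtkPolyData.h>
    #include <vtkDelaunay3D.h>
    #include <vtkStreamTracer.h>
    #include <vtkPointGaussianMapper.h>
    #include <vtkActor.h>

    // Build the moving-electron actor: field lines are integrated through a
    // Delaunay tetrahedralization of the atomistic volume, and the points
    // sampled along them are drawn as Gaussian sprites whose per-point
    // "opacity" array is refilled each frame with a forward-moving sinusoid.
    vtkSmartPointer<vtkActor> MakeCurrentActor(
        vtkPolyData* atomCloud,   // atom positions with the E-field attached as a vector array
        vtkPolyData* seeds,       // seed points near the source contact
        vtkPolyData* linePoints)  // field-line points resampled for the sprites
    {
      auto delaunay = vtkSmartPointer<vtkDelaunay3D>::New();
      delaunay->SetInputData(atomCloud);               // tetrahedralize the volume

      auto tracer = vtkSmartPointer<vtkStreamTracer>::New();
      tracer->SetInputConnection(delaunay->GetOutputPort());
      tracer->SetSourceData(seeds);                    // integrate the electric vector field
      tracer->SetIntegratorTypeToRungeKutta4();
      tracer->Update();                                // static field lines; linePoints is
                                                       // assumed to resample this output

      // Per frame, opacity[i] = 0.5 * (1 + sin(k * arclength[i] - omega * t))
      // makes bright spots travel along the otherwise static field lines.
      auto sprites = vtkSmartPointer<vtkPointGaussianMapper>::New();
      sprites->SetInputData(linePoints);
      sprites->SetScaleFactor(0.05);
      sprites->EmissiveOn();
      sprites->SetOpacityArray("opacity");             // modulate sprite intensity per point

      auto actor = vtkSmartPointer<vtkActor>::New();
      actor->SetMapper(sprites);
      return actor;
    }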

[1] M. Calderara, S. Brück, A. Pedersen, M. H. Bani-Hashemian, J. VandeVondele, and M. Luisier, “Pushing Back the Limit of Ab-initio Quantum Transport Simulations on Hybrid Supercomputers”, SC '15: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, Article No. 3, doi:10.1145/2807591.2807673 (2015)

Best Student Visualization:

In situ, steerable, hardware-independent and data-structure agnostic visualization with ISAAC

Alexander Matthes, Axel Huebl, René Widera, Sebastian Grottel, Stefan Gumhold and Michael Bussmann

We showcase the C++ template library ISAAC [1,2] for in situ visualization of simulations or other high-rate data sources running distributed on modern HPC systems. As most in situ visualization solutions suffer from the problem that the simulation data needs to be converted into visualization-specific data structures, ISAAC implements a data-structure-agnostic raycasting algorithm using C++ templates and C++ metaprogramming. With this approach ISAAC is not only able to visualize nearly arbitrary simulation data without the need to deep-copy or convert it beforehand, but is also capable of using the very same computation device as the simulation itself.
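
As a rough illustration of this idea (hypothetical C++ types and names, not ISAAC's actual interface), the simulation can expose one of its fields through a small adapter that reads values in place, and the raycaster is instantiated with that adapter at compile time:

    #include <cstddef>

    // Hypothetical adapter: lives next to the simulation's own arrays and
    // describes how to read the electron density in place, without any copy
    // into a visualization-specific data structure.
    struct ElectronDensitySource
    {
      const float* data;          // pointer into the simulation's (device) memory
      std::size_t nx, ny, nz;     // local, per-device grid extent

      float operator()(std::size_t x, std::size_t y, std::size_t z) const
      {
        return data[(z * ny + y) * nx + x];
      }
    };

    // The renderer is a template over the source type: each simulation field
    // gets its own fully inlined sampling path, resolved at compile time, and
    // can therefore be compiled for the same device the simulation runs on.
    template <typename TSource>
    float sampleAlongRay(const TSource& src, std::size_t x, std::size_t y, std::size_t z)
    {
      return src(x, y, z);        // stand-in for the per-step raycasting work
    }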

Using the same computation device as the simulation usually limits the scope of usable hardware, as modern many-core devices require programming models optimized for the specific hardware, e.g. CUDA for NVIDIA devices, to achieve optimum performance. In order to circumvent this problem, ISAAC is based on the abstract kernel interface library Alpaka [3,4], which defines a redundant parallel hierarchy model for many-core architectures that serves as a front end to underlying models such as CUDA, OpenMP or Threading Building Blocks. Using this approach, the ISAAC software renderer can run in situ on almost every platform currently available.
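
The following conceptual sketch (hypothetical names, not Alpaka's actual API) shows the single-source idea: the kernel body is written once as a templated functor, and the back end chosen at compile time decides how the index space is mapped to hardware threads.

    #include <cstddef>
    #include <vector>

    // The kernel body is written once and is backend-agnostic.
    struct ShadeSamples
    {
      template <typename TIndex>
      void operator()(TIndex i, const float* density, float* rgba) const
      {
        rgba[i] = density[i] * 0.5f;        // stand-in for the real shading work
      }
    };

    // One of several interchangeable back ends; an OpenMP, TBB or CUDA back
    // end would map the same index space to its own thread hierarchy instead.
    struct SerialBackend
    {
      template <typename TKernel, typename... TArgs>
      static void launch(std::size_t n, TKernel kernel, TArgs... args)
      {
        for (std::size_t i = 0; i < n; ++i)
          kernel(i, args...);
      }
    };

    int main()
    {
      std::vector<float> density(1024, 1.0f), rgba(1024, 0.0f);
      SerialBackend::launch(density.size(), ShadeSamples{}, density.data(), rgba.data());
    }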

Not all simulation data is perfectly suited for direct visualization and sometimes requires transformation. ISAAC therefore introduces so-called Functor Chains: very simple precompiled functions, selectable at runtime, that perform local domain transformations of the original simulation data before it is streamed to the raycasting algorithm.
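
A minimal sketch of the concept, using plain function pointers in place of ISAAC's precompiled template machinery (all names are hypothetical):

    #include <cmath>
    #include <vector>

    using Functor = float (*)(float);   // one precompiled local transformation

    float pass(float v)   { return v; }
    float absval(float v) { return std::fabs(v); }
    float logmap(float v) { return std::log1p(std::fabs(v)); }  // compress dynamic range

    // The chain is nothing more than the runtime-selected order of the
    // precompiled functors, applied per sample before raycasting.
    float applyChain(const std::vector<Functor>& chain, float v)
    {
      for (Functor f : chain)
        v = f(v);
      return v;
    }

    // Example: visualize |E| on a logarithmic scale without touching the
    // stored field:  std::vector<Functor> chain = {absval, logmap};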

ISAAC is capable of scaling up to petascale systems using the IceT library. It is not intended for highly specialized visualization, but instead renders the classical representations as glowing gas or as isosurfaces. Aside from the obligatory transfer functions for classification, ISAAC also supports an arbitrary number of freely placeable clipping planes, useful for a deeper look into the simulated volumes.
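
For the glowing-gas representation, classification means pushing each sample through a transfer function and compositing it along the ray. A generic emission/absorption sketch of this step (not ISAAC's code) could look as follows:

    struct RGBA { float r, g, b, a; };

    // Hypothetical 1D transfer function: scalar sample in [0,1] -> colour and opacity.
    RGBA transferFunction(float v)
    {
      return { v, v * v, 1.0f - v, 0.05f + 0.5f * v };
    }

    // Front-to-back emission/absorption compositing of one ray with n samples.
    RGBA compositeRay(const float* samples, int n)
    {
      RGBA out{ 0.0f, 0.0f, 0.0f, 0.0f };
      for (int i = 0; i < n && out.a < 0.99f; ++i)   // early ray termination
      {
        RGBA s = transferFunction(samples[i]);
        float w = (1.0f - out.a) * s.a;              // remaining transparency
        out.r += w * s.r;
        out.g += w * s.g;
        out.b += w * s.b;
        out.a += w;
      }
      return out;
    }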

ISAAC includes an interface for simulations to send arbitrary live metadata along with the live preview and to receive live steering data. The whole communication layer of ISAAC is intentionally based only on open and widely used standards such as WebSockets, RTP streams and, in particular, the open-standard JSON format.
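
Purely as an illustration of the kind of messages such a channel could carry (all field names here are invented, not ISAAC's actual protocol), a metadata update from the simulation and a steering reply from a client might look like this:

    // Metadata attached by the simulation to each rendered frame (invented fields).
    const char* metadataFromSimulation = R"({
      "type": "metadata",
      "time_step": 1400,
      "particles": 1.2e9
    })";

    // Steering command sent back by a client (invented fields).
    const char* steeringFromClient = R"({
      "type": "steering",
      "rotation": [0.0, 15.0, 0.0],
      "functor_chain": ["abs", "log"]
    })";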

ISAAC provides a server running on the head or login node of the HPC system, which creates video streams from the visualization and forwards them, together with the metadata, to a freely selectable number of clients. The video stream created by the server can be received by arbitrary clients such as VLC or even streaming platforms like Twitch. Furthermore, each client can steer the simulation. ISAAC itself provides a platform-independent HTML5 client, which can easily be adjusted to the needs of specific simulations.

Since every part of ISAAC is open source and makes use of the openly documented JSON communication protocol, it is easy to implement new clients or to extend the visualization core itself with simulation-specific features. ISAAC is designed to be as language-, framework-, data-format- and platform-agnostic as possible.

In order to demonstrate the real-time capabilities of ISAAC, we will showcase a live visualization of the GPU-accelerated plasma simulation PIConGPU [5,6]. We show that we can achieve more than ten frames per second using 64 GPUs on the Hypnos cluster at Helmholtz-Zentrum Dresden-Rossendorf, running both the simulation and the visualization simultaneously.

[1] A. Matthes, In situ Visualisierung und Streaming von Plasmasimulationsdaten (in situ visualization and streaming of plasma simulation data), Technical University Dresden (2016)

[2] ISAAC GitHub repository

[3] E. Zenker et al., Alpaka - An Abstraction Library for Parallel Kernel Acceleration, arXiv preprint (2016)

[4] Alpaka GitHub repository

[5] M. Bussmann et al., Radiative Signatures of the Relativistic Kelvin-Helmholtz Instability, Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis, SC '13, Article No. 5 (2013)

[6] PIConGPU GitHub repository

