IFF588 Möllenstedt

The node IFF588 (IP: 134.94.162.235) is a data analysis server with access to the PICO & Holo small data storage. Its primary purpose is data analysis, which therefore has priority over any simulations. The two separate parts of the data storage are mapped as /storage/holo and /storage/pico for the Holo and PICO data, respectively.

The server is named in honor of Gottfried Möllenstedt, a pioneer of off-axis electron holography.

Technical data

  • Operating system: CentOS Linux
  • Processor: Intel Xeon W-2195 (18 cores, 36 logical processors)
  • Cache memory: L1 1.125 MB, L2 18 MB, L3 24.75 MB
  • RAM: 256 GB @ 2666 MHz
  • GPU: 2x GeForce GTX 1080 Ti, 11 GB memory each, compute capability 6.1 (Pascal)
  • Storage: 2x 500 GB SATA SSD, 2x 2 TB PCIe SSD, 2x 10 TB HDD.

Rules of usage

Data evaluation has priority over simulations. Simulations should be started with the lowest priority as follows:

nice -n 19 program_name

The priority of an already running process can be changed as follows:

renice -n nice_value -p process_id
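
For example, a simulation started from a hypothetical script run_simulation.py can be launched at the lowest priority, and an already running process with a (hypothetical) process ID of 12345 can be demoted, like this:

nice -n 19 python run_simulation.py

renice -n 19 -p 12345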

For long-running simulations (more than a few minutes), we have a calendar where you can "book" time. This is done to prevent CPU and GPU over-subscription, which causes an overall slowdown. Please contact d.weber@fz-juelich.de if you need access to the calendar.

If you observe that another user's simulation is blocking computational power such that you cannot handle your experimental data analysis, please report it to the administrators immediately. Simulation tasks may be cancelled in such cases without further notice.

Access and usage

Access via SSH

LDAP users can access the machine via the SSH protocol using the following command:

ssh [username]@moellenstedt.iff.kfa-juelich.de

where [username] is your LDAP user name.
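For example, for a (hypothetical) LDAP user name jdoe:

ssh jdoe@moellenstedt.iff.kfa-juelich.de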

Using local and network storage

There are several drives physically mounted on the server:

  • An SSD RAID0 array for system files, mounted at the root directory (/).
  • An HDD RAID0 array (~19 TB) mounted at /data. The directory /data/user should be used to store user data.
  • A PCIe SSD RAID0 array (~4 TB) for fast I/O operations, mounted at /cachedata. User data can be temporarily cached in /cachedata/users/ to enable fast read/write access during data analysis. NOTE that no data should be stored in this location permanently. If the drive is full, users will be contacted with a request to move their data away. A staging sketch is shown after this list.
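
A minimal staging sketch, assuming a hypothetical user name USERNAME and a hypothetical dataset directory my_dataset under /data/user:

# stage the dataset onto the fast PCIe SSD cache
mkdir -p /cachedata/users/USERNAME
rsync -a /data/user/USERNAME/my_dataset /cachedata/users/USERNAME/

# ... run the data analysis against /cachedata/users/USERNAME/my_dataset ...

# remove the cached copy once the analysis is finished
rm -rf /cachedata/users/USERNAME/my_dataset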

In addition, the following network destinations are accessible:

  • The PICO & Holo data storage, mapped as /storage/holo and /storage/pico (see above).
  • The user home folders on iffpcsrv, mounted as iffpcsrv/USERNAME (see "Transferring data" below).

Using Jupyter notebook server

https://moellenstedt.iff.kfa-juelich.de

Use your LDAP credentials to connect to the Jupyter notebook server.

Using LiberTEM with UI

See the main LiberTEM wiki page for details.

Transferring data

The home folder of each user is mounted as iffpcsrv/USERNAME. This folder can be used to transfer small amounts of data via iffpcsrv.

Alternatively, direct connections to other machines can be established by mounting remote drives, as sketched below.
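
A minimal sketch of mounting a remote directory via sshfs, assuming sshfs is available on the server; the host remote.example.org and all paths are hypothetical placeholders:

# mount a directory from a remote machine into the local home directory
mkdir -p ~/remote_data
sshfs username@remote.example.org:/path/to/data ~/remote_data

# ... transfer or analyze the files under ~/remote_data ...

# unmount when finished
fusermount -u ~/remote_data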

Using the LiberTEM API with Jupyter or JupyterHub

See the LiberTEM documentation!

Administration

The system is administered by Alexander Clausen and Markus Consoir.

Software Tools

Current Troubles and Issues

  • none.