Hasso-Plattner-Institut Potsdam, Operating Systems and Middleware Group, University of Potsdam, Germany

Gentoo Linux on Intel SCC

We have created a Gentoo Linux suitable for running on the Intel SCC. You can download it here: http://www.dcl.hpi.uni-potsdam.de/research/scc/scc_gentoo_20101117.tar.bz2.

Installation

Un-tar the archive, then copy the contents of the shared folder to your own /shared file system. The package contains the following:
File                   Notes
/shared/gentoo         Gentoo root filesystem
/shared/to_gentoo.sh   Script to enter Gentoo
/shared/set_nat.sh     Script to configure the MCPC as a NAT router
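The installation steps above can be sketched as a short shell function. This is an illustration, not part of the package; the archive name matches the download above, and the /shared target path follows the standard MCPC setup:

```shell
# Sketch: unpack the Gentoo archive and copy its shared folder to /shared.
# Adjust the paths if your MCPC uses a different shared file system location.
install_gentoo() {
    archive="$1"                 # e.g. scc_gentoo_20101117.tar.bz2
    tar xjf "$archive"           # un-tar the downloaded archive
    cp -a shared/. /shared/      # copy the contents to your /shared file system
}

# install_gentoo scc_gentoo_20101117.tar.bz2   # run on the MCPC
```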

Usage Overview

This Linux has been built such that it can be used on the SCC as well as on the MCPC itself. In both cases, we rely on an already running Linux kernel, then use chroot to exchange the user mode environment. This works fine on both the default SCC Linux (2.6.16-mcemu) as well as the MCPC Ubuntu kernels (2.6.32 and later, compiled for x86_64).

To enter Gentoo, execute /shared/to_gentoo.sh. This will mount /proc and /dev into the Gentoo root, then open an interactive shell. If you are running on an i586 kernel (i.e., SCC Linux, where "uname -m" returns "i586"), it will also start some daemons.
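The chroot mechanics behind to_gentoo.sh can be sketched as follows. This is a simplified illustration rather than the actual script; the Gentoo root path and the login shell are assumptions:

```shell
# Simplified sketch of what to_gentoo.sh does: bind the running kernel's
# /proc and /dev into the Gentoo root, then open an interactive shell there.
enter_gentoo() {
    root="${1:-/shared/gentoo}"          # assumed location of the Gentoo root
    mount -t proc proc "$root/proc"      # kernel process information
    mount --bind /dev "$root/dev"        # device nodes of the running kernel
    chroot "$root" /bin/bash -l          # login shell in the Gentoo user land
}

# enter_gentoo   # run as root on an SCC core or on the MCPC
```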

From within Gentoo, all the usual Linux tools are available. Please note that Gentoo uses Portage for package management, so you need to invoke emerge instead of aptitude to install or update packages. For more details on emerge, please refer to the Gentoo Handbook, especially Section 1, "A Portage Introduction".

When installing new or updated packages, emerge usually needs to download the respective sources. To give your SCC access to the MCPC's primary network connection, execute /shared/set_nat.sh as root on the MCPC. The script assumes that eth0 is the primary (WAN) connection and that the SCC cores are connected via eth1 (LAN), so it enables NAT routing from eth1 to eth0. If your network setup is different, please change the corresponding defines in the script. (For example, if you are using sccKit 1.3.0 or earlier, your SCC cores are connected via crb0 instead of eth1, because your FPGA won't be able to use the RockyLake board's ethernet ports. In this case, change the definition of the LAN interface to crb0.) If everything is set up correctly, you should be able to run something like ping www.intel.com from the SCC, and emerge should also be able to reach its repositories.
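The NAT routing that set_nat.sh enables can be sketched with iptables. This is a minimal illustration, not the actual script; the interface names follow the assumptions above:

```shell
# Minimal NAT sketch: forward traffic from the SCC cores (LAN) to the
# internet-facing interface (WAN) and masquerade the outgoing packets.
enable_nat() {
    wan="${1:-eth0}"    # primary (WAN) connection of the MCPC
    lan="${2:-eth1}"    # SCC side; crb0 on sccKit 1.3.0 or earlier
    echo 1 > /proc/sys/net/ipv4/ip_forward
    iptables -t nat -A POSTROUTING -o "$wan" -j MASQUERADE
    iptables -A FORWARD -i "$lan" -o "$wan" -j ACCEPT
    iptables -A FORWARD -i "$wan" -o "$lan" \
             -m state --state ESTABLISHED,RELATED -j ACCEPT
}

# enable_nat eth0 eth1   # run as root on the MCPC
```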

X11 Redirection

Our Gentoo comes with X11 installed, so you can use X11 redirection to run your applications on the SCC while displaying the user interface on a different machine (e.g., the MCPC).

We use ssh's built-in X11 redirection, but this requires xauth on the server (the machine running sshd). As this depends (obviously) on X11, which is not delivered in the standard SCC Linux distribution, we chose to start a second ssh server when Gentoo is running. This is done automatically when running to_gentoo.sh on the SCC, so you usually don't need to do anything special. The second sshd listens on port 1234. Therefore, to start an ssh session with X11 redirection, you can simply use the following command on the MCPC:

ssh -Xp 1234 jan@rck00

This works once Gentoo has been started on the respective SCC node (rck00 in the example). It opens an ssh connection that automatically forwards X11 to the client. As a first test, try running xeyes. The user jan (the password, in case you ever need it, is identical to the name :) can use su and sudo freely to perform actions as root.
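Should the second sshd ever not be running, it can be started by hand from within the Gentoo chroot. This is a sketch under the assumption that Gentoo's OpenSSH binary lives at the usual /usr/sbin/sshd path; the port matches the example above:

```shell
# Sketch: start a second OpenSSH daemon on an alternate port, so that ssh
# clients reach the xauth-capable Gentoo environment. On the SCC this is
# done automatically by to_gentoo.sh.
start_gentoo_sshd() {
    port="${1:-1234}"          # port used in the ssh example above
    /usr/sbin/sshd -p "$port"
}

# start_gentoo_sshd   # run as root inside the Gentoo chroot
```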

Ganglia

As the SCC can be seen as a cluster-on-a-chip, we also included support for Ganglia, a well-known monitoring system for clusters. A good setup manual can be found here: http://www.ibm.com/developerworks/wikis/display/WikiPtype/ganglia.

When Gentoo is started on the SCC, it also starts gmond, Ganglia's node monitoring daemon. gmond is configured to send its data to the MCPC (via the rckhost alias), but in order to view the data, you'll also need a Ganglia installation on the MCPC itself. To set this up, install the packages gmetad and ganglia-monitor via Ubuntu's package management, then use our configuration files gmetad.conf and gmond.conf for the daemons on the MCPC (make a backup of the default files in /etc/ganglia first). To be able to view the gathered data, you need to insert something like the following line at the end of your /etc/apache2/apache2.conf:

Alias /ganglia /usr/share/ganglia-webfrontend

After restarting Apache, point your web browser to http://localhost/ganglia to view your cluster status (the MCPC will be considered one node in the "DCL-RockyLake" cluster; feel free to change the name to anything you like :).

For this setup to work, both gmetad and gmond (named ganglia-monitor on Ubuntu) need to be running on the MCPC, and the gmond instances on the SCC need to be started after the MCPC daemons (otherwise, some metrics simply aren't available). To make sure the daemons on the MCPC start at boot time, you can use a command like sudo rcconf --on gmetad,ganglia-monitor.
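Since the start-up order matters, it helps to verify on the MCPC that both daemons are up before booting the SCC cores. A small check using pgrep (available on stock Ubuntu) might look like this; note that ganglia-monitor's actual process name is gmond:

```shell
# Check that the Ganglia daemons are running on the MCPC, so that the gmond
# instances on the SCC can be started afterwards with all metrics available.
check_ganglia_daemons() {
    for d in gmetad gmond; do
        if pgrep -x "$d" >/dev/null; then
            echo "$d is running"
        else
            echo "$d is NOT running"
        fi
    done
}

# check_ganglia_daemons   # run on the MCPC
```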

Contact

Please feel free to contact us (Jan-Arne Sobania, Peter Tröger) if you need help or have suggestions for further updates.