
Accessing Slices

Once your account has been enabled and you have uploaded a public key, you may begin to access your slice(s). If you have not yet created an account, please refer to the Request Access page.

UPDATE — As of 08/31/2009, users may now create their own slices using the PlanetLab Slice Federation Architecture (SFA) code by specifying a set of compute and network resources! Details on exactly how this is done can be found below.

References and Background Information

We recommend consulting the following pages if you require more in-depth information on PlanetLab's Slice Federation Architecture (SFA):

Install PlanetLab SFA code

The first step to accessing your slice is to download and install PlanetLab's implementation of the Slice Federation Architecture (SFA).

You will need Subversion to check out the code from the repository. The SFA software requires the Python programming language, as well as the M2Crypto SSL toolkit for Python.
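On an RPM-based system such as Fedora, these prerequisites can typically be installed in one step. The command below is only a sketch; package names may differ on other distributions:

# Assumes a Fedora/RPM-based system; package names may vary by distribution
$ sudo yum -y install subversion python m2crypto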

Check out and install the SFA code by running the following commands:

% svn -q export http://svn.planet-lab.org/svn/sfa/tags/sfa-0.9-0
% cd sfa-0.9-0
% sudo python setup.py install

sfa-0.9-0 was the latest tag as of this writing. Please visit http://svn.planet-lab.org/browser/sfa/tags/, check whether there is a newer release, and update the commands above as needed. We recommend running the latest tagged release, as the software is evolving quickly; this ensures that you get the latest bug fixes and new features.
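To see which tagged releases are currently available, you can also list the tags directory directly with Subversion. The output below is only illustrative:

# Output is illustrative; newer tags may exist by the time you run this
$ svn list http://svn.planet-lab.org/svn/sfa/tags/
sfa-0.9-0/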

If you are using a Fedora Linux system, you may install an RPM version of the SFA command-line tools. More information about how this is done can be found at http://svn.planet-lab.org/wiki/SFAGuide. Please note that RPM versions may be slightly older than the latest tagged release in Subversion.

Configure PlanetLab SFA code

Create sfi_config

Create a new directory ~/.sfi and create a new text file ~/.sfi/sfi_config in this directory, using the example below to populate this file:

$ mkdir ~/.sfi
$ cat > ~/.sfi/sfi_config
SFI_AUTH="plc.max"
SFI_USER="plc.max.maxpl.<username>"
SFI_REGISTRY="http://max-myplc.dragon.maxgigapop.net:12345/"
SFI_SM="http://max-myplc.dragon.maxgigapop.net:12346/"
[ctrl-D]
$

Replace <username> with the user portion of your e-mail address. For example, if your e-mail address is joe@domain.org, your username would be plc.max.maxpl.joe. Replace any periods ('.') that appear in the user portion of your e-mail address with underscores ('_'). For example, joe.smith@gmail.com would become plc.max.maxpl.joe_smith.
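If you want to double-check the substitution, a one-line shell sketch like the following (not part of the SFA tools) derives the SFI_USER value from an e-mail address:

# Hypothetical helper: derive the SFI_USER value from an e-mail address
$ echo "joe.smith@gmail.com" | cut -d@ -f1 | tr '.' '_' | sed 's/^/plc.max.maxpl./'
plc.max.maxpl.joe_smith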

NOTE: All Human-Readable Names (HRNs) for nodes, slices, users and authorities in our current deployment are prefixed with plc.max.maxpl.

Configure authentication

The planetlab_id_rsa private key file which you generated using ssh-keygen (while following the steps on the RequestAccess page) must be copied into the ~/.sfi/ directory. The full path to this file must be ~/.sfi/<username>.pkey. For example:

$ cp planetlab_id_rsa ~/.sfi/<username>.pkey

Test PlanetLab SFA code functionality

You should now be able to run a command-line utility called sfi.py to view information about compute resources available on the Substrate. For example, the command below will list all compute nodes:

$ sfi.py list --type node plc.max.maxpl
plc.max.maxpl.planetlab4 (node)
plc.max.maxpl.planetlab2 (node)
plc.max.maxpl.planetlab3 (node)
plc.max.maxpl.planetlab5 (node)

You may obtain more detailed information about a record (e.g. the hostname of a particular node) by using the sfi.py show [HRN] command, for example:

$ sfi.py show plc.max.maxpl.planetlab5
gid:
  hrn: plc.max.maxpl.planetlab5
  uuid: 306548836508516109729588157985821644515
last_updated: 1245778372
hrn: plc.max.maxpl.planetlab5
type: node
date_created: 1245778372
node_type: regular
hostname: planetlab5.dragon.maxgigapop.net

Examine your user and slice records

The next step to accessing your slice is to examine your user record to determine the HRN of your slice record. Use the sfi.py show [HRN] command, replacing [HRN] with the value that you have set for SFI_USER in your ~/.sfi/sfi_config file:

$ sfi.py show plc.max.maxpl.joe
gid:
  hrn: plc.max.maxpl.joe
  uuid: 150374423476497122358740116308144214612
last_updated: 1250787500
hrn: plc.max.maxpl.joe
type: user
date_created: 1245706563
last_name: Smith
slices: ['plc.max.maxpl.joe_slice1', 'plc.max.maxpl.joe_slice2']
phone: 123-123-1234
key: plc.max.maxpl.joe#user
first_name: Joe
email: joe@domain.org

You are looking for the slices: line in particular. It shows that Joe has access to two slices: plc.max.maxpl.joe_slice1 and plc.max.maxpl.joe_slice2.

Now that we know the HRN of our slices, we may examine which resources have been allocated to our slice by using sfi.py resources [HRN]:

$ sfi.py resources --format xml plc.max.maxpl.joe_slice1
<?xml version="1.0" encoding="UTF-8"?>
<rspec xmlns="http://www.maxgigapop.net/sfa/07/09">
  <capacity>
    <netspec name="predefined_physical_topology">
      <nodespec name="planetlab2">
        <node>planetlab2.dragon.maxgigapop.net</node>
        <ifspec name="pl23" linkid="2745"></ifspec>
        <ifspec name="pl24" linkid="2005"></ifspec>
        <ifspec name="pl25" linkid="2271"></ifspec>
      </nodespec>
      <nodespec name="planetlab3">
        <node>planetlab3.dragon.maxgigapop.net</node>
        <ifspec name="pl32" linkid="2745"></ifspec>
        <ifspec name="pl34" linkid="3157"></ifspec>
        <ifspec name="pl35" linkid="2975"></ifspec>
      </nodespec>
      <nodespec name="planetlab4">
        <node>planetlab4.dragon.maxgigapop.net</node>
        <ifspec name="pl42" linkid="2005"></ifspec>
        <ifspec name="pl43" linkid="3157"></ifspec>
        <ifspec name="pl45" linkid="1123"></ifspec>
      </nodespec>
      <nodespec name="planetlab5">
        <node>planetlab5.dragon.maxgigapop.net</node>
        <ifspec name="pl52" linkid="2271"></ifspec>
        <ifspec name="pl53" linkid="2975"></ifspec>
        <ifspec name="pl54" linkid="1123"></ifspec>
      </nodespec>
    </netspec>
  </capacity>
  <request></request>
</rspec>

The result of this command is an XML document (reformatted in the output above to appear more readable) showing the RSpec for the specified slice.

To summarize, this slice consists of 4 compute nodes, with a full mesh of dedicated network links connecting the nodes via Ethernet VLANs specified by the linkid field. The diagram below depicts the topology expressed in the RSpec above:

[Diagram: sample RSpec topology, a full mesh of VLAN links connecting planetlab2 through planetlab5]

Login to slice nodes

Using your private RSA key, you may use ssh to log in to any node associated with your slice. Similarly, you may use scp to copy files to nodes.

Use the last portion of the slice HRN, prefixed by maxpl_, as the login name. For example, if the slice HRN is plc.max.maxpl.joe_slice1, the SSH username would be maxpl_joe_slice1. The hostnames of the nodes associated with your slice may be found using the sfi.py commands shown above.

In the example below, we specify the location of the private RSA key using the ssh -i option. This is the private RSA key that was generated by running ssh-keygen as described on the RequestAccess page. When we run the df command, we see that approximately 5GB of the node's total disk space has been allocated to our slice.

$ ssh -i ~/.ssh/id_rsa_maxpl maxpl_joe_slice1@planetlab5.dragon.maxgigapop.net
[maxpl_joe_slice1@planetlab5 ~]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hdv1             4.8G   88K  4.7G   1% /
none                  4.8G   88K  4.7G   1% /tmp
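Copying files to a node works the same way with scp, using the same key and login name. In the sketch below, my_experiment.tar.gz is just a placeholder file name:

# my_experiment.tar.gz is a placeholder; substitute your own file
$ scp -i ~/.ssh/id_rsa_maxpl my_experiment.tar.gz maxpl_joe_slice1@planetlab5.dragon.maxgigapop.net:~/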

We can find out more about the type of system that we are running on by running normal Linux commands, such as cat /proc/cpuinfo. This particular system is a dual quad-core Xeon machine, so the output reveals 8 CPUs total:

[maxpl_joe_slice1@planetlab5 ~]$ cat /proc/cpuinfo | grep "model name"
model name      : Intel(R) Xeon(R) CPU   E5410  @ 2.33GHz
model name      : Intel(R) Xeon(R) CPU   E5410  @ 2.33GHz
model name      : Intel(R) Xeon(R) CPU   E5410  @ 2.33GHz
model name      : Intel(R) Xeon(R) CPU   E5410  @ 2.33GHz
model name      : Intel(R) Xeon(R) CPU   E5410  @ 2.33GHz
model name      : Intel(R) Xeon(R) CPU   E5410  @ 2.33GHz
model name      : Intel(R) Xeon(R) CPU   E5410  @ 2.33GHz
model name      : Intel(R) Xeon(R) CPU   E5410  @ 2.33GHz

Acquire root access

Simply run sudo [command] or sudo -s to run commands as root, or get a root shell, respectively:

[maxpl_joe_slice1@planetlab5 ~]$ sudo whoami
root
[maxpl_joe_slice1@planetlab5 ~]$ sudo -s

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

[root@planetlab5 ~]#

Install optional packages

Use yum install [packagename] to install RPMs into your slice. For example, to install gcc:

[root@planetlab5 ~]# which gcc
/usr/bin/which: no gcc in (/usr/bin:/bin)
[root@planetlab5 ~]# yum -y install gcc
Setting up Install Process
Parsing package install arguments
Resolving Dependencies
[...]
Installed: gcc.i386 0:4.1.2-33
[...]
Complete!
[root@planetlab5 ~]# which gcc
/usr/bin/gcc

Using dynamic end-to-end VLAN circuits

Configure VLAN sub-interfaces on nodes

Ethernet VLANs may be provisioned dynamically across the substrate to provide dedicated, high-speed connectivity between PlanetLab nodes.

Normally, to terminate such circuits, one would run a command such as vconfig add eth1 2000 to add a new interface called eth1.2000 to the system (which sends and receives 802.1Q-tagged Ethernet frames with VLAN ID 2000). However, the vconfig command cannot be run directly inside the VMs that are allocated to slices; it must be run on the main system, which is inaccessible to slice users.

To work around this problem, PlanetLab uses a generalized mechanism called vsys to allow users of a VM to run specific types of commands on the main system. First, examine the /vsys directory and ensure that you see two special files called getvlan.in and getvlan.out, as shown in the example below. If you do not see these, or you get a permission-denied error, please contact the MANFRED administrators for help.

[maxpl_joe_slice1@planetlab5 ~]$ sudo ls -l /vsys
total 0
prw-r--r-- 1 root root 0 Aug 22 04:32 getvlan.in
prw-r--r-- 1 root root 0 Aug 22 04:32 getvlan.out

Imagine that you are logged into node planetlab5 and want to provision the interface that terminates VLAN 2975 (eth1.2975) and set an IP address of 10.2.20.5 on it. Assuming that your slice has the proper access to the given VLAN ID and IP address, this may be done with the following procedure:

[root@planetlab5 ~]# /sbin/ifconfig eth1.2975
eth1.2975: error fetching interface information: Device not found
[root@planetlab5 ~]# cat /vsys/getvlan.out &
[1] 10708
[root@planetlab5 ~]# cat > /vsys/getvlan.in
2975 10.2.20.5
Added VLAN with VID == 2975 to IF -:eth1:-
[press Ctrl-D]
[1]+  Done                    cat /vsys/getvlan.out
[root@planetlab5 ~]# /sbin/ifconfig eth1.2975
eth1.2975 Link encap:Ethernet  HWaddr 00:22:15:99:D6:40
          inet addr:10.2.20.5  Bcast:10.2.20.7  Mask:255.255.255.252
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:468 (468.0 b)

A similar procedure should be followed for the node planetlab3, the only difference being that the other IP address in the /30 subnet should be assigned to that side of the link. In this case, 10.2.20.5/30 was assigned to planetlab5's eth1.2975, so 10.2.20.6/30 must be assigned to planetlab3's eth1.2975.
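For reference, a sketch of that procedure on planetlab3 is shown below (output abbreviated; your slice must have access to the same VLAN ID):

# Sketch of the same steps on planetlab3, using the other /30 address
[root@planetlab3 ~]# cat /vsys/getvlan.out &
[root@planetlab3 ~]# cat > /vsys/getvlan.in
2975 10.2.20.6
Added VLAN with VID == 2975 to IF -:eth1:-
[press Ctrl-D]
[root@planetlab3 ~]# /sbin/ifconfig eth1.2975 | grep "inet addr"
          inet addr:10.2.20.6  Bcast:10.2.20.7  Mask:255.255.255.252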

Utilize the dynamic circuit

Once the configuration procedure has been completed, you should be able to ping across the dynamic circuit by pinging the other side of the link (e.g., from planetlab3's perspective, the other side of the link would be 10.2.20.5):

[root@planetlab3 ~]# ping -c 1 10.2.20.5
PING 10.2.20.5 (10.2.20.5) 56(84) bytes of data.
64 bytes from 10.2.20.5: icmp_seq=1 ttl=63 time=1.31 ms

--- 10.2.20.5 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.312/1.312/1.312/0.000 ms

The link is now ready to be utilized by any applications that might require bulk data transfer between nodes, or deterministic network performance (in terms of latency, bandwidth, or jitter).

Another advantage of such circuits, especially in the context of PlanetLab, is that the sub-interface which appears in your slice does not have to be shared with other slices, so there is no contention for bind() operations that may want to bind to a particular TCP or UDP port, for example.
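As a quick way to exercise the dedicated link, you could run a bandwidth test such as iperf between the two sub-interface addresses. This is only a sketch: iperf is not installed by default and would need to be added with yum as shown earlier.

# Assumes iperf has been installed in both slivers (e.g., yum -y install iperf)
# On planetlab5, start a server bound to the circuit address:
[root@planetlab5 ~]# iperf -s -B 10.2.20.5

# On planetlab3, run the client across the dedicated VLAN:
[root@planetlab3 ~]# iperf -c 10.2.20.5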
