CHAPTER 2: INSTALLING THE FIRST AFS MACHINE
. . . . . . . . . . Assumptions
. . . . . . . . . . Configuration Decisions
. . . . . . . . . . How to Use This Chapter
2.1 . . . . Overview: Installing File Server Functionality
2.2 . . . . Choosing the First AFS Machine
2.3 . . . . Beginning with System-Specific Tasks
2.3.1 . . . . . . Loading Files Using a Local Tape Drive
2.3.2 . . . . . . Loading Files from a Remote Machine
2.3.3 . . . . . . How to Continue
2.4 . . . . Getting Started on AIX Systems
2.4.1 . . . . . . Using the Kernel Extension Facility on AIX Systems
2.4.2 . . . . . . Setting Up AFS Partitions on AIX Systems
2.4.3 . . . . . . Replacing fsck on AIX Systems
2.5 . . . . Getting Started on Digital UNIX Systems
2.5.1 . . . . . . Building AFS into the Kernel on Digital UNIX Systems
2.5.2 . . . . . . Setting Up AFS Partitions on Digital UNIX Systems
2.5.3 . . . . . . Replacing fsck on Digital UNIX Systems
2.6 . . . . Getting Started on HP-UX Systems
2.6.1 . . . . . . Using dkload on HP-UX Systems
2.6.2 . . . . . . Building AFS into the Kernel on HP-UX Systems
2.6.3 . . . . . . Setting Up AFS Partitions on HP-UX Systems
2.6.4 . . . . . . Replacing fsck on HP-UX Systems
2.7 . . . . Getting Started on IRIX Systems
2.7.1 . . . . . . Using ml on IRIX Systems
2.7.2 . . . . . . Building AFS into the Kernel on IRIX Systems
2.7.3 . . . . . . Installing the Initialization Script on IRIX Systems
2.7.4 . . . . . . Setting Up AFS Partitions on IRIX Systems
2.8 . . . . Getting Started on NCR UNIX Systems
2.8.1 . . . . . . Building AFS into the Kernel on NCR UNIX Systems
2.8.2 . . . . . . Setting Up AFS Partitions on NCR UNIX Systems
2.8.3 . . . . . . Replacing fsck on NCR UNIX Systems
2.9 . . . . Getting Started on Solaris Systems
2.9.1 . . . . . . Using modload on Solaris Systems
2.9.2 . . . . . . Setting Up AFS Partitions on Solaris Systems
2.9.3 . . . . . . Replacing fsck on Solaris Systems
2.10 . . . . Getting Started on SunOS Systems
2.10.1 . . . . . . Using dkload on SunOS Systems
2.10.2 . . . . . . Using modload on SunOS Systems
2.10.3 . . . . . . Building AFS into the Kernel on SunOS Systems
2.10.4 . . . . . . Setting Up AFS Partitions on SunOS Systems
2.10.5 . . . . . . Replacing fsck on SunOS Systems
2.11 . . . . Getting Started on Ultrix Systems
2.11.1 . . . . . . Using dkload on Ultrix Systems
2.11.2 . . . . . . Installing an AFS-Modified Kernel on an Ultrix System
2.11.3 . . . . . . Setting Up AFS Partitions on Ultrix Systems
2.11.4 . . . . . . Replacing fsck on Ultrix Systems
2.12 . . . . Starting the BOS Server
2.13 . . . . Defining the Cell Name and the Machine's Cell Membership
2.14 . . . . Starting the Authentication Server
2.14.1 . . . . . . A Note on Kerberos
2.14.2 . . . . . . Instructions for Installing the Authentication Server
2.15 . . . . Initializing Security Mechanisms
2.16 . . . . Starting the Protection Server
2.17 . . . . Starting the Volume Location Server
2.18 . . . . Starting the Backup Server
2.19 . . . . Starting the File Server, Volume Server, and Salvager
2.20 . . . . Starting the Server Portion of the Update Server
2.21 . . . . Starting the Controller for NTPD
2.22 . . . . Completing the Installation of Server Functionality
2.23 . . . . Overview: Installing Client Functionality
2.24 . . . . Defining the Client Machine's Cell Membership
2.25 . . . . Creating the Client Version of CellServDB
2.26 . . . . Setting Up the Cache
2.26.1 . . . . . . Setting Up a Disk Cache
2.26.2 . . . . . . Setting Up a Memory Cache
2.27 . . . . Creating /afs and Starting the Cache Manager
2.28 . . . . Overview: Completing the Installation of the First AFS Machine
2.29 . . . . Setting Up the Top Levels of the AFS Tree
2.30 . . . . Turning On Authorization Checking
2.31 . . . . Setting Up Volumes to House AFS Binaries
2.31.1 . . . . . . Loading AFS Binaries into a Volume and Creating a Link to the Local Disk
2.32 . . . . Storing System Binaries in AFS
2.32.1 . . . . . . Setting the ACL on System Binary Volumes
2.32.2 . . . . . . Volume and Directory Naming Scheme
2.33 . . . . Enabling Access to Transarc and Other Cells
2.34 . . . . Enabling Access to New Cells in the Future
2.35 . . . . Improving Your Cell's Security
2.35.1 . . . . . . Controlling root Access
2.35.2 . . . . . . Controlling System Administrator Access
2.35.3 . . . . . . Protecting Sensitive AFS Directories
2.36 . . . . Enabling AFS login
2.36.1 . . . . . . Enabling AFS login on AIX 3.2 Systems
2.36.2 . . . . . . Enabling AFS login on AIX 4.1 Systems
2.36.3 . . . . . . Enabling AFS login on IRIX Systems
2.36.4 . . . . . . Enabling AFS login on Other System Types
2.37 . . . . Altering File System Clean-Up Scripts on Sun Systems
2.38 . . . . Removing Client Functionality


  2 INSTALLING THE FIRST AFS MACHINE

This chapter describes how to install the first AFS machine in your cell,
setting it up as both an AFS file server machine and a client machine. After
completing all procedures in this chapter, you can remove the client
functionality, if desired.  (Section 2.38 explains how to remove the client
functionality.)

To install additional file server machines, follow the instructions in Chapter 3,
"Installing Additional File Server Machines," after completing this chapter.

To install additional client machines, follow the instructions in Chapter 4,
"Installing Additional Client Machines," after completing this chapter.

Assumptions

The instructions in this chapter assume that:

 - you are typing the instructions at the console of the machine you are
installing

 - you are logged in to the local UNIX file system as "root"

 - you have already installed a standard UNIX kernel on the machine being
installed

Configuration Decisions

During the installation, you must make several decisions about how to set up
your cell.  They include:

 - choosing the first AFS machine

 - choosing a cell name

 - determining the cache size on client machines

 - defining partition names

 - setting up your cell's AFS tree structure

Chapter 2 of the AFS System Administrator's Guide, "Issues in Cell Set-up and
Administration," provides guidelines.  It is recommended that you read it before
beginning the installation.

How to Use This Chapter

This chapter is divided into three large sections corresponding to the three
different functions you must perform to install the first AFS machine in your
cell.  For the installation to be complete and correct, you must perform all
steps in the order they appear.  A summary of the procedures in each functional
section appears at the beginning of the section.  The functions are:

 - Installing file server machine functionality (beginning in Section 2.1)

 - Installing client machine functionality (beginning in Section 2.23)

 - Setting up your cell's file tree, establishing further security mechanisms
and enabling access to foreign cells, among other tasks (beginning in Section 2.28)

 2.1. OVERVIEW: INSTALLING FILE SERVER FUNCTIONALITY

The first phase of the installation is to install file server machine
functionality, by performing the following procedures:

1. Choose which machine to install as the first AFS machine.

2. Load the file server and client binaries and files to the local disk.

3. Incorporate AFS modifications into the machine's kernel (using either a
dynamic kernel loader or the kernel build procedure).

4. Set up AFS partitions suitable for storing volumes.

5. Replace the standard fsck with the AFS version of fsck.

6. Start the Basic OverSeer (BOS) Server.

7. Define the cell name and the machine's cell membership.

8. Start the Authentication Server.

9. Set up initial security mechanisms, including Authentication Database
entries.

10. Start the Protection Server.

11. Start the Volume Location Server.

12. Start the Backup Server.

13. Start the fs process, which incorporates three component processes: 
the File Server, Volume Server and Salvager.

14. Start the server portion of the Update Server.

15. Start the controller process (called runntp) for the Network Time Protocol
Daemon, which synchronizes clocks.

To start and configure server processes, you will issue commands from several of
the AFS command suites.  These reside in the /usr/afs/bin directory on the file
server machine.  Specifically, you will use commands

 - from the bos suite to contact the BOS Server.

 - from the kas suite to contact the Authentication Server.

 - from the pts suite to contact the Protection Server.

 - from the vos suite to contact the Volume and Volume Location Servers.

The instructions in this chapter do not explain the function of each command or
argument; for complete information, consult the AFS Command Reference Manual.
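
Each suite follows the same general command form: the suite name, an operation
code, and then arguments.  For example, once the binaries are on the local disk
(see Section 2.3), each suite's help operation lists the commands it provides:

	------------------------------
	# /usr/afs/bin/bos  help
	# /usr/afs/bin/vos  help
	------------------------------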

 2.2. CHOOSING THE FIRST AFS MACHINE

The first AFS machine you install will function as an AFS file server, so it
should have sufficient disk space to store AFS volumes.  It is recommended that
you store not only user files in volumes on a file server, but also AFS client
binaries.  When you later install additional file server machines in your cell,
you can distribute these volumes among the different machines as you see fit.

If you follow these installation procedures, the first AFS file server machine
in the cell will be designated as a database server machine, the binary
distribution machine for its CPU/operating system type, and (if you are using
the United States edition of AFS) your cell's system control machine.  For more
information on the roles of these machine types, refer to the AFS System
Administrator's Guide.

Since it will be your cell's first database server machine, the first machine
you install should, if possible, have the lowest IP address of any file server
machine you currently plan to install.  As explained in more detail in the
introduction to Section 3.2, if you later install database server functionality
on a machine with a lower Internet address, you must first update the
/usr/vice/etc/CellServDB file on all of your cell's client machines.

The instructions in this chapter make the machine into both an AFS file server
machine and an AFS client machine.  Setting up the machine as a client makes it
easier to create and view AFS volumes and to set up your cell's file tree under
/afs/cellname.  Section 2.38 explains how to remove the client functionality
after you have completed the installation, if you wish to do so.

 2.3. BEGINNING WITH SYSTEM-SPECIFIC TASKS

There are three initial tasks you must perform, using procedures that vary
considerably depending on which system type you are installing.  Because of this
variation, the instructions appear in a separate section for each system type.
After you have completed the procedures in the section for your system type,
continue to Section 2.12.

The three tasks are:

 - incorporating AFS modifications into the kernel, either by using a dynamic
kernel loader or by building a new kernel.

If dynamic loading is possible for your system type, it is the recommended
method, as it is significantly quicker and easier than kernel building.  A
dynamic loader adds the AFS modifications to the memory version of the kernel
created at each reboot, without altering the disk version (/vmunix or
equivalent).  Dynamic loading is possible on all system types except Digital
UNIX and NCR UNIX.

Incorporating AFS modifications during a kernel build is possible on all system
types except AIX and Solaris; on Ultrix, you must have an Ultrix source license.

 - setting up AFS partitions

 - replacing the standard fsck program with a version that properly handles AFS
partitions

To perform these tasks, you must first load file server and client binaries and
files from the Binary Distribution to the local disk, along with the libraries
for incorporating AFS into the kernel dynamically.  If you wish to build a
kernel, you must also load the AFS kernel building libraries.

If the machine you are installing has a tape drive attached, follow the
instructions in Section 2.3.1.  Otherwise, follow the instructions in Section
2.3.2.  Then see Section 2.3.3 to learn how to continue on your system type.

 2.3.1. LOADING FILES USING A LOCAL TAPE DRIVE

If the machine you are installing has a tape drive, you can load the third and
fourth tar sets from the Binary Distribution Tape directly onto the local disk.
The third set contains the AFS file server binaries, and the fourth contains the
AFS client machine binaries/files, initialization scripts and (on all system
types except Digital UNIX and NCR UNIX) the files needed for running a dynamic
kernel loader.


If you want to build AFS modifications into a new kernel, also load the second
tar set.  You must build a kernel on Digital UNIX and NCR UNIX systems, and may
elect to do so on any system type other than AIX and Solaris; to build a kernel
on Ultrix systems you must have an Ultrix source license.

Step 1: Create the /usr/afs and /usr/vice/etc directories.

	 ------------------------
	| # mkdir /usr/afs      |
	| # mkdir /usr/vice     |
	| # mkdir /usr/vice/etc |
	 ------------------------

Step 2: The following commands extract the third and fourth tar sets from
the tape.

---------------------------------------------------------------------------------
On AIX systems: Before reading the tape, verify that the block size is set to 0
(meaning variable block size); if necessary, use SMIT to set the block size to 0.
Also, substitute tctl for mt.

On HP-UX systems: Substitute mt -t for mt -f.

On all system types: For <device>, substitute the name of the tape device for
your system that does not rewind after each operation.

	# cd /usr/afs
	# mt -f /dev/<device> rewind
	# mt -f /dev/<device> fsf 2
	# tar xvf /dev/<device>
	# cd /usr/vice/etc
	# mt -f /dev/<device> rewind
	# mt -f /dev/<device> fsf 3
	# tar xvf /dev/<device>
---------------------------------------------------------------------------------

Step 3: On Digital UNIX and NCR UNIX systems, or on any other system type on
which you will build a new kernel with AFS modifications, create the
/usr/afs/sys directory and load the second tar set.  Kernel building is possible
on all system types except AIX and Solaris; on Ultrix systems, it requires an
Ultrix source license.

----------------------------------------------------------------------------------
On HP-UX systems: Substitute mt -t for mt -f.

On all system types: For <device>, substitute the name of the tape device for
your system that does not rewind after each operation.

	# mkdir /usr/afs/sys
	# cd /usr/afs/sys
	# mt -f /dev/<device> rewind
	# mt -f /dev/<device> fsf 1
	# tar xvf /dev/<device>
----------------------------------------------------------------------------------

Step 4: Proceed to Section 2.3.3 (page 2-12) to learn how to continue on
each system type.

 2.3.2. LOADING FILES FROM A REMOTE MACHINE

If the machine you are installing (the "local" machine) does not have a tape
drive, you must transfer the needed files onto the local disk from a remote
machine's existing /usr/afsws directory.  (You loaded files onto the remote
machine in Section 1.4.2.2.)

On Digital UNIX and NCR UNIX, you must load kernel building files. You may
choose to build a kernel on any system type other than AIX and Solaris; kernel
building on Ultrix systems requires an Ultrix source license.

The relevant subdirectories of /usr/afsws are as follows:

 - /usr/afsws/root.client/bin contains the kernel building files

 - /usr/afsws/root.server/usr/afs contains the file server binaries

 - /usr/afsws/root.client/usr/vice/etc contains the client binaries


Step 1: Working on the local machine, create the /usr/afs and
/usr/vice/etc directories, as well as /usr/afs/sys if appropriate.

-------------------------------------------------------
	# mkdir /usr/afs

	# mkdir /usr/vice

	# mkdir /usr/vice/etc

	If you will build a new kernel, also:

	# mkdir /usr/afs/sys
-------------------------------------------------------

Step 2: Use ftp, NFS, or another network file transfer method to copy files
from the remote machine's directories to the local machine, as follows (a sample
command sequence appears after this list):

 - /usr/afsws/root.server/usr/afs directory to the local /usr/afs
directory

 - /usr/afsws/root.client/usr/vice/etc directory to the local /usr/vice/etc
directory

 - /usr/afsws/root.client/bin directory to the local /usr/afs/sys
directory, if you will build a new kernel (mandatory on Digital UNIX and NCR
UNIX systems)
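
For example, if the remote machine (called <remote> here as a placeholder)
exports its /usr/afsws directory via NFS, a command sequence like the following
performs the copies; adjust it for ftp or whatever transfer method you actually
use:

	----------------------------------------------------------
	# mount <remote>:/usr/afsws  /mnt
	# cp -rp  /mnt/root.server/usr/afs/*  /usr/afs
	# cp -rp  /mnt/root.client/usr/vice/etc/*  /usr/vice/etc
	# cp -rp  /mnt/root.client/bin/*  /usr/afs/sys
	# umount /mnt
	----------------------------------------------------------

Omit the copy into /usr/afs/sys if you will not build a new kernel.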

 2.3.3. HOW TO CONTINUE

You are now ready to perform the first three procedures in the installation.
Proceed to the section for your system type.

	For AIX, see Section 2.4 on page 2-15.

	For Digital UNIX, see Section 2.5 on page 2-20.

	For HP-UX, see Section 2.6 on page 2-25.

	For IRIX, see Section 2.7 on page 2-32.

	For NCR UNIX, see Section 2.8 on page 2-40.

	For Solaris, see Section 2.9 on page 2-44.

	For SunOS, see Section 2.10 on page 2-51.

	For Ultrix, see Section 2.11 on page 2-59.


 2.4. GETTING STARTED ON AIX SYSTEMS

On AIX systems, begin by using the AIX kernel extension facility to load AFS
into the kernel dynamically (kernel building is not possible). Then use SMIT to
create partitions for storing AFS volumes, and replace the standard fsck program
with an AFS-safe version.

 2.4.1. USING THE KERNEL EXTENSION FACILITY ON AIX SYSTEMS

The AIX kernel extension facility is a dynamic kernel loader provided by IBM
Corporation for AIX.  Transarc's dkload program is not available for this system
type, nor is it possible to add AFS during a kernel build.

For this machine to remain an AFS machine, the kernel extension facility must
run each time the machine reboots.  You can invoke the facility automatically in
the machine's initialization files, as explained in Step 3 below.

To invoke the kernel extension facility:

Step 1: Verify that

 - the /usr/vice/etc/dkload directory on the local disk contains: afs.ext,
cfgafs, cfgexport, export.ext, and export.ext.nonfs.

 - NFS is already in the kernel, if you wish NFS to run on this machine; it must
be running for the machine to function as an NFS/AFS translator machine.  For
systems running AIX 3.2.2 or lower, this requires that you have loaded nfs.ext;
for version 3.2.3 and later, NFS loads automatically.

Step 2: Invoke cfgexport and cfgafs. 
If this machine is to act as an NFS/AFS translator machine, you must make a
substitution in this step.  For details, consult the section entitled "Setting
Up an NFS/AFS Translator Machine" in the NFS/AFS Translator Supplement to the
AFS System Administrator's Guide.

	# cd /usr/vice/etc/dkload                                    

If the machine's kernel does not support NFS server functionality, issue the
following commands.  The machine cannot function as an NFS/AFS translator
machine in this case.

	# ./cfgexport -a export.ext.nonfs                            

	# ./cfgafs -a afs.ext                                        

If the machine's kernel supports NFS server functionality, issue the following
commands.  If the machine is to act as an NFS/AFS translator machine, you must
make the substitution specified in the NFS/AFS Translator Supplement.

	# ./cfgexport -a export.ext                                  

	# ./cfgafs -a afs.ext   


Step 3: IBM delivers several function-specific initialization files for
AIX systems, rather than the single file used on some other systems.  If you
want the kernel extension facility to run each time the machine reboots, verify
that it is invoked in the appropriate place in these initialization files.  An
easy way to add the needed commands is to copy the contents of
/usr/vice/etc/dkload/rc.dkload, which appear in Section 5.11.

The following list summarizes the order in which the commands must appear in the
initialization files for the machine to function properly (you will add some of
the commands in later sections); a sketch of the ordering appears after the
list.

 - NFS commands, if appropriate (for example, if the machine will act as an
NFS/AFS translator). For AIX version 3.2.2 or lower, commands loading the NFS
kernel extensions (nfs.ext) should appear here; with AIX version 3.2.3 and
higher, NFS is already loaded into the kernel. Then invoke nfsd if the machine
is to be an NFS server.

 - the contents of rc.dkload, to invoke the kernel extension facility.  If the
machine will act as an NFS/AFS translator machine, be sure to make the same
substitution as you made when you issued the cfgexport and cfgafs commands in
the previous step.

 - bosserver (you will be instructed to add this command in Section 2.22)

 - afsd (you will be instructed to add this command in Section 2.27)
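
The following is only a sketch of that ordering; the actual command lines come
from rc.dkload (Section 5.11) and from Sections 2.22 and 2.27, and the afsd
options shown here are placeholders:

	------------------------------------------------------------------
	# NFS extensions and nfsd, if needed (AIX 3.2.2 or lower only)
	...
	# contents of /usr/vice/etc/dkload/rc.dkload (cfgexport, cfgafs)
	...
	/usr/afs/bin/bosserver &            # added in Section 2.22
	/usr/vice/etc/afsd <options>        # added in Section 2.27
	------------------------------------------------------------------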

 2.4.2. SETTING UP AFS PARTITIONS ON AIX SYSTEMS

AFS volumes must reside on partitions associated with directories named
/vicepx, where x is a lowercase letter of the alphabet.  By convention, the
first directory created on a machine to house AFS volumes is called /vicepa,
the second directory /vicepb, etc.

Every file server machine must have at least one partition devoted exclusively
to storing AFS volumes (preferably associated with the directory /vicepa). You
cannot simply create a directory under an existing directory (for example,
/usr/vicepa is not legal).

You can also perform the steps in this section when you want to set up a new
/vicepx partition on an existing file server machine.  In that case, you must
restart the fs process to force recognition of the new partition.  Complete
instructions also appear in Chapter 11 of the AFS System Administrator's Guide.
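
In that case, a bos restart command along the following lines restarts the fs
process once the new partition is mounted (the machine and cell names are
placeholders):

	-----------------------------------------------------------------
	# /usr/afs/bin/bos restart <machine name> fs -cell <cell name>
	-----------------------------------------------------------------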

Step 1: Decide how many partitions to devote to storage of AFS volumes.
There must be at least one.  Then create a directory called /vicepx for each
one.  The example instruction creates three directories.

Create a directory for each partition to be used for storing AFS volumes. 

-----------------------------------------------------------------------------
	# mkdir /vicepa                                                           

	# mkdir /vicepb                                                           

	# mkdir /vicepc                                                           

	and so on                                                             
-----------------------------------------------------------------------------

Step 2: Use the SMIT program to create a journaling file system for each
AFS partition that you want to use.  Mount each of these partitions on the
/vicepx directories created in the previous step.  You can use SMIT for mounting
these partitions after you create them.  Be sure to configure the /vicep
partitions so that they are automatically mounted on a reboot.  For more
information, refer to your operating system documentation.
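
If you prefer the command line to SMIT, the crfs command can create a journaled
file system, assign its mount point, and register it for automatic mounting in
one step.  The volume group and size shown here are examples only; choose values
appropriate for your disks (the size attribute is in 512-byte blocks, so this
example creates a 128 MB file system):

	------------------------------------------------------------
	# crfs -v jfs -g rootvg -a size=262144 -m /vicepa -A yes
	------------------------------------------------------------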

Step 3: Mount the partition(s) either by issuing the mount -a command to
mount all partitions at once or by issuing the mount command for
each partition in turn.

It is recommended that you also add the following line to /etc/vfs
at this point:

afs     4     none     none

If you do not add this line, you will receive an error message from
the mount command any time you use it to list the mounted file
systems (but note that the mount command is working properly even if
you receive the error message).
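
One way to add the line without an editor is to append it with echo after saving
a copy of the original file:

	--------------------------------------------------
	# cp /etc/vfs /etc/vfs.orig
	# echo "afs     4     none     none" >> /etc/vfs
	--------------------------------------------------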

 2.4.3. REPLACING FSCK ON AIX SYSTEMS

You should never run the standard fsck program on an AFS file server machine.
It will discard the files that make up AFS volumes on the partitions associated
with the /vicepx directories, because they are not standard UNIX directories.
In this step, you replace standard fsck with a modified fsck provided by
Transarc.  It properly checks both AFS and standard UNIX partitions.  To repeat:

NEVER run the standard vendor-supplied fsck program on an AFS file server
machine.  It discards AFS volumes.

You can tell you are running the correct AFS version when it displays the
banner:

[Transarc AFS 3.4 fsck]

For AIX systems, you do not replace fsck itself, but rather the "program helper"
distributed as /sbin/helpers/v3fshelper.

Step 1: Move the standard fsck program helper to a save file and install
the AFS-modified helper (/usr/afs/bin/v3fshelper) in the standard location.

	---------------------------------------------
	# cd  /sbin/helpers                       

	# mv  v3fshelper  v3fshelper.noafs        

	# cp  /usr/afs/bin/v3fshelper  v3fshelper 
	---------------------------------------------

Step 2: Proceed to Section 2.12 (page 2-65).


 2.5. GETTING STARTED ON Digital UNIX SYSTEMS

On Digital UNIX systems (formerly known as DEC OSF/1), you must build AFS
modifications into a new kernel (dynamic loading is not possible).  Then
continue by installing the initialization script, creating partitions for
storing AFS volumes, and replacing the standard fsck program with an AFS-safe
version.

 2.5.1. BUILDING AFS INTO THE KERNEL ON Digital UNIX SYSTEMS

For the sake of consistency with other system types (on which both loading and
building are possible), the complete instructions for kernel building appear in
Chapter 5.

For this machine to remain an AFS machine, its initialization script must be
invoked each time it reboots.  Step 5 below explains how to install the script.

Step 1: Follow the instructions in Section 5.2 (page 5-7) or Section 5.3
(page 5-11)  to build AFS into a new Digital UNIX kernel. 

Step 2: Move the existing kernel on the local machine to a safe location.

	-------------------------------
	# mv  /vmunix  /vmunix_save 
	-------------------------------

Step 3: Use a copying program (either cp or a remote program such as ftp
or NFS) to copy the AFS-modified kernel to the appropriate location.
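
For example, if the kernel build in Chapter 5 left the new kernel in a build
directory on this machine (the path here is only a placeholder), a simple cp
suffices:

	-------------------------------------------------
	# cp  /<kernel build directory>/vmunix  /vmunix
	-------------------------------------------------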

Step 4: Reboot the machine to start using the new kernel.

	---------------------------------
	# shutdown -r now
	---------------------------------

Step 5: Copy the afs.rc initialization script from /usr/vice/etc/dkload
to the initialization files directory (standardly, /sbin/init.d), make sure it
is executable, and link it to the two locations where Digital UNIX expects to
find it.

	---------------------------------------------
	# cd  /sbin/init.d                        

	# cp  /usr/vice/etc/dkload/afs.rc  afs    

	# chmod  555  afs                         

	# ln -s ../init.d/afs  /sbin/rc3.d/S99afs 

	# ln -s ../init.d/afs  /sbin/rc0.d/K66afs 
	---------------------------------------------

 2.5.2. SETTING UP AFS PARTITIONS ON Digital UNIX SYSTEMS

AFS volumes must reside on partitions associated with directories named /vicepx,
where x is a lowercase letter of the alphabet.  By convention, the first
directory created on a machine to house AFS volumes is called /vicepa, the
second directory /vicepb, etc.

Every file server machine must have at least one partition devoted exclusively
to storing AFS volumes (preferably associated with the directory /vicepa). You
cannot simply create a directory under an existing directory (for example,
/usr/vicepa is not legal).

You can also perform the steps in this section when you want to set up a new
/vicepx partition on an existing file server machine.  In that case, you must
restart the fs process to force recognition of the new partition.  Complete
instructions also appear in Chapter 11 of the AFS System Administrator's Guide.

Step 1: Decide how many partitions to devote to storage of AFS volumes.
There must be at least one.  Then create a directory called /vicepx for each
one.  The example instruction creates three directories.

-----------------------------------------------------------------------------
Create a directory for each partition to be used for storing AFS volumes. 

	# mkdir /vicepa                                                           
	# mkdir /vicepb                                                           
	# mkdir /vicepc                                                           
 	and so on                                                             
-----------------------------------------------------------------------------

Step 2: For each /vicep directory just created, add a line to /etc/fstab,
the "file systems registry" file.

-------------------------------------------------------------------
Add the following line to /etc/fstab for each /vicep directory. 

	/dev/<disk> /vicep<x> ufs rw 0 2                                

For example,                                                    

	/dev/rz3a /vicepa ufs rw 0 2                                    
-------------------------------------------------------------------

Step 3: Choose appropriate disk partitions for each AFS partition you
need and create a file system on each partition.  The command shown should be
suitable, but consult the Digital UNIX documentation for more information.

------------------------------------------------------------------
Repeat this command to create a file system on each partition. 

	# newfs  -v  /dev/rz<xx>                                       
------------------------------------------------------------------

Step 4: Mount the partition(s), using either the mount -a command to
mount all at once or the mount command to mount each partition in turn.
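
For example, using the /vicepa partition from the preceding examples:

	------------------------
	# mount -a

	or

	# mount /vicepa
	------------------------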

 2.5.3. REPLACING FSCK ON Digital UNIX SYSTEMS

You should never run the standard fsck program on an AFS file server machine.
It will discard the files that make up AFS volumes on the partitions associated
with the /vicepx directories, because they are not standard UNIX directories.
In this step, you replace standard fsck with a modified fsck provided by
Transarc.  It properly checks both AFS and standard UNIX partitions. 
To repeat:

NEVER run the standard vendor-supplied fsck program on an AFS file server
machine.  It discards AFS volumes.

You can tell you are running the correct AFS version when it displays the
banner:

[Transarc AFS 3.4 fsck]

For Digital UNIX systems, /sbin/fsck and /usr/sbin/fsck are driver programs.
Rather than replacing either of them, you replace the actual binary distributed
as /sbin/ufs_fsck and /usr/sbin/ufs_fsck.

Step 1: Move the distributed fsck binaries to save files, install the
AFS-modified fsck ("vfsck") in the standard locations, and link the standard
fsck programs to it.

	-----------------------------------------------------
	# mv  /sbin/ufs_fsck  /sbin/ufs_fsck.orig         

	# mv  /usr/sbin/ufs_fsck  /usr/sbin/ufs_fsck.orig 

	# cp  /usr/afs/bin/vfsck  /sbin/vfsck             

	# cp  /usr/afs/bin/vfsck  /usr/sbin/vfsck         

	# ln  -s  /sbin/vfsck  /sbin/ufs_fsck             

	# ln  -s  /usr/sbin/vfsck  /usr/sbin/ufs_fsck     
	-----------------------------------------------------

Step 2: Proceed to Section 2.12 (page 2-65).

 2.6. GETTING STARTED ON HP-UX SYSTEMS

To load AFS into the kernel on HP-UX systems, choose one of two methods:

 - dynamic loading using Transarc's dkload program (proceed to Section 2.6.1)

 - building a new kernel (proceed to Section 2.6.2)

After loading AFS, you will continue by creating partitions for storing AFS
volumes and replacing the standard fsck program with an AFS-safe version.

 2.6.1. USING dkload ON HP-UX SYSTEMS

The dkload program is the dynamic kernel loader provided by Transarc for HP-UX
systems.  For this machine to remain an AFS machine, dkload must run each time
the machine reboots.  You can invoke dkload automatically in the machine's
initialization file (/etc/rc or equivalent), as explained in Step 3.

The files containing the AFS kernel modifications are libafs.a and
libafs.nonfs.a (the latter is appropriate if this machine's kernel does not
include support for NFS server functionality).

To invoke dkload:

Step 1: Verify that

 - there is at least one spare megabyte of space in /tmp for temporary files
created as dkload runs

 - the following are in /bin on the local disk (not as symbolic links): as, ld,
and nm.

 - the /usr/vice/etc/dkload directory on the local disk contains: dkload (the
binary), libafs.a, libafs.nonfs.a, libcommon.a, and kalloc.o

Step 2: Invoke dkload.

--------------------------------------------------------------------------------
If the machine's kernel does not include support for NFS server functionality,
you must substitute libafs.nonfs.a for libafs.a.  Either use the mv command to
replace libafs.a with libafs.nonfs.a in the /usr/vice/etc/dkload directory
(before issuing these commands), or make the substitution on the command line.

	# cd /usr/vice/etc/dkload
	
	# ./dkload libafs.a
--------------------------------------------------------------------------------

Step 3: Modify the machine's initialization file (/etc/rc or equivalent)
to invoke dkload by copying the contents of /usr/vice/etc/dkload/rc.dkload (the
contents appear in full in Section 5.10).  Place the commands after the commands
that mount the file systems.  If the machine's kernel does not include support
for NFS server functionality, remember to substitute libafs.nonfs.a for
libafs.a.

Step 4: Proceed to Section 2.6.3 (page 2-28).

 2.6.2. BUILDING AFS INTO THE KERNEL ON HP-UX SYSTEMS

For the sake of consistency with other system types, the complete instructions
for kernel building appear in Chapter 5.

Step 1: Follow the kernel building instructions in Section 5.4 (page
5-16) for HP 700 systems, or in Section 5.5 (page 5-20) for HP 800 systems.

Step 2: Move the existing kernel on the local machine to a safe location.

	-----------------------------
	# mv  /hp-ux  /hp-ux_save 
	-----------------------------

Step 3: Use a copying program (either cp or a remote program such as ftp
or NFS) to copy the AFS-modified kernel to /hp-ux.  A standard location for the
AFS-modified kernel is /etc/conf/hp-ux for Series 700 and
/etc/conf/<conf_name>/hp_ux for Series 800 systems.
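
For example, on a Series 700 system where the build left the kernel in
/etc/conf/hp-ux:

	--------------------------------
	# cp  /etc/conf/hp-ux  /hp-ux
	--------------------------------

On a Series 800 system, substitute the Series 800 path shown above.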

Step 4: Reboot the machine to start using the new kernel.

	---------------------------------
	# shutdown -r
	---------------------------------

 2.6.3. SETTING UP AFS PARTITIONS ON HP-UX SYSTEMS

Note: AFS supports disk striping for the hp700_ux90 system type.  The hp800_ux90
system type uses logical volumes rather than disk striping.

AFS volumes must reside on partitions associated with directories named /vicepx,
where x is a lowercase letter of the alphabet.  By convention, the first
directory created on a machine to house AFS volumes is called /vicepa, the
second directory /vicepb, etc.

Every file server machine must have at least one partition devoted exclusively
to storing AFS volumes (preferably associated with the directory /vicepa). You
cannot simply create a directory under an existing directory (for example,
/usr/vicepa is not legal).

You can also perform the steps in this section when you want to set up a new
/vicepx partition on an existing file server machine.  In that case, you must
restart the fs process to force recognition of the new partition.  Complete
instructions also appear in Chapter 11 of the AFS System Administrator's Guide.

Step 1: Decide how many partitions to devote to storage of AFS volumes.
There must be at least one.  Then create a directory called /vicepx for each
one.  The example instruction creates three directories.

-----------------------------------------------------------------------------
Create a directory for each partition to be used for storing AFS volumes. 

	# mkdir /vicepa

	# mkdir /vicepb

	# mkdir /vicepc

	and so on                                                             
-----------------------------------------------------------------------------

Step 2: For each /vicep directory just created, create a file system on
the associated partition.

On Series 700 systems and Series 800 systems that do not use logical volumes:

--------------------------------------------------------------------------------
Add the following line to /etc/checklist, the "file systems registry" file, for 
each /vicep directory.                                                          

	/dev/dsk/<disk> /vicep<x> hfs defaults 0 2

Then use the newfs or makefs command to create a file system on each
/dev/dsk/<disk> partition mentioned above.  Consult the operating system
documentation for syntax.
---------------------------------------------------------------------------------

An example of an /etc/checklist entry:

	/dev/dsk/1s0 /vicepa hfs defaults 0 2


On Series 800 systems that use logical volumes:

-------------------------------------------------------------------------------
Use the SAM program to create a file system on each partition.  Consult the
operating system documentation for syntax.
-------------------------------------------------------------------------------

Step 3: Mount the partition(s), using either the mount -a command to
mount all at once or the mount command to mount each partition in turn.  Note
that SAM automatically mounts the partition on some HP Series 800 systems that
use logical volumes.

 2.6.4. REPLACING FSCK ON HP-UX SYSTEMS

You should never run the standard fsck program on an AFS file server machine.
It will discard the files that make up AFS volumes on the partitions associated
with the /vicepx directories, because they are not standard UNIX directories.
In this step, you replace standard fsck with a modified fsck provided by
Transarc.  It properly checks both AFS and standard UNIX partitions.
To repeat:

NEVER run the standard vendor-supplied fsck program on an AFS file server
machine.  It discards AFS volumes. 

You can tell you are running the correct AFS version when it displays the
banner:

[Transarc AFS 3.4 fsck]

Step 1: Move standard fsck to a save file, install the AFS-modified fsck
("vfsck") to the standard location and link standard fsck to it.

	--------------------------------------
	# mv /etc/fsck /etc/fsck.orig      

	# cp /usr/afs/bin/vfsck /etc/vfsck 

	# ln -s /etc/vfsck /etc/fsck       
	--------------------------------------

Step 2: Proceed to Section 2.12 (page 2-65).

 2.7. GETTING STARTED ON IRIX SYSTEMS

To load AFS into the kernel on IRIX systems, choose one of two methods:

 - dynamic loading using SGI's ml program (proceed to Section 2.7.1)

 - building a new kernel (proceed to Section 2.7.2)

After loading AFS, you will continue by creating partitions for storing AFS
volumes.

Silicon Graphics, Inc. has modified the IRIX fsck program to handle AFS volumes
properly, so it is not necessary to replace standard fsck on IRIX systems.
Transarc does not provide a replacement fsck program for this system type as it
does for others.

 2.7.1. USING ml ON IRIX SYSTEMS

The ml program is the dynamic kernel loader provided by Silicon Graphics, Inc.
for IRIX systems. For this machine to remain an AFS machine, either ml must run
each time the machine reboots or a prebuilt kernel with AFS modifications must
be used.  To ensure this, you must install the IRIX initialization script as
detailed in Section 2.7.3.

To invoke ml:

Step 1: Verify that the /usr/vice/etc/sgiload directory on the local disk
contains: afs, afs.rc, afs.sm, in addition to the "libafs" library files.

Step 2: On sgi_53 machines only, run the afs_rtsymtab.pl script, issue
the autoconfig command, and reboot the machine.

-------------------------------------------------------------------------------
	# /usr/vice/etc/sgiload/afs_rtsymtab.pl -run

       	# autoconfig -v
	
	# shutdown -i6
-------------------------------------------------------------------------------

Step 3: Issue the ml command, replacing <library file> with the name of
the appropriate library file. Select R3000 versus R4000 processor, no NFS
support versus NFS support, and single processor (SP) versus multiprocessor
(MP).

If you do not know which processor your machine has, issue IRIX's hinv command
and check the line in the output that begins "CPU."

-------------------------------------------------------------------------------
Issue the ml command, replacing <library file> with the name of the appropriate
library file.

In each case below, read "without NFS support" to mean that the kernel does not
include support for NFS server functionality.

 - libafs.MP.R3000.o for R3000 multiprocessor with NFS support            
 - libafs.MP.R3000.nonfs.o for R3000 multiprocessor without NFS support   
 - libafs.MP.R4000.o for R4000 multiprocessor with NFS support            
 - libafs.MP.R4000.nonfs.o for R4000 multiprocessor without NFS support   
 - libafs.SP.R3000.o for R3000 single processor with NFS support          
 - libafs.SP.R3000.nonfs.o for R3000 single processor without NFS support 
 - libafs.SP.R4000.o for R4000 single processor with NFS support          
 - libafs.SP.R4000.nonfs.o for R4000 single processor without NFS support 

# ml  ld -v -j /usr/vice/etc/sgiload/<library file> -p afs_ -a afs   
-------------------------------------------------------------------------------

Step 4: Proceed to Section 2.7.3 (page 2-36) to install the
initialization script provided by Transarc for IRIX systems; it automatically
invokes ml at reboot, if appropriate.

 2.7.2. BUILDING AFS INTO THE KERNEL ON IRIX SYSTEMS

For the sake of consistency with other system types, the complete instructions
for kernel building appear in Chapter 5.

Step 1: Follow the kernel building instructions in Section 5.6 (page
5-23).

Step 2: Copy the existing kernel on the local machine to a safe location.
Note that /unix will be overwritten by /unix.install the next time the machine
is rebooted.

	---------------------------
	# cp  /unix  /unix_save 
	---------------------------

Step 3: Reboot the machine to start using the new kernel.

	---------------------------------
	# shutdown -i6
	---------------------------------

 2.7.3. INSTALLING THE INITIALIZATION SCRIPT ON IRIX SYSTEMS

On System V-based machines such as IRIX, you must install the initialization
script and ensure that it is invoked properly at reboot, whether you have built
AFS into the kernel or used a dynamic loader such as ml.  The script includes
automatic tests for whether the machine has the R3000 or R4000 processor, NFS
support or no NFS support, and single processor (SP) or multiprocessor (MP).

The chkconfig commands you issue in the second step tell IRIX whether or not it
should run the afsml script (which invokes ml) and direct it to run the
afsserver script (which initializes the BOS Server).

Step 1: Verify that the local /usr/vice/etc/sgiload directory contains
afs.rc.

Step 2: Copy the afs.rc initialization script from /usr/vice/etc/sgiload
to the IRIX initialization files directory (standardly, /etc/init.d), make sure
it is executable, link it to the two locations where IRIX expects to find it,
and issue the appropriate chkconfig commands.

------------------------------------------------------------------------
Note the removal of the .rc extension as you copy the initialization file to the
/etc/init.d directory.

	# cd  /etc/init.d                                                    

	# cp  /usr/vice/etc/sgiload/afs.rc  afs                              

	# chmod  555  afs                                                    

	# ln -s ../init.d/afs  /etc/rc0.d/K35afs                             

	# ln -s ../init.d/afs  /etc/rc2.d/S35afs                             

	# cd /etc/config                                                     

If the machine is configured to be an AFS server:                    

	# /etc/chkconfig  -f  afsserver on                                      

If you are using ml:                                                 

	# /etc/chkconfig  -f  afsml  on                                      

If you are using an AFS-modified kernel:                             

	# /etc/chkconfig  -f  afsml  off                                     
------------------------------------------------------------------------
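
To confirm the settings, you can run chkconfig with no arguments; it lists the
state of every configuration flag, and filtering the output shows the
AFS-related flags (this assumes the standard IRIX chkconfig behavior):

	-------------------------------
	# /etc/chkconfig | grep afs
	-------------------------------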

 2.7.4. SETTING UP AFS PARTITIONS ON IRIX SYSTEMS

AFS volumes must reside on partitions associated with directories named /vicepx,
where x is a lowercase letter of the alphabet.  By convention, the first
directory created on a machine to house AFS volumes is called /vicepa, the
second directory /vicepb, etc.

Every file server machine must have at least one partition devoted exclusively
to storing AFS volumes (preferably associated with the directory /vicepa). You
cannot simply create a directory under an existing directory (for example,
/usr/vicepa is not legal).

You can also perform the steps in this section when you want to set up a new
/vicepx partition on an existing file server machine.  In that case, you must
restart the fs process to force recognition of the new partition.  Complete
instructions also appear in Chapter 11 of the AFS System Administrator's Guide.

Step 1: Decide how many partitions to devote to storage of AFS volumes.
There must be at least one.  Then create a directory called /vicepx for each
one.  The example instruction creates three directories.

-----------------------------------------------------------------------------
Create a directory for each partition to be used for storing AFS volumes. 

	# mkdir /vicepa                                                           
	# mkdir /vicepb                                                           
	# mkdir /vicepc                                                           
	and so on                                                             
-----------------------------------------------------------------------------

Step 2: For each /vicep directory just created, add a line to /etc/fstab,
the "file systems registry" file.

-------------------------------------------------------------------
Add the following line to /etc/fstab for each /vicep directory. 

	/dev/vicep<x> /vicep<x> efs rw,raw=/dev/rvicep<x> 0 0           

For example,                                                    

	/dev/vicepa /vicepa efs rw,raw=/dev/rvicepa 0 0                 
-------------------------------------------------------------------

Step 3: Create a file system on each partition.  The syntax shown should
be appropriate, but consult the IRIX documentation for more information.

------------------------------------------------------------------
Repeat this command to create a file system on each partition. 

	# mkfs /dev/rvicep<x>                                          
------------------------------------------------------------------

Step 4: Mount the partition(s) by issuing either the mount -a command to
mount all partitions at once or the mount command to mount each partition in
turn.

Step 5: Proceed to Section 2.12 (page 2-65).

 2.8. GETTING STARTED ON NCR UNIX SYSTEMS

On NCR UNIX systems, you must build AFS modifications into a new kernel (dynamic
loading is not possible).  Then continue by installing the initialization
script, creating partitions for storing AFS volumes, and replacing the standard
fsck program with an AFS-safe version.

 2.8.1. BUILDING AFS INTO THE KERNEL ON NCR UNIX SYSTEMS

For the sake of consistency with other system types (on which both loading and
building are possible), the complete instructions for kernel building appear in
Chapter 5.

For this machine to remain an AFS machine, its initialization script must be
invoked each time it reboots.  Step 5 below explains how to install the script.

Step 1: Follow the instructions in Section 5.7 (page 5-26) to build AFS
modifications into a new NCR UNIX kernel.

Step 2: Move the existing kernel on the local machine to a safe location.

	-------------------------
	# mv /unix /unix.save 
	-------------------------

Step 3: Use a copying program (either cp or a remote program such as ftp
or NFS) to copy the AFS-modified kernel to the appropriate location.

Step 4: Reboot the machine to start using the new kernel.

	-------------------------
	# shutdown -i6 
	-------------------------

Step 5: Copy the initialization script that Transarc provides for NCR
UNIX systems as /usr/vice/etc/modload/afs.rc to the /etc/init.d directory, make
sure it is executable, and link it to the two locations where NCR UNIX expects
to find it.

	--------------------------------------------
	# cd  /etc/init.d                        
	# cp  /usr/vice/etc/modload/afs.rc  afs     
	# chmod  555  afs                        
	# ln -s ../init.d/afs  /etc/rc3.d/S14afs 
	# ln -s ../init.d/afs  /etc/rc2.d/K66afs 
	--------------------------------------------

 2.8.2. SETTING UP AFS PARTITIONS ON NCR UNIX SYSTEMS

AFS volumes must reside on partitions associated with directories named /vicepx,
where x is a lowercase letter of the alphabet.  By convention, the first
directory created on a machine to house AFS volumes is called /vicepa, the
second directory /vicepb, etc.

Every file server machine must have at least one partition devoted exclusively
to storing AFS volumes (preferably associated with the directory /vicepa). You
cannot simply create a directory under an existing directory (for example,
/usr/vicepa is not legal).

You can also perform the steps in this section when you want to set up a new
/vicepx partition on an existing file server machine.  In that case, you must
restart the fs process to force recognition of the new partition.  Complete
instructions also appear in Chapter 11 of the AFS System Administrator's Guide.

Step 1: Decide how many partitions to devote to storage of AFS volumes.
There must be at least one.  Then create a directory called /vicepx for each
one.  The example instruction creates three directories.

-----------------------------------------------------------------------------
Create a directory for each partition to be used for storing AFS volumes. 

	# mkdir /vicepa
	# mkdir /vicepb
	# mkdir /vicepc

	and so on
-----------------------------------------------------------------------------

Step 2: For each /vicep directory just created, add a line to /etc/vfstab,
the "file systems registry" file.

--------------------------------------------------------------------
Add the following line to /etc/vfstab for each /vicep directory. 

	/dev/dsk/<disk> /dev/rdsk/<disk> /vicep<x> ufs <fsck pass> yes  

For example,                                                     

	/dev/dsk/c0t6d0s1 /dev/rdsk/c0t6d0s1 /vicepa ufs 3 yes           
--------------------------------------------------------------------

Step 3: Create a file system on each partition.  The syntax shown should
be appropriate, but consult the operating system documentation for more
information.

------------------------------------------------------------------
Repeat this command to create a file system on each partition. 

	# mkfs -v /dev/rdsk/<xxxxxxxx>                                 
------------------------------------------------------------------

Step 4: Mount the partition(s) by issuing the mountall command to mount
all partitions at once.

 2.8.3. REPLACING FSCK ON NCR UNIX SYSTEMS

You should never run the standard fsck program on an AFS file server machine.
It will discard the files that make up AFS volumes on the partitions associated
with the /vicepx directories, because they are not standard UNIX directories.
In this step, you replace standard fsck with a modified fsck provided by
Transarc.  It properly checks both AFS and standard UNIX partitions.
To repeat:

NEVER run the standard vendor-supplied fsck program on an AFS file server
machine.  It discards AFS volumes.

You can tell you are running the correct AFS version when it displays the
banner:

[Transarc AFS 3.4 fsck]

For NCR UNIX systems, /etc/fsck is a link to the driver program distributed as
/sbin/fsck.  Rather than replacing either of them, you replace the actual
binary distributed as /etc/fs/ufs/fsck.

Step 1: Move the distributed fsck to a save file, install the
AFS-modified fsck ("vfsck") to the standard location and link the distributed
fsck to it.

	---------------------------------------------------------
	# mv  /etc/fs/ufs/fsck  /etc/fs/ufs/fsck.orig 

	# cp  /usr/afs/bin/vfsck  /etc/fs/ufs/vfsck       

	# ln  -s  /etc/fs/ufs/vfsck  /etc/fs/ufs/fsck 
	---------------------------------------------------------

Step 2: Proceed to Section 2.12 (page 2-65).


 2.9. GETTING STARTED ON SOLARIS SYSTEMS

On Solaris systems, begin by using Sun's modload program to load AFS into the
kernel dynamically (kernel building is not possible).  Then create partitions
for storing AFS volumes, and replace the standard fsck program with an AFS-safe
version.

 2.9.1. USING MODLOAD ON SOLARIS SYSTEMS

The modload program is the dynamic kernel loader provided by Sun Microsystems
for Solaris systems. Transarc's dkload program is not available for this system
type, nor is it possible to add AFS during a kernel build.

For this machine to remain an AFS machine, modload must run each time the
machine reboots.  You can invoke the facility automatically in the machine's
initialization files, as explained in Step 6.

To invoke modload:

Step 1: Verify that

 - the modload binary is available on the local disk (standard location is
/usr/sbin)

 - the /usr/vice/etc/modload directory on the local disk contains libafs.o and
libafs.nonfs.o

Step 2: Create the file /kernel/fs/afs as a copy of the appropriate AFS
library file.

	------------------------------------------------------------
	# cd /usr/vice/etc/modload                               

	If the machine's kernel supports NFS server functionality:
	
		# cp libafs.o  /kernel/fs/afs                            

	If the machine's kernel does not support NFS server functionality:
	
		# cp libafs.nonfs.o  /kernel/fs/afs                      
	------------------------------------------------------------

Step 3: Create an entry for AFS in the /etc/name_to_sysnum file to allow
the kernel to make AFS system calls.

-------------------------------------------------------------------------------
In the file /etc/name_to_sysnum, create an "afs" entry in slot 105 (the slot
just before the "nfs" entry) so that the file looks like:

rexit           1                                                            
fork            2                                                            
.              .                                                            
.              .                                                            
.              .                                                            
sigpending      99                                                           
setcontext      100                                                          
statvfs         103                                                          
fstatvfs        104                                                          
afs             105                                                          
nfs             106                                                          
-------------------------------------------------------------------------------

Step 4: If you are running a Solaris 2.4 system, reboot the machine.

	------------------------
	# /usr/sbin/shutdown -i6
	------------------------

Step 5: Invoke modload.

	---------------------------------------
	# /usr/sbin/modload  /kernel/fs/afs 
	---------------------------------------

If you wish to verify that AFS loaded correctly, use the modinfo command.

	-------------------------------
	# /usr/sbin/modinfo | egrep afs
	-------------------------------

The appearance of two lines that mention afs in the output indicates that AFS
loaded successfully, as in the following example (the exact values in the first
five columns are not relevant):

	69 fc71f000 4bc15 105   1  afs (afs syscall interface)
	69 fc71f000 4bc15  15   1  afs (afs file system)

Step 6: Copy the initialization script that Transarc provides for Solaris
systems as /usr/vice/etc/modload/afs.rc to the /etc/init.d directory, make sure
it is executable, and link it to the two locations where Solaris expects to find
it.

	--------------------------------------------
	# cd  /etc/init.d                        

	# cp  /usr/vice/etc/modload/afs.rc  afs     

	# chmod  555  afs                        

	# ln -s ../init.d/afs  /etc/rc3.d/S14afs 

	# ln -s ../init.d/afs  /etc/rc2.d/K66afs 
	--------------------------------------------


 2.9.2. SETTING UP AFS PARTITIONS ON SOLARIS SYSTEMS

AFS volumes must reside on partitions associated with directories named /vicepx,
where x is a lowercase letter of the alphabet.  By convention, the first
directory created on a machine to house AFS volumes is called /vicepa, the
second directory /vicepb, etc.

Every file server machine must have at least one partition devoted exclusively
to storing AFS volumes (preferably associated with the directory /vicepa). You
cannot simply create a directory under an existing directory (for example,
/usr/vicepa is not legal).

You can also perform the steps in this section when you want to set up a new
/vicepx partition on an existing file server machine.  In that case, you must
restart the fs process to force recognition of the new partition.  Complete
instructions also appear in Chapter 11 of the AFS System Administrator's Guide.
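
On an existing file server machine, a command of the following general form
restarts the fs process (this is only a sketch; Chapter 11 of the AFS System
Administrator's Guide gives the authoritative procedure):

	----------------------------------------------------------------
	# /usr/afs/bin/bos restart <machine name> fs -cell <cell name>
	----------------------------------------------------------------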

Step 1: Decide how many partitions to devote to storage of AFS volumes.
There must be at least one.  Then create a directory called /vicepx for each
one.  The example instruction creates three directories.

-----------------------------------------------------------------------------
Create a directory for each partition to be used for storing AFS volumes. 

	# mkdir /vicepa
	# mkdir /vicepb
	# mkdir /vicepc
	
	and so on
-----------------------------------------------------------------------------

Step 2: For each /vicep directory just created, add a line to /etc/vfstab,
the "file systems registry" file.

	--------------------------------------------------------------------
	Add the following line to /etc/vfstab for each /vicep directory. 

	/dev/dsk/<disk>  /dev/rdsk/<disk>  /vicep<x>  ufs  <fsck pass>  yes

	For example,                                                     

	/dev/dsk/c0t6d0s1 /dev/rdsk/c0t6d0s1 /vicepa ufs 3 yes           
	--------------------------------------------------------------------

Step 3: Create a file system on each partition.  The syntax shown should
be appropriate, but consult the operating system documentation for more
information.

	------------------------------------------------------------------
	Repeat this command to create a file system on each partition. 

	# newfs -v /dev/rdsk/<xxxxxxxx>                                
	------------------------------------------------------------------

Step 4: Mount the partition(s) by issuing the mountall command to mount
all partitions at once.
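
For example (mountall reads /etc/vfstab and mounts each file system whose
"mount at boot" field is set to yes):

	----------------
	# mountall
	----------------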

 2.9.3. REPLACING FSCK ON SOLARIS SYSTEMS

You should never run the standard fsck program on an AFS file server machine.
It will discard the files that make up AFS volumes on the partitions associated
with the /vicepx directories, because they are not standard UNIX directories.
In this step, you replace standard fsck with a modified fsck provided by
Transarc.  It properly checks both AFS and standard UNIX partitions.  To repeat:

NEVER run the standard vendor-supplied fsck program on an AFS file server
machine.  It discards AFS volumes.

You can tell you are running the correct AFS version when it displays the
banner:

[Transarc AFS 3.4 fsck]

For Solaris systems, /etc/fsck is a link to the driver program distributed as
/usr/sbin/fsck.  Rather than replacing either of them, you replace the actual
binary distributed as /usr/lib/fs/ufs/fsck.

Step 1: Move the distributed fsck to a save file, install the
AFS-modified fsck ("vfsck") to the standard location and link the distributed
fsck to it.

	---------------------------------------------------------
	# mv  /usr/lib/fs/ufs/fsck  /usr/lib/fs/ufs/fsck.orig 

	# cp  /usr/afs/bin/vfsck  /usr/lib/fs/ufs/vfsck       

	# ln  -s  /usr/lib/fs/ufs/vfsck  /usr/lib/fs/ufs/fsck 
	---------------------------------------------------------

Step 2: Proceed to Section 2.12 (page 2-65).


 2.10. GETTING STARTED ON SUNOS SYSTEMS

To load AFS into the kernel on SunOS systems, choose one of three methods:

 - dynamic loading using Transarc's dkload program (proceed to Section 2.10.1)

 - dynamic loading using Sun's modload program (proceed to Section 2.10.2)

 - building a new kernel (proceed to Section 2.10.3, page 2-54)

After loading AFS, you will continue by creating partitions for storing AFS
volumes and replacing the standard fsck program with an AFS-safe version.

 2.10.1. USING DKLOAD ON SUNOS SYSTEMS

The dkload program is the dynamic kernel loader provided by Transarc for SunOS
systems.  For this machine to remain an AFS machine, dkload must run each time
the machine reboots.  You can invoke dkload automatically in the machine's
initialization file (/etc/rc or equivalent), as explained in Step 3.

The files containing the AFS kernel modifications are libafs.a and
libafs.nonfs.a (the latter is appropriate if this machine's kernel does not
include support for NFS server functionality).

To invoke dkload:

Step 1: Verify that

 - there is at least one spare megabyte of space in /tmp for temporary files
created as dkload runs

 - the following are in /bin on the local disk (not as symbolic links): as, ld,
and nm

 - the /usr/vice/etc/dkload directory on the local disk contains:
dkload (the binary), libafs.a, libafs.nonfs.a, libcommon.a, and
kalloc.o

Step 2: Invoke dkload after running ranlib.

-------------------------------------------------------------------------------
If the machine's kernel does not include support for NFS server functionality, 
you must substitute libafs.nonfs.a for libafs.a.  Either use the mv command    
to replace libafs.a with libafs.nonfs.a in the /usr/vice/etc/dkload directory  
(before issuing these commands), or make the substitution on the command       
line.                                                                          

	# cd /usr/vice/etc/dkload

	# ranlib libafs.a

	# ranlib libcommon.a

	# ./dkload libafs.a

-------------------------------------------------------------------------------

Step 3: Modify the machine's initialization file (/etc/rc or equivalent)
to invoke dkload by copying the contents of /usr/vice/etc/dkload/rc.dkload (the
contents appear in full in Section 5.10).  Place the commands after the commands
that mount the file systems.  If the machine's kernel does not include support
for NFS server functionality, remember to substitute libafs.nonfs.a for
libafs.a.
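
The following is only a rough sketch of the kind of commands involved, meant
to show where they belong in the initialization file; copy the actual contents
of rc.dkload (reproduced in Section 5.10) rather than typing these lines:

	------------------------------------------------------------
	# (Sketch) Load AFS into the kernel at each reboot.
	if [ -f /usr/vice/etc/dkload/dkload ]; then
		cd /usr/vice/etc/dkload
		./dkload libafs.a
	fi
	------------------------------------------------------------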

Step 4: Proceed to Section 2.10.4 (page 2-56).

 2.10.2. USING MODLOAD ON SUNOS SYSTEMS

The modload program is the dynamic kernel loader provided by Sun Microsystems
for SunOS systems. For this machine to remain an AFS machine, modload must run
each time the machine reboots.  You can invoke modload automatically in the
machine's initialization file (/etc/rc or equivalent), as explained in step 3.

To invoke modload:

Step 1: Verify that

 - the /usr/vice/etc/modload directory on the local disk contains libafs.o and
libafs.nonfs.o

 - the modload binary is available on the local disk (standard location is
/usr/etc)

Step 2: Invoke modload.

-------------------------------------------------------------------------------
	# cd /usr/vice/etc/modload

	If the machine's kernel supports NFS server functionality:
	# /usr/etc/modload ./libafs.o 

	If the machine's kernel does not support NFS server functionality:
	# /usr/etc/modload ./libafs.nonfs.o

--------------------------------------------------------------------------------

Step 3: Modify the machine's initialization file (/etc/rc or equivalent)
to invoke modload, by copying in the contents of
/usr/vice/etc/modload/rc.modload (the contents appear in full in section 5.12).
Place the commands after the commands that mount the file systems.  If the
machine's kernel does not include support for NFS server functionality, remember
to substitute libafs.nonfs.o for libafs.o.
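
Again, the following is only a rough sketch to show where the commands belong;
copy the actual contents of rc.modload (reproduced in Section 5.12) rather
than typing these lines:

	------------------------------------------------------------
	# (Sketch) Load AFS into the kernel at each reboot.
	if [ -f /usr/vice/etc/modload/libafs.o ]; then
		/usr/etc/modload /usr/vice/etc/modload/libafs.o
	fi
	------------------------------------------------------------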

Step 4: Proceed to Section 2.10.4 (page 2-56).

 2.10.3. BUILDING AFS INTO THE KERNEL ON SUNOS SYSTEMS

For the sake of consistency with other system types, the complete instructions
for kernel building appear in Chapter 5.

Step 1: Follow the kernel building instructions in Section 5.8 (page
5-28).

Step 2: Move the existing kernel on the local machine to a safe location.

	-------------------------------
	# mv  /vmunix  /vmunix_save 
	-------------------------------

Step 3: Use a copying program (either cp or a remote program such as ftp
or NFS) to copy the AFS-modified kernel to the appropriate location.
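
For example, if you are copying with cp and the new kernel still resides in
the directory where you built it (the source path shown here is only a
placeholder):

	--------------------------------------------------
	# cp  <kernel build directory>/vmunix  /vmunix
	--------------------------------------------------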

Step 4: Reboot the machine to start using the new kernel.

	------------------
	# shutdown -r now
	------------------


 2.10.4. SETTING UP AFS PARTITIONS ON SUNOS SYSTEMS

AFS volumes must reside on partitions associated with directories named /vicepx,
where x is a lowercase letter of the alphabet.  By convention, the first
directory created on a machine to house AFS volumes is called /vicepa, the
second directory /vicepb, etc.

Every file server machine must have at least one partition devoted exclusively
to storing AFS volumes (preferably associated with the directory /vicepa). You
cannot simply create a directory under an existing directory (for example,
/usr/vicepa is not legal).

You can also perform the steps in this section when you want to set up a new
/vicepx partition on an existing file server machine.  In that case, you must
restart the fs process to force recognition of the new partition.  Complete
instructions also appear in Chapter 11 of the AFS System Administrator's Guide.

Step 1: Decide how many partitions to devote to storage of AFS volumes.
There must be at least one.  Then create a directory called /vicepx for each
one.  The example instruction creates three directories.

-----------------------------------------------------------------------------
Create a directory for each partition to be used for storing AFS volumes. 

	# mkdir /vicepa

	# mkdir /vicepb

	# mkdir /vicepc

	and so on
-----------------------------------------------------------------------------

Step 2: For each /vicep directory just created, add a line to /etc/fstab,
the "file systems registry" file.

-------------------------------------------------------------------
Add the following line to /etc/fstab for each /vicep directory. 

	/dev/<disk> /vicep<x> 4.2 rw 1 2                                

	For example,                                                    

	/dev/sd0g /vicepa 4.2 rw 1 2                                    
-------------------------------------------------------------------

Step 3: Create a file system on each partition.  The syntax shown should
be appropriate, but consult the SunOS documentation for more information.

------------------------------------------------------------------
Repeat this command to create a file system on each partition. 

	# newfs -v /dev/rsd<xx>                                        
------------------------------------------------------------------

Step 4: Mount the partition(s) by issuing either the mount -a command to
mount all partitions at once or the mount command to mount each partition in
turn.
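
For example:

	--------------------------------------------------
	To mount all partitions at once:

	# mount -a

	To mount each partition individually:

	# mount /vicepa
	--------------------------------------------------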


 2.10.5. REPLACING FSCK ON SUNOS SYSTEMS

You should never run the standard fsck program on an AFS file server machine.
It will discard the files that make up AFS volumes on the partitions associated
with the /vicepx directories, because they are not standard UNIX directories.
In this step, you replace standard fsck with a modified fsck provided by
Transarc.  It properly checks both AFS and standard UNIX partitions.
To repeat:

NEVER run the standard vendor-supplied fsck program on an AFS file server
machine.  It discards AFS volumes.

You can tell you are running the correct AFS version when it displays the
banner:

[Transarc AFS 3.4 fsck]

Step 1: Move standard fsck to a save file, install the AFS-modified fsck
("vfsck") to the standard location and link standard fsck to it.

	--------------------------------------------
	# mv  /usr/etc/fsck  /usr/etc/fsck.orig  

	# cp  /usr/afs/bin/vfsck  /usr/etc/vfsck 

	# rm  /etc/fsck                          

	# ln  -s  /usr/etc/vfsck  /etc/fsck      

	# ln  -s  /usr/etc/vfsck  /usr/etc/fsck  
	--------------------------------------------

Step 2: Proceed to Section 2.12 (page 2-65).

 2.11. GETTING STARTED ON ULTRIX SYSTEMS

To load AFS into the kernel on Ultrix systems, choose one of two methods:

 - dynamic loading using Transarc's dkload program (proceed to Section 2.11.1)

 - building a new kernel, if you have an Ultrix source license (proceed to
Section 2.11.2, page 2-60)

After loading AFS, you will continue by creating partitions for storing AFS
volumes and replacing the standard fsck program with an AFS-safe version.

 2.11.1. USING DKLOAD ON ULTRIX SYSTEMS

The dkload program is the dynamic kernel loader provided by Transarc for Ultrix
systems.  For this machine to remain an AFS machine, dkload must run each time
the machine reboots.  You can invoke dkload automatically in the machine's
initialization file (/etc/rc or equivalent), as explained in Step 3.

The files containing the AFS kernel modifications are libafs.a and
libafs.nonfs.a (the latter is appropriate if this machine's kernel does not
include support for NFS server functionality).

To invoke dkload:

Step 1: Verify that

 - there is at least one spare megabyte of space in /tmp for temporary files
created as dkload runs

 - the following are in /bin on the local disk (not as symbolic links): as, ld,
and nm.

 - the /usr/vice/etc/dkload directory on the local disk contains: dkload (the
binary), libafs.a, libafs.nonfs.a, libcommon.a, and kalloc.o


Step 2: Invoke dkload after running ranlib.

--------------------------------------------------------------------------------
If the machine's kernel does not include support for NFS server functionality,
you must substitute libafs.nonfs.a for libafs.a.  Either use the mv command to
replace libafs.a with libafs.nonfs.a in the /usr/vice/etc/dkload directory
(before issuing these commands), or make the substitution on the command line.

	# cd /usr/vice/etc/dkload

	# ranlib libafs.a

	# ranlib libcommon.a

	# ./dkload libafs.a
--------------------------------------------------------------------------------

Step 3: Modify the machine's initialization file (/etc/rc or equivalent)
to invoke dkload by copying the contents of /usr/vice/etc/dkload/rc.dkload (the
contents appear in full in Section 5.10).  Place the commands after the commands
that mount the file systems.  If the machine's kernel does not include support
for NFS server functionality, remember to substitute libafs.nonfs.a for
libafs.a.

Step 4: Proceed to Section 2.11.3 (page 2-62).

 2.11.2. INSTALLING AN AFS-MODIFIED KERNEL ON AN ULTRIX SYSTEM

For the sake of consistency with other system types, the complete instructions
for kernel building appear in Chapter 5.

Step 1: Follow the kernel building instructions in Section 5.9 (page
5-35).

Step 2: Move the existing kernel on the local machine to a safe location.

	-------------------------------
	# mv  /vmunix  /vmunix_save 
	-------------------------------

Step 3: Use a copying program (either cp or a remote program such as ftp
or NFS) to copy the AFS-modified kernel to the appropriate location.

Step 4: Reboot the machine to start using the new kernel.

	---------------------------------
	# shutdown -r now
	---------------------------------


 2.11.3. SETTING UP AFS PARTITIONS ON ULTRIX SYSTEMS

AFS volumes must reside on partitions associated with directories named /vicepx,
where x is a lowercase letter of the alphabet.  By convention, the first
directory created on a machine to house AFS volumes is called /vicepa, the
second directory /vicepb, etc.

Every file server machine must have at least one partition devoted exclusively
to storing AFS volumes (preferably associated with the directory /vicepa). You
cannot simply create a directory under an existing directory (for example,
/usr/vicepa is not legal).

You can also perform the steps in this section when you want to set up a new
/vicepx partition on an existing file server machine.  In that case, you must
restart the fs process to force recognition of the new partition.  Complete
instructions also appear in Chapter 11 of the AFS System Administrator's Guide.

Step 1: Decide how many partitions to devote to storage of AFS volumes.
There must be at least one.  Then create a directory called /vicepx for each
one.  The example instruction creates three directories.

-----------------------------------------------------------------------------
Create a directory for each partition to be used for storing AFS volumes. 

	# mkdir /vicepa

	# mkdir /vicepb

	# mkdir /vicepc

	and so on                                                             
-----------------------------------------------------------------------------

Step 2: For each /vicep directory just created, add a line to /etc/fstab,
the "file systems registry" file.

-------------------------------------------------------------------
Add the following line to /etc/fstab for each /vicep directory. 

	/dev/<disk>:/vicep<x>:rw:1:2:ufs::                              

	For example,                                                    

	/dev/rz4a:/vicepa:rw:1:2:ufs::                                  
-------------------------------------------------------------------

Step 3: Create a file system on each partition.  The syntax shown should
be appropriate, but consult the Ultrix documentation for more
information.

------------------------------------------------------------------
Repeat this command to create a file system on each partition. 

	# newfs -v /dev/rhd<xx> <disk type>                            
------------------------------------------------------------------

Step 4: Mount the partition(s) by issuing either the mount -a command to
mount all partitions at once or the mount command to mount each partition in
turn.

 2.11.4. REPLACING FSCK ON ULTRIX SYSTEMS

You should never run the standard fsck program on an AFS file server machine.
It will discard the files that make up AFS volumes on the partitions associated
with the /vicepx directories, because they are not standard UNIX directories.
In this step, you replace standard fsck with a modified fsck provided by
Transarc.  It properly checks both AFS and standard UNIX partitions.  To repeat:

NEVER run the standard vendor-supplied fsck program on an AFS file server
machine.  It discards AFS volumes.

You can tell you are running the correct AFS version when it displays the
banner:

[Transarc AFS 3.4 fsck]

Step 1: Move standard fsck to a save file, install the AFS-modified fsck
("vfsck") to the standard location and link standard fsck to it.

	----------------------------------------
	# mv  /bin/fsck  /bin/fsck.orig      

	# cp  /usr/afs/bin/vfsck  /bin/vfsck 

	# rm  /etc/fsck                      

	# ln  -s  /bin/vfsck  /etc/fsck      
	----------------------------------------

 2.12. STARTING THE BOS SERVER

You are now ready to start the AFS server processes on this machine.  The Basic
OverSeer (BOS) Server monitors other AFS server processes on its file server
machine, controlling several aspects of their behavior.  Therefore, you must
start the BOS server first, using the bosserver command.

You will invoke bosserver with authorization checking turned off.  In this mode
anyone can issue any bos command, because the BOS Server does not check that
the issuer has the privileges normally required for issuing the commands.  You
must work with authorization checking turned off because you have not yet
created the Authentication Database entries necessary for authentication; you
will do this in Section 2.15.

Working with authorization checking turned off is a grave security risk,
because it means that anyone who can log onto the machine can issue any AFS
command.  Complete all steps in this chapter in one uninterrupted pass and do
not leave the machine unattended until you turn on authorization checking in
Section 2.30.

As it initializes for the first time, the BOS Server creates the following
directories and files.  It sets their owner to "root" and sets their mode bits
so that no one but the owner can write them; in some cases, it also disables
reading.  For explanations of the contents and function of these directories
and files, refer to Chapter 3 of the AFS System Administrator's Guide.  For
further discussion of the mode bit settings, see Section 2.35.3 at the end of
this chapter.

 - /usr/afs/db

 - /usr/afs/etc/CellServDB

 - /usr/afs/etc/ThisCell

 - /usr/afs/local

 - /usr/afs/logs

The BOS Server also creates /usr/vice/etc/ThisCell and /usr/vice/etc/CellServDB
as symbolic links to the corresponding files in /usr/afs/etc.  This is necessary
because the AFS command interpreters such as bos and kas, which generally run on
client machines, consult the CellServDB and ThisCell files in /usr/vice/etc for
information about which cell's server processes they should contact.  Because
you are installing a file server machine, these files currently reside only in
/usr/afs/etc rather than /usr/vice/etc; the links enable the command
interpreters to retrieve the information they need.  You will replace the links
with actual files when you make this machine into a client (starting
in Section 2.23).


Step 1: Transarc provides your AFS license number in a letter that comes
with the Binary Distribution.  Copy this number into /usr/afs/etc/License.  The
absence of that file prevents the BOS Server from starting, so if you do not
know your license number you must contact AFS Product Support before continuing.

	-----------------------------------------------------------
	Copy your AFS license number into /usr/afs/etc/License. 

	# mkdir /usr/afs/etc                                    

	# echo "<license number>" > /usr/afs/etc/License        

	Verify that the /usr/afs/etc/License file exists.
	-----------------------------------------------------------

Step 2: Start the BOS Server with authorization checking turned off by
invoking bosserver with the -noauth flag.

	--------------------------------------
	# /usr/afs/bin/bosserver -noauth & 
	--------------------------------------

Step 3: Verify that the BOS Server created /usr/vice/etc/ThisCell and
/usr/vice/etc/CellServDB as links to the corresponding files in /usr/afs/etc.
If not, create the links manually.

-------------------------------------------------------------------------------
	# ls  -l  /usr/vice/etc

If /usr/vice/etc/ThisCell and /usr/vice/etc/CellServDB do not exist, or are 
not links:                                                                  

	# ln  -s  /usr/afs/etc/ThisCell  /usr/vice/etc/ThisCell

	# ln  -s  /usr/afs/etc/CellServDB  /usr/vice/etc/CellServDB
-------------------------------------------------------------------------------


 2.13. DEFINING THE CELL NAME AND THE MACHINE'S CELL MEMBERSHIP

Now assign your cell's name.  You should already have selected a cell name that
follows the ARPA Internet Domain System conventions.  Select your cell name
carefully because it is very complicated to change once set.  Chapter 2 of the
AFS System Administrator's Guide discusses the issues surrounding cell name
selection and explains why the cell name is difficult to change.  That chapter
also lists important restrictions on the format of a cell name; two of the most
important restrictions are that the name cannot include uppercase letters or
more than 64 characters.

Use the bos setcellname command to assign the cell name.  It creates two files:

 - /usr/afs/etc/ThisCell, which defines this machine's cell membership and
determines which cell's server processes the command interpreters on this
machine contact by default

 - /usr/afs/etc/CellServDB file, which lists the cell's database server
machines.  This machine (being installed) is automatically placed on the list.

Note: Throughout this chapter, substitute the complete Internet host name of the
machine you are installing (the local machine) for machine name.  By convention,
machine names use their cell's name as a suffix (for example, fs1.transarc.com).

Step 1: Invoke bos setcellname to set the cell name.

--------------------------------------------------------------------------------
Issue the bos setcellname command, substituting this machine's complete        
Internet host name for machine name and the chosen cell name for cell name.    

	# cd /usr/afs/bin

	# bos setcellname <machine name> <cell name>

Until you authenticate as admin in Section 2.27, you may see error messages when
you issue bos commands.  You may safely ignore messages about bos being unable
to get tickets and/or running unauthenticated.
--------------------------------------------------------------------------------


Step 2: Check the cell name as it appears in CellServDB.  The output also
lists the local machine's name as the cell's only database server machine.

	----------------------------------
	# bos listhosts <machine name> 
	----------------------------------

 2.14. STARTING THE AUTHENTICATION SERVER

Start the Authentication Server, kaserver, by using the bos create command to
create an entry for it in /usr/afs/local/BosConfig. This process runs on
database server machines only.

 2.14.1. A NOTE ON KERBEROS

AFS's authentication and protection protocols are based on algorithms and other
procedures known as "Kerberos," as originally developed by Project Athena at the
Massachusetts Institute of Technology.  Some cells choose to replace AFS's
protocols with Kerberos as obtained directly from Project Athena or other
sources.  If you wish to do this, contact AFS Product Support now to learn about
necessary modifications to the installation.

 2.14.2. INSTRUCTIONS FOR INSTALLING THE AUTHENTICATION SERVER

The remaining instructions in these server installation sections direct you to
supply the -cell argument on all applicable commands.  Provide the cell name you
assigned in Section 2.13.  This is mostly a precaution, since the cell is
already defined in /usr/afs/etc/ThisCell.

The remaining instructions also assume that the current working directory is
/usr/afs/bin (you moved there in Section 2.13).

Step 1: Start the Authentication Server by issuing the bos create command
to create a BosConfig entry for the kaserver process.

---------------------------------------------------------------------
Type the following bos create command on a single line:           

# bos create <machine name> kaserver simple /usr/afs/bin/kaserver -cell <cell name>
---------------------------------------------------------------------


Messages may appear periodically on your console, indicating that
AFS's distributed database technology, Ubik, is electing a quorum
(quorum election is necessary even with a single server).  After a
few minutes, a final message indicates that the Authentication
Server is ready to process requests.

You can safely ignore any messages that tell you to add Kerberos to the
/etc/services file; AFS uses a default value that makes the addition
unnecessary.


 2.15. INITIALIZING SECURITY MECHANISMS

The Authentication Server process you just created maintains the AFS
Authentication Database, which records a password entry for each user in your
cell and an entry called afs for the AFS server processes.  In this section you
will create initial entries in the Authentication Database and in some related
configuration files.  These entries are crucial to security in your cell.

The two entries that you will create in the Authentication Database are:

 - admin, a "generic" administrative account.  You can name this account
something other than admin; if you do so, substitute that name throughout this
chapter.

Use of a generic administrative account means you do not need to grant
privileges separately to each system administrator.  Instead, each administrator
knows admin's password and authenticates under that identity when performing
tasks that require administrative privilege.

Like all user entries in the Authentication Database, admin's entry records the
user name (admin) and a password, scrambled into an encryption key.

 - afs, the entry for AFS server processes.  No one will ever log in as afs, but
the Authentication Server's Ticket Granting Service (TGS) module uses the
password field to encrypt the server tickets that AFS clients must present to
servers during mutual authentication. (See Chapter 1 in the AFS System
Administrator's Guide to review the role of server encryption keys in mutual
authentication).

After creating the accounts, you will then grant administrative privilege to
admin. Issuing the kas setfields command in Step 2 makes admin able to issue any
kas command. Issuing bos adduser in Step 4 adds admin to /usr/afs/etc/UserList,
which makes admin able to issue any bos or vos command (the BOS Server, Volume
Server, and Volume Location Server consult UserList to determine privilege).

In Step 5, you will place the AFS server encryption key into
/usr/afs/etc/KeyFile.  The AFS server processes refer to this file to learn the
server encryption key when they need to decrypt server tickets.

Note: Most kas commands always prompt for a password (in the following steps the
prompt is Password for root:).  Because authorization checking is currently
disabled, the Authentication Server does not check the validity of the password
provided.  You can type any character string, such as a series of spaces, and
press <Return>.

You can also ignore the error messages that appear after you provide the
password.  They appear only because authorization checking is disabled and do
not indicate actual errors.


Step 1: Create Authentication Database entries for afs and admin.

---------------------------------------------------------------------------------
Issue the kas create command to create Authentication Database entries for admin
and afs.  Substitute the desired passwords for afs_passwd and admin_passwd,
using any string longer than six characters.  Be sure to remember both
passwords.

In the following example commands, the passwords are not provided on the command
line, so kas prompts for them.  The advantage is that the passwords do not echo
visibly on the screen, but if you wish to type them on the command line, place
them between the account name and the -cell flag.

Note: At the Password for root: prompt, press space and <Return>.               

# kas create afs -cell <cell name>                                              
Password for root:                                                              
initial_password:                                                   
Verifying, please re-enter initial_password:                        

# kas create admin -cell <cell name>                                            
Password for root:                                                              
initial_password:                                                 
Verifying, please re-enter initial_password:                      

You may safely ignore any error messages indicating that authentication failed. 
---------------------------------------------------------------------------------

Step 2: Define admin to be a privileged issuer of kas commands.

---------------------------------------------------------------------------------
Issue the kas setfields command to designate admin as a privileged administrator 
with respect to the Authentication Database.                                     

Note: At the Password for root: prompt, press space and <Return>.                

	# kas setfields admin -flags admin -cell <cell name>

	Password for root:

You may safely ignore any error messages indicating that authentication failed.  
--------------------------------------------------------------------------------

Step 3: Verify that admin has the ADMIN flag in its Authentication
Database entry.

--------------------------------------------------------------------------------
Issue the kas examine command.  The ADMIN flag should appear near the top of the
output, indicating that admin is privileged.

Note: At the Password for root: prompt, press space and <Return>.

	# kas examine admin -cell <cell name>

	Password for root:

You may safely ignore any error messages indicating that authentication failed. 
--------------------------------------------------------------------------------


Step 4: Define admin to be a privileged issuer of bos and vos commands.
Doing so has no immediate consequence, but becomes important after you enable
authorization checking (as described in Section 2.30).

	------------------------------------------------------------
	Issue bos adduser to add admin to /usr/afs/etc/UserList. 

	# bos adduser <machine name> admin -cell <cell name>     
	------------------------------------------------------------

Step 5: Add the AFS server encryption key (afs_passwd) to
/usr/afs/etc/KeyFile.

--------------------------------------------------------------------------------
Issue the bos addkey command to define afs_passwd as the first server encryption 
key, with key version number 0. Be sure to use the same afs_passwd as in Step 1. 

In the following example commands, afs_passwd is not provided on the command     
line, so bos prompts for it.  The advantage is that the password does not echo   
visibly on the screen, but if you wish to type it on the command line, place it  
between machine name and the -kvno switch.                                       

	# bos addkey <machine name> -kvno 0 -cell <cell name>

	Input key: 

	Retype input key: 
--------------------------------------------------------------------------------

Step 6: Use kas examine and bos listkeys to verify that the key derived
from afs_passwd is the same in the Authentication Database afs entry and in the
KeyFile.  Note that you can see the actual octal numbers making up the key only
because authorization checking is temporarily disabled.

Note: You should change the server encryption key at least once per month from
now on.  When you do, follow the instructions in Chapter 10 of the AFS System
Administrator's Guide, being sure to change the key both in the KeyFile and in
the Authentication Database.


------------------------------------------------------------------------------
Note: At the Password for root: prompt, press space and <Return>.

	# kas examine afs                                                          
	Password for root:                                                         
	# bos listkeys <machine name> -cell <cell name> 

You may safely ignore any error messages indicating that bos failed to get
tickets and/or that authentication failed.
------------------------------------------------------------------------------

Note: If the keys are different, issue the following commands, making sure that
afs_passwd is the same in both cases.

# kas setpassword afs -kvno 1 -cell <cell name>
Password for root:
new_password: 
Verifying, please re-enter new_password: 

# bos addkey <machine name> -kvno 1 -cell <cell name>
Input key: 
Retype input key: 

Reissue the kas examine and bos listkeys commands to verify that the keys with
key version number 1 now match.  Repeat the instructions in this box as
necessary.

 2.16. STARTING THE PROTECTION SERVER

Start the Protection Server, ptserver, which maintains the Protection Database.
This process runs on database server machines only.

As it initializes for the first time, the Protection Server automatically
creates "system" entries in the Protection Database, including the
system:administrators group, and assigns them AFS UIDs.  You will add admin to
system:administrators to enable it to issue privileged pts and fs commands.

Step 1: Start the Protection Server by issuing the bos create command to
create a BosConfig entry for it.

---------------------------------------------------------------------
Type the following bos create command on a single line:           

# bos create <machine name> ptserver simple /usr/afs/bin/ptserver -cell <cell name>
---------------------------------------------------------------------

Messages may appear periodically on your console, indicating that AFS's
distributed database technology, Ubik, is electing a quorum (quorum election is
necessary even with a single server).  Wait a few minutes after the last
message, to guarantee that the Protection Server is ready to execute requests.

Step 2: Add admin to the system:administrators group.

-----------------------------------------------------------------------
Create a Protection Database entry for admin and add its name to the
system:administrators group.  If you substituted a name other than admin in
Section 2.15, use that name here.  Assuming admin is the first Protection
Database entry you have created, it will be assigned the AFS UID 1.  If you wish
to assign a different AFS UID, use the -id argument on pts createuser.  If your
cell's local password file (/etc/passwd or equivalent) already has an entry for
admin, you should use the -id argument to make admin's AFS UID the same as its
UNIX UID.  See Chapters 16 and 17 of the AFS System Administrator's Guide for
further discussion of matching AFS and UNIX UIDs.

	# pts createuser -name admin -cell <cell name> [-id <AFS UID>]
	
	# pts adduser admin system:administrators -cell <cell name>

For these and the following pts command, you may safely ignore error messages
indicating that entries or tokens are missing or that authentication failed.
-------------------------------------------------------------------------


Step 3: Verify that admin was correctly added to system:administrators.

-------------------------------------------------------------------------
Check that the output lists system:administrators as a group to which 
admin belongs.                                                        

	# pts membership admin -cell <cell name>                              
-------------------------------------------------------------------------


 2.17. STARTING THE VOLUME LOCATION SERVER

Start the Volume Location Server, vlserver, by using the bos create command to
create an entry for it in /usr/afs/local/BosConfig.  This process maintains the
Volume Location Database (VLDB), and runs on database server machines only.

Step 1: Start the Volume Location Server by issuing the bos create
command to create a BosConfig entry for it.

---------------------------------------------------------------------
Type the following bos create command on a single line:           

# bos create <machine name> vlserver simple /usr/afs/bin/vlserver -cell <cell name>                                     
---------------------------------------------------------------------

Messages may appear periodically on your console, indicating that AFS's
distributed database technology, Ubik, is electing a quorum (quorum election is
necessary even with a single server).  Wait a few minutes after the last
message, to guarantee that the VL Server is ready to execute requests.

 2.18. STARTING THE BACKUP SERVER

Start the Backup Server, buserver, by using the bos create command to create an
entry for it in /usr/afs/local/BosConfig.  This process maintains the Backup
Database, and runs on database server machines only.  The chapter entitled
"Backing Up the System" in the AFS System Administrator's Guide details the
other instructions you must perform before actually using the Backup System.

Step 1: Start the Backup Server by issuing the bos create command to
create a BosConfig entry for it.

---------------------------------------------------------------------
Type the following bos create command on a single line:           

	# bos create <machine name> buserver simple /usr/afs/bin/buserver -cell <cell name>                                     

---------------------------------------------------------------------

Messages may appear periodically on your console, indicating that AFS's
distributed database technology, Ubik, is electing a quorum (quorum election is
necessary even with a single server).  Wait a few minutes after the last
message, to guarantee that the Backup Server is ready to execute requests.

 2.19. STARTING THE FILE SERVER, VOLUME SERVER, AND SALVAGER

Start the fs process, which "binds together" the File Server, Volume Server and
Salvager (fileserver, volserver and salvager processes).

Step 1: Start the fs process by issuing the bos create command to create a
BosConfig entry for it.

------------------------------------------------------------------------------
Type the following bos create command on a single line:                

	# bos create <machine name> fs fs /usr/afs/bin/fileserver
	/usr/afs/bin/volserver /usr/afs/bin/salvager -cell <cell name>  
------------------------------------------------------------------------------

You may see a message regarding the initialization of the VLDB and a few error
messages of the form

fsync_clientinit failed (will sleep and retry): connection refused

These messages appear because the volserver process cannot start operation until
the fileserver process initializes.  Initialization can take a while, especially
if you already have a large number of existing AFS files.  Wait a few minutes
after the last such message, to guarantee that both component processes have
started successfully.

One way to check that the fs process has started successfully is to issue the
following command.  Check that the entry for fs in the output reports two "proc
starts."

# bos status <machine name> -long

Step 2: What you do in this step depends on whether you have previously
run AFS on this machine or not.  If you have not run AFS before, you will create
your cell's root volume.  If you have run AFS before, you will synchronize the
VLDB with the volumes that already exist.

-------------------------------------------------------------------------------
If you have not previously run AFS on this file server machine, create the root
volume for AFS, root.afs. For <partition name>, substitute the name of one of
the machine's AFS partitions (such as /vicepa).

# vos create <machine name> <partition name> root.afs -cell <cell name>         

Messages may appear indicating that the Volume Server created root.afs on the
indicated partition on the local machine, and created a VLDB entry for it. You
may safely ignore error messages indicating that tokens are missing, or that
authentication failed.
-------------------------------------------------------------------------------


OR

-------------------------------------------------------------------------------
If you have previously run AFS on this file server machine, synchronize the VLDB
to reflect the state of volumes on the local machine.  The synchronization may
take several minutes; to follow its progress, use the -verbose flag.

	# vos syncvldb <machine name> -cell <cell name> -verbose

	# vos syncserv <machine name> -cell <cell name> -verbose
-------------------------------------------------------------------------------

You may safely ignore error messages indicating that tokens are missing, or that
authentication failed.


 2.20. STARTING THE SERVER PORTION OF THE UPDATE SERVER

Start the server portion of the Update Server, upserver, to distribute this
machine's copies of files from

 - /usr/afs/bin to other file server machines of its system type.

Distributing its copies of the binary files in /usr/afs/bin is what makes this
machine the binary distribution machine for its system type.  The other file
server machines of its system type run the appropriate client portion of the
Update Server, upclientbin.  This means that the administrator should install
updated binaries only on the binary distribution machine of each system type.

 - /usr/afs/etc to all other file server machines, if you are running the United
States edition of AFS.  AFS customers in the United States and Canada receive
this edition.

Distributing its copies of the configuration files in /usr/afs/etc is what makes
this machine the system control machine.  The other file server machines in the
cell run the appropriate client portion of the Update Server, upclientetc.  This
means that administrators at sites using the United States edition of AFS should
update the files in /usr/afs/etc only on the system control machine.
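
When you later install additional file server machines, each one starts
upclientbin with a bos create command resembling the following sketch, typed
on a single line.  The exact upclient arguments shown here are illustrative
only; consult the upclient entry in the AFS Command Reference Manual before
using them.

	--------------------------------------------------------------------
	# bos create <machine name> upclientbin simple "/usr/afs/bin/upclient
	<binary distribution machine> -clear /usr/afs/bin" -cell <cell name>
	--------------------------------------------------------------------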

Some of the files in /usr/afs/etc, particularly the KeyFile, are very sensitive
(crucial to cell security), so it is important that the upserver process encrypt
them before distribution across the network.  To guarantee this, use the -crypt
flag as indicated on the following upserver initialization command.

Sites using the international edition of AFS should not use the Update Server to
distribute the contents of /usr/afs/etc (thus international cells do not run a
system control machine).  Due to United States government export restrictions,
the international edition of AFS does not include the encryption routines that
the Update Server uses to encrypt information before sending it across the
network.  Instead of using the upserver process to distribute the contents of
/usr/afs/etc, you must update the configuration files on each file server
machine individually (for example, when you update the KeyFile). The bos
commands you use to update the configuration files are able to encrypt data to
protect it as it crosses the network.  Instructions in the AFS System
Administrator's Guide explain how to update configuration files on individual
machines.

The binaries in /usr/afs/bin are not important to cell security, so it is not
necessary to encrypt them before distribution across the network.  With both
editions of AFS, you will mark /usr/afs/bin with the upserver initialization
command's -clear flag, to indicate that the upserver process should encrypt the
contents of /usr/afs/bin only if an upclientbin process requests them that way.

Note that the server and client portions of the Update Server always mutually
authenticate with one another, in both the United States and international
versions of AFS, and regardless of whether you use the -clear or -crypt flag.
This protects their communications from eavesdropping to some degree.

See the AFS Command Reference Manual entries on upclient and upserver for more
information on these commands.

Step 1: Start the server portion of the Update Server by issuing the bos
create command to create a BosConfig entry for it.

-------------------------------------------------------------------------------
If using the United States edition of AFS, type the following command on a
single line:

	# bos create <machine name> upserver simple "/usr/afs/bin/upserver
	-crypt /usr/afs/etc -clear /usr/afs/bin"  -cell <cell name>           

If using the international edition of AFS, type the following command on a
single line:

	# bos create <machine name> upserver simple  "/usr/afs/bin/upserver
	-clear /usr/afs/bin"  -cell <cell name>                               
--------------------------------------------------------------------------------

 2.21. STARTING THE CONTROLLER FOR NTPD

In this section you start the AFS process, runntp, that controls the Network
Time Protocol Daemon (NTPD).  This daemon runs on all of your cell's file
server machines, and keeps their internal clocks synchronized.  As the cell's
system control machine, this machine is special: it is the only one that refers
to a machine outside the cell as its time standard or source.  All other file
server machines in the cell will refer to this machine as their time source.

Note: Do not create the runntp process if ntpd is already running on this
machine; attempting to run multiple instances of ntpd causes an error.
Similarly, you can skip this section if some other time synchronization protocol
is running on this machine; running ntpd does not cause an error in this case,
but is unnecessary.

Keeping the clocks on your cell's file server machines synchronized is crucial
to the correct operation of AFS's distributed database technology (Ubik).
Chapter 2 of the AFS System Administrator's Guide explains in some detail how
unsynchronized clocks can disturb Ubik's performance and cause service outages
in your cell.

Choosing an appropriate external time source is important, but involves more
considerations than can be discussed in the space available here.  If you need
help in selecting a source, contact AFS Product Support.  The AFS Command
Reference Manual provides more information on the runntp command's arguments.

As the runntp process initializes NTPD, trace messages may appear on the
console.  You may safely ignore these messages, but might find them interesting
if you understand how NTPD works.

Step 1: Verify that ntpd and ntpdc exist in /usr/afs/bin.

	---------------------
	# ls /usr/afs/bin 
	---------------------

Step 2: Start the runntp process on the system control machine to control
NTPD.

--------------------------------------------------------------------------------
Initialize runntp by issuing the following command.  For time server machine(s),
substitute the IP address of one or more machines that will serve as an external
time source, separating each name with a space.  Include this argument on this
machine only.  For help in selecting appropriate machine(s), contact AFS Product
Support.

The -localclock flag tells NTPD to rely on its internal clock during periods
when the machine loses contact with its external time source(s).  You should
only use it on this machine, and only if your cell is subject to frequent
network outages that might separate this machine from its external time source.

Type one of the following commands on a single line, depending on your cell's
network connectivity:

 - If your cell usually has network connectivity to an external time source, use
the following command:

	# bos create <system_control_machine> runntp simple
	"/usr/afs/bin/runntp <time_server_machine(s)>"

 - If your cell does not have network connectivity to an external time source, use
the following command:

	# bos create <system_control_machine> runntp simple
	"/usr/afs/bin/runntp -localclock"

 - If your cell has network connectivity to an external time source, but the
network connection is frequently broken, use the following command:

	# bos create <system_control_machine> runntp simple
	"/usr/afs/bin/runntp -localclock <time_server_machine(s)>"


--------------------------------------------------------------------------------

 2.22. COMPLETING THE INSTALLATION OF SERVER FUNCTIONALITY

You have now started all the AFS server processes that will run on this machine.
In this section, you will reboot the machine to make sure that all processes
will restart successfully.

The first step is to make sure that the bosserver initialization command appears
in the machine's initialization file.  At each reboot, the BOS Server starts up
in response to this command, and starts all processes in
/usr/afs/local/BosConfig that have status flag Run (each bos create command you
issued in this chapter set the process' status flag to Run).

Step 1: Verify that the machine's initialization file invokes bosserver,
so that the BOS Server starts automatically at each file server reboot.

--------------------------------------------------------------------------------
On system types other than Digital UNIX, IRIX, NCR UNIX and Solaris, add the
following lines to /etc/rc or equivalent, after the lines that configure the
network, mount all file systems, and invoke a kernel dynamic loader.

	if [ -f /usr/afs/bin/bosserver ]; then
		echo 'Starting bosserver' > /dev/console
		/usr/afs/bin/bosserver &
	fi

If the machine runs Digital UNIX, IRIX, NCR UNIX or Solaris, no action is
necessary.  The "init.d" initialization script includes tests that result in
automatic BOS Server startup if appropriate.
-------------------------------------------------------------------------------


Step 2: Shut down the AFS server processes in preparation for reboot.  You may
wish to check whether any users are logged on and notify them first.

Using the -wait flag on the bos shutdown command guarantees that all
processes shut down before the command line prompt returns.

---------------------------------------------------------------------------
Shut down the server processes on the machine in preparation for reboot.

	# cd /usr/afs/bin                                                       

	# bos shutdown <machine name>  -wait                                    
---------------------------------------------------------------------------

Step 3: Reboot the machine and log in again as "root."

The BOS Server starts automatically because of the bosserver command
you added to /etc/rc or its equivalent in Step 1.

	---------------------------
	# reboot           

	login: root             
	Password:               
	---------------------------

Step 4: Verify that all server processes are running normally.  The output
for each process should read "Currently running normally."

	-------------------------------
	# bos status <machine name> 
	-------------------------------

Step 5: Turn off authorization checking once again, in preparation for
installing the client functionality on this machine.  Authorization checking is
currently on because the bosserver command in /etc/rc (or equivalent) does not,
and should not, use the -noauth flag.

Remember that turning off authorization checking is a grave security risk.
Perform the remaining steps in this chapter in one uninterrupted session.  You
will turn on authorization checking in Section 2.30.

-----------------------------------------------------------------------------
Turn off authorization checking, and then proceed immediately through the
remainder of this chapter.

	# echo "" > /usr/afs/local/NoAuth
-----------------------------------------------------------------------------


 2.23. OVERVIEW: INSTALLING CLIENT FUNCTIONALITY

If you have followed the instructions presented thus far, the machine is
currently an AFS file server machine, database server machine, system control
machine (if you are using the United States edition of AFS), and binary
distribution machine.  It is ready for the installation of client functionality,
making it a client machine as well.

In the following sections, you will:

1. Define the machine's cell membership for client processes.

2. Create the client version of CellServDB.

3. Define cache location and size.

4. Create the /afs directory and start the Cache Manager.

 2.24. DEFINING THE CLIENT MACHINE'S CELL MEMBERSHIP

Every AFS client machine must have a copy of the /usr/vice/etc/ThisCell file on
its local disk.  This file defines the machine's cell membership for the AFS
client programs that run on it.  (The /usr/afs/etc/ThisCell file you created in
Section 2.13 is used only by server processes.)

Among other functions, the file ThisCell on a client machine determines:

 - the cell in which machine users authenticate by default

 - the cell whose file server processes are contacted by this machine's AFS
command interpreters by default

Step 1: Remove the symbolic link created in Section 2.12.

	--------------------------------
	# rm  /usr/vice/etc/ThisCell 
	--------------------------------

Step 2: Create /usr/vice/etc/ThisCell as a copy of /usr/afs/etc/ThisCell.

	-------------------------------------------------------
	# cp  /usr/afs/etc/ThisCell  /usr/vice/etc/ThisCell 
	-------------------------------------------------------

 2.25. CREATING THE CLIENT VERSION OF CellServDB

Every client machine's /usr/vice/etc/CellServDB file lists the database server
machines in each cell that the Cache Manager can contact.  If a cell is not
listed in this file, or if its list of database server machines is wrong, then
users working on this machine will be unable to access the cell's file tree.
For the Cache Manager to perform properly, the CellServDB file must be accurate
at all times.  Refer to the AFS System Administrator's Guide for instructions on
keeping this file up-to-date after initial creation.

The Cache Manager consults /usr/vice/etc/CellServDB only once per reboot. As
afsd initializes the Cache Manager, it copies the contents of CellServDB into
the kernel; until the next reboot, the Cache Manager consults this in-kernel
list of database server machines.  Section 2.34 and the AFS System
Administrator's Guide explain how to use the fs newcell command to update the
kernel list without rebooting.
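
For reference, the fs newcell command has the following general form (a sketch
only; see Section 2.34 for complete instructions):

	--------------------------------------------------------------------
	# fs newcell <cell name> <database server machines>
	--------------------------------------------------------------------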

A sample CellServDB file is included in the AFS Binary Distribution as
/usr/vice/etc/CellServDB.sample.  It includes all of the cells that had agreed
to be advertised at the time your Binary Distribution Tape was made; the
majority of the list is ordered by Internet address.  Follow the instructions in
this section to add your local cell's database server machines to this machine's
CellServDB.  Section 2.33 provides instructions for making all of the cells
accessible, and Section 2.34 shows how to add more foreign cells in the future.

Each cell's entry in CellServDB must have the following format.

 - The first line must begin with the > character, followed by the cell's
Internet domain name.  The domain name can be followed by a # sign (indicating a
comment) and a comment that explains the name.

 - Each subsequent line lists one database server machine in the cell.  Each
database server machine line must contain the Internet address of the server in
the standard four-component decimal form (e.g., 158.98.3.2), followed by a #
sign and the machine's Internet host name.  In this case, the # sign does not
indicate a comment; the hostname that follows is a required field.  The Cache
Manager attempts to contact the cell using the specified name, referring to the
Internet address only if contacting by name fails.

An example extract from CellServDB.sample appears after the instructions.

Step 1: Remove the symbolic link created in Section 2.12.

	----------------------------------
	# rm  /usr/vice/etc/CellServDB 
	----------------------------------

Step 2: Working in the /usr/vice/etc directory, rename CellServDB.sample
to CellServDB.

	-------------------------------------
	# cd /usr/vice/etc                

	# mv CellServDB.sample CellServDB 
	-------------------------------------

Step 3: Use a text editor to add an entry for your cell to CellServDB.
It does not matter where you put the entry, but for ease of future access you
may wish to place it either at the beginning of the file or in Internet address
order.

Be sure to format your cell's entry as described above.  The Cache Manager
cannot read CellServDB if it contains formatting errors (such as extra blank
lines between the database server machine lines).

Here is an example for a cell called school.edu with a single database server
machine called first.fs.school.edu at address 128.2.9.7:

>school.edu     #our home cell
128.2.9.7       #first.fs.school.edu

Step 4: If CellServDB includes cells that you do not wish users of this
machine to access, remove their entries.

To make all of the cells in CellServDB accessible to users of this machine, see
the instructions in Section 2.33.

The following extract from CellServDB.sample illustrates correct format.
(However, the machine names and addresses are subject to change.  Section 2.34
explains how to obtain current information about database server machines.)

>athena.mit.edu           #MIT/Athena cell
18.72.0.43                      #orf.mit.edu
18.80.0.2                       #maeander.mit.edu
18.70.0.6                       #prill.mit.edu
>andrew.cmu.edu           #Carnegie Mellon University - Campus
128.2.10.2                      #vice2.fs.andrew.cmu.edu
128.2.10.7                      #vice7.fs.andrew.cmu.edu
128.2.10.11                     #vice11.fs.andrew.cmu.edu
>cs.cmu.edu               #Carnegie Mellon University - SCS
128.2.242.86                    #lemon.srv.cs.cmu.edu
128.2.217.45                    #apple.srv.cs.cmu.edu
128.2.222.199                   #papaya.srv.cs.cmu.edu
>umich.edu                #University of Michigan - Campus
141.211.168.24                  #beachhead.ifs.umich.edu
141.211.168.25                  #toehold.ifs.umich.edu
141.211.168.28                  #bastion.ifs.umich.edu
>transarc.com             #Transarc Corporation
158.98.3.2                      #ernie.transarc.com
158.98.3.3                      #bigbird.transarc.com
158.98.14.3                     #oscar.transarc.com


 2.26. SETTING UP THE CACHE

Every AFS client must have a cache in which to store local copies of files
brought over from file server machines.  The Cache Manager can cache either on
disk or in machine memory.

For both types of caching, afsd consults the /usr/vice/etc/cacheinfo file as it
initializes the Cache Manager and cache to learn the defaults for cache size and
where to mount AFS locally. For disk caches, it also consults the file to learn
cache location.  You must create this file for both types of caching.

The file has three fields:

1. The first field specifies where to mount AFS on the local disk.  The standard
choice is /afs.

2. The second field defines the local disk directory to be used for caching, in
the case of a disk cache.  The standard choice is /usr/vice/cache, but you could
specify a different directory to take advantage of more space on other
partitions.  Something must appear in this field even if the machine uses memory
caching.

3. The third field defines cache size as a number of kilobyte (1024 byte)
blocks.  Make it as large as possible, but do not make the cache larger than 90%
to 95% of the space available on the partition housing /usr/vice/cache or in
memory: the cache implementation itself requires a small amount of space.  For
AIX systems using a disk cache, cache size cannot exceed 85% of the disk
capacity reported by the df command.  This difference between AIX and other
systems arises because AIX df reports actual disk capacity and usage, whereas
most other versions "hide" about 10% of disk capacity as a margin against
overuse.

Violating this restriction on cache size can cause errors or worsened
performance.
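
As an illustration only: if 100000 one-kilobyte blocks are available on the
partition housing /usr/vice/cache, limit a disk cache to roughly 90000 to 95000
blocks on most system types, and to roughly 85000 blocks on an AIX system (85%
of the capacity reported by df).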

Transarc recommends using an AFS cache size up to 1 GB.  Although it is possible
to have an AFS cache size as large as the size of the underlying file system,
Transarc does not recommend caches this large for routine use.

Disk caches smaller than 5 megabytes do not generally perform well, and you may
find the performance of caches smaller than 10 megabytes unsatisfactory,
particularly on system types that have large binary files.  Deciding on a
suitable upper limit is more difficult.  The point at which enlarging the cache
does not really improve performance depends on the number of users on the
machine, the size of the files they are accessing, and other factors.  A cache
larger than 40 megabytes is probably unnecessary on a machine serving only a few
users accessing files that are not huge.  Machines serving multiple users may
perform better with a cache of at least 60 to 70 megabytes.

Memory caches smaller than 1 megabyte are nonfunctional, and most users find the
performance of caches smaller than 5 megabytes to be unsatisfactory.  Again,
this depends on the number of users working on the machine and the number of
processes running.  Machines running only a few processes may be able to use a
smaller memory cache.

 2.26.1. SETTING UP A DISK CACHE

This section explains how to configure a disk cache.

Step 1: Create the cache directory.  This example instruction shows the
standard location, /usr/vice/cache.

	--------------------------------------------
	# mkdir /usr/vice/cache                  
	--------------------------------------------

Step 2: Create the cacheinfo file to define the boot-time defaults
discussed above.  This example instruction shows the standard mount location,
/afs, and the standard cache location, /usr/vice/cache.

---------------------------------------------------------------------
	# echo "/afs:/usr/vice/cache:<#blocks>" > /usr/vice/etc/cacheinfo 
---------------------------------------------------------------------

For example, to devote 10000 one-kilobyte blocks to the cache directory on this
machine, type:

	# echo "/afs:/usr/vice/cache:10000" > /usr/vice/etc/cacheinfo

 2.26.2. SETTING UP A MEMORY CACHE

This section explains how to configure a memory cache.

Step 1: Create the cacheinfo file to define the boot-time defaults
discussed above.  This example instruction shows the standard mount location,
/afs, and the standard cache location, /usr/vice/cache.  The location specified
is irrelevant for a memory cache, but a value must be provided.

---------------------------------------------------------------------
	# echo "/afs:/usr/vice/cache:<#blocks>" > /usr/vice/etc/cacheinfo 
---------------------------------------------------------------------

For example, to devote 10000 one-kilobyte blocks of memory to caching on this client
machine, type:

	# echo "/afs:/usr/vice/cache:10000" > /usr/vice/etc/cacheinfo


 2.27. CREATING /AFS AND STARTING THE CACHE MANAGER

As mentioned previously, the Cache Manager mounts AFS at the local /afs
directory.  In this section you create that directory and then run afsd to
initialize the Cache Manager.

You should also add afsd to the machine's initialization file (/etc/rc or its
equivalent), so that it runs automatically at each reboot.  If afsd does not run
at each reboot, the Cache Manager will not exist on this machine, and it will
not function as an AFS client.

The afsd program sets several cache configuration parameters as it initializes,
and starts up daemons that improve performance.  As described completely in the
AFS Command Reference Manual, you can use the afsd command's arguments to alter
these parameters and/or the number of daemons.  Depending on the machine's cache
size, its amount of RAM, and how many people work on it, you may be able to
improve its performance as a client by overriding default values.
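
For example, the following sketch (illustrative values only, not
recommendations) overrides two of the defaults on a busier client; the -stat
and -daemons arguments are among those described in the AFS Command Reference
Manual:

	# /usr/vice/etc/afsd -nosettime -stat 2000 -daemons 4 -verbose &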

AFS also provides a simpler alternative to setting afsd's arguments
individually.  You can set groups of parameters based on the size (small,
medium, and large) of the client machine.  These groups are defined in scripts,
the names of which depend upon the client machine's system type.  For system
types other than Digital UNIX, IRIX, NCR UNIX and Solaris, the parameter
settings are specified in three initialization scripts distributed in
/usr/vice/etc/dkload.  The scripts are appropriate only for machines with a disk
cache.  Both the AFS Command Reference Manual description of afsd and Chapter 13
of the AFS System Administrator's Guide discuss these scripts in more detail.
The scripts are:

 - rc.afsd.small, which configures the Cache Manager appropriately for a "small"
machine with a single user, about 8 megabytes of RAM and a 20-megabyte cache.
It sets -stat to 300, -dcache to 100, -daemons to 2, and -volumes to 50.

 - rc.afsd.medium, which configures the Cache Manager appropriately for a
"medium" machine with 2 to 6 users, about 16 megabytes of RAM and a 40-megabyte
cache.  It sets -stat to 2000, -dcache to 800, -daemons to 3, and -volumes to
70.

 - rc.afsd.large, which configures the Cache Manager appropriately for a "large"
machine with 5 to 10 users, about 32 megabytes of RAM and a 100-megabyte cache.
It sets -stat to 2800, -dcache to 2400, -daemons to 5, and -volumes to 128.

For Digital UNIX, IRIX, NCR UNIX and Solaris systems, the parameter settings are
defined in the initialization script that you installed as the final part of
incorporating AFS into the machine's kernel.  The script defines LARGE, MEDIUM,
and SMALL values for the OPTIONS variable, which is then included on the afsd
command line in the script.  The script is distributed with the OPTIONS set to
$MEDIUM, but you may change this as desired.
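
The following sketch is an approximation only; the script distributed with AFS
may differ in detail.  It shows how such a script typically defines the three
groups and passes the OPTIONS variable to afsd, using the parameter values
listed above for the small, medium, and large configurations:

	---------------------------------------------------------------
	SMALL="-stat 300 -dcache 100 -daemons 2 -volumes 50"
	MEDIUM="-stat 2000 -dcache 800 -daemons 3 -volumes 70"
	LARGE="-stat 2800 -dcache 2400 -daemons 5 -volumes 128"
	OPTIONS=$MEDIUM
	/usr/vice/etc/afsd $OPTIONS
	---------------------------------------------------------------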

Step 1: Create the /afs directory.  If it already exists, verify that the
directory is empty.

	---------------------------------
	# mkdir /afs                  
	---------------------------------

Step 2: Invoke afsd.  Use the -nosettime flag because this is a file
server machine that is also a client.  The flag prevents the machine from
picking a file server machine in the cell as its source for the correct time,
which client machines normally do.  (File server machines synchronize their
clocks using the runntp process instead.)

With a disk cache, starting up afsd for the first time on a machine can take up
to ten minutes, because the Cache Manager has to create all of the structures
needed for caching (V files).  Starting up afsd at future reboots does not take
nearly this long, since the structures already exist.

For a memory cache, use the -memcache flag to indicate that the cache should be
in memory rather than on disk.  With a memory cache, memory structures must be
allocated at each reboot, but the process is equally quick each time.

Because of the potentially long start up, you may wish to put the following
commands in the background.  Even if you do, afsd must initialize completely
before you continue to the next step.  Console messages will trace the progress
of the initialization and indicate when it is complete.

For a disk cache:

-------------------------------------------------------------------------------
Invoke afsd with the -nosettime flag.                                           

On system types other than Digital UNIX, IRIX, NCR UNIX, and Solaris, you can
substitute one of the configuration scripts (such as rc.afsd.medium) for afsd in
the following command if desired, but you must still type the -verbose flag.

On Digital UNIX, IRIX, NCR UNIX, and Solaris systems, you must type on the
command line any additional configuration parameters you wish to set, since the
three configuration scripts are not available.

	# /usr/vice/etc/afsd -nosettime -verbose &
-------------------------------------------------------------------------------

For a memory cache:

------------------------------------------------------------------------------- 
Invoke afsd with the -nosettime and -memcache flags.  You may specify values for
other parameters if desired.

	# /usr/vice/etc/afsd -nosettime -memcache -verbose &
-------------------------------------------------------------------------------
WARNING: Do not attempt to access /afs (including by issuing a cd or ls command)
until after step 6.  Doing so will cause error messages.

Step 3: If you intend for the machine to remain an AFS client after you
complete its installation, invoke afsd in its initialization file.

For a disk cache:

---------------------------------------------------------------------------------
On system types other than Digital UNIX, IRIX, NCR UNIX or Solaris, add the
following command to the initialization file (/etc/rc or its equivalent), after
the commands that invoke a dynamic kernel loader and the BOS Server. You may
specify additional configuration parameters, or substitute for afsd one of the
configuration scripts described in the introduction to this section (such as
rc.afsd.medium).

	/usr/vice/etc/afsd -nosettime > /dev/console                                  

On Digital UNIX, IRIX, NCR UNIX and Solaris systems, verify that the OPTIONS
variable in the initialization script is set to the appropriate value; as
distributed, it is $MEDIUM.
---------------------------------------------------------------------------------

For a memory cache:

------------------------------------------------------------------------------
On system types other than Digital UNIX, IRIX, NCR UNIX or Solaris, add the
following command to the initialization file (/etc/rc or its equivalent), after
the commands that invoke a dynamic kernel loader and the BOS Server. You may
specify additional configuration parameters, but remember that the "large,"
"medium" and "small" scripts cannot be used with a memory cache.

	/usr/vice/etc/afsd -nosettime -memcache > /dev/console

On Digital UNIX, IRIX, NCR UNIX and Solaris systems, verify that the OPTIONS
variable in the initialization script is not set to any of $LARGE, $MEDIUM or
$SMALL; these values cannot be used with a memory cache.

On Digital UNIX, NCR UNIX and Solaris systems, remember to add the -nosettime
flag to the afsd command line.  On IRIX systems, there is a test for the
TIMESYNC variable that automatically adds the -nosettime flag if appropriate.

On IRIX systems, also issue the chkconfig command to activate the afsclient
configuration file.

	---------------------------------------
	# cd /etc/config

	# /etc/chkconfig  -f  afsclient  on
	---------------------------------------
------------------------------------------------------------------------------

For system types other than Digital UNIX, IRIX, NCR UNIX and Solaris, the
following should now appear in the machine's initialization file(s) in the
indicated order; a sketch of this ordering appears after the list.  (Digital
UNIX, IRIX, NCR UNIX and Solaris systems use an "init.d" initialization file
that is organized differently.)

 - NFS commands, if appropriate (for example, if the machine will act as an
NFS/AFS translator).  On AIX version 3.2.2 or lower, the command that loads the
NFS kernel extensions (nfs.ext) should appear here; with AIX version 3.2.3 and
higher, NFS is already loaded into the kernel.  Then invoke nfsd if the machine
is to be an NFS server.

 - dynamic kernel loader command(s), unless AFS was built into the kernel

 - bosserver

 - afsd (if the machine will remain a client after you complete this
installation)
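
The following sketch of an /etc/rc-style file fragment is illustrative only;
substitute the loader command(s) and any NFS commands appropriate for your
system type:

	-------------------------------------------------------------------
	# NFS commands, if appropriate, go here
	# dynamic kernel loader command(s) go here, unless AFS is built
	# into the kernel (see the section for your system type)
	/usr/afs/bin/bosserver &
	/usr/vice/etc/afsd -nosettime > /dev/console
	-------------------------------------------------------------------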

Step 4: To verify that the Cache Manager and other client programs and
processes are functioning correctly, authenticate as admin, using the password
(admin_passwd) you defined in Section 2.15.  If you named your generic
administrative account something other than admin, substitute that name here.

Wait to issue this command until console messages indicate that afsd has
finished initializing.

	-----------------------------
	# /usr/afs/bin/klog admin 
	Password:   
	-----------------------------

Step 5: Use the tokens command to verify that klog worked correctly.  The
output should list tokens for admin's AFS UID that are good for your cell.  If
either klog or tokens seems to have malfunctioned, you may wish to contact AFS
Product Support before proceeding.

	-------------------------
	# /usr/afs/bin/tokens 
	-------------------------

Step 6: Issue the fs checkvolumes command.

	----------------------------------
	# /usr/afs/bin/fs checkvolumes 
	----------------------------------

 2.28. OVERVIEW: COMPLETING THE INSTALLATION OF THE FIRST AFS MACHINE

The machine is now a fully functional AFS file server and client machine.
However, your cell's file tree does not exist yet.  In this section you will
create the upper levels of the tree, among other procedures.  The procedures
are:

1. Create and mount top-level volumes.

2. Turn on authorization checking.

3. Create and mount volumes to store system binaries in AFS.

4. Enable access to Transarc and other cells.

5. Institute additional security measures.

6. Replace the standard login binary with a version that both authenticates with
AFS and logs in to the local UNIX file system, if the machine will remain an AFS
client machine.

7. Alter file system clean-up scripts on some system types, if the machine will
remain an AFS client machine.

8. Remove client functionality if desired.


 2.29. SETTING UP THE TOP LEVELS OF THE AFS TREE

If this is the first time you have installed AFS, you should now set up the top
levels of your cell's AFS file tree.

If you have run a previous version of AFS in your cell before, your file tree
should already exist.  Proceed to Section 2.30.

You created the root.afs volume in Section 2.19, and the Cache Manager mounted
it automatically at /afs on this machine when you ran afsd in Section 2.27.  You
now set the access control list (ACL) on /afs; creation, mounting, and setting
the ACL are the three steps required in creating any volume.  The default ACL on
a new volume grants all seven access rights to system:administrators.  In this
section you add the READ and LOOKUP rights for system:anyuser; users must have
these rights for the Cache Manager to "pass through" root.afs when accessing
volumes mounted lower in the file tree.

After setting the ACL on root.afs, you will create your cell's root.cell volume,
mount it at the second level in the file tree and set the ACL.  Actually, you
will create both a ReadWrite and a regular mount point for root.cell, so that
Cache Managers can access your cell's file tree via both a "ReadWrite" and a
"ReadOnly" path.  This essentially creates separate read-only and read-write
trees. Chapter 5 of the AFS System Administrator's Guide further explains the
concept of ReadOnly and ReadWrite paths (mount point traversal).

After that, you will replicate both root.afs and root.cell.  This is required if
you want to replicate any other volumes in your cell, because all volumes
mounted above a replicated volume must themselves be replicated in order for the
Cache Manager to access the replica.

Note: Once you replicate root.afs, the Cache Manager will access the ReadOnly
version of the volume (root.afs.readonly) if it is available.  Whenever you want
to make changes to the contents of this volume (when, for example, you mount
another cell's root.cell volume at the second level in your file tree), you must
create a temporary mount point for the ReadWrite version (root.afs), make the
changes, re-release the volume and remove the temporary mount point.  See
Section 2.33 for instructions.


Step 1: Grant system:anyuser the READ and LOOKUP rights on the ACL for
/afs.

	-------------------------------------------------------
	# /usr/afs/bin/fs  setacl  /afs  system:anyuser  rl 
	-------------------------------------------------------

Step 2: Create the root.cell volume and mount it at the second level in
the file tree to serve as the root of your cell's tree.  Then grant
system:anyuser the READ and LOOKUP rights on the ACL.

-----------------------------------------------------------------------
Type each of the following commands on a single line.               

	# /usr/afs/bin/vos create <machine name> <partition name> root.cell 
	-cell <cellname>                           

	# /usr/afs/bin/fs mkmount /afs/<cellname> root.cell                 

	# /usr/afs/bin/fs setacl /afs/<cellname> system:anyuser rl          
-----------------------------------------------------------------------

Step 3: (Optional).  You may wish to link your full cell name to a
shorter cell name, so that users have less to type on the command line.  For
example, in Transarc's file tree, /afs/transarc.com is linked to the shorter
/afs/tr.  Do this for your home cell only.

-----------------------------------------------------------------------
Working in the /afs directory, link /afs/cellname to an abbreviated name, if
desired. 

	# cd /afs                                                           

	# ln -s <full cellname> <short cellname>                            
-----------------------------------------------------------------------

Step 4: Mount root.cell with a ReadWrite mount point (you already mounted
it with a regular mount point in step 2).  This mount point tells the Cache
Manager to access only the ReadWrite version of root.cell.

By convention, a ReadWrite mount point has a period in front of the directory
name, as shown in the instruction (i.e., /afs/.cellname).

----------------------------------------------------------------
Create a ReadWrite mount point to root.cell.                 

	# /usr/afs/bin/fs  mkmount  /afs/.<cellname>  root.cell  -rw 
----------------------------------------------------------------
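
If you wish to confirm the difference between the two mount points, the fs
lsmount command reports the volume that each one names; a ReadWrite mount point
is displayed with a % character preceding the volume name rather than the usual
#.  For example:

	------------------------------------------------
	# /usr/afs/bin/fs lsmount /afs/<cellname>
	# /usr/afs/bin/fs lsmount /afs/.<cellname>
	------------------------------------------------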

Step 5: Define a replication site for both root.afs and root.cell.  Since
this is your cell's first machine, the sites will presumably be on this machine.
You may wish to place each site on a partition other than the one where the
ReadWrite version resides, but that is not necessary. It is recommended that you
also replicate root.afs and root.cell on the next few additional file server
machines you install.

------------------------------------------------------------------------
Define replication sites for root.afs and root.cell.                 

	# /usr/afs/bin/vos addsite <machine name> <partition name> root.afs  

	# /usr/afs/bin/vos addsite <machine name> <partition name> root.cell 
------------------------------------------------------------------------

Step 6: Make sure that root.afs and root.cell exist and are accessible
before you attempt to replicate them.  The output from fs examine should list
their name, volumeID number, quota, and size, and then the size of the
partition.  If you get an error message instead, do not continue before taking
corrective action.

---------------------------------------------------------------
Use fs examine to verify that root.afs and root.cell exist. 

	# /usr/afs/bin/fs examine /afs                              

	# /usr/afs/bin/fs examine /afs/<cellname>                   
---------------------------------------------------------------

Step 7: Release replicas of root.afs and root.cell to the sites you
defined in step 5.

-------------------------------------------------------------------- 
Use vos release to release replicas of root.afs and root.cell to the ReadOnly
sites. 

	# /usr/afs/bin/vos release root.afs                              

	# /usr/afs/bin/vos release root.cell                             
--------------------------------------------------------------------

Step 8: Verify that both volumes are now released.  First issue fs
checkvolumes to force the Cache Manager to notice that you have released
ReadOnly versions of the volumes, then issue fs examine again.  This time the
output should mention root.afs.readonly and root.cell.readonly instead of the
ReadWrite versions, since the Cache Manager has a built-in bias to access the
ReadOnly version of root.afs if it exists.

------------------------------------------------------------------------- 
Verify that the local Cache Manager now sees the ReadOnly versions of root.afs
and root.cell.

	# /usr/afs/bin/fs checkvolumes                                        

	# /usr/afs/bin/fs examine /afs                                        

	# /usr/afs/bin/fs examine /afs/<cellname>                             
-------------------------------------------------------------------------

 2.30. TURNING ON AUTHORIZATION CHECKING

Now turn on authorization checking so that the server processes on this machine
will only perform privileged actions for authorized users.  You should already
have tokens as admin, from having issued klog in Section 2.27.  Before turning
on authorization checking, use the tokens command to verify that you have
tokens.  If you do not have tokens, or they are expired, reissue klog.

Step 1: Verify that you are authenticated as admin.  The output of the
tokens command should indicate that there are tokens for admin's AFS UID in your
cell.  If not, reissue klog as shown in Section 2.27.

	-------------------------
	# /usr/afs/bin/tokens 
	-------------------------

Step 2: Turn on authorization checking on the file server machine.

-----------------------------------------------------------------
Issue bos setauth to turn on authorization checking.          

	# /usr/afs/bin/bos setauth <machine name> on -cell <cellname> 
-----------------------------------------------------------------

Step 3: Restart all server processes on the machine, including the BOS
Server, to ensure that they establish new connections with one another that obey
the new authorization checking requirement.  This command can take several
seconds to complete.  You may see several messages about the BOS Server and
other processes restarting.

-------------------------------------------------------------------------------
Use bos restart to restart all server processes on this machine.  The -bosserver
flag stops and restarts the BOS Server, which then starts up all other processes
listed with status flag Run in /usr/afs/local/BosConfig.

	# /usr/afs/bin/bos restart <machine name> -bosserver -cell <cellname>
-------------------------------------------------------------------------------

 2.31. SETTING UP VOLUMES TO HOUSE AFS BINARIES

AFS client binaries and configuration files must be available in the
subdirectories under /usr/afsws on each client machine (afsws is an acronym for
"AFS workstation binaries").  To save disk space, you should create /usr/afsws
on the local disk as a link to the volume that houses the AFS client binaries
and configuration files for this system type.

In this section you will create and mount volumes for housing the binaries. The
recommended location for mounting the volume is /afs/cellname/sysname/usr/afsws,
where sysname is the Transarc system name for this machine's system type (as
listed in Section 1.3).  The instructions in Chapter 4 for installing additional
client machines of this machine type assume that you have followed the
instructions in this section.

As you install client machines of different system types, you will need to
create new volumes and directories for each type. Follow the instructions in
Chapter 4 for installing additional clients.

If you have previously run a version of AFS in your cell, you may already have
created volumes for housing AFS binaries for this machine type. If so, the only
step you need to perform in this section is the last step in Section 2.31.1
(creating the /usr/afsws link).

Step 1: Create volumes to store AFS client binaries for this machine's
system type.  The following example instructions create three volumes called
"sysname," "sysname.usr," and "sysname.usr.afsws."  These volumes are enough to
permit loading AFS client binaries into /afs/cellname/sysname/usr/afsws. Refer
to Section 1.3 to learn the proper value of sysname for this machine type.

--------------------------------------------------------------------
	# cd /usr/afs/bin                                                

	# vos create <machine name> <partition name> <sysname>           

	# vos create <machine name> <partition name> <sysname>.usr       

	# vos create <machine name> <partition name> <sysname>.usr.afsws 
--------------------------------------------------------------------

Step 2: Mount the newly created volumes at the indicated place in the AFS
file tree.  Because root.cell is now replicated, you must make the
mount points in its ReadWrite version, by preceding cellname with a
period as shown.  You then issue the vos release command to release
new replicas of root.cell, and the fs checkvolumes command to force
the local Cache Manager to access them.

----------------------------------------------------------------------------
	# fs  mkmount  /afs/.<cellname>/<sysname>  <sysname>                     

	# fs  mkmount  /afs/.<cellname>/<sysname>/usr  <sysname>.usr             

	# fs  mkmount  /afs/.<cellname>/<sysname>/usr/afsws  <sysname>.usr.afsws 

	# vos  release  root.cell                                                

	# fs  checkvolumes                                                       
----------------------------------------------------------------------------

Step 3: Set the ACL on the newly created mount points to grant the READ
and LOOKUP rights to system:anyuser.

-----------------------------------------------------------------------
	# cd  /afs/.<cellname>/<sysname>                                    

	# fs  setacl  -dir  .  ./usr  ./usr/afsws  -acl  system:anyuser  rl 
-----------------------------------------------------------------------

Step 4: Set the quota on /afs/cellname/sysname/usr/afsws according to the
following chart.  The values include a safety margin.

Operating system         Quota in kilobyte blocks

AIX                      30000

Digital UNIX             40000

HP-UX                    35000

IRIX                     60000

NCR UNIX                 40000

Solaris                  35000

SunOS                    25000

Ultrix                   45000

----------------------------------------------------------------------------
  # /usr/afs/bin/fs setquota /afs/.<cellname>/<sysname>/usr/afsws  <quota> 
----------------------------------------------------------------------------


 2.31.1. LOADING AFS BINARIES INTO A VOLUME AND
CREATING A LINK TO THE LOCAL DISK

Now load the AFS binaries from the fifth tar set on the AFS Binary Distribution
Tape into the volume mounted at /afs/cellname/sysname/usr/afsws.

If disk space (on the file server machine housing the volume) allows, you should
load the complete contents of /usr/afsws into AFS.  If space does not
permit, you should at minimum load the client binaries found under
/usr/afsws/root.client.

Step 1: Load the AFS binaries into /afs/cellname/sysname/usr/afsws,
either directly from the Binary Distribution Tape if the local machine has a
tape drive, or from a remote machine's /usr/afsws directory.

If the local machine has a tape drive:

Mount the Binary Distribution Tape and load the fifth tar set into
/afs/cellname/sysname/usr/afsws.

The appropriate subdirectories are created automatically and have their ACL set
to match that on /afs/cellname/sysname/usr/afsws (which at this point grants all
rights to system:administrators and READ and LOOKUP to system:anyuser).

---------------------------------------------------------------------------------
On AIX systems: Before reading the tape, verify that block size is set to 0
(meaning variable block size); if necessary, use SMIT to set block size to 0.
Also, substitute tctl for mt.

On HP-UX systems: Substitute mt -t for mt -f.                                 

On all system types: For <device>, substitute the name of the tape device for
your system that does not rewind after each operation.

	# cd /afs/<cellname>/<sysname>/usr/afsws
	# mt -f /dev/<device> rewind
	# mt -f /dev/<device> fsf 4
	# tar xvf /dev/<device>
--------------------------------------------------------------------------------

If loading the binaries from a remote machine's /usr/afsws
directory:

--------------------------------------------------------------------------------
On the local machine, change directory (cd) to /afs/cellname/sysname/usr/afsws.
Use ftp, NFS, or another network transfer program to copy in the contents of    
the remote machine's /usr/afsws directory.                                      
--------------------------------------------------------------------------------

Step 2: You may make AFS software available to users only in accordance
with the terms of your AFS License agreement.  To prevent access by unauthorized
users, you should change the ACL on some of the subdirectories of
/afs/cellname/sysname/usr/afsws, granting the READ and LOOKUP rights to
system:authuser instead of system:anyuser.  This way, only users who are
authenticated in your cell can access AFS binaries.  The ACL on the bin
subdirectory must continue to grant the READ and LOOKUP rights to
system:anyuser, because unauthenticated users must be able to access the klog
binary stored there.

To be sure that unauthorized users are not accessing AFS software,
you should periodically check that the ACL on these directories is
set properly.

----------------------------------------------------------------------------------
To limit access to AFS binaries to users authenticated in your cell, issue the
following commands.  The ACL on the bin subdirectory must continue to grant the
READ and LOOKUP rights to system:anyuser.

	# cd  /afs/.<cellname>/<sysname>/usr/afsws
	# fs  setacl  -dir  ./*  -acl  system:authuser  rl
	# fs  setacl  -dir  bin  -acl  system:anyuser  rl
----------------------------------------------------------------------------------

Step 3: Create a symbolic link from /usr/afsws (a local directory) to
/afs/cellname/@sys/usr/afsws.  You could also substitute the machine's Transarc
system name for @sys (make the link to /afs/cellname/sysname/usr/afsws). The
advantage of using @sys is that it automatically adjusts in case you upgrade
this machine to a different system type.

	---------------------------------------------------------
	# ln  -s  /afs/<cellname>/@sys/usr/afsws  /usr/afsws 
	---------------------------------------------------------

You should include /usr/afsws/bin and /usr/afsws/etc in the PATH variable for
each user account so that users can issue commands from the AFS suites (such as
fs).
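
For example, the addition to a user's shell startup file might look like the
following (Bourne-shell syntax shown; adapt as needed for csh-style shells):

	---------------------------------------------------
	PATH=$PATH:/usr/afsws/bin:/usr/afsws/etc
	export PATH
	---------------------------------------------------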

 2.32. STORING SYSTEM BINARIES IN AFS

In addition to AFS binaries, you may wish to store other system binaries in AFS
volumes, such as the standard UNIX programs found under /etc, /bin, and /lib.
This kind of central storage eliminates the need to keep copies of common
programs and files on every machine's local disk.

This section summarizes the recommended scheme for storing system binaries in
AFS.  A more extended discussion appears in Chapter 2 of the AFS System
Administrator's Guide.  No instructions appear here, since you may not wish to
create the necessary volumes at this time.  If you do wish to create them, use
the instructions in Section 2.31 (which are for AFS-specific binaries) as a
template.

You cannot link all system binaries to AFS directories; some files must remain
on the local disk for use during times when AFS is inaccessible (bootup and
outages).  They include:

 - a basic text editor, network commands, etc.

 - boot sequence files executed before AFS is accessible (before afsd is
executed), such as startup command files, mount commands, and configuration
files

 - files needed by dynamic kernel loaders.  For example, the dkload process
requires /bin/nm, /bin/as, and /bin/ld to be on the local disk.

 2.32.1. SETTING THE ACL ON SYSTEM BINARY VOLUMES

It is recommended that you restrict access to most system binaries by granting
the READ and LOOKUP rights to system:authuser instead of system:anyuser.  This
limits access to users who are authenticated in your cell.  However, the ACL on
binaries that users must access while unauthenticated should grant READ and
LOOKUP to system:anyuser.

 2.32.2. VOLUME AND DIRECTORY NAMING SCHEME

It is recommended that you create a volume called sysname for each supported
system type you use as an AFS machine, where sysname is a system name listed in
Section 1.3.  Mount each such volume at /afs/cellname/sysname; you already did
so for this machine's system type, in Section 2.31.  You must use the value for
sysname listed in Section 1.3 if you wish to use the @sys variable in pathnames.

Then create a volume for each system distribution directory (such as /bin, /etc,
/lib, /usr) and mount it under /afs/cellname/sysname.  As an example, you would
name the volume containing /bin files as sysname.bin and mount it at
/afs/cellname/sysname/bin.

You can name volumes in any way you wish, and mount them at other locations than
those suggested here.  However, the following scheme is recommended because it:

 - clearly identifies the volume contents

 - makes it easy to back up related volumes together because the AFS Backup
System uses a wild card notation to take advantage of volume name prefixes
common to multiple volumes

 - makes it easy to track related volumes, keeping them together on the same
file server machine if desired

 - correlates volume name and mount point name

The following is a suggested scheme for naming volumes and directories to house
system binaries.  (Certain of these volumes, such as sysname.usr.afsws, already
exist as a result of previous instructions in this chapter.)

Volume Name         Mount Point
sysname             /afs/cellname/sysname
sysname.bin         /afs/cellname/sysname/bin
sysname.etc         /afs/cellname/sysname/etc
sysname.usr         /afs/cellname/sysname/usr
sysname.usr.afsws   /afs/cellname/sysname/usr/afsws
sysname.usr.bin     /afs/cellname/sysname/usr/bin
sysname.usr.etc     /afs/cellname/sysname/usr/etc
sysname.usr.inc     /afs/cellname/sysname/usr/include
sysname.usr.lib     /afs/cellname/sysname/usr/lib
sysname.usr.loc     /afs/cellname/sysname/usr/local
sysname.usr.man     /afs/cellname/sysname/usr/man
sysname.usr.sys     /afs/cellname/sysname/usr/sys
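
As an illustration of using the instructions in Section 2.31 as a template, the
following sketch creates, mounts, and protects a sysname.bin volume under this
scheme.  It is an example only; substitute your own machine, partition, cell,
and sysname values, and grant the rights to system:anyuser instead of
system:authuser on any directory that must be accessible to unauthenticated
users.  Because the new mount point resides in the sysname volume rather than
in root.cell, no vos release command is needed.

	----------------------------------------------------------------------------
	# /usr/afs/bin/vos create <machine name> <partition name> <sysname>.bin
	# /usr/afs/bin/fs mkmount /afs/.<cellname>/<sysname>/bin <sysname>.bin
	# /usr/afs/bin/fs setacl /afs/.<cellname>/<sysname>/bin system:authuser rl
	----------------------------------------------------------------------------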

 2.33. ENABLING ACCESS TO TRANSARC AND OTHER CELLS

At this point, the client version of CellServDB (in /usr/vice/etc) on this
machine lists the local cell and all of the foreign cells from the sample
CellServDB that you decided to retain.  The cells are still not accessible,
however, because their root.cell volumes are not yet mounted in your file tree.
You create the mount points in this section.

Repeat these instructions for each foreign cell that you wish to enable your
cell's client machines to access.  You do not have to perform these instructions
now if you do not wish to, but the foreign cell(s) will not be accessible until
you do.  Keep in mind that a client machine's CellServDB file must also list the
database server machines for any cell it is to access.  Instructions for making
new cells accessible in the future appear in Section 2.34 and in Chapter 15 of
the AFS System Administrator's Guide.

Step 1: Verify that /usr/vice/etc/CellServDB includes an entry for the
cell.

Step 2: Mount the cell's root.cell volume at /afs/cellname in your file
tree.  The following series of steps is necessary because root.afs is
replicated.  You must alter its ReadWrite version by temporarily mounting it in
a writable directory (/afs/.cellname) and making your changes. Then release the
changed volume to the ReadOnly sites, where client machines normally access it.
The fs checkvolumes command forces the local Cache Manager to notice the release
of the new replica.  The final ls command is optional, but allows you to verify
that the new cell's mount point is visible in your file tree.  The output should
show the directories at the top level of the new cell's tree.

-----------------------------------------------------------------------------
Substitute your cell's name for cellname in the cd commands.              

	# cd /afs/.<cellname>

	# /usr/afs/bin/fs mkmount temp root.afs

Repeat the following fs mkmount command for each foreign cell you wish to mount
at this time; issue the subsequent commands only once.

	# /usr/afs/bin/fs mkmount temp/<foreign cell> root.cell -c <foreign cell>

	# /usr/afs/bin/fs rmmount temp

	# /usr/afs/bin/vos release root.afs

	# /usr/afs/bin/fs checkvolumes

	# ls /afs/<foreign cell>
-----------------------------------------------------------------------------

 2.34. ENABLING ACCESS TO NEW CELLS IN THE FUTURE

The instructions in this section enable your cell's client machines to access a
foreign cell that does not already appear in CellServDB, without having to
reboot.  You can follow these instructions any time you wish to add access to a
new foreign cell.  You do not have to follow them now if there are no new
foreign cells you want to make accessible from this machine.

Transarc Corporation maintains the file
/afs/transarc.com/service/etc/CellServDB.export as a list of all cells that have
agreed to advertise their database server machines.  Every effort is made to
keep this file updated with the most current information available.  You may
wish to check the file periodically for the existence of new cells.

Transarc also maintains a CellServDB of test and private cells, which it does
not make available to other cells.  Even if you do not wish your cell's database
server machines to be advertised to everyone, please register your cell with
Transarc for inclusion in this file.

There are three things you can do to facilitate maintenance of CellServDB:

 - You may want to store in your local AFS tree a central copy of the CellServDB
appropriate for your cell's client machines, perhaps as
/afs/cellname/common/etc/CellServDB, rather than relying on the global version
exported by Transarc.  Whenever you install a new client, you can copy this file
over to it.

 - Transarc requests that you list your cell's database server machines in a
file called /afs/cellname/service/etc/CellServDB.local, and make it readable by
foreign users.  This is similar to Transarc's global
/afs/transarc.com/service/etc/CellServDB.export, but allows foreign cells to get
information about your cell's database server machines directly from you.
Include only the information that can be exposed to other cells.  Private or
test cells you administer and do not wish to make accessible to foreign users
should be listed in another version of CellServDB not readable by foreign users
(perhaps as /afs/cellname/common/etc/CellServDB.private).

 - It is important for Transarc to have up-to-date information about AFS cells.
When you finish setting up your cell, or whenever you change your cell's
database server machines, please inform your AFS Product Support Representative
of the names and Internet addresses of all of your database server machines.  By
default, this information is made available to other cells.

Step 1: If you maintain a central file such as
/afs/cellname/common/etc/CellServDB, update it to include the new cell.  It may
be easiest to copy it from /afs/transarc.com/service/etc/CellServDB.export.  If
you type it yourself, be sure to maintain proper file and entry format, as
described in Section 2.25.

Step 2: Add the foreign cell's entry to the local
/usr/vice/etc/CellServDB file.  If you maintain a central copy of CellServDB,
you can use AFS to copy it from there.  If not, it may be easiest to copy the
cell's entry directly from /afs/transarc.com/service/etc/CellServDB.export.  If
you type it yourself, be sure to maintain proper file and entry format, as
described in Section 2.25.

Step 3: Add the cell to the in-kernel list that the Cache Manager
consults for cell information (recall that it only looks at
/usr/vice/etc/CellServDB as afsd runs at reboot, to transfer the contents into
the kernel record).

-------------------------------------------------------------------------------
Issue fs newcell for the cell you want to add to the Cache Manager's in-kernel  
list.  Provide the complete Internet host name of each database server machine. 

# /usr/afs/bin/fs newcell <cellname> <dbserver1> [<dbserver2>] [<dbserver3>]    
-------------------------------------------------------------------------------

For example, to add the list of database server machines for the Transarc
Corporation cell, you would issue on a single line

# /usr/afs/bin/fs newcell transarc.com bigbird.transarc.com ernie.transarc.com
oscar.transarc.com 

Step 4: Mount the cell's root.cell volume at /afs/cellname in your file
tree.  Because root.afs is replicated, you must first temporarily mount the
ReadWrite version of root.afs in a writable directory (such as your cell's
/afs/.cellname directory).  Make your changes and then release the new replica to
the ReadOnly sites.  This is the only way to make the change visible to client
machines when they access the ReadOnly version of root.afs, as they normally do.

Note: You only need to mount a cell's root.cell volume once, not on each client
machine.

The fs checkvolumes command forces the local Cache Manager to notice the
release of the new replica.
---------------------------------------------------------------------------------
Substitute your cell's name for cellname in the cd commands.                  

	# cd /afs/.<cellname>

	# /usr/afs/bin/fs mkmount temp root.afs

Repeat the following fs mkmount command for each foreign cell you wish to mount
at this time; issue the subsequent commands only once.

	# /usr/afs/bin/fs mkmount temp/<foreign cell> root.cell -c <foreign cell>

	# /usr/afs/bin/fs rmmount temp

	# /usr/afs/bin/vos release root.afs

	# /usr/afs/bin/fs checkvolumes
---------------------------------------------------------------------------------

Step 5: Verify that the new cell's mount point is visible in your file
tree.  The output should show the directories at the top level of the new cell's
tree.

	------------------------
	# ls /afs/<foreign cell> 
	------------------------

Step 6: Repeat steps 2 and 3 on every client machine in your cell that
needs to access the new cell's file tree.

 2.35. IMPROVING YOUR CELL'S SECURITY

This section discusses some measures you should implement immediately to improve
your cell's security.  For additional information, refer to Chapter 2 of the AFS
System Administrator's Guide, especially the sections titled "Setting Up File
Server Machines" and "Security and Authorization in AFS."

 2.35.1. CONTROLLING ROOT ACCESS

As in standard UNIX administration, you should not allow unauthorized users to
learn the "root" password.  Although "root" does not have special access to
files in AFS, it does have

 - the privilege necessary to issue certain commands that affect workstation
performance (these are in the fs suite)

 - the ability to turn off authorization checking on a file server machine,
allowing anyone to perform otherwise privileged actions

 2.35.2. CONTROLLING SYSTEM ADMINISTRATOR ACCESS

You should also limit the number of people who have system administrator
privilege.  Follow these guidelines:

 - Create generic administrative accounts like admin.  Only authenticate under
these identities when performing administrative tasks, and destroy the
administrative tokens immediately after finishing the task (either by issuing
unlog or issuing klog to adopt your regular identity).

 - Set a short ticket lifetime for system administrator accounts (for example,
20 minutes), using the -lifetime argument to the kas setfields command (see the
AFS Command Reference Manual and the example after this list).  You should not
make the lifetime this short on the account used for system backups.

 - Restrict the number of people with administrative privilege, especially the
number of members in system:administrators.

Members of system:administrators by default have all ACL rights on all
directories in the file tree, and therefore must be trusted not to examine
private files.  The members of this group can also issue all pts commands,
create users and groups, and change volume quotas.

 - Limit the use of system administrator accounts on public workstations.
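
The following sketch illustrates the ticket lifetime guideline above and a
quick way to review the membership of system:administrators; see the AFS
Command Reference Manual for the exact form of the lifetime value.

	---------------------------------------------------------------
	# /usr/afs/bin/kas setfields admin -lifetime <ticket lifetime>
	# /usr/afs/bin/pts membership system:administrators
	---------------------------------------------------------------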

 2.35.3. PROTECTING SENSITIVE AFS DIRECTORIES

The subdirectories of /usr/afs on a file server machine contain sensitive files
(such as UserList and KeyFile in the etc subdirectory).  Users should not see or
be able to write to these files, since they could potentially figure out how to
use them to gain access to many administrator privileges.

The first time it initializes on a file server machine, the BOS Server creates
several of these directories and files.  It sets their owner to "root" and sets
their mode bits so that no one but the owner can write them; in some cases, it
also disables reading.

At each subsequent restart, the BOS Server checks that the permissions on these
files are still set appropriately and that "root" owns each file or directory.
In case of incorrect mode bits or ownership, the BOS Server adds a warning to
/usr/afs/logs/BosLog.  This warning also appears in the output of the
bos status command when the -long flag is used.  However, the BOS Server does
not reset the mode bits to their original settings.  (This gives you the
opportunity to alter their settings if you so desire.)

The following files are checked by the BOS Server for the specified permissions.
A hyphen in the permission set indicates the absence of that permission; a
question mark indicates that the permission is not checked by the BOS Server.

File                      Permissions

/usr/afs                  drwxr?xr-x

/usr/afs/backup           drwx???---

/usr/afs/bin              drwxr?xr-x

/usr/afs/db               drwx???---

/usr/afs/etc              drwxr?xr-x

/usr/afs/etc/KeyFile      -rw????---

/usr/afs/etc/UserList     -rw?????--

/usr/afs/local            drwx???---

/usr/afs/logs             drwxr?xr-x

To reset the mode bits to their original settings, you could issue one or more
of the following commands.

	-------------------------------------
	# chmod 755 /usr/afs              

	# chmod 700 /usr/afs/backup       

	# chmod 755 /usr/afs/bin          

	# chmod 700 /usr/afs/db           

	# chmod 755 /usr/afs/etc          

	# chmod 600 /usr/afs/etc/KeyFile  

	# chmod 600 /usr/afs/etc/UserList 

	# chmod 700 /usr/afs/local        

	# chmod 755 /usr/afs/logs         
	-------------------------------------

You should never need to modify files in these directories directly.  Instead,
use the appropriate AFS commands (such as the bos addkey and bos removekey
commands to add and remove server encryption keys in /usr/afs/etc/KeyFile).

 2.36. ENABLING LOGIN

Note: If you plan to remove the client functionality from this file server
machine, skip this section and proceed to Section 2.38.

Transarc provides a version of login that both authenticates the issuer with AFS
and logs him or her in to the local UNIX file system.  It is strongly
recommended that you replace standard login with the AFS-authenticating version
so that your cell's users automatically receive PAG-based tokens when they log
in.  If you do not replace the standard login, then your users must use the
two-step login procedure (log in to the local UNIX file system followed by pagsh
and klog to authenticate with AFS).  For more details, see the Section titled
"Login and Authentication in AFS" in Chapter 2 of the AFS System Administrator's
Guide.

Note: AIX 4.1 does not require that you replace the login program with the
Transarc version.  Instead, you can configure the AIX 4.1 login program so that
it calls the AFS authentication program, allowing users to authenticate with AFS
and log in to AIX in the same step.

If you are using Kerberos authentication rather than AFS's protocols, you must
install AFS's login.krb instead of regular AFS login.  Contact AFS Product
Support for further details.

You can tell you are running AFS login if the following banner appears after you
provide your password:

AFS 3.4  login

To enable AFS login, follow the instructions appropriate for your system type:

 - For AIX 3.2 systems, see Section 2.36.1.

 - For AIX 4.1 systems, see Section 2.36.2

 - For IRIX systems, see Section 2.36.3.

 - For all other system types, see Section 2.36.4.

 2.36.1. ENABLING LOGIN ON AIX 3.2 SYSTEMS

Follow the instructions in this section to replace login on AIX 3.2 systems.

For this system type, Transarc supplies both login.noafs, which is invoked when
AFS is not running on the machine, and login.afs, which is invoked when AFS is
running. If you followed the instructions for loading the AFS rs_aix32 binaries
into an AFS directory and creating a local disk link to it, these files are
found in /usr/afsws/bin. Note that standard AIX login is normally installed as
/usr/sbin/login, with links to /etc/tsm, /etc/getty, and /bin/login.  You will
install the replacement AFS binaries into the /bin directory.

Step 1: Replace the link to standard login in /bin with login.noafs.

	------------------------------------------------
	# mv  /bin/login  /bin/login.orig            

	# cp  /usr/afsws/bin/login.noafs  /bin/login 
	------------------------------------------------

Step 2: Replace the links from /etc/getty and /etc/tsm to standard login
with links to /bin/login.

	-------------------------------------
	# mv  /etc/getty  /etc/getty.orig 

	# mv  /etc/tsm  /etc/tsm.orig     

	# ln -s  /bin/login  /etc/getty   

	# ln -s  /bin/login  /etc/tsm     
	-------------------------------------

Step 3: Install login.afs into /bin and create a symbolic link to
/etc/afsok.

	--------------------------------------------------
	# cp  /usr/afsws/bin/login.afs  /bin/login.afs 

	# ln -s  /bin/login.afs  /etc/afsok            
	--------------------------------------------------

 2.36.2. ENABLING AFS LOGIN ON AIX 4.1 SYSTEMS

Follow the instructions in this section to configure login on AIX 4.1 systems.
Before beginning, check to be sure that the afs_dynamic_auth program has been
installed in the local /usr/vice/etc directory.

Step 1: Set the registry variable in the /etc/security/user file to DCE on
the local client machine.  Note that you must set this variable to DCE
(not AFS).

	------------------
	registry = DCE 
	------------------

Step 2: Set the registry variable for the user root to files in the same
file (/etc/security/user) on the local client machine.  This allows the user
root to authenticate using the local password files on the local machine.

	---------------------------
	root:                   
	registry = files 
	---------------------------

Step 3: Set the SYSTEM variable in the same file (/etc/security/user).
The setting depends upon whether the machine is an AFS client only
or both an AFS and a DCE client.

	---------------------------------------------------------------
	If the machine is an AFS client only, set SYSTEM to be:     

	SYSTEM = "AFS OR AFS [UNAVAIL] AND compat [SUCCESS]"        

	If the machine is both an AFS and a DCE client, set SYSTEM: 

	SYSTEM = "DCE OR DCE [UNAVAIL] OR AFS OR AFS [UNAVAIL]
	AND compat [SUCCESS]"                                       
	---------------------------------------------------------------

Step 4: Define DCE in the /etc/security/login.cfg file on the local client
machine.  In this definition and the following one for AFS, the program
attribute specifies the path of the program to be invoked.

	-------------------------------------------------
	DCE:                                          
	program = /usr/vice/etc/afs_dynamic_auth 
	-------------------------------------------------

Step 5: Define the AFS authentication program in the
/etc/security/login.cfg file on the local client machine as follows:

	-------------------------------------------------
	AFS:                                          
	program = /usr/vice/etc/afs_dynamic_auth 
	retry = 3                                
	timeout = 30                             
	retry_delay = 10                         
	-------------------------------------------------

In both of the preceding definitions (DCE and AFS), the attributes have the
following meanings.  The program attribute specifies the path of the program
to be invoked.  The retry attribute specifies the maximum number of times
users can fail consecutively to enter the correct password before being locked
out of their accounts.  The retry_delay attribute specifies the number of
minutes that users remain locked out of their accounts after exceeding the
value of the retry attribute.
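
To confirm the edits, you can search both files for the lines you added; the
following commands are suggestions only.

	------------------------------------------------------
	# grep  registry  /etc/security/user               

	# grep  afs_dynamic_auth  /etc/security/login.cfg  
	------------------------------------------------------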

 2.36.3. ENABLING LOGIN ON IRIX SYSTEMS

For IRIX systems, you do not need to replace the login binary. Silicon Graphics,
Inc. has modified IRIX login to operate the same as AFS login when the machine's
kernel includes AFS. However, you do need to verify that the local /usr/vice/etc
directory contains the two libraries provided with AFS and required by IRIX
login, afsauthlib.so and afskauthlib.so.


	-----------------------------------------------------------
	# ls  /usr/vice/etc                                     

	Output should include afsauthlib.so and afskauthlib.so. 
	-----------------------------------------------------------

 2.36.4. ENABLING LOGIN ON OTHER SYSTEM TYPES

For system types other than AIX and IRIX, the replacement AFS login binary
resides in /usr/afsws/bin, if you followed the instructions for loading the AFS
binaries into an AFS directory and creating a local disk link to it. Install the
AFS login as /bin/login.

Step 1: Replace standard login with AFS login.

	------------------------------------------
	# mv  /bin/login  /bin/login.orig      

	# cp  /usr/afsws/bin/login  /bin/login 
	------------------------------------------
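
Optionally, confirm that the replacement and the saved original are both in
place. If your previous /bin/login had particular ownership or a setuid mode,
you may need to apply the same settings to the new binary; this is a suggested
check only.

	------------------------------------------------
	# ls -l  /bin/login  /bin/login.orig         
	------------------------------------------------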

 2.37. ALTERING FILE SYSTEM CLEAN-UP SCRIPTS ON SUN SYSTEMS

Note: If you plan to remove the client functionality from this file server
machine, skip this section and proceed to Section 2.38.

Many SunOS and Solaris systems are distributed with a crontab file that contains
a command for removing unneeded files from the file system (it usually begins
with the find(1) command).  The standard location for the file on SunOS systems
is /usr/spool/cron/crontabs/root, and on Solaris systems is
/usr/lib/fs/nfs/nfsfind.

Once this machine is an AFS client, you must modify the pathname specification
in this cron command to exclude /afs.  Otherwise, the command will traverse the
entire portion of the AFS tree accessible from this machine, which includes
every cell whose database server machines appear in the machine's kernel list
(derived from /usr/vice/etc/CellServDB).  The traversal could take many hours.

Use care in altering the pathname specification, so that you do not accidentally
exclude directories that you wish to be searched.  The following may be suitable
alterations, but are suggestions only; you must verify that they are appropriate
for your system.

The first possible alteration requires that you explicitly list every local
partition to be searched.

On SunOS systems, use:

find / /usr /<other partitions> -xdev remainder of existing command

On Solaris systems, add the -local flag to the existing command in
/usr/lib/fs/nfs/nfsfind, so that it looks like:

find $dir -local -name .nfs\* -mtime +7 -mount -exec rm -f {} \;

Another possibility for either system type excludes any directories whose names
begin with "a" or a non-alphabetic character.

find /[A-Zb-z]*  remainder of existing command

Note that you should not use the following, because find still traverses the
entire /afs tree while testing whether each subdirectory is of type "4.2".

find / -fstype 4.2     /* do not use */
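
As a purely hypothetical illustration of the first alteration on a SunOS
system, a modified root crontab entry might resemble the following. The
schedule fields, the /home partition, and the trailing find arguments are
placeholders; substitute whatever your existing entry actually contains.

	--------------------------------------------------------------------------------
	Hypothetical entry -- adjust the partitions and the find arguments:          

	15 3 * * *  find / /usr /home -xdev -name .nfs\* -mtime +7 -exec rm -f {} \; 
	--------------------------------------------------------------------------------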

 2.38. REMOVING CLIENT FUNCTIONALITY

Follow the instructions in this section only if you do not wish this machine to
remain an AFS client. Removing client functionality will make the machine unable
to access files in AFS.

Step 1: On systems other than IRIX, remove the call to afsd from the
machine's initialization file (/etc/rc or equivalent).

On IRIX systems, issue the following commands to turn off the afsclient
configuration flag, so that the initialization file no longer starts the client.

	----------------------------------------
	# cd /etc/config                     

	# /etc/chkconfig  -f  afsclient  off 
	----------------------------------------
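
On systems other than IRIX, if you are unsure where afsd is invoked, a simple
search of the initialization files can help you locate the call. This is a
suggestion only; the file names vary by system type.

	----------------------------------------
	# grep  afsd  /etc/rc*               
	----------------------------------------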

Step 2: Empty and remove all subdirectories of the /usr/vice directory.
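
One way to do this is sketched below. It assumes the usual subdirectories
(cache and etc) and that no separate file system is mounted under /usr/vice;
if the cache resides on its own partition, unmount it first, and remove any
other subdirectories as well.

	-------------------------------------------
	# cd  /usr/vice                         

	# rm -rf  cache  etc                    
	-------------------------------------------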

Step 3: Create the /usr/vice/etc directory again and create symbolic
links in it to the ThisCell and CellServDB files in /usr/afs/etc.  This makes it
possible to issue commands from the AFS command suites (bos, fs, etc.) on this
machine.

	-------------------------------------------------
	# mkdir  /usr/vice/etc                        

	# cd  /usr/vice/etc                           

	# ln  -s  /usr/afs/etc/ThisCell  ThisCell     

	# ln  -s  /usr/afs/etc/CellServDB  CellServDB 
	-------------------------------------------------
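
If desired, confirm that both links were created; this check is optional.

	-------------------------------------------------
	# ls -l  /usr/vice/etc                        
	-------------------------------------------------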

Step 4: Reboot the machine and log in again as "root."  The following
sequence is appropriate on most machine types.

	----------------------------------
	# reboot

	login:  root                   
	Password:                      
	----------------------------------