NFS is a pretty fast protocol for accessing files over a network.
It carries less overhead than SMB/CIFS and therefore achieves greater throughput.
method1) rsyncing to an NFS-mounted QNAP NAS ran at 20.7 MBytes/sec.
method2) rsyncing directly over SSH to the QNAP only manages 3-4 MBytes/sec (the server's CPU is not very powerful – "Feroceon 88F6281 rev 1 (v5l) @ 1.2 GHz" with "BogoMIPS: 1196.85" according to cat /proc/cpuinfo – and can't decrypt the SSH traffic any faster).
While I can't really judge it from a reliability and security perspective, NFS is pretty easy and fast to get going.
My experiences with method1:
I lost data because something seems to go wrong during the rsync-to-NFS-mount process: data gets corrupted. I have to investigate why; I suspect the NFS server used by QNAP is some evil hack, but it could also be NFS itself.
reading on that topic: https://research.cs.wisc.edu/wind/Publications/NFSCorruption-storagess07.pdf
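rsync treats a transfer onto an NFS mount as a "local" copy, disables the delta algorithm, and never checksum-verifies what actually landed on the NAS. A minimal sketch of verifying such a copy afterwards (the paths are placeholders for your source file and NFS mount point):

```python
# Sketch: verify a file copied onto an NFS mount by comparing MD5 checksums
# of source and destination, streaming in chunks so large files fit in memory.
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Return the hex MD5 digest of a file, read in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_copy(src, dst):
    """True if source and destination hash identically, False otherwise."""
    return md5_of(src) == md5_of(dst)
```

Running this after every rsync-to-NFS transfer would have caught the corruption described above before the source copy was deleted.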
server: QNAP TS-219 QTS 4.1.4 Build 20150522
NFS version info:
Linux QNAP 3.4.6 #1 Fri May 22 07:56:30 CST 2015 armv5tel unknown
mount; # DATA is ext4
/dev/md0 on /share/MD0_DATA type ext4 (rw,usrjquota=aquota.user,jqfmt=vfsv0,user_xattr,data=ordered,nodelalloc,noacl)
nfsd on /proc/fs/nfsd type nfsd (rw)
+2 +3 -4 -4.1 # supported NFS versions (2 and 3 enabled, 4 and 4.1 disabled)
# Version 1.1
# Path Client(Flags) # IPs
[ 66.638725] Installing knfsd (copyright (C) 1996 firstname.lastname@example.org).
[ 214.072501] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
lsmod | grep nfs
nfsd 231700 12 fnotify, Live 0xbf349000
exportfs 2885 1 nfsd, Live 0xbf345000
nfs 251340 0 - Live 0xbf2f2000
auth_rpcgss 30572 2 nfsd,nfs, Live 0xbf2e4000
lockd 59814 2 nfsd,nfs, Live 0xbf2cd000
sunrpc 167759 14 nfsd,nfs,auth_rpcgss,lockd, Live 0xbf291000
Linux debian 3.16.0-4-686-pae #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) i686 GNU/Linux
mount; # client is using ext3
/dev/sda5 on / type ext3 (rw,relatime,errors=remount-ro,data=ordered)
[ 2.517324] FS-Cache: Netfs 'nfs' registered for caching
[ 2.522955] Installing knfsd (copyright (C) 1996 email@example.com).
[ 2.509406] RPC: Registered tcp NFSv4.1 backchannel transport module.
[13274.540947] NFS: Registering the id_resolver key type
How the share was mounted:
mount 192.168.1.123:/DATA /mnt/qnap;
192.168.1.123:/DATA on /mnt/qnap type nfs (rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.1.123,mountvers=3,mountport=48394,mountproto=udp,local_lock=none,addr=192.168.1.123)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
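The options in the mount line above were auto-negotiated. If you want to pin the NFSv3 behavior explicitly instead of relying on negotiation, something like the following should work (a sketch; the IP, export path, and transfer sizes are taken from this setup and will differ on yours):

```shell
# Mount the QNAP export with explicit NFSv3 options
# (values mirror the negotiated mount shown above):
mount -t nfs -o vers=3,proto=tcp,hard,rsize=32768,wsize=32768,timeo=600,retrans=2 \
    192.168.1.123:/DATA /mnt/qnap

# Check what the kernel actually negotiated:
mount | grep /mnt/qnap
```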
"This is a much-improved Linux NFS server with support for NFSv3 as well as NFSv2. NFSv4 is being worked on. These patches are considered stable and are indeed shipping with most distributions. The stock Linux 2.2 NFS server can't be used as a cross-platform file server."
So QNAP seems to turn open-source software into low-quality software. "GOOD JOB, GUYS."
QNAP Systems, Inc. (Chinese: 威聯通科技) is a Taiwanese corporation that specializes in providing networked solutions for file sharing, virtualization, storage management and surveillance applications to address corporate, SMB (NOOO!), SOHO and home user needs.
QNAP seems to pack all kinds of useless features into their firmware, which bloats the whole thing up, increases complexity and the probability of failures and errors, and wastes resources.
I REALLY HATE that the "video transcoding" feature is enabled by default: it CONSUMES A LOT OF RESOURCES, USES A LOT OF CPU and DECREASES THE LIFETIME OF YOUR HARD DISKS.
I want my NAS to reliably store files – I do not care if I can play Tetris on it!
The QNAP hardware might be okay – but the software surely is NOT.
Testing transfer methods: rsync via NFS vs. rsync via SSH – SSH is slower but WON
rsync -r -vvv --progress /home/username/Downloads/2016-09-20_President_Obama_Delivers_Remarks_to_the_United_Nations_General_Assembly.mp4 /mnt/qnap/test/2016-09-20_President_Obama_Delivers_Remarks_to_the_United_Nations_General_Assembly.mp4
sending incremental file list
server_recv(2) starting pid=10655
received 1 names
get_local_name count=1 /mnt/qnap/test/2016-09-20_President_Obama_Delivers_Remarks_to_the_United_Nations_General_Assembly.mp4
generator starting pid=10655
delta-transmission disabled for local transfer or --whole-file
send_files mapped /home/username/Downloads/2016-09-20_President_Obama_Delivers_Remarks_to_the_United_Nations_General_Assembly.mp4 of size 612752227
calling match_sums /home/username/Downloads/2016-09-20_President_Obama_Delivers_Remarks_to_the_United_Nations_General_Assembly.mp4
583,892,992 95% 14.48MB/s 0:00:01
false_alarms=0 hash_hits=0 matches=0
612,752,227 100% 22.04MB/s 0:00:26 (xfr#1, to-chk=0/1)
sender finished /home/username/Downloads/2016-09-20_President_Obama_Delivers_Remarks_to_the_United_Nations_General_Assembly.mp4
recv mapped 2016-09-20_President_Obama_Delivers_Remarks_to_the_United_Nations_General_Assembly.mp4 of size 612752227
renaming .2016-09-20_President_Obama_Delivers_Remarks_to_the_United_Nations_General_Assembly.mp4.Ioy6xm to 2016-09-20_President_Obama_Delivers_Remarks_to_the_United_Nations_General_Assembly.mp4
send files finished
total: matches=0 hash_hits=0 false_alarms=0 data=612752227
sent 612,901,985 bytes received 1,106 bytes 21,505,371.61 bytes/sec
total size is 612,752,227 speedup is 1.00
[sender] _exit_cleanup(code=0, file=main.c, line=1183): about to call exit(0)
# The file transferred via RSYNC over NFS IS CORRUPT – VLC CANNOT PLAY IT!
md5sum /home/username/Downloads/2016-09-20_President_Obama_Delivers_Remarks_to_the_United_Nations_General_Assembly.mp4 /mnt/qnap/test/2016-09-20_President_Obama_Delivers_Remarks_to_the_United_Nations_General_Assembly.mp4
# The file transferred via RSYNC over SSH WAS TRANSFERRED CORRECTLY!
rsync -r -vvv --progress /home/username/Downloads/2016-09-20_President_Obama_Delivers_Remarks_to_the_United_Nations_General_Assembly.mp4 firstname.lastname@example.org:/share/MD0_DATA/DATA/test/2016-09-20_President_Obama_Delivers_Remarks_to_the_United_Nations_General_Assembly.mp4
md5sum /home/username/Downloads/2016-09-20_President_Obama_Delivers_Remarks_to_the_United_Nations_General_Assembly.mp4 /mnt/qnap/test/2016-09-20_President_Obama_Delivers_Remarks_to_the_United_Nations_General_Assembly.mp4
nfsd – man page
nfsd – special filesystem for controlling Linux NFS server
mount -t nfsd nfsd /proc/fs/nfsd
The nfsd filesystem is a special filesystem which provides access to the Linux NFS server.
The filesystem consists of a single directory which contains a number of files.
These files are actually gateways into the NFS server.
Writing to them can affect the server.
Reading from them can provide information about the server.
This file system is only available in Linux 2.6 and later series kernels (and in the later parts of the 2.5 development series leading up to 2.6).
This man page does not apply to 2.4 and earlier.
As well as this filesystem, there are a collection of files in the procfs filesystem (normally mounted at /proc) which are used to control the NFS server.
This manual page describes all of these files.
The exportfs and mountd programs (part of the nfs-utils package) expect to find this filesystem mounted at /proc/fs/nfsd or /proc/fs/nfs.
If it is not mounted, they will fall back on 2.4-style functionality.
This involves accessing the NFS server via a system call.
This system call is scheduled to be removed after the 2.6 kernel series.
The three files in the nfsd filesystem are:
- exports – This file contains a list of filesystems that are currently exported and the clients that each filesystem is exported to, together with a list of export options for that client/filesystem pair. This is similar to the /proc/fs/nfs/exports file in 2.4. One difference is that a client doesn't necessarily correspond to just one host. It can correspond to a large collection of hosts that are being treated identically. Each line of the file contains a path name, a client name, and a number of options in parentheses. Any space, tab, newline or backslash character in the path name or client name will be replaced by a backslash followed by the octal ASCII code for that character.
- threads – This file represents the number of nfsd threads currently running. Reading it will show the number of threads. Writing an ASCII decimal number will cause the number of threads to be changed (increased or decreased as necessary) to achieve that number.
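Interacting with the threads file is plain file I/O on the nfsd filesystem; a hedged sketch (requires root and /proc/fs/nfsd mounted):

```shell
# Show how many nfsd threads are currently running:
cat /proc/fs/nfsd/threads

# Ask the kernel to run 8 nfsd threads; it starts or stops
# threads as needed to reach that count:
echo 8 > /proc/fs/nfsd/threads
```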
- filehandle – This is a somewhat unusual file in that what is read from it depends on what was just written to it. It provides a transactional interface where a program can open the file, write a request, and read a response. If two separate programs open, write, and read at the same time, their requests will not be mixed up. The request written to filehandle should be a client name, a path name, and a number of bytes. This should be followed by a newline, with white-space separating the fields and octal quoting of special characters.
On writing this, the program will be able to read back a filehandle for that path as exported to the given client. The filehandle's length will be at most the number of bytes given.
The filehandle will be represented in hex with a leading '\x'.
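The octal quoting rule mentioned above (space, tab, newline and backslash become a backslash followed by the three-digit octal ASCII code) can be sketched in a few lines of Python; the client name "*" and the path below are made-up examples:

```python
# Sketch of the quoting rule the nfsd filesystem uses for path and
# client names: space, tab, newline and backslash are replaced by a
# backslash followed by the three-digit octal ASCII code.

SPECIAL = {" ", "\t", "\n", "\\"}

def nfsd_quote(name):
    """Quote a path or client name for writing to e.g. the filehandle file."""
    out = []
    for ch in name:
        if ch in SPECIAL:
            out.append("\\%03o" % ord(ch))  # e.g. space -> \040
        else:
            out.append(ch)
    return "".join(out)

# A request line for the filehandle file would then look like
# (client name, quoted path, maximum filehandle length in bytes):
request = "%s %s %d\n" % ("*", nfsd_quote("/share/My Data"), 64)
```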
- The directory /proc/net/rpc in the procfs filesystem contains a number of files and directories. The files contain statistics that can be displayed using the nfsstat program. The directories contain information about various caches that the NFS server maintains to keep track of access permissions that different clients have for different filesystems. The caches are:
- auth.domain – This cache maps the name of a client (or domain) to an internal data structure. The only access that is possible is to flush the cache.
- auth.unix.ip – This cache contains a mapping from IP address to the name of the authentication domain that the IP address should be treated as part of.
- nfsd.export – This cache contains a mapping from directory and domain to export options.
- nfsd.fh – This cache contains a mapping from domain and a filesystem identifier to a directory. The filesystem identifier is stored in the filehandles and consists of a number indicating the type of identifier and a number of hex bytes indicating the content of the identifier.
- Each directory representing a cache can hold from 1 to 3 files. They are:
- flush – When a number of seconds since epoch (1 Jan 1970) is written to this file, all entries in the cache that were last updated before that time become invalidated and will be flushed out. Writing 1 will flush everything. This is the only file that will always be present.
- content – This file, if present, contains a textual representation of every entry in the cache, one per line. If an entry is still in the cache (because it is actively being used) but has expired or is otherwise invalid, it will be presented as a comment (with a leading hash character).
- channel – This file, if present, acts as a channel for requests from the kernel-based NFS server to be passed to a user-space program for handling. When the kernel needs some information which isn't in the cache, it makes a line appear in the channel file giving the key for the information. A user-space program should read this, find the answer, and write a line containing the key, an expiry time, and the content. For example the kernel might make
- nfsd 127.0.0.1
- appear in the auth.unix.ip/content file. The user-space program might then write
- nfsd 127.0.0.1 1057206953 localhost
- to indicate that 127.0.0.1 should map to localhost, at least for now. If the program uses select(2) or poll(2) to discover whether it can read from the channel, it will never see an end-of-file; instead, when all requests have been answered, it will block until another request appears.
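The request/reply exchange above can be sketched as a small helper that turns a kernel request line into the reply line a user-space daemon would write back; build_reply and the resolve callback are hypothetical names for illustration, not part of any real tool:

```python
# Sketch of a user-space responder for the auth.unix.ip channel file:
# the kernel writes "nfsd <ip>" and expects back
# "nfsd <ip> <expiry-seconds-since-epoch> <domain>".
import time

def build_reply(request_line, resolve, ttl=3600):
    """Turn a kernel request line into the reply line to write back.

    resolve is a stand-in callback mapping an IP address to its
    authentication-domain name.
    """
    klass, ip = request_line.split()
    expiry = int(time.time()) + ttl  # entry valid for ttl seconds
    domain = resolve(ip)
    return "%s %s %d %s\n" % (klass, ip, expiry, domain)
```

For the example from the man page, build_reply("nfsd 127.0.0.1", lambda ip: "localhost") produces a line of the form "nfsd 127.0.0.1 <expiry> localhost".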
- In the /proc filesystem there are 4 files that can be used to enable extra tracing of nfsd and related code: /proc/sys/sunrpc/nfs_debug, nfsd_debug, nlm_debug, and rpc_debug.
- They control tracing for the NFS client, the NFS server, the Network Lock Manager (lockd), and the underlying RPC layer respectively. Decimal numbers can be read from or written to these files. Each number represents a bit-pattern where set bits enable certain classes of tracing. Consult the kernel header files to find out which numbers correspond to which tracing classes.
Wiki: History of NFS
Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems in 1984, allowing a user on a client computer to access files over a computer network much like local storage is accessed. NFS, like many other protocols, builds on the Open Network Computing Remote Procedure Call (ONC RPC) system. NFS is an open standard defined in Requests for Comments (RFCs), allowing anyone to implement the protocol.
Versions and variations
Sun used version 1 only for in-house experimental purposes. When the development team added substantial changes to NFS version 1 and released it outside of Sun, they decided to release the new version as v2, so that version interoperation and RPC version fallback could be tested.
Version 2 of the protocol (defined in RFC 1094, March 1989) originally operated only over User Datagram Protocol (UDP). Its designers meant to keep the server side stateless, with locking (for example) implemented outside of the core protocol. People involved in the creation of NFS version 2 include Russel Sandberg, Bob Lyon, Bill Joy, Steve Kleiman, and others.
The Virtual File System interface allowed a modular implementation, reflected in a simple protocol.
NFSv2 only allowed the first 2 GB of a file to be read due to 32-bit limitations.
Version 3 (RFC 1813, June 1995) added:
- support for 64-bit file sizes and offsets, to handle files larger than 2 gigabytes (GB);
- support for asynchronous writes on the server, to improve write performance;
- additional file attributes in many replies, to avoid the need to re-fetch them;
- a READDIRPLUS operation, to get file handles and attributes along with file names when scanning a directory;
- assorted other improvements.
The first NFS Version 3 proposal within Sun Microsystems was created not long after the release of NFS Version 2. The principal motivation was an attempt to mitigate the performance issue of the synchronous write operation in NFS Version 2. By July 1992, implementation practice had solved many shortcomings of NFS Version 2, leaving only lack of large file support (64-bit file sizes and offsets) as a pressing issue. This became an acute pain point for Digital Equipment Corporation with the introduction of a 64-bit version of Ultrix to support their newly released 64-bit RISC processor, the Alpha 21064. At the time of introduction of Version 3, vendor support for TCP as a transport-layer protocol began increasing. While several vendors had already added support for NFS Version 2 with TCP as a transport, Sun Microsystems added support for TCP as a transport for NFS at the same time it added support for Version 3. Using TCP as a transport made using NFS over a WAN more feasible, and allowed the use of larger read and write transfer sizes beyond the 8 KB limit imposed by User Datagram Protocol (UDP).
Version 4 (RFC 3010, December 2000; revised in RFC 3530, April 2003 and again in RFC 7530, March 2015), influenced by Andrew File System (AFS) and Server Message Block (SMB, also termed CIFS), includes performance improvements, mandates strong security, and introduces a stateful protocol.
NFS version 4.1 (RFC 5661, January 2010) aims to provide protocol support to take advantage of clustered server deployments including the ability to provide scalable parallel access to files distributed among multiple servers (pNFS extension).
NFS version 4.2 is currently being developed.
WebNFS, an extension to Version 2 and Version 3, allows NFS to integrate more easily into Web-browsers and to enable operation through firewalls.
In 2007, Sun Microsystems open-sourced their client-side WebNFS implementation.
Various side-band protocols have become associated with NFS, including:
- The byte-range advisory Network Lock Manager (NLM) protocol (added to support UNIX System V file locking APIs).
- The remote quota reporting (RQUOTAD) protocol; to allow NFS users to view their data-storage quotas on NFS servers.
- NFS over RDMA is an adaptation of NFS that uses remote direct memory access (RDMA) as a transport.
NFS is often used with Unix operating systems (such as Solaris, AIX and HP-UX) and Unix-like operating systems (such as Linux and FreeBSD). It is also available to operating systems such as the classic Mac OS, OpenVMS, Microsoft Windows, Novell NetWare, and IBM AS/400. Alternative remote file access protocols include the Server Message Block (SMB, also termed CIFS), Apple Filing Protocol (AFP), NetWare Core Protocol (NCP), and OS/400 File Server file system (QFileSvr.400).
SMB and NetWare Core Protocol (NCP) occur more often than NFS on systems running Microsoft Windows; AFP occurs more often than NFS on Apple Macintosh systems; and QFileSvr.400 occurs more often on AS/400 systems. Haiku recently added NFSv4 support as part of a Google Summer of Code project.
- The server implements NFS daemon processes, running by default as nfsd, to make its data generically available to clients.
- The server administrator determines what to make available, exporting the names and parameters of directories, typically using the /etc/exports configuration file and the exportfs command.
- The server security administration ensures that it can recognize and approve validated clients.
- The server network configuration ensures that appropriate clients can negotiate with it through any firewall system.
- The client machine requests access to exported data, typically by issuing a mount command. (The client asks the server (rpcbind) which port the NFS server is using, the client connects to the NFS server (nfsd), and nfsd passes the request to mountd.)
- If all goes well, users on the client machine can then view and interact with mounted filesystems on the server within the parameters permitted.
- Note that automation of the NFS mounting process may take place – perhaps using /etc/fstab and/or automounting facilities.
NFS and ONC figured prominently in the network-computing war between Sun Microsystems and Apollo Computer, and later the UNIX wars (ca 1987-1996) between AT&T Corporation and Sun on one side, and Digital Equipment, HP, and IBM on the other.
Two competing groups developed over fundamental differences in the two remote procedure call systems. Arguments focused on the method for data encoding: ONC's External Data Representation (XDR) always rendered integers in big-endian order, even if both peers of the connection had little-endian machine architectures, whereas NCS's method attempted to avoid byte-swapping whenever two peers shared a common endianness in their machine architectures.
An industry-group called the Network Computing Forum formed (March 1987) in an (ultimately unsuccessful) attempt to reconcile the two network-computing environments.
Later, Sun and AT&T announced they would jointly develop AT&T's UNIX System V Release 4.
This caused many of AT&T’s other licensees of UNIX System V to become concerned that this would put Sun in an advantaged position, and ultimately led to Digital Equipment, HP, IBM, and others forming the Open Software Foundation (OSF) in 1988.
Ironically, Sun and AT&T had formerly competed over Sun’s NFS versus AT&T’s Remote File System (RFS), and the quick adoption of NFS over RFS by Digital Equipment, HP, IBM, and many other computer vendors tipped the majority of users in favor of NFS.
The OSF's competing DCE Distributed File System (DFS) used DCE as its RPC mechanism, and DFS derived from the Andrew File System (AFS).
Sun Microsystems and the Internet Society (ISOC) reached an agreement to cede "change control" of ONC RPC so that the ISOC's engineering-standards body, the Internet Engineering Task Force (IETF), could publish standards documents (RFCs) related to ONC RPC protocols and could extend ONC RPC.
OSF attempted to make DCE RPC an IETF standard, but ultimately proved unwilling to give up change control.
Later, the IETF chose to extend ONC RPC by adding a new authentication flavor based on Generic Security Services Application Program Interface (GSSAPI), RPCSEC GSS, to meet IETF requirements that protocol standards have adequate security.
Later, Sun and ISOC reached a similar agreement to give ISOC change control over NFS, although writing the contract carefully to exclude NFS version 2 and version 3.
Instead, ISOC gained the right to add new versions to the NFS protocol, which resulted in IETF specifying NFS version 4 in 2003.
By the 21st century, neither DFS nor AFS had achieved any major commercial success as compared to SMB-CIFS or NFS.
The OpenAFS project lives on.
In early 2005, IBM announced end of sales for AFS and DFS.
The NFSv4.1 protocol defines a method of separating the filesystem meta-data from file data location;
it goes beyond the simple name/data separation by striping the data amongst a set of data servers.
This differs from the traditional NFS server which holds the names of files and their data under the single umbrella of the server.
Some products are multi-node NFS servers, but the participation of the client in separation of meta-data and data is limited.
The NFSv4.1 pNFS server is a set of server resources or components; these are assumed to be controlled by the meta-data server.
The pNFS client still accesses one meta-data server for traversal or interaction with the namespace;
when the client moves data to and from the server it may directly interact with the set of data servers belonging to the pNFS server collection.
The NFSv4.1 client can be enabled to be a direct participant in the exact location of file data and to avoid solitary interaction with one NFS server when moving data.
In addition to pNFS, NFSv4.1 provides:
- Directory Delegation and Notifications
- Multi-server Namespace
- access control lists and discretionary access control
- Retention Attributions

See also:

- Root squash
- Shared resource
- TCP Wrapper
- Kerberos (protocol)
- Network Information Service
- Remote File System
- Samba (software)
- Server Message Block
- Andrew File System
- BeeGFS, the parallel file system
- Secure Shell Filesystem – mount a remote directory using only a ssh login on the remote computer
- 9P (protocol) – Plan 9 Filesystem Protocol
- CacheFS – a caching mechanism for Linux NFS clients
- Hadoop Distributed File System (HDFS)
- Russel Sandberg; David Goldberg; Steve Kleiman; Dan Walsh; Bob Lyon (1985). "Design and Implementation of the Sun Network Filesystem". USENIX.
- Brent Callaghan (2000). NFS Illustrated. ISBN 0-201-32570-5.
- Russel Sandberg. "The Sun Network Filesystem: Design, Implementation and Experience" (PDF). Technical Report. Sun Microsystems.
- Brian Pawlowski; Chet Juszczak; Peter Staubach; Carl Smith; Diane Lebel; David Hitz (1994). "NFS Version 3 Design and Implementation". USENIX.
- "NFS Version 4". USENIX. 2005-04-14.
- Haynes, Thomas (2013-03-14). "NFS Version 4 Minor Version 2".
- Tom Talpey (February 28, 2006). "NFS/RDMA Implementation(s) Update" (PDF). Network Appliance, Inc.
- Brent Callaghan (January 28, 2002). "NFS over RDMA" (PDF). Sun Microsystems.
- "What is Connectathon?". Original Connectathon.Org web site. Archived from the original on January 28, 1999.
- "pNFS". Panasas. Retrieved August 4, 2013.
What is Connectathon ?
In 1986, Sun Microsystems sponsored the first Connectathon™ event, a unique forum for testing software and hardware interoperability. Connectathon is a network proving ground allowing vendors to test their interoperability solutions, with special emphasis on NFS™ and Internet protocols.
Over the years, the vendor-neutral Connectathon has attracted a large number of development engineers from all major computer systems companies and a wide variety of software vendors. All have the common goal of making heterogeneous multivendor networking a reality. Now plans are being drawn to celebrate Connectathon’s 13th year.
Connectathon is an excellent opportunity for vendors to verify that their distributed computing software interoperates with a wide range of client/server implementations on different operating systems. Everything from laptops to supercomputers can be linked together under one roof, encouraging interaction among vendors, engineers and developers in a confidential atmosphere. Implementations are tested and debugged at Connectathon. There are panel discussions as well as open sessions on the latest developments in technologies and solutions by Connectathon participants.
Connectathon is a place where engineers can gather without marketing hype and can exchange ideas and information.
At Connectathon 99 we are expanding testing to include Y2K compatibility as well as Gigabit Ethernet, based on vendor interest.
The Connectathon 99 technologies offered for testing are listed below along with their test coordinator and Email address. Those with a TBD coordinator are still being considered for testing and may be added if there is enough interest.
Note: The test suites available for download are those that were used for Connectathon `98. Some of these test suites will be updated prior to Connectathon `99. Contact the coordinator for information about the availability of updated test suites.
| Technology | Coordinator(s) | Test suites |
| NFS versions 2 and 3 & Lock Manager | Mike Kupfer and Rob Thurlow | nfstests |
| NFS Version 4 | Spencer Shepler | (no tests yet) |
| WebNFS | Agnes Jacob | Test Suite |
| Network Computers | Steve Drach | The Open Group tests available at Connectathon |
| Service Location | Charles Perkins | Test Suites |
| ATM | Ed Von Adelung | (no tests) |
| Gigabit Ethernet | Mohan Srinivasan | (no tests) |
| Y2K Compatibility | TBD | (no tests yet) |
| Fiber Channel | TBD | (no tests) |
Technology testing coordinators will moderate the testing processes of a specific technology. If you are interested in moderating the testing of a technology, or would simply like to see a technology listed above be included for testing, please send mail to email@example.com.
Connectathon Network Information
Connectathon ’99’s network is a 10/100baseT network, with a full complement of hubs, switches, and routers that allow any-to-any, any-to-many, or point-to-point connections.
(For Connectathon veterans: please note we will no longer provide converters to 10base2 or AUI. Please remember to bring your own converter.)
Each drop in every booth is a home run to a large patch panel in the Network Operations Center (NOC). Design goals call for a test suite server for every six booths. Each test server contains the test suites (hence its name) for all protocols being tested. In addition, each server has a floor map that allows for ease in locating other participants.
At a minimum, RIP will be supported on the network, with DNS, NIS+, and NIS running throughout.
Diagnostic equipment will be provided to aid in protocol troubleshooting. Although not directly connected to the Internet, access to external web servers is permitted from the Connectathon network, via an ISDN line. The NOC will be staffed during regular business hours.
If you have further questions, please send e-mail to firstname.lastname@example.org
A limited number of rooms are being held for Connectathon 99 registrants only until February 17, 1999. To receive a special discounted rate at the hotel below, just mention that you will be attending Sun Microsystems' Connectathon and make your reservations quickly. Rooms are being held on a first-come basis.
Downtown San Jose
(408) 998 0400
The Crowne Plaza (formerly Holiday Inn) is adjacent to Parkside Hall.
Documents / RFCs related to NFS
Where did it come from?
How does it work? Structure of NFS server and client.
What are NFS’s advantages and shortfalls?