Samba3/SMB2

From SambaWiki

Revision as of 09:25, 24 September 2012

Introduction

This page describes the plan, design and work in progress of the efforts to implement SMB2 in Samba3.

  • SMB 2.0 was introduced with Windows Vista/2008. Samba 3.6 added support for SMB 2.0. This support is essentially complete except for one big item:
    • durable file handles
  • SMB 2.1 was introduced with Windows 7/Windows 2008R2. The major features to implement are:
    • multi credit/large MTU
    • reauthentication
    • leases
    • resilient file handles
    • branch cache
    • unbuffered write
  • SMB 2.2 will be introduced with Windows 8, which is currently (April 2012) available in a beta version. The features include:
    • directory leases
    • persistent file handles
    • multi channel
    • witness notification protocol (a new RPC service)
    • interface discovery (a new FSCTL)
    • SMB direct (SMB 2.2 over RDMA)
    • remote shadow copy support
    • branch cache v2

Prerequisite / accompanying work

VFS layering: introduce an NT-FSA layer

Samba3's current VFS is a mixture of NT/SMB level calls (e.g. SMB_VFS_CREATE_FILE, SMB_VFS_GET_NT_ACL) and POSIX calls (e.g. SMB_VFS_OPEN, SMB_VFS_CHOWN). There are even lower level pluggable structures for specific POSIX ACL implementations. The implementations of the NT level VFS calls also call out into the POSIX level calls. The idea of this part is to split up the layers so that the layering is clean: an NT layer on top that implements only the NT/SMB style calls. This should be guided by the FSA description from the Microsoft documentation ([MS-FSA]). Some of the NT/SMB-level calls are not yet present in the current SMB_VFS at all, so these would have to be abstracted out of the smbd code. The current implementation of the SMB_VFS calls and some portion of the smbd code would become the default "POSIX" backend to the FSA VFS layer.

This step is technically not strictly necessary, but a desired foundation for the SMB2 and future changes. When we touch the code anyway, we have a chance to improve the structure and untangle the layers. We don't need to do it in one step and we don't need to implement all of FSA right away, but we can try to improve the layering as we go along and touch calls.

dependence

  • does not depend on other work
  • accompanies work on the whole project
  • The splitting out of NTFSA calls can be made a prerequisite for further work on the corresponding calls for SMB 2.0 durable handles and SMB 2.1 features (e.g. leases and resilient file handles).

steps

  • define VFS structures:
    • NTFSA layer
    • POSIX backend to call into the current SMB_VFS
  • first implement NTFSA by calling directly into current SMB_VFS code (or move code from smbd into the default NTFSA backend implementation) and have smbd call out into the FSA layer instead
  • start with one call at a time, e.g. smb2_create, and use NTFSA calls in the implementation.
  • Move logic from the smbd/ code to new NTFSA calls. These call the lower layer SMB_VFS calls.
  • Once the NTFSA calls are used everywhere, one can start to split up and fix the vfs layering underneath, i.e. remove the FSA-style calls from the SMB_VFS etc.
  • data structures: split up files / connections into smbXsrv layer and fsa layer, e.g.:
      smb level       |      ntfsa      |  ntfsa_posix level
      smbXsrv_session --> ntfsa_context --> user_struct
      smbXsrv_tcon    --> ntfsa_context --> connection_struct
      smbXsrv_open    --> ntfsa_open    --> files_struct
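The intended layering can be sketched as a vtable indirection in C. This is an illustrative toy only: the names ntfsa_ops, ntfsa_context and ntfsa_open follow the plan above but are not actual Samba interfaces, and the "POSIX backend" merely stands in for the current SMB_VFS code.

```c
#include <assert.h>
#include <stdio.h>

struct ntfsa_open;

/* NT/SMB-level operations, in the spirit of [MS-FSA]: Create, Close, ... */
struct ntfsa_ops {
        int (*create)(const char *path, struct ntfsa_open *o);
        int (*close)(struct ntfsa_open *o);
};

/* what smbXsrv_session/smbXsrv_tcon would point at */
struct ntfsa_context {
        const struct ntfsa_ops *ops;
};

/* what smbXsrv_open would point at; wraps the backend's open-file state */
struct ntfsa_open {
        struct ntfsa_context *ctx;
        int backend_fd;
};

/* default "POSIX" backend: in Samba this would call the current SMB_VFS */
static int posix_create(const char *path, struct ntfsa_open *o)
{
        printf("posix backend: opening %s\n", path);
        o->backend_fd = 42;     /* stand-in for a real file descriptor */
        return 0;
}

static int posix_close(struct ntfsa_open *o)
{
        o->backend_fd = -1;
        return 0;
}

static const struct ntfsa_ops posix_ops = {
        .create = posix_create,
        .close = posix_close,
};

/* smbd would call only these NTFSA entry points, never the backend */
static int ntfsa_create(struct ntfsa_context *ctx, const char *path,
                        struct ntfsa_open *o)
{
        o->ctx = ctx;
        return ctx->ops->create(path, o);
}

static int ntfsa_close(struct ntfsa_open *o)
{
        return o->ctx->ops->close(o);
}
```

Once smbd only goes through ntfsa_create/ntfsa_close-style entry points, the backend underneath can be split up and fixed independently, which is exactly the last step in the list above.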

SMB 2.0

durable handles

This section describes the steps necessary to implement durable handles, for now on a single, non-clustered Samba server. For details on durable handles in a CTDB+Samba cluster, see below.

dbwrap work

This is prerequisite work to avoid code duplication in record watching and so on:

  • clean up locking order
  • add a dbwrap record watch mechanism to abstract waiting for locked records to become available

state: essentially done(?)
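The record watch idea can be illustrated with a miniature model in plain C. This is not the real dbwrap/tevent API: record, record_watch and record_unlock are invented here. The point is only the shape of the abstraction: waiters register a callback on a locked record instead of polling it, and the unlocking path wakes them.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_WAITERS 4

/* toy record: a lock flag plus a list of registered watchers */
struct record {
        int locked;
        void (*waiters[MAX_WAITERS])(void *private_data);
        void *waiter_data[MAX_WAITERS];
        size_t num_waiters;
};

/* register interest; fires immediately if the record is already free */
static int record_watch(struct record *r,
                        void (*cb)(void *), void *private_data)
{
        if (!r->locked) {
                cb(private_data);
                return 0;
        }
        if (r->num_waiters == MAX_WAITERS) {
                return -1;
        }
        r->waiters[r->num_waiters] = cb;
        r->waiter_data[r->num_waiters] = private_data;
        r->num_waiters++;
        return 0;
}

/* release the record and wake everyone who was watching it */
static void record_unlock(struct record *r)
{
        size_t i;
        r->locked = 0;
        for (i = 0; i < r->num_waiters; i++) {
                r->waiters[i](r->waiter_data[i]);
        }
        r->num_waiters = 0;
}

static void bump(void *private_data)
{
        (*(int *)private_data)++;
}
```

In the real code the callback side is a tevent_req, so the waiting smbd stays fully event-driven while the record is held elsewhere.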

rewrite messaging

For the implementation of durable handles, the smbd processes will need to communicate more than before: when a client reconnects to Samba after a network outage, it will end up at a different smbd. The new smbd will need to work on the files that had been in use as durable handles by the original smbd. There are two possible approaches: keep files open, or reopen files. Depending on the approach, it might become necessary to pass open files from one smbd to another using fd passing. For this, we need to change our messaging. But also for the generally more demanding messaging, it would be extremely useful to get rid of the tdb+signal based messaging and replace it with an asynchronous mechanism based on sockets, and in a second step have the messaging infrastructure IDL-generated.

add new tevent_req based API

dependence: This is independent of other tasks.

  • In order to simplify the higher layers, a new tevent_req based messaging API is needed.

rewrite messaging with sockets

dependence: This is independent of other tasks.

  • raw messaging: unix domain datagram sockets
  • if packets become too large for datagrams, we need stream sockets in addition
  • if possible: keep the s3 messaging_send/receive API for a start, in order to reduce the scope of the change
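A minimal sketch of the datagram approach, simulated within one process via socketpair(2); the message struct and helper names are invented for this example. AF_UNIX SOCK_DGRAM preserves message boundaries, which is what a record-oriented messaging layer wants; messages too large for a datagram are what the stream fallback above would cover.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* one message per datagram: type + payload, boundaries kept by the kernel */
struct message {
        uint32_t msg_type;
        char payload[60];
};

/* a connected AF_UNIX datagram pair standing in for two smbds */
static int messaging_pair(int sv[2])
{
        return socketpair(AF_UNIX, SOCK_DGRAM, 0, sv);
}

static int messaging_send(int fd, const struct message *m)
{
        return (send(fd, m, sizeof(*m), 0) == sizeof(*m)) ? 0 : -1;
}

static int messaging_recv(int fd, struct message *m)
{
        return (recv(fd, m, sizeof(*m), 0) == sizeof(*m)) ? 0 : -1;
}
```

Between real smbds the sockets would of course be named per-process rather than created as a pair, but the send/recv semantics are the same.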

implement messaging based on iRPC

dependence: Based on the two previous steps

  • do "irpc" over this raw messaging
  • rpc services defined by idl, generated by pidl
  • write rpc services for fd-passing
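The fd-passing building block underneath such an RPC service is standard POSIX: a descriptor is transferred as SCM_RIGHTS ancillary data on a unix socket. The helpers below are a self-contained sketch with invented names, not Samba code.

```c
#include <assert.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

/* send one fd plus a one-byte payload over a unix socket */
static int send_fd(int sock, int fd)
{
        char byte = 0;
        struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
        union {                 /* ensures cmsg alignment */
                char buf[CMSG_SPACE(sizeof(int))];
                struct cmsghdr align;
        } u;
        struct msghdr msg = {
                .msg_iov = &iov,
                .msg_iovlen = 1,
                .msg_control = u.buf,
                .msg_controllen = sizeof(u.buf),
        };
        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));
        return (sendmsg(sock, &msg, 0) == 1) ? 0 : -1;
}

/* receive an fd sent by send_fd(); returns the new descriptor or -1 */
static int recv_fd(int sock)
{
        char byte;
        struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
        union {
                char buf[CMSG_SPACE(sizeof(int))];
                struct cmsghdr align;
        } u;
        struct msghdr msg = {
                .msg_iov = &iov,
                .msg_iovlen = 1,
                .msg_control = u.buf,
                .msg_controllen = sizeof(u.buf),
        };
        struct cmsghdr *cmsg;
        int fd;

        if (recvmsg(sock, &msg, 0) != 1) {
                return -1;
        }
        cmsg = CMSG_FIRSTHDR(&msg);
        if (cmsg == NULL || cmsg->cmsg_level != SOL_SOCKET ||
            cmsg->cmsg_type != SCM_RIGHTS) {
                return -1;
        }
        memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
        return fd;
}
```

An IDL-defined irpc service would wrap this transport detail, so the caller just asks another smbd for "the fd behind this open".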


Define New Data Structures

locking/open files (fs layer)

  • define data structures (idl)
  • identify various databases
  • design goal API for each such database or structure

state: essentially done?


sessions/tcons/opens (smb layer)

  • define data structures (idl):
    • struct smbXsrv_session*
    • struct smbXsrv_tcon*
    • struct smbXsrv_open*
  • identify various databases
  • design goal API for each such DB or structure

state: in progress/largely done


Use New Data Structures In The Server

use in FS layer

  • refactor locking code etc: create corresponding APIs with current backend code, use in server
  • extend current structures to match targeted structures
  • change code beneath APIs to use new marshalled databases
  • add logic to use new parts of the structures

state: essentially done?

use in smb layer

  • cleanup/simplify core smbd code
  • make use of new structures

state: essentially done?

Implement durable open and reconnect

Session reconnect with previous session id

  • if a previous session exists, tear it down, thereby closing its tcons and (non-durable) open files
  • open new session

state: done
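The reconnect rule above can be sketched as follows; the session table and helper names are invented for illustration and are not the smbXsrv implementation.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_SESSIONS 8

struct session {
        uint64_t id;
        bool in_use;
};

static struct session sessions[MAX_SESSIONS];

static struct session *session_find(uint64_t id)
{
        size_t i;
        for (i = 0; i < MAX_SESSIONS; i++) {
                if (sessions[i].in_use && sessions[i].id == id) {
                        return &sessions[i];
                }
        }
        return NULL;
}

static void session_teardown(struct session *s)
{
        /* the real server would close tcons and non-durable opens here */
        s->in_use = false;
}

/* SESSION_SETUP: tear down any previous session, then create the new one */
static struct session *session_setup(uint64_t new_id, uint64_t previous_id)
{
        struct session *old = session_find(previous_id);
        size_t i;

        if (old != NULL) {
                session_teardown(old);
        }
        for (i = 0; i < MAX_SESSIONS; i++) {
                if (!sessions[i].in_use) {
                        sessions[i].id = new_id;
                        sessions[i].in_use = true;
                        return &sessions[i];
                }
        }
        return NULL;
}
```

Durable opens are exactly the handles that survive the teardown step, which is what the next section builds on.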

implement durable open

  • Interpret durable flag in smb2_create call
    • Mark the file handle durable in the database record.
    • confirm durable open in the response to the client
  • change cleanup routines to not delete open file entries for durable handles, even when the opening process does not exist any more

state: essentially done

  • implement a scavenger mechanism to clean up durable handles without a corresponding smbd process after the scavenger timeout (maybe simply as part of the cleanup routine?)

state: in progress

implement durable reconnect with reopening files (CIFS only)

  • implement reconnect for durable handles at SMB2 level after session reconnect and tcon:
    • new smbd looks for file info by persistent ID.
    • smbd should reopen the file based on the information from the databases.
  • fine-tuning of lock/oplock(/lease) behaviour under durable reopen
  • fencing against conflicting opens (==> CIFS only?! - need to keep files open for shell / nfs interop)

state: in progress/largely done

improve nfs/shell interop for conflicting opens

Note: may be implemented later as an add-on

  • write tests to trigger the problem between a connection loss and a non-cifs open of the file that is still a durable handle
  • possibility: create an extra process that reopens the closed files, to be able to catch opens from shell or nfs while the cifs client is disconnected (==> there is still a race condition here)


implement durable reconnect with fd-passing

Note: may be implemented later as an add-on

  • have smbd keep durable files open when the client is disconnected
  • implement reopen:
    • request the fd handle (implemented by fd-passing for POSIX) via irpc messaging

Durable handle cross-node

(To be filled)

SMB 2.1

Unbuffered Write

Multi Credit / Large MTU

Reauthentication

state: done

Leases

Resilient File Handles

Branch Cache

SMB 2.2

Directory Leases

Persistent File Handles

Multi Channel

Witness Notification Protocol

Interface Discovery

SMB Direct (SMB 2.2 over RDMA)

Remote Shadow Copy (FSRVP)

Not an SMB 2.2 specific feature per se.

Branch Cache v2

Cluster-Wide Durable Handles

Work in progress branches

Note: this is really work in progress, and some branches might be outdated!

Talks

Demos