LDB Greater than and Less than indexing

Overview

Added in Samba version: 4.11

Up until now, LDAP queries containing expressions with >= or <= have had to iterate through the entire database in order to check whether a record matches a given expression. This is despite the fact that a number of these attributes, like uSNChanged, are meant to be indexed (prior to 4.11 we indexed only on equality). On Windows, such queries are significantly faster, while in Samba these queries get worse the larger the database gets.

With our new LMDB database backend, these indexes are much easier to implement, and they now exist for attributes stored as integers (both 32-bit and 64-bit). Where a greater than or less than expression might return much less data than is contained in the entire database, these indexes now allow Samba a significant speed improvement. DRS replication, which uses this internally, is now significantly more responsive as a result.
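For example, a >= query like the following sketch (using Samba's Python bindings; the local connection via smb.conf defaults and the uSN value are assumptions about a given install) can now be answered from the index rather than by a full scan:

import ldb
from samba.samdb import SamDB
from samba.auth import system_session
from samba.param import LoadParm

lp = LoadParm()
lp.load_default()

# Connect to the local sam database using the defaults from smb.conf.
samdb = SamDB(session_info=system_session(), lp=lp)

# A >= expression on an indexed integer attribute such as uSNChanged can now
# be satisfied from the index instead of a full database scan.
res = samdb.search(base=samdb.domain_dn(),
                   scope=ldb.SCOPE_SUBTREE,
                   expression="(uSNChanged>=12345)",
                   attrs=["uSNChanged"])
print("%d objects changed at or after uSN 12345" % len(res))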

How to configure it

For new installations, nothing is required (newly joined DC). The speed-up should just be present for schema attributes which have indexing enabled, e.g. uSNChanged.

For older installations that have been upgraded directly to the new version, a re-index is required, but this should be done automatically (more details below).

To enable indexing on attributes not indexed by default, you must add the index flag on the appropriate schema object in the database as per standard Active Directory.
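As a hedged sketch of what that looks like with Samba's Python bindings (the attribute name below is hypothetical; in standard Active Directory the index flag is bit 0x1 of searchFlags on the attributeSchema object):

import ldb
from samba.samdb import SamDB
from samba.auth import system_session
from samba.param import LoadParm

lp = LoadParm()
lp.load_default()
samdb = SamDB(session_info=system_session(), lp=lp)

# Hypothetical attribute: replace with the CN of the real attributeSchema object.
attr_dn = "CN=my-Custom-Attribute,%s" % samdb.get_schema_basedn()

msg = ldb.Message()
msg.dn = ldb.Dn(samdb, attr_dn)
# Bit 0x1 of searchFlags (fATTINDEX) asks the DC to maintain an index for this
# attribute.  Note that this replace overwrites any other searchFlags bits; in
# practice you would OR 0x1 into the existing value.
msg["searchFlags"] = ldb.MessageElement("1", ldb.FLAG_MOD_REPLACE, "searchFlags")
samdb.modify(msg)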

Known issues and limitations

This currently only supports indexing of integers, both 32-bit and 64-bit, stored in the database. Supporting string indexing is much more difficult when considering Unicode charsets and potentially differing sort orders in different locales. Certain attributes might only ever be ASCII and so could be extended in the future, but there isn't anything planned so far.

(Upgrading) Old AD databases using the new code

Active Directory domain controller databases using the older indexing scheme will need to be re-indexed; however, this should happen on its own upon starting the Samba daemon or opening a write transaction locally (e.g. with ldbedit).

Re-indexing manually should also be possible using samba-tool dbcheck --reindex. Without re-indexing the database, queries which rely on integers stored in the index will no longer return the expected results. This may have critical impacts on tools and services relying on these attributes!

(Downgrading) New AD database using older code

Databases will have to be re-indexed, but running samba-tool dbcheck will not work directly. You must run the installed samba_downgrade_db script in order to downgrade the format (taking it to pre-4.8, which will auto-upgrade to the correct format depending on what version of Samba you run afterwards).

As an alternative, downgrades can be done by re-joining with a different Samba version rather than changing the packaged version in-place. This will generally cause fewer data integrity issues, but may be operationally more difficult.

Further details on downgrades can be found on Downgrading an Active Directory DC.

Troubleshooting

An ldbdump (and additionally mdb_dump for LMDB) of the database might help to determine which indexing format is currently in use.

Basic timings can be used to determine whether the index is in effect; notably, the search time should be proportional to the amount of data actually asked for. For example, with 1000 entries, a >= query that only returns 100 of them will be much faster than without the indexing code. Without the indexing code, the timings should be similar whether >= returns 900 records or 100. Additionally, there is a secret option for disabling full scans, but this is mostly for developer usage.
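A rough timing comparison along these lines (a sketch using the Python bindings; the uSN values and the local connection are assumptions) can show whether the index is in effect:

import time
import ldb
from samba.samdb import SamDB
from samba.auth import system_session
from samba.param import LoadParm

lp = LoadParm()
lp.load_default()
samdb = SamDB(session_info=system_session(), lp=lp)

def timed_search(expression):
    start = time.monotonic()
    res = samdb.search(base=samdb.domain_dn(), scope=ldb.SCOPE_SUBTREE,
                       expression=expression, attrs=["uSNChanged"])
    return len(res), time.monotonic() - start

# With the index, the narrow query should be noticeably faster than the wide
# one; without it, both take roughly the same (full-scan) time.
print(timed_search("(uSNChanged>=1000000)"))   # expected to match few objects
print(timed_search("(uSNChanged>=1)"))         # matches nearly everything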

For Developers

How to test it

make test TESTS="ldb samba4.ldap.index.python samba.tests.complex_expressions"

lib/ldb/tests/python/index.py and lib/ldb-samba/tests/index.py contain similar tests for indexing, but lib/ldb contains the testing for the new "Ordered Integer" 64-bit integer schema syntax, while lib/ldb-samba contains the testing for the Samba AD 32-bit integer syntax (which pre-dates this work). ORDERED_INTEGER was added to the proto-schema handling in @ATTRIBUTES in order to expose the syntax for the tests.

Where the code is located

Test files are referenced above, but the code lives in three different sections:

  • LDB key value layer
  • LDB indexing layer
  • Schema syntax formatting (a default ordered INT64 in LDB and INT32 in DSDB)

LDB key value layer

In order to implement range indexing, we needed to implement range queries on the underlying key value store. This only works with LMDB, which keeps keys (lexicographically) sorted. It works like the standard iterate function, but we supply a start and end key (inclusive). This layer is agnostic to all types of indexing formats, storage formats, etc.
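As an illustration of the idea only (the real implementation is C code in the ldb key value layer; the py-lmdb usage here is merely one way to prototype the concept), a range query over sorted keys looks roughly like this:

import lmdb

def iterate_range(db_path, start_key, end_key):
    # LMDB stores keys in lexicographic order, so a range query is just a
    # cursor positioned at the first key >= start_key that walks forward
    # until it passes end_key (inclusive).
    env = lmdb.open(db_path, readonly=True, subdir=False, lock=False)
    try:
        with env.begin() as txn:
            cur = txn.cursor()
            if cur.set_range(start_key):
                for key, value in cur:
                    if key > end_key:
                        break
                    yield key, value
    finally:
        env.close()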

LDB indexing layer

At the LDB indexing layer, we need to create a GUID list, in the same format as a single index fetch from the database (which reads out the packed GUIDs), to return to the overall indexing code. In our case, we need to construct our GUID list artificially from multiple fetches to the database.

At this layer we need to define our start and end keys. In order to avoid having to compute explicit start and end values for the search, we notice that each index key is of the form: DN=@INDEX:<ATTRIBUTE>:<VALUE>\0. We can simply make our start key DN=@INDEX:<ATTRIBUTE>: and our end key DN=@INDEX:<ATTRIBUTE>; to return all index entries for a particular attribute.
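A small sketch of that key construction (illustration only; the real code is C in the ldb indexing layer, and the exact attribute casing and value encoding of real keys are not shown here):

def attribute_range_keys(attribute):
    # Index records are keyed as DN=@INDEX:<ATTRIBUTE>:<VALUE>, so these two
    # keys bound every entry for one attribute: ';' is the byte after ':' in
    # ASCII, so the end key sorts just after every possible value.
    start = b"DN=@INDEX:" + attribute + b":"
    end = b"DN=@INDEX:" + attribute + b";"
    return start, end

# For a value range, the formatted start and end values would be appended to
# the start key instead of relying on the ':'/';' trick.
start, end = attribute_range_keys(b"USNCHANGED")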

But then a canonicalization function is required which produces keys whose lexicographic (memcmp) order matches the value order. Originally we wanted to use the existing canonicalization functions, but these appear to be used elsewhere in the code (for uses other than creating index keys). This is why we introduced the index_format_fn into the schema syntax to offload that work.

Schema syntax formatting

Using the schema syntax, we define the index format function to return a string form of a value belonging to that syntax. That form must be such that sorting it lexicographically matches the natural sort order of that particular schema syntax class.

Integers

The general idea is that the integer range is mapped to nXXXXXXXX, o00000000, pXXXXXXXX, where 'n' represents negative numbers, 'o' is zero and 'p' represents the positive numbers. 'n' < 'o' < 'p' happens to hold conveniently, so the string sort order arranges numbers in the right order. Numbers must be padded out to the max possible length of the string representation of the integer (either 64-bit = 19 characters, or 32-bit = 10 characters).

There's just one caveat: negative numbers will naturally sort in reverse order as -1 (n00000001) < -2 (n00000002) lexicographically. But that's obviously wrong as -2 < -1 as actual numbers, so depending on the integer width, we subtract from the integer max and map the range in reverse.
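A minimal sketch of this mapping for 64-bit integers (illustration only; the real code is the index format function in the ldb schema syntax, and edge cases such as INT64_MIN are glossed over):

INT64_MAX = 2**63 - 1

def int64_index_key(value):
    # Return a string whose lexicographic (memcmp) order matches numeric order.
    if value < 0:
        # Map negatives onto a reversed range by subtracting from the maximum,
        # so that -2 produces a smaller key than -1.
        return "n%019d" % (INT64_MAX + value)
    elif value == 0:
        return "o%019d" % 0
    else:
        return "p%019d" % value

values = [-5, -1, 0, 3, 42]
assert sorted(values, key=int64_index_key) == sorted(values)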

Future improvements

Currently, the GUID lists are sent to the normal indexing code, and this involves a number of intermediary allocations and other unnecessary work, particularly if the GUID is the only attribute of interest. In this case, it should be possible to skip the entire secondary fetch and message matching by just returning the GUIDs. This case exists in the DRS server in GetNCChanges. One way to implement this would be to add an extended operation and bypass these checks. In fact, this could probably be done generically for a much larger part of the indexing code (although we'd need to make sure that the superset behaviour, which returns some items which may not match, no longer occurs).

Using a similar technique, the implementation of subtree indices should be much easier, and one-level indices could be converted to use range iteration.

(Note that another generic way to improve this entire DRS flow would be to pipeline requests to the DRS server so that client and server processing can be done simultaneously.)

Reference Docs

Nothing in particular.