
18. Replication
Replicated directories are a fundamental requirement for delivering a resilient
enterprise deployment.
OpenLDAP has various configuration options for creating a replicated directory. In
previous releases, replication was discussed in terms of a master server and some
number of slave servers. A master accepted directory updates from other clients, and a
slave only accepted updates from a (single) master. The replication structure was
rigidly defined and any particular database could only fulfill a single role, either
master or slave.
As OpenLDAP now supports a wide variety of replication topologies, these terms
have been deprecated in favor of provider and consumer: A provider replicates
directory updates to consumers; consumers receive replication updates from
providers. Unlike the rigidly defined master/slave relationships, provider/consumer
roles are quite fluid: replication updates received in a consumer can be further
propagated by that consumer to other servers, so a consumer can also act
simultaneously as a provider. Also, a consumer need not be an actual LDAP server; it
may be just an LDAP client.
The following sections will describe the replication technology and discuss the
various replication options that are available.

18.1. Replication Technology
18.1.1. LDAP Sync Replication
The LDAP Sync Replication engine, syncrepl for short, is a consumer-side replication
engine that enables the consumer LDAP server to maintain a shadow copy of
a DIT fragment. A syncrepl engine resides at the consumer and executes as one of
the slapd(8) threads. It creates and maintains a consumer replica by connecting to the
replication provider to perform the initial DIT content load followed either by periodic
content polling or by timely updates upon content changes.
Syncrepl uses the LDAP Content Synchronization protocol (or LDAP Sync for short)
as the replica synchronization protocol. LDAP Sync provides a stateful replication
which supports both pull-based and push-based synchronization and does not mandate
the use of a history store. In pull-based replication the consumer periodically polls the
provider for updates. In push-based replication the consumer listens for updates that
are sent by the provider in real time. Since the protocol does not require a history store, the provider does not need to maintain any log of updates it has received. (Note that the syncrepl engine is extensible, and additional replication protocols may be supported in the future.)
Syncrepl keeps track of the status of the replication content by maintaining and
exchanging synchronization cookies. Because the syncrepl consumer and provider
maintain their content status, the consumer can poll the provider content to perform
incremental synchronization by asking for the entries required to make the consumer
replica up-to-date with the provider content. Syncrepl also enables convenient
management of replicas by maintaining replica status. The consumer replica can be
constructed from a consumer-side or a provider-side backup at any synchronization
status. Syncrepl can automatically resynchronize the consumer replica with the current provider content.
Syncrepl supports both pull-based and push-based synchronization. In its basic
refreshOnly synchronization mode, the provider uses pull-based synchronization
where the consumer servers need not be tracked and no history information is
maintained. The information required for the provider to process periodic polling
requests is contained in the synchronization cookie of the request itself. To optimize
the pull-based synchronization, syncrepl utilizes the present phase of the LDAP Sync
protocol as well as its delete phase, instead of falling back on frequent full reloads. To
further optimize the pull-based synchronization, the provider can maintain a per-scope
session log as a history store. In its refreshAndPersist mode of synchronization, the
provider uses a push-based synchronization. The provider keeps track of the consumer
servers that have requested a persistent search and sends them necessary updates as
the provider replication content gets modified.
With syncrepl, a consumer server can create a replica without changing the provider's
configurations and without restarting the provider server, if the consumer server has
appropriate access privileges for the DIT fragment to be replicated. The consumer server can also stop the replication without the need for provider-side changes and a restart.
Syncrepl supports partial, sparse, and fractional replications. The shadow DIT
fragment is defined by general search criteria consisting of base, scope, filter, and
attribute list. The replica content is also subject to the access privileges of the bind
identity of the syncrepl replication connection.
18.1.1.1. The LDAP Content Synchronization Protocol
The LDAP Sync protocol allows a client to maintain a synchronized copy of a DIT
fragment. The LDAP Sync operation is defined as a set of controls and other protocol
elements which extend the LDAP search operation. This section introduces the LDAP
Content Sync protocol only briefly. For more information, refer to RFC4533.
The LDAP Sync protocol supports both polling and listening for changes by defining
two respective synchronization operations: refreshOnly and refreshAndPersist.
Polling is implemented by the refreshOnly operation. The consumer polls the provider
using an LDAP Search request with an LDAP Sync control attached. The consumer
copy is synchronized to the provider copy at the time of polling using the information
returned in the search. The provider finishes the search operation by
returning SearchResultDone at the end of the search operation as in the normal search.
Listening is implemented by the refreshAndPersist operation. As the name implies, it
begins with a search, like refreshOnly. Instead of finishing the search after returning
all entries currently matching the search criteria, the synchronization search remains
persistent in the provider. Subsequent updates to the synchronization content in the
provider cause additional entry updates to be sent to the consumer.
The refreshOnly operation and the refresh stage of the refreshAndPersist operation
can be performed with a present phase or a delete phase.
In the present phase, the provider sends the consumer the entries updated within the
search scope since the last synchronization. The provider sends all requested
attributes, be they changed or not, of the updated entries. For each unchanged entry
which remains in the scope, the provider sends a present message consisting only of
the name of the entry and the synchronization control representing state present. The
present message does not contain any attributes of the entry. After the consumer
receives all update and present entries, it can reliably determine the new consumer
copy by adding the entries added to the provider, by replacing the entries modified at
the provider, and by deleting entries in the consumer copy which have not been
updated nor specified as being present at the provider.
The transmission of the updated entries in the delete phase is the same as in the
present phase. The provider sends all the requested attributes of the entries updated
within the search scope since the last synchronization to the consumer. In the delete
phase, however, the provider sends a delete message for each entry deleted from the
search scope, instead of sending present messages. The delete message consists only
of the name of the entry and the synchronization control representing state delete. The
new consumer copy can be determined by adding, modifying, and removing entries
according to the synchronization control attached to the SearchResultEntry message.
In the case that the LDAP Sync provider maintains a history store and can determine
which entries are scoped out of the consumer copy since the last synchronization time,
the provider can use the delete phase. If the provider does not maintain any history
store, cannot determine the scoped-out entries from the history store, or the history
store does not cover the outdated synchronization state of the consumer, the provider
should use the present phase. The use of the present phase is much more efficient than
a full content reload in terms of the synchronization traffic. To reduce the
synchronization traffic further, the LDAP Sync protocol also provides several
optimizations such as the transmission of the normalized entryUUIDs and the
transmission of multiple entryUUIDs in a single syncIdSet message.
At the end of the refreshOnly synchronization, the provider sends a synchronization
cookie to the consumer as a state indicator of the consumer copy after the
synchronization is completed. The consumer will present the received cookie when it
requests the next incremental synchronization to the provider.
When refreshAndPersist synchronization is used, the provider sends a synchronization
cookie at the end of the refresh stage by sending a Sync Info message with
refreshDone=TRUE. It also sends a synchronization cookie by attaching it
to SearchResultEntry messages generated in the persist stage of the synchronization
search. During the persist stage, the provider can also send a Sync Info message
containing the synchronization cookie at any time the provider wants to update the
consumer-side state indicator.
In the LDAP Sync protocol, entries are uniquely identified by the entryUUID attribute
value. It can function as a reliable identifier of the entry. The DN of the entry, on the
other hand, can be changed over time and hence cannot be considered as the reliable
identifier. The entryUUID is attached to
each SearchResultEntry or SearchResultReference as a part of the synchronization
control.
18.1.1.2. Syncrepl Details
The syncrepl engine utilizes both the refreshOnly and
the refreshAndPersist operations of the LDAP Sync protocol. If a syncrepl
specification is included in a database definition, slapd(8) launches a syncrepl engine as a slapd(8) thread and schedules its execution. If the refreshOnly operation is specified, the syncrepl engine will be rescheduled at the interval time after a synchronization operation is completed. If the refreshAndPersist operation is specified,
the engine will remain active and process the persistent synchronization messages
from the provider.
The syncrepl engine utilizes both the present phase and the delete phase of the refresh
synchronization. It is possible to configure a session log in the provider which stores
the entryUUIDs of a finite number of entries deleted from a database. Multiple replicas
share the same session log. The syncrepl engine uses the delete phase if the session
log is present and the state of the consumer server is recent enough that no session log
entries are truncated after the last synchronization of the client. The syncrepl engine
uses the present phase if no session log is configured for the replication content or if
the consumer replica is too outdated to be covered by the session log. The current
design of the session log store is memory based, so the information contained in the
session log is not persistent over multiple provider invocations. It is not currently
supported to access the session log store by using LDAP operations. It is also not
currently supported to impose access control to the session log.
As a further optimization, even when the synchronization search is not associated
with any session log, no entries will be transmitted to the consumer server when there
has been no update in the replication context.
The syncrepl engine, which is a consumer-side replication engine, can work with any backend. The LDAP Sync provider can be configured as an overlay on any backend, but works best with the back-bdb or back-hdb backend.
The LDAP Sync provider maintains a contextCSN for each database as the current
synchronization state indicator of the provider content. It is the largest entryCSN in the
provider context such that no transactions for an entry having smaller entryCSN value
remains outstanding. The contextCSN could not just be set to the largest
issued entryCSN because entryCSN is obtained before a transaction starts and
transactions are not committed in the issue order.
The provider stores the contextCSN of a context in the contextCSN attribute of the
context suffix entry. The attribute is not written to the database after every update
operation though; instead it is maintained primarily in memory. At database start time
the provider reads the last saved contextCSN into memory and uses the in-memory
copy exclusively thereafter. By default, changes to the contextCSN as a result of
database updates will not be written to the database until the server is cleanly shut
down. A checkpoint facility exists to cause the contextCSN to be written out more
frequently if desired.
Note that at startup time, if the provider is unable to read a contextCSN from the suffix
entry, it will scan the entire database to determine the value, and this scan may take
quite a long time on a large database. When a contextCSN value is read, the database
will still be scanned for any entryCSN values greater than it, to make sure
the contextCSN value truly reflects the greatest committed entryCSN in the database.
On databases which support inequality indexing, setting an eq index on
the entryCSN attribute and configuring contextCSN checkpoints will greatly speed up
this scanning step.
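
For example, the relevant provider-side directives might look like this (a sketch; the checkpoint values are illustrative):

index entryCSN eq

overlay syncprov
# checkpoint the contextCSN every 100 write operations or every 10 minutes
syncprov-checkpoint 100 10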
If no contextCSN can be determined by reading and scanning the database, a new value
will be generated. Also, if scanning the database yielded a greater entryCSN than was
previously recorded in the suffix entry's contextCSN attribute, a checkpoint will be
immediately written with the new value.
The consumer also stores its replica state, which is the provider's contextCSN received
as a synchronization cookie, in the contextCSN attribute of the suffix entry. The replica
state maintained by a consumer server is used as the synchronization state indicator
when it performs subsequent incremental synchronization with the provider server. It
is also used as a provider-side synchronization state indicator when it functions as a
secondary provider server in a cascading replication configuration. Since the
consumer and provider state information are maintained in the same location within
their respective databases, any consumer can be promoted to a provider (and vice
versa) without any special actions.
Because a general search filter can be used in the syncrepl specification, some entries
in the context may be omitted from the synchronization content. The syncrepl engine
creates a glue entry to fill in the holes in the replica context if any part of the replica
content is subordinate to the holes. The glue entries will not be returned in the search result unless the ManageDsaIT control is provided.
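
For instance, to inspect such glue entries on a consumer, the control can be attached to a search with the -M flag of ldapsearch(1) (a sketch; host and base are illustrative):

ldapsearch -x -M -H ldap://consumer.example.com \
    -b "dc=example,dc=com" "(objectClass=*)"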
Also as a consequence of the search filter used in the syncrepl specification, it is
possible for a modification to remove an entry from the replication scope even though
the entry has not been deleted on the provider. Logically the entry must be deleted on
the consumer but in refreshOnly mode the provider cannot detect and propagate this
change without the use of the session log on the provider.
For configuration, please see the Syncrepl section.

18.2. Deployment Alternatives
While the LDAP Sync specification only defines a narrow scope for replication, the
OpenLDAP implementation is extremely flexible and supports a variety of operating
modes to handle other scenarios not explicitly addressed in the spec.
18.2.1. Delta-syncrepl replication
Disadvantages of LDAP Sync replication:
LDAP Sync replication is an object-based replication mechanism. When any attribute
value in a replicated object is changed on the provider, each consumer fetches and
processes the complete changed object, including both the changed and unchanged
attribute values during replication. One advantage of this approach is that when
multiple changes occur to a single object, the precise sequence of those changes need
not be preserved; only the final state of the entry is significant. But this approach may
have drawbacks when the usage pattern involves single changes to multiple objects.
For example, suppose you have a database consisting of 102,400 objects of 1 KB
each. Further, suppose you routinely run a batch job to change the value of a single
two-byte attribute value that appears in each of the 102,400 objects on the master. Not
counting LDAP and TCP/IP protocol overhead, each time you run this job each
consumer will transfer and process 100 MB of data to process 200 KB of changes!
99.98% of the data that is transmitted and processed in a case like this will be
redundant, since it represents values that did not change. This is a waste of valuable
transmission and processing bandwidth and can cause an unacceptable replication
backlog to develop. While this situation is extreme, it serves to demonstrate a very
real problem that is encountered in some LDAP deployments.
Where Delta-syncrepl comes in:
Delta-syncrepl, a changelog-based variant of syncrepl, is designed to address
situations like the one described above. Delta-syncrepl works by maintaining a
changelog of a selectable depth in a separate database on the provider. The replication
consumer checks the changelog for the changes it needs and, as long as the changelog
contains the needed changes, the consumer fetches the changes from the changelog
and applies them to its database. If, however, a replica is too far out of sync (or
completely empty), conventional syncrepl is used to bring it up to date and replication
then switches back to the delta-syncrepl mode.

Note: since the database state is stored in both the changelog DB and the main DB on
the provider, it is important to backup/restore both the changelog DB and the main
DB using slapcat/slapadd when restoring a DB or copying it to another machine.

For configuration, please see the Delta-syncrepl section.
18.2.2. N-Way Multi-Master replication
Multi-Master replication is a replication technique using Syncrepl to replicate data to
multiple provider ("Master") Directory servers.
18.2.2.1. Valid Arguments for Multi-Master replication
If any provider fails, other providers will continue to accept updates
Avoids a single point of failure
Providers can be located in several physical sites i.e. distributed across the
network/globe.
Good for Automatic failover/High Availability
18.2.2.2. Invalid Arguments for Multi-Master replication
(These are often claimed to be advantages of Multi-Master replication but those
claims are false):
It has NOTHING to do with load balancing
Providers must propagate writes to all the other servers, which means the
network traffic and write load spreads across all of the servers the same as for
single-master.
Server utilization and performance are at best identical for Multi-Master and
Single-Master replication; at worst Single-Master is superior because indexing
can be tuned differently to optimize for the different usage patterns between the
provider and the consumers.
18.2.2.3. Arguments against Multi-Master replication
Breaks the data consistency guarantees of the directory model
http://www.openldap.org/faq/data/cache/1240.html
If connectivity with a provider is lost because of a network partition, then
"automatic failover" can just compound the problem
Typically, a particular machine cannot distinguish between losing contact with
a peer because that peer crashed, or because the network link has failed
If a network is partitioned and multiple clients start writing to each of the
"masters" then reconciliation will be a pain; it may be best to simply deny
writes to the clients that are partitioned from the single provider
For configuration, please see the N-Way Multi-Master section below
18.2.3. MirrorMode replication
MirrorMode is a hybrid configuration that provides all of the consistency guarantees
of single-master replication, while also providing the high availability of multi-master.
In MirrorMode two providers are set up to replicate from each other (as a multi-
master configuration), but an external frontend is employed to direct all writes to only
one of the two servers. The second provider will only be used for writes if the first
provider crashes, at which point the frontend will switch to directing all writes to the
second provider. When a crashed provider is repaired and restarted it will
automatically catch up to any changes on the running provider and resync.
18.2.3.1. Arguments for MirrorMode
Provides a high-availability (HA) solution for directory writes (replicas handle
reads)
As long as one provider is operational, writes can safely be accepted
Provider nodes replicate from each other, so they are always up to date and can
be ready to take over (hot standby)
Syncrepl also allows the provider nodes to re-synchronize after any downtime
18.2.3.2. Arguments against MirrorMode
MirrorMode is not what is termed a Multi-Master solution, because writes have to go to just one of the mirror nodes at a time
MirrorMode can be termed as Active-Active Hot-Standby, therefore an external
server (slapd in proxy mode) or device (hardware load balancer) is needed to
manage which provider is currently active
Backups are managed slightly differently
o If backing up the Berkeley database itself and periodically backing up
the transaction log files, then the same member of the mirror pair needs
to be used to collect logfiles until the next database backup is taken
For configuration, please see the MirrorMode section below
18.2.4. Syncrepl Proxy Mode
While the LDAP Sync protocol supports both pull- and push-based replication, the
push mode (refreshAndPersist) must still be initiated from the consumer before the
provider can begin pushing changes. In some network configurations, particularly
where firewalls restrict the direction in which connections can be made, a provider-
initiated push mode may be needed.
This mode can be configured with the aid of the LDAP Backend (Backends and slapd-
ldap(8)). Instead of running the syncrepl engine on the actual consumer, a slapd-ldap
proxy is set up near (or collocated with) the provider that points to the consumer, and
the syncrepl engine runs on the proxy.
For configuration, please see the Syncrepl Proxy section.
18.2.4.1. Replacing Slurpd
The old slurpd mechanism only operated in provider-initiated push mode. Slurpd
replication was deprecated in favor of Syncrepl replication and has been completely
removed from OpenLDAP 2.4.
The slurpd daemon was the original replication mechanism inherited from UMich's
LDAP and operated in push mode: the master pushed changes to the slaves. It was
replaced for many reasons, in brief:
It was not reliable
o It was extremely sensitive to the ordering of records in the replog
o It could easily go out of sync, at which point manual intervention was
required to resync the slave database with the master directory
o It wasn't very tolerant of unavailable servers. If a slave went down for a
long time, the replog could grow to a size that was too large for slurpd to
process
It only worked in push mode
It required stopping and restarting the master to add new slaves
It only supported single master replication
Syncrepl has none of those weaknesses:
Syncrepl is self-synchronizing; you can start with a consumer database in any
state from totally empty to fully synced and it will automatically do the right
thing to achieve and maintain synchronization
o It is completely insensitive to the order in which changes occur
o It guarantees convergence between the consumer and the provider
content without manual intervention
o It can resynchronize regardless of how long a consumer stays out of
contact with the provider
Syncrepl can operate in either direction
Consumers can be added at any time without touching anything on the provider
Multi-master replication is supported

18.3. Configuring the different replication types
18.3.1. Syncrepl
18.3.1.1. Syncrepl configuration
Because syncrepl is a consumer-side replication engine, the syncrepl specification is
defined in slapd.conf(5) of the consumer server, not in the provider server's
configuration file. The initial loading of the replica content can be performed either by
starting the syncrepl engine with no synchronization cookie or by populating the
consumer replica by loading an LDIF file dumped as a backup at the provider.
When loading from a backup, it is not required to perform the initial loading from the
up-to-date backup of the provider content. The syncrepl engine will automatically
synchronize the initial consumer replica to the current provider content. As a result, it
is not required to stop the provider server in order to avoid the replica inconsistency
caused by the updates to the provider content during the content backup and loading
process.
When replicating a large-scale directory, especially in a bandwidth-constrained environment, it is advised to load the consumer replica from a backup instead of performing a full initial load using syncrepl.
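
A sketch of such an initial load, assuming slapd.conf-style configuration and file names chosen here for illustration:

# on the provider: dump the database to LDIF
# (with back-bdb/hdb, slapcat may be run while slapd is running)
slapcat -f slapd.conf -l provider-backup.ldif

# transfer provider-backup.ldif to the consumer, then load it there
slapadd -f slapd.conf -l provider-backup.ldif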
18.3.1.2. Set up the provider slapd
The provider is implemented as an overlay, so the overlay itself must first be
configured in slapd.conf(5) before it can be used. The provider has only two
configuration directives, for setting checkpoints on the contextCSN and for configuring
the session log. Because the LDAP Sync search is subject to access control, proper
access control privileges should be set up for the replicated content.
The contextCSN checkpoint is configured by the
syncprov-checkpoint <ops> <minutes>
directive. Checkpoints are only tested after successful write operations.
If <ops> operations or more than <minutes> time has passed since the last
checkpoint, a new checkpoint is performed.
The session log is configured by the
syncprov-sessionlog <size>
directive, where <size> is the maximum number of session log entries the session log
can record. When a session log is configured, it is automatically used for all LDAP
Sync searches within the database.
Note that using the session log requires searching on the entryUUID attribute. Setting
an eq index on this attribute will greatly benefit the performance of the session log on
the provider.
A more complete example of the slapd.conf(5) content is thus:
database bdb
suffix dc=Example,dc=com
rootdn dc=Example,dc=com
directory /var/ldap/db
index objectclass,entryCSN,entryUUID eq

overlay syncprov
syncprov-checkpoint 100 10
syncprov-sessionlog 100
18.3.1.3. Set up the consumer slapd
The syncrepl replication is specified in the database section of slapd.conf(5) for the
replica context. The syncrepl engine is backend independent and the directive can be
defined with any database type.
database hdb
suffix dc=Example,dc=com
rootdn dc=Example,dc=com
directory /var/ldap/db
index objectclass,entryCSN,entryUUID eq

syncrepl rid=123
    provider=ldap://provider.example.com:389
    type=refreshOnly
    interval=01:00:00:00
    searchbase="dc=example,dc=com"
    filter="(objectClass=organizationalPerson)"
    scope=sub
    attrs="cn,sn,ou,telephoneNumber,title,l"
    schemachecking=off
    bindmethod=simple
    binddn="cn=syncuser,dc=example,dc=com"
    credentials=secret
In this example, the consumer will connect to the provider slapd(8) at port 389
of ldap://provider.example.com to perform a polling (refreshOnly) mode of
synchronization once a day. It will bind as cn=syncuser,dc=example,dc=com using
simple authentication with password "secret". Note that the access control privilege
of cn=syncuser,dc=example,dc=com should be set appropriately in the provider to
retrieve the desired replication content. Also the search limits must be high enough on
the provider to allow the syncuser to retrieve a complete copy of the requested
content. The consumer uses the rootdn to write to its database so it always has full
permissions to write all content.
The synchronization search in the above example will search for the entries whose
objectClass is organizationalPerson in the entire subtree rooted at dc=example,dc=com.
The requested attributes are cn, sn, ou, telephoneNumber, title, and l. The schema
checking is turned off, so that the consumer slapd(8) will not enforce entry schema
checking when it processes updates from the provider slapd(8).
For more detailed information on the syncrepl directive, see the syncrepl section of The
slapd Configuration File chapter of this admin guide.
18.3.1.4. Start the provider and the consumer slapd
The provider slapd(8) is not required to be restarted. contextCSN is automatically
generated as needed: it might be originally contained in the LDIF file, generated by slapadd(8), generated upon changes in the context, or generated when the first LDAP Sync search arrives at the provider. If an LDIF file is being loaded which did not previously contain the contextCSN, the -w option should be used with slapadd(8) to cause it to be generated. This will allow the server to start up a little quicker the first time it runs.
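
For example (a sketch; the file name is illustrative):

# -w asks slapadd(8) to write the syncrepl context information
slapadd -w -f slapd.conf -l provider-backup.ldif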
When starting a consumer slapd(8), it is possible to provide a synchronization cookie
as the -c cookie command line option in order to start the synchronization from a
specific state. The cookie is a comma separated list of name=value pairs. Currently
supported syncrepl cookie fields are csn=<csn> and rid=<rid>. <csn> represents the
current synchronization state of the consumer replica. <rid> identifies a consumer
replica locally within the consumer server. It is used to relate the cookie to the
syncrepl definition in slapd.conf(5) which has the matching replica identifier.
The <rid> must have no more than 3 decimal digits. The command line cookie
overrides the synchronization cookie stored in the consumer replica database.
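
For example (a sketch; the CSN value is purely illustrative):

slapd -c "rid=123,csn=20130801000000.000000Z#000000#000#000000"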
18.3.2. Delta-syncrepl
18.3.2.1. Delta-syncrepl Provider configuration
Setting up delta-syncrepl requires configuration changes on both the master and
replica servers:
# Give the replica DN unlimited read access. This ACL needs to be
# merged with other ACL statements, and/or moved within the scope
# of a database. The "by * break" portion causes evaluation of
# subsequent rules. See slapd.access(5) for details.
access to *
    by dn.base="cn=replicator,dc=symas,dc=com" read
    by * break

# Set the module path location
modulepath /opt/symas/lib/openldap

# Load the hdb backend
moduleload back_hdb.la

# Load the accesslog overlay
moduleload accesslog.la

#Load the syncprov overlay
moduleload syncprov.la

# Accesslog database definitions
database hdb
suffix cn=accesslog
directory /db/accesslog
rootdn cn=accesslog
index default eq
index entryCSN,objectClass,reqEnd,reqResult,reqStart

overlay syncprov
syncprov-nopresent TRUE
syncprov-reloadhint TRUE

# Let the replica DN have limitless searches
limits dn.exact="cn=replicator,dc=symas,dc=com" time.soft=unlimited
time.hard=unlimited size.soft=unlimited size.hard=unlimited

# Primary database definitions
database hdb
suffix "dc=symas,dc=com"
rootdn "cn=manager,dc=symas,dc=com"

## Whatever other configuration options are desired

# syncprov specific indexing
index entryCSN eq
index entryUUID eq

# syncrepl Provider for primary db
overlay syncprov
syncprov-checkpoint 1000 60

# accesslog overlay definitions for primary db
overlay accesslog
logdb cn=accesslog
logops writes
logsuccess TRUE
# scan the accesslog DB every day, and purge entries older than 7 days
logpurge 07+00:00 01+00:00

# Let the replica DN have limitless searches
limits dn.exact="cn=replicator,dc=symas,dc=com" time.soft=unlimited
time.hard=unlimited size.soft=unlimited size.hard=unlimited
For more information, always consult the relevant man pages (slapo-accesslog(5)
and slapd.conf(5))
18.3.2.2. Delta-syncrepl Consumer configuration
# Replica database configuration
database hdb
suffix "dc=symas,dc=com"
rootdn "cn=manager,dc=symas,dc=com"

## Whatever other configuration bits for the replica, like indexing
## that you want

# syncrepl specific indices
index entryUUID eq

# syncrepl directives
syncrepl rid=0
    provider=ldap://ldapmaster.symas.com:389
    bindmethod=simple
    binddn="cn=replicator,dc=symas,dc=com"
    credentials=secret
    searchbase="dc=symas,dc=com"
    logbase="cn=accesslog"
    logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
    schemachecking=on
    type=refreshAndPersist
    retry="60 +"
    syncdata=accesslog

# Refer updates to the master
updateref ldap://ldapmaster.symas.com
The above configuration assumes that you have a replicator identity defined in your
database that can be used to bind to the provider. In addition, all of the databases
(primary, replica, and the accesslog storage database) should also have properly
tuned DB_CONFIG files that meet your needs.
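
As a sketch, such a DB_CONFIG might contain entries like the following (these are standard Berkeley DB directives; the values are illustrative and must be tuned to your deployment):

# 256 MB BDB environment cache
set_cachesize 0 268435456 1
# raise the lock-table limits for a busy server
set_lk_max_locks 3000
set_lk_max_objects 1500
set_lk_max_lockers 1500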
18.3.3. N-Way Multi-Master
For the following example we will be using 3 Master nodes. Keeping in line with test050-syncrepl-multimaster of the OpenLDAP test suite, we will be configuring slapd(8) via cn=config.
This sets up the config database:
dn: cn=config
objectClass: olcGlobal
cn: config
olcServerID: 1

dn: olcDatabase={0}config,cn=config
objectClass: olcDatabaseConfig
olcDatabase: {0}config
olcRootPW: secret
The second and third servers will have a different olcServerID, obviously:
dn: cn=config
objectClass: olcGlobal
cn: config
olcServerID: 2

dn: olcDatabase={0}config,cn=config
objectClass: olcDatabaseConfig
olcDatabase: {0}config
olcRootPW: secret
This sets up syncrepl as a provider (since these are all masters):
dn: cn=module,cn=config
objectClass: olcModuleList
cn: module
olcModulePath: /usr/local/libexec/openldap
olcModuleLoad: syncprov.la
Now we set up the first Master Node (replace $URI1, $URI2, $URI3, etc. with your actual LDAP URLs):
dn: cn=config
changetype: modify
replace: olcServerID
olcServerID: 1 $URI1
olcServerID: 2 $URI2
olcServerID: 3 $URI3

dn: olcOverlay=syncprov,olcDatabase={0}config,cn=config
changetype: add
objectClass: olcOverlayConfig
objectClass: olcSyncProvConfig
olcOverlay: syncprov

dn: olcDatabase={0}config,cn=config
changetype: modify
add: olcSyncRepl
olcSyncRepl: rid=001 provider=$URI1 binddn="cn=config" bindmethod=simple
  credentials=secret searchbase="cn=config" type=refreshAndPersist
  retry="5 5 300 5" timeout=1
olcSyncRepl: rid=002 provider=$URI2 binddn="cn=config" bindmethod=simple
  credentials=secret searchbase="cn=config" type=refreshAndPersist
  retry="5 5 300 5" timeout=1
olcSyncRepl: rid=003 provider=$URI3 binddn="cn=config" bindmethod=simple
  credentials=secret searchbase="cn=config" type=refreshAndPersist
  retry="5 5 300 5" timeout=1
-
add: olcMirrorMode
olcMirrorMode: TRUE
Now start up the Master and the consumers, and add the above LDIF to the first consumer, second consumer, etc. It will then replicate cn=config. You now have N-Way Multi-Master on the config database.
We still have to replicate the actual data, not just the config, so add the following to the master (all active and configured consumers/masters will pull down this config, as they are all syncing). Also, replace all ${} variables with whatever is applicable to your setup:
dn: olcDatabase={1}$BACKEND,cn=config
objectClass: olcDatabaseConfig
objectClass: olc${BACKEND}Config
olcDatabase: {1}$BACKEND
olcSuffix: $BASEDN
olcDbDirectory: ./db
olcRootDN: $MANAGERDN
olcRootPW: $PASSWD
olcLimits: dn.exact="$MANAGERDN" time.soft=unlimited time.hard=unlimited
size.soft=unlimited size.hard=unlimited
olcSyncRepl: rid=004 provider=$URI1 binddn="$MANAGERDN"
bindmethod=simple
credentials=$PASSWD searchbase="$BASEDN" type=refreshOnly
interval=00:00:00:10 retry="5 5 300 5" timeout=1
olcSyncRepl: rid=005 provider=$URI2 binddn="$MANAGERDN"
bindmethod=simple
credentials=$PASSWD searchbase="$BASEDN" type=refreshOnly
interval=00:00:00:10 retry="5 5 300 5" timeout=1
olcSyncRepl: rid=006 provider=$URI3 binddn="$MANAGERDN"
bindmethod=simple
credentials=$PASSWD searchbase="$BASEDN" type=refreshOnly
interval=00:00:00:10 retry="5 5 300 5" timeout=1
olcMirrorMode: TRUE

dn: olcOverlay=syncprov,olcDatabase={1}${BACKEND},cn=config
changetype: add
objectClass: olcOverlayConfig
objectClass: olcSyncProvConfig
olcOverlay: syncprov

Note: All of your servers' clocks must be tightly synchronized using e.g. NTP (http://www.ntp.org/), an atomic clock, or some other reliable time reference.


Note: As stated in slapd-config(5), URLs specified in olcSyncRepl directives are the
URLs of the servers from which to replicate. These must exactly match the
URLs slapd listens on (-h in Command-Line Options). Otherwise slapd may attempt to
replicate from itself, causing a loop.
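
For example, the server configured with olcServerID: 1 $URI1 should be started so that it listens on exactly $URI1 (a sketch; the config directory path is illustrative):

slapd -h "$URI1" -F /usr/local/etc/slapd.d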

18.3.4. MirrorMode
MirrorMode configuration is actually very easy. If you have ever set up a normal slapd syncrepl provider, then the only change is the following two directives:
mirrormode on
serverID 1

Note: You need to make sure that the serverID of each mirror node is different and
add it as a global configuration option.

18.3.4.1. Mirror Node Configuration
The first step is to configure the syncrepl provider the same as in the Set up the provider
slapd section.
Here's a specific cut-down example using LDAP Sync Replication in refreshAndPersist mode:
MirrorMode node 1:
# Global section
serverID 1
# database section

# syncrepl directive
syncrepl rid=001
    provider=ldap://ldap-sid2.example.com
    bindmethod=simple
    binddn="cn=mirrormode,dc=example,dc=com"
    credentials=mirrormode
    searchbase="dc=example,dc=com"
    schemachecking=on
    type=refreshAndPersist
    retry="60 +"

mirrormode on
MirrorMode node 2:
# Global section
serverID 2
# database section

# syncrepl directive
syncrepl rid=001
    provider=ldap://ldap-sid1.example.com
    bindmethod=simple
    binddn="cn=mirrormode,dc=example,dc=com"
    credentials=mirrormode
    searchbase="dc=example,dc=com"
    schemachecking=on
    type=refreshAndPersist
    retry="60 +"

mirrormode on
It's simple really; each MirrorMode node is set up exactly the same, except that the serverID is unique and each consumer is pointed at the other server.
18.3.4.1.1. Failover Configuration
There are generally two choices for this: hardware proxies/load balancing or dedicated proxy software, or using a Back-LDAP proxy as a syncrepl provider.
A typical enterprise example might be:

Figure X.Y: MirrorMode in a Dual Data Center Configuration
18.3.4.1.2. Normal Consumer Configuration
This is exactly the same as the Set up the consumer slapd section. It can either setup in
normal syncrepl replication mode, or in delta-syncrepl replication mode.
18.3.4.2. MirrorMode Summary
You will now have a directory architecture that provides all of the consistency
guarantees of single-master replication, while also providing the high availability of
multi-master replication.
18.3.5. Syncrepl Proxy

Figure X.Y: Replacing slurpd
The following example is for a self-contained push-based replication solution:

#######################################################################
# Standard OpenLDAP Master/Provider
#######################################################################

include /usr/local/etc/openldap/schema/core.schema
include /usr/local/etc/openldap/schema/cosine.schema
include /usr/local/etc/openldap/schema/nis.schema
include /usr/local/etc/openldap/schema/inetorgperson.schema

include /usr/local/etc/openldap/slapd.acl

modulepath /usr/local/libexec/openldap
moduleload back_hdb.la
moduleload syncprov.la
moduleload back_monitor.la
moduleload back_ldap.la

pidfile /usr/local/var/slapd.pid
argsfile /usr/local/var/slapd.args

loglevel sync stats

database hdb
suffix "dc=suretecsystems,dc=com"
directory /usr/local/var/openldap-data

checkpoint 1024 5
cachesize 10000
idlcachesize 10000

index objectClass eq
# rest of indexes
index default sub

rootdn "cn=admin,dc=suretecsystems,dc=com"
rootpw testing

# syncprov specific indexing
index entryCSN eq
index entryUUID eq

# syncrepl Provider for primary db
overlay syncprov
syncprov-checkpoint 1000 60

# Let the replica DN have limitless searches
limits dn.exact="cn=replicator,dc=suretecsystems,dc=com"
time.soft=unlimited time.hard=unlimited size.soft=unlimited
size.hard=unlimited

database monitor

database config
rootpw testing


#############################################################################
# Consumer Proxy that pulls in data via Syncrepl and pushes out via slapd-ldap
#############################################################################

database ldap
# ignore conflicts with other databases, as we need to push out to the same suffix
hidden on
suffix "dc=suretecsystems,dc=com"
rootdn "cn=slapd-ldap"
uri ldap://localhost:9012/

lastmod on

# We don't need any access to this DSA
restrict all

acl-bind bindmethod=simple
    binddn="cn=replicator,dc=suretecsystems,dc=com"
    credentials=testing

syncrepl rid=001
    provider=ldap://localhost:9011/
    binddn="cn=replicator,dc=suretecsystems,dc=com"
    bindmethod=simple
    credentials=testing
    searchbase="dc=suretecsystems,dc=com"
    type=refreshAndPersist
    retry="5 5 300 5"

overlay syncprov
A replica configuration for this type of setup could be:

#######################################################################
# Standard OpenLDAP Slave without Syncrepl
#######################################################################

include /usr/local/etc/openldap/schema/core.schema
include /usr/local/etc/openldap/schema/cosine.schema
include /usr/local/etc/openldap/schema/nis.schema
include /usr/local/etc/openldap/schema/inetorgperson.schema

include /usr/local/etc/openldap/slapd.acl

modulepath /usr/local/libexec/openldap
moduleload back_hdb.la
moduleload syncprov.la
moduleload back_monitor.la
moduleload back_ldap.la

pidfile /usr/local/var/slapd.pid
argsfile /usr/local/var/slapd.args

loglevel sync stats

database hdb
suffix "dc=suretecsystems,dc=com"
directory /usr/local/var/openldap-slave/data

checkpoint 1024 5
cachesize 10000
idlcachesize 10000

index objectClass eq
# rest of indexes
index default sub

rootdn "cn=admin,dc=suretecsystems,dc=com"
rootpw testing

# Let the replica DN have limitless searches
limits dn.exact="cn=replicator,dc=suretecsystems,dc=com"
time.soft=unlimited time.hard=unlimited size.soft=unlimited
size.hard=unlimited

updatedn "cn=replicator,dc=suretecsystems,dc=com"

# Refer updates to the master
updateref ldap://localhost:9011

database monitor

database config
rootpw testing
You can see we use the updatedn directive here, and example ACLs (/usr/local/etc/openldap/slapd.acl) for this could be:
# Give the replica DN unlimited read access. This ACL may need to be
# merged with other ACL statements.

access to *
    by dn.base="cn=replicator,dc=suretecsystems,dc=com" write
    by * break

access to dn.base=""
    by * read

access to dn.base="cn=Subschema"
    by * read

access to dn.subtree="cn=Monitor"
    by dn.exact="uid=admin,dc=suretecsystems,dc=com" write
    by users read
    by * none

access to *
    by self write
    by * read
In order to support more replicas, just add more database ldap sections and increment
the syncrepl rid number accordingly.
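
For example, a second proxy section pushing to another replica might look like this (a sketch, assuming a second consumer listening on localhost:9013):

database ldap
hidden on
suffix "dc=suretecsystems,dc=com"
rootdn "cn=slapd-ldap"
uri ldap://localhost:9013/

lastmod on
restrict all

acl-bind bindmethod=simple
    binddn="cn=replicator,dc=suretecsystems,dc=com"
    credentials=testing

syncrepl rid=002
    provider=ldap://localhost:9011/
    binddn="cn=replicator,dc=suretecsystems,dc=com"
    bindmethod=simple
    credentials=testing
    searchbase="dc=suretecsystems,dc=com"
    type=refreshAndPersist
    retry="5 5 300 5"

overlay syncprov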

Note: You must populate the Master and Slave directories with the same data, unlike when using normal Syncrepl.

If you do not have access to modify the master directory configuration, you can configure a standalone LDAP proxy, which might look like this:

Figure X.Y: Replacing slurpd with a standalone version
The following configuration is an example of a standalone LDAP Proxy:
include /usr/local/etc/openldap/schema/core.schema
include /usr/local/etc/openldap/schema/cosine.schema
include /usr/local/etc/openldap/schema/nis.schema
include /usr/local/etc/openldap/schema/inetorgperson.schema

include /usr/local/etc/openldap/slapd.acl

modulepath /usr/local/libexec/openldap
moduleload syncprov.la
moduleload back_ldap.la


#############################################################################
# Consumer Proxy that pulls in data via Syncrepl and pushes out via slapd-ldap
#############################################################################

database ldap
# ignore conflicts with other databases, as we need to push out to the same suffix
hidden on
suffix "dc=suretecsystems,dc=com"
rootdn "cn=slapd-ldap"
uri ldap://localhost:9012/

lastmod on

# We don't need any access to this DSA
restrict all

acl-bind bindmethod=simple
    binddn="cn=replicator,dc=suretecsystems,dc=com"
    credentials=testing

syncrepl rid=001
    provider=ldap://localhost:9011/
    binddn="cn=replicator,dc=suretecsystems,dc=com"
    bindmethod=simple
    credentials=testing
    searchbase="dc=suretecsystems,dc=com"
    type=refreshAndPersist
    retry="5 5 300 5"

overlay syncprov
As you can see, you can let your imagination go wild using Syncrepl and slapd-ldap(8), tailoring your replication to fit your specific network topology.







LDAP Replication Installation
Installing Zimbra LDAP Master Server
Installing an LDAP Replica Server
Setting Up Zimbra LDAP Servers for Replication
Configuring Zimbra Servers to use LDAP Replica
LDAP replication lets you distribute Zimbra server queries to specific LDAP replica servers. The
Zimbra install program is used to configure a master LDAP server and additional read-only
replica servers. The master LDAP server is installed following the normal ZCS installation
options. The LDAP replica server installation is modified to point the replica server to the LDAP
master host and to set the replica LDAP status to Disabled.
After the LDAP servers are correctly installed and configured, the following additional
configuration is required.
SSH keys are set up on each LDAP server
Trusted authentication between the master LDAP and the LDAP replica servers is set up

The content of the master LDAP directory is copied to the LDAP replica server. LDAP replica
servers are read-only.

Zimbra servers are configured to query the LDAP replica server instead of the master LDAP
server.
Note: To install an LDAP replica on a previously existing Zimbra server, you run the install program again and perform an upgrade on the server to add the Zimbra LDAP package.
Installing Zimbra LDAP Master Server
You must install the Zimbra Master LDAP server before you can install LDAP replica servers.
1. Follow steps 1 through 4 in the Multiple-Server installation chapter, Starting the Installation Process section, to open an SSH session to the LDAP server, log on to the server as root, and unpack the Zimbra software.
2. The Zimbra packages to be installed should be marked Y. Those packages that should not be installed should be marked N.
Note: These directions and screen shots are for installing the zimbra-LDAP package.

Select the packages to install
Install zimbra-ldap [Y]
Install zimbra-mta [Y]N
Install zimbra-snmp [Y]N
Install zimbra-store [Y]N
Install zimbra-logger [Y]N
Install zimbra-spell [Y]N

Installing:
zimbra-core
zimbra-ldap

This system will be modified. Continue [N] Y
Configuration section

3. Type y, and press Enter to modify the system. The selected packages are installed on the server.
The Main menu shows the default entries for the LDAP server. To expand the menu to see the configuration values, type x and press Enter. The main menu expands to display configuration details for the LDAP server.

Main menu

1) Hostname: ldap.example.com
2) Ldap Master host: ldap.example.com
3) Ldap port: 389
4) Ldap password: set
5) zimbra-ldap: Enabled
+Create Domain: yes
+Domain to create: ldap.example.com
r) Start servers after configuration yes
s) Save config to file
x) Expand menu
q) Quit

Address unconfigured (**) items (? - help)

4. Type 4 to display the automatically generated LDAP password. You can change this password.
Note: Remember the LDAP password, the LDAP master host name, and the LDAP port. You
must configure this information when you install the LDAP replica servers.
5. Type 5 to change the zimbra-ldap settings.
Type 3 to change the default domain name to the email domain name.

Ldap configuration

1) Status: Enabled
2) Create Domain: yes
3) Domain to create: ldap.example.com
Select, or 'r' for previous menu [r] 3

Create Domain: [ldap.example.com] example.com

6. When the LDAP server is configured, type a to apply the configuration changes. Press Enter to save the configuration data.

Select, or press 'a' to apply config (? - help) a
Save configuration data? [Yes]
Save config in file: [/opt/zimbra/config.2843]
Saving config in /opt/zimbra/config.2843...Done
The system will be modified - continue? [No] y
Operations logged to /tmp/zmsetup.log.2843
Setting local config zimbra_server_hostname to [ldap.example.com]
.
Operations logged to /tmp/zmsetup.log.2843

Installation complete - press return to exit

7. When Save Configuration data to a file appears, press Enter.
8. When The system will be modified - continue? appears, type y and press Enter.
The server is modified. Installing all the components and configuring the server can take a few
minutes.
9. When Installation complete - press return to exit displays, press Enter.
The installation of the master LDAP server is complete.
Installing an LDAP Replica Server
You run the ZCS install program on the replica server to install the LDAP package, but you
make the following configuration changes.
In the Zimbra LDAP menu, you must change the Status to Disabled.
Important: If you do not disable the LDAP replica servers, a new directory server is created and you will have separate mail systems.

On the Main menu, change LDAP master host name, port and LDAP password to be the same
information as on the Master LDAP server.
Follow steps 1 through 4 in the Starting the Installation Process section to open an SSH session to the LDAP server, log on to the server as root, and unpack the Zimbra software.
1. The zimbra-ldap package should be marked y.

Select the packages to install
Install zimbra-ldap [Y]
Install zimbra-mta [Y]N
Install zimbra-snmp [Y]N
Install zimbra-store [Y]N
Install zimbra-logger [Y]N
Install zimbra-spell [Y]N

Installing:
zimbra-core
zimbra-ldap

This system will be modified. Continue [N] Y
Configuration section

2. Type y, and press Enter to modify the system. The selected packages are installed.
The Main menu shows the default entries for the LDAP replica server. To expand the menu, type x and press Enter.

Main menu

1) Hostname: ldapRep.example.com
2) Ldap Master host: ldapRep.example.com
3) Ldap port: 389
4) Ldap password: set
5) zimbra-ldap: Enabled
+Create Domain: yes
+Domain to create: ldapRep.example.com
r) Start servers after configuration yes
s) Save config to file
x) Expand menu
q) Quit

Address unconfigured (**) items (? - help)


3. Type 5 to disable the zimbra-ldap settings.

Type 1 to change the Status to Disabled.
Important: If you do not disable the LDAP replica servers, a new directory server is created and you will have separate mail systems.

Ldap configuration

1) Status: Disabled

Select, or 'r' for previous menu [r]

4. Type 2 and change the LDAP Master host name to the Master LDAP host name that you configured earlier.
5. Type 3, and change the port to the same port as configured for the Master LDAP server.
6. Type 4 and change the password to the Master LDAP server password.
7. When the LDAP server is configured, type a to apply the configuration changes. Press Enter to save the configuration data.

Select, or press 'a' to apply config (? - help) a
Save configuration data? [Yes]
Save config in file: [/opt/zimbra/config.2843]
Saving config in /opt/zimbra/config.2843...Done
The system will be modified - continue? [No] y
Operations logged to /tmp/zmsetup.log.2843
Setting local config zimbra_server_hostname to [ldap.example.com]
.
Operations logged to /tmp/zmsetup.log.2843

Installation complete - press return to exit

8. When Save Configuration data to a file appears, press Enter.
9. When The system will be modified - continue? appears, type y and press Enter.
The server is modified. Installing all the components and configuring the server can take a few
minutes.
10. When Installation complete - press return to exit displays, press Enter.
The installation is complete.
Setting Up Zimbra LDAP Servers for Replication
After the master and replica LDAP servers are installed, you must complete the following steps before LDAP replication will work.
Populate the ssh keys
Set up replication
Test the replica
CLI commands are run as the zimbra user.
To set up the LDAP servers
1. On the master LDAP server,
Type zmupdateauthkeys and press Enter.
Type zmldapenablereplica and press Enter.
The key is updated in /opt/zimbra/.ssh/authorized_keys.
2. On the LDAP replica server,
Type zmupdateauthkeys and press Enter.
Type zmldapenablereplica and press Enter.
This sets up the replication account in the directory and makes a copy of the master content to
the replica LDAP server.
Note: If zmupdateauthkeys does not fetch the keys correctly, run zmsshkeygen on both servers
and rerun zmupdateauthkeys.
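
Condensed, the sequence looks like this (run as the zimbra user on the master first, then on each replica):

zmupdateauthkeys
zmldapenablereplica

# only if the keys were not fetched correctly:
zmsshkeygen
zmupdateauthkeys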
To test the replica
1. Create several user accounts, either from the admin console or on the master LDAP server. The CLI command is zmprov ca <user@domain.com> <password>.
2. To see if the accounts were correctly copied to the LDAP replica server, on the replica LDAP server, type zmprov gaa. The accounts created on the master LDAP should display on the LDAP replica.
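
For example (the account name and password are illustrative):

# on the master: create a test account
zmprov ca testuser1@example.com test123

# on the replica: list all accounts and verify the new one appears
zmprov gaa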
Configuring Zimbra Servers to use LDAP Replica
To use the LDAP replica server instead of the master LDAP server, you must add the LDAP replica URL on each Zimbra server:
1. Stop the Zimbra services on the server, zmcontrol stop.
2. Enter the LDAP replica server URL:
zmlocalconfig -e ldap_url="ldap://<replicahost> ldap://<masterhost>"
To list more than one replica host, type ldap://<replicahost1> ldap://<replicahost2> ldap://<masterhost>. The hosts are tried in the order listed.
3. Restart the Zimbra server, zmcontrol start.
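
Putting the steps together on one Zimbra server (hostnames are placeholders):

zmcontrol stop
zmlocalconfig -e ldap_url="ldap://<replicahost> ldap://<masterhost>"
zmcontrol start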


OpenLDAP Server
LDAP is an acronym for Lightweight Directory Access Protocol; it is a simplified version of the X.500 protocol. The directory set up in this section will be used for authentication. Nevertheless, LDAP can be used in numerous ways: authentication, shared directory (for mail clients), address book, etc.
To describe LDAP quickly, all information is stored in a tree structure. With OpenLDAP you have
freedom to determine the directory arborescence (the Directory Information Tree: the DIT) yourself.
We will begin with a basic tree containing two nodes below the root:
"People" node where your users will be stored
"Groups" node where your groups will be stored
Before beginning, you should determine what the root of your LDAP directory will be. By default,
your tree will be determined by your Fully Qualified Domain Name (FQDN). If your domain is
example.com (which we will use in this example), your root node will be dc=example,dc=com.
Installation
First, install the OpenLDAP server daemon slapd and ldap-utils, a package containing LDAP
management utilities:
sudo apt-get install slapd ldap-utils
By default, slapd is configured with the minimal options needed to run the slapd daemon.
The configuration example in the following sections will match the domain name of the server. For
example, if the machine's Fully Qualified Domain Name (FQDN) is ldap.example.com, the default
suffix will be dc=example,dc=com.
Populating LDAP
OpenLDAP uses a separate directory which contains the cn=config Directory Information Tree
(DIT). The cn=config DIT is used to dynamically configure the slapd daemon, allowing the
modification of schema definitions, indexes, ACLs, etc without stopping the service.
The backend cn=config directory has only a minimal configuration and will need additional
configuration options in order to populate the frontend directory. The frontend will be populated with
a "classical" scheme that will be compatible with address book applications and with Unix Posix
accounts. Posix accounts will allow authentication to various applications, such as web applications,
email Mail Transfer Agent (MTA) applications, etc.
Note: For external applications to authenticate using LDAP, each will need to be specifically configured to do so. Refer to the individual application documentation for details.
Remember to change dc=example,dc=com in the following examples to match your LDAP
configuration.
First, some additional schema files need to be loaded. In a terminal enter:
sudo ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/ldap/schema/cosine.ldif
sudo ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/ldap/schema/nis.ldif
sudo ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/ldap/schema/inetorgperson.ldif
Next, copy the following example LDIF file, naming it backend.example.com.ldif, somewhere on
your system:
# Load dynamic backend modules
dn: cn=module,cn=config
objectClass: olcModuleList
cn: module
olcModulepath: /usr/lib/ldap
olcModuleload: back_hdb

# Database settings
dn: olcDatabase=hdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcHdbConfig
olcDatabase: {1}hdb
olcSuffix: dc=example,dc=com
olcDbDirectory: /var/lib/ldap
olcRootDN: cn=admin,dc=example,dc=com
olcRootPW: secret
olcDbConfig: set_cachesize 0 2097152 0
olcDbConfig: set_lk_max_objects 1500
olcDbConfig: set_lk_max_locks 1500
olcDbConfig: set_lk_max_lockers 1500
olcDbIndex: objectClass eq
olcLastMod: TRUE
olcDbCheckpoint: 512 30
olcAccess: to attrs=userPassword
  by dn="cn=admin,dc=example,dc=com" write
  by anonymous auth
  by self write
  by * none
olcAccess: to attrs=shadowLastChange by self write by * read
olcAccess: to dn.base="" by * read
olcAccess: to * by dn="cn=admin,dc=example,dc=com" write by * read
Change olcRootPW: secret to a password of your choosing.
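Rather than storing the password in clear text, you can generate a salted hash with the slappasswd utility (installed with slapd) and use its output as the olcRootPW value. A minimal example; the hash shown is illustrative and yours will differ:
slappasswd
New password:
Re-enter new password:
{SSHA}zp6hspGDxvUdTdLQcbqbyv6PuX8DCou0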
Now add the LDIF to the directory:
sudo ldapadd -Y EXTERNAL -H ldapi:/// -f backend.example.com.ldif
The frontend directory is now ready to be populated. Create a frontend.example.com.ldif with
the following contents:
# Create top-level object in domain
dn: dc=example,dc=com
objectClass: top
objectClass: dcObject
objectclass: organization
o: Example Organization
dc: Example
description: LDAP Example

# Admin user.
dn: cn=admin,dc=example,dc=com
objectClass: simpleSecurityObject
objectClass: organizationalRole
cn: admin
description: LDAP administrator
userPassword: secret

dn: ou=people,dc=example,dc=com
objectClass: organizationalUnit
ou: people

dn: ou=groups,dc=example,dc=com
objectClass: organizationalUnit
ou: groups

dn: uid=john,ou=people,dc=example,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
uid: john
sn: Doe
givenName: John
cn: John Doe
displayName: John Doe
uidNumber: 1000
gidNumber: 10000
userPassword: password
gecos: John Doe
loginShell: /bin/bash
homeDirectory: /home/john
shadowExpire: -1
shadowFlag: 0
shadowWarning: 7
shadowMin: 8
shadowMax: 999999
shadowLastChange: 10877
mail: john.doe@example.com
postalCode: 31000
l: Toulouse
o: Example
mobile: +33 (0)6 xx xx xx xx
homePhone: +33 (0)5 xx xx xx xx
title: System Administrator
postalAddress:
initials: JD

dn: cn=example,ou=groups,dc=example,dc=com
objectClass: posixGroup
cn: example
gidNumber: 10000
In this example the directory structure, a user, and a group have been set up. In other examples you might see the objectClass: top added in every entry, but that is the default behaviour, so you do not have to add it explicitly.
Add the entries to the LDAP directory:
sudo ldapadd -x -D cn=admin,dc=example,dc=com -W -f frontend.example.com.ldif
We can check that the content has been correctly added with the ldapsearch utility. Execute a
search of the LDAP directory:
ldapsearch -xLLL -b "dc=example,dc=com" uid=john sn givenName cn

dn: uid=john,ou=people,dc=example,dc=com
cn: John Doe
sn: Doe
givenName: John
A quick explanation of the options:
-x: use simple bind rather than SASL authentication, which is the default.
-LLL: suppress printing of LDIF version and comment information.
Further Configuration
The cn=config tree can be manipulated using the utilities in the ldap-utils package. For example:
Use ldapsearch to view the tree. Because the SASL EXTERNAL mechanism is used over the ldapi socket, no admin password is required:
sudo ldapsearch -LLL -Y EXTERNAL -H ldapi:/// -b cn=config dn
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
dn: cn=config

dn: cn=module{0},cn=config

dn: cn=schema,cn=config

dn: cn={0}core,cn=schema,cn=config

dn: cn={1}cosine,cn=schema,cn=config

dn: cn={2}nis,cn=schema,cn=config

dn: cn={3}inetorgperson,cn=schema,cn=config

dn: olcDatabase={-1}frontend,cn=config

dn: olcDatabase={0}config,cn=config

dn: olcDatabase={1}hdb,cn=config
The output above lists the entries currently in the cn=config tree. Your output may vary.
As an example of modifying the cn=config tree, add another attribute to the index list
using ldapmodify:
sudo ldapmodify -Y EXTERNAL -H ldapi:///
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
dn: olcDatabase={1}hdb,cn=config
add: olcDbIndex
olcDbIndex: uidNumber eq

modifying entry "olcDatabase={1}hdb,cn=config"
Once the modification has completed, press Ctrl+D to exit the utility.
ldapmodify can also read the changes from a file. Copy and paste the following into a file
named uid_index.ldif:
dn: olcDatabase={1}hdb,cn=config
add: olcDbIndex
olcDbIndex: uid eq,pres,sub
Then execute ldapmodify:
sudo ldapmodify -Y EXTERNAL -H ldapi:/// -f uid_index.ldif
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
modifying entry "olcDatabase={1}hdb,cn=config"
The file method is very useful for large changes.
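To confirm that the new indexes were added, you can query the database entry directly. A quick check, assuming the hdb database used in the examples above; output will be similar to:
sudo ldapsearch -LLL -Y EXTERNAL -H ldapi:/// -b cn=config "(olcDatabase={1}hdb)" olcDbIndex

dn: olcDatabase={1}hdb,cn=config
olcDbIndex: objectClass eq
olcDbIndex: uidNumber eq
olcDbIndex: uid eq,pres,sub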
Adding additional schemas to slapd requires the schema to be converted to LDIF format. The /etc/ldap/schema directory contains some schema files already converted to LDIF format, as demonstrated in the previous section. Fortunately, the slapcat utility can be used to automate the conversion. The following example will add the dyngroup.schema:
1. First, create a conversion file named schema_convert.conf containing the following lines:
include /etc/ldap/schema/core.schema
include /etc/ldap/schema/collective.schema
include /etc/ldap/schema/corba.schema
include /etc/ldap/schema/cosine.schema
include /etc/ldap/schema/duaconf.schema
include /etc/ldap/schema/dyngroup.schema
include /etc/ldap/schema/inetorgperson.schema
include /etc/ldap/schema/java.schema
include /etc/ldap/schema/misc.schema
include /etc/ldap/schema/nis.schema
include /etc/ldap/schema/openldap.schema
include /etc/ldap/schema/ppolicy.schema
2. Next, create a temporary directory to hold the output:
mkdir /tmp/ldif_output
3. Now use slapcat to convert the schema files to LDIF:
slapcat -f schema_convert.conf -F /tmp/ldif_output -n0 -s "cn={5}dyngroup,cn=schema,cn=config" > /tmp/cn=dyngroup.ldif
Adjust the configuration file name and temporary directory names if yours are different.
Also, it may be worthwhile to keep the ldif_output directory around in case you want to
add additional schemas in the future.
4. Edit the /tmp/cn\=dyngroup.ldif file, changing the following attributes:
dn: cn=dyngroup,cn=schema,cn=config
...
cn: dyngroup
And remove the following lines from the bottom of the file:
structuralObjectClass: olcSchemaConfig
entryUUID: 10dae0ea-0760-102d-80d3-f9366b7f7757
creatorsName: cn=config
createTimestamp: 20080826021140Z
entryCSN: 20080826021140.791425Z#000000#000#000000
modifiersName: cn=config
modifyTimestamp: 20080826021140Z
Note: The attribute values will vary; just be sure the attributes are removed.
5. Finally, using the ldapadd utility, add the new schema to the directory:
sudo ldapadd -Y EXTERNAL -H ldapi:/// -f /tmp/cn\=dyngroup.ldif
There should now be a dn: cn={4}dyngroup,cn=schema,cn=config entry in the cn=config tree.
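To verify, you can list the schema entries now present in the cn=config tree; the dyngroup entry should appear alongside the schemas loaded earlier:
sudo ldapsearch -LLL -Y EXTERNAL -H ldapi:/// -b cn=schema,cn=config dn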
LDAP Replication
LDAP often quickly becomes a highly critical service to the network. Multiple systems will come to
depend on LDAP for authentication, authorization, configuration, etc. It is a good idea to set up a redundant system through replication.
Replication is achieved using the Syncrepl engine. Syncrepl allows changes to be synchronized using a consumer/provider model: a provider sends directory changes to consumers.
Provider Configuration
The following is an example of a Single-Master configuration. In this configuration one OpenLDAP server is configured as a provider and another as a consumer.
1. First, configure the provider server. Copy the following to a file
named provider_sync.ldif:
# Add indexes to the frontend db.
dn: olcDatabase={1}hdb,cn=config
changetype: modify
add: olcDbIndex
olcDbIndex: entryCSN eq
-
add: olcDbIndex
olcDbIndex: entryUUID eq

# Load the syncprov and accesslog modules.
dn: cn=module{0},cn=config
changetype: modify
add: olcModuleLoad
olcModuleLoad: syncprov
-
add: olcModuleLoad
olcModuleLoad: accesslog

# Accesslog database definitions
dn: olcDatabase={2}hdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcHdbConfig
olcDatabase: {2}hdb
olcDbDirectory: /var/lib/ldap/accesslog
olcSuffix: cn=accesslog
olcRootDN: cn=admin,dc=example,dc=com
olcDbIndex: default eq
olcDbIndex: entryCSN,objectClass,reqEnd,reqResult,reqStart

# Accesslog db syncprov.
dn: olcOverlay=syncprov,olcDatabase={2}hdb,cn=config
changetype: add
objectClass: olcOverlayConfig
objectClass: olcSyncProvConfig
olcOverlay: syncprov
olcSpNoPresent: TRUE
olcSpReloadHint: TRUE

# syncrepl Provider for primary db
dn: olcOverlay=syncprov,olcDatabase={1}hdb,cn=config
changetype: add
objectClass: olcOverlayConfig
objectClass: olcSyncProvConfig
olcOverlay: syncprov
olcSpNoPresent: TRUE

# accesslog overlay definitions for primary db
dn: olcOverlay=accesslog,olcDatabase={1}hdb,cn=config
objectClass: olcOverlayConfig
objectClass: olcAccessLogConfig
olcOverlay: accesslog
olcAccessLogDB: cn=accesslog
olcAccessLogOps: writes
olcAccessLogSuccess: TRUE
# scan the accesslog DB every day, and purge entries older than 7 days
olcAccessLogPurge: 07+00:00 01+00:00
2. The AppArmor profile for slapd will need to be adjusted for the accesslog database location. Edit /etc/apparmor.d/usr.sbin.slapd, adding:
/var/lib/ldap/accesslog/ r,
/var/lib/ldap/accesslog/** rwk,
Then create the directory, reload the apparmor profile, and copy the DB_CONFIG file:
sudo -u openldap mkdir /var/lib/ldap/accesslog
sudo -u openldap cp /var/lib/ldap/DB_CONFIG /var/lib/ldap/accesslog/
sudo /etc/init.d/apparmor reload
Note: Using the -u openldap option with the sudo commands above removes the need to adjust permissions for the new directory later.
3. Edit provider_sync.ldif and change the olcRootDN to match your directory:
olcRootDN: cn=admin,dc=example,dc=com
4. Next, add the LDIF file using the ldapadd utility:
sudo ldapadd -Y EXTERNAL -H ldapi:/// -f provider_sync.ldif
5. Restart slapd:
sudo /etc/init.d/slapd restart
The Provider server is now configured, and it is time to configure a Consumer server.
Consumer Configuration
1. Configure the Consumer server the same as the Provider, except for the Syncrepl configuration steps.
Add the additional schema files:
sudo ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/ldap/schema/cosine.ldif
sudo ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/ldap/schema/nis.ldif
sudo ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/ldap/schema/inetorgperson.ldif
Also create, or copy from the provider server, the backend.example.com.ldif file:
# Load dynamic backend modules
dn: cn=module,cn=config
objectClass: olcModuleList
cn: module
olcModulepath: /usr/lib/ldap
olcModuleload: back_hdb

# Database settings
dn: olcDatabase=hdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcHdbConfig
olcDatabase: {1}hdb
olcSuffix: dc=example,dc=com
olcDbDirectory: /var/lib/ldap
olcRootDN: cn=admin,dc=example,dc=com
olcRootPW: secret
olcDbConfig: set_cachesize 0 2097152 0
olcDbConfig: set_lk_max_objects 1500
olcDbConfig: set_lk_max_locks 1500
olcDbConfig: set_lk_max_lockers 1500
olcDbIndex: objectClass eq
olcLastMod: TRUE
olcDbCheckpoint: 512 30
olcAccess: to attrs=userPassword
  by dn="cn=admin,dc=example,dc=com" write
  by anonymous auth
  by self write
  by * none
olcAccess: to attrs=shadowLastChange by self write by * read
olcAccess: to dn.base="" by * read
olcAccess: to * by dn="cn=admin,dc=example,dc=com" write by * read
And add the LDIF by entering:
sudo ldapadd -Y EXTERNAL -H ldapi:/// -f backend.example.com.ldif
2. Do the same with the frontend.example.com.ldif file listed above, and add it:
sudo ldapadd -x -D cn=admin,dc=example,dc=com -W -f frontend.example.com.ldif
The two servers should now have the same configuration except for the Syncrepl options.
3. Now create a file named consumer_sync.ldif containing:
# Load the syncprov module.
dn: cn=module{0},cn=config
changetype: modify
add: olcModuleLoad
olcModuleLoad: syncprov

# syncrepl specific indices
dn: olcDatabase={1}hdb,cn=config
changetype: modify
add: olcDbIndex
olcDbIndex: entryUUID eq
-
add: olcSyncRepl
olcSyncRepl: rid=0 provider=ldap://ldap01.example.com bindmethod=simple
  binddn="cn=admin,dc=example,dc=com" credentials=secret
  searchbase="dc=example,dc=com" logbase="cn=accesslog"
  logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
  schemachecking=on type=refreshAndPersist retry="60 +" syncdata=accesslog
-
add: olcUpdateRef
olcUpdateRef: ldap://ldap01.example.com
You will probably want to change the following attributes:
ldap01.example.com to your provider server's hostname
binddn
credentials
searchbase
olcUpdateRef
4. Add the LDIF file to the configuration tree:
sudo ldapadd -c -Y EXTERNAL -H ldapi:/// -f consumer_sync.ldif
The frontend database should now sync between servers. You can add additional servers using the
steps above as the need arises.
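One simple way to check that replication is working is to compare the contextCSN attribute of the suffix entry on both servers; the values should converge shortly after a write on the provider. A sketch, assuming ldap01 is the provider and ldap02 the consumer:
ldapsearch -x -LLL -H ldap://ldap01.example.com -s base -b dc=example,dc=com contextCSN
ldapsearch -x -LLL -H ldap://ldap02.example.com -s base -b dc=example,dc=com contextCSN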
Note: The slapd daemon will send log information to /var/log/syslog by default, so if all does not go well check there for errors and other troubleshooting information. Also, be sure that each server knows its Fully Qualified Domain Name (FQDN). This is configured in /etc/hosts with a line similar to:
127.0.0.1 ldap01.example.com ldap01
Setting up ACL
Authentication requires access to the password field, which should not be accessible by default. Also, in order for users to change their own password using passwd or other utilities, shadowLastChange needs to be accessible once a user has authenticated.
To view the Access Control List (ACL), use the ldapsearch utility:
ldapsearch -xLLL -b cn=config -D cn=admin,cn=config -W olcDatabase=hdb olcAccess
Enter LDAP Password:
dn: olcDatabase={1}hdb,cn=config
olcAccess: {0}to attrs=userPassword,shadowLastChange by dn="cn=admin,dc=example,dc=com" write by anonymous auth by self write by * none
olcAccess: {1}to dn.base="" by * read
olcAccess: {2}to * by dn="cn=admin,dc=example,dc=com" write by * read
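To check that the ACLs behave as intended, you can bind as an ordinary user. Given the example user created earlier, the following should return john's userPassword when bound as john, but nothing when run anonymously (a sketch; adjust the DN to match your directory):
ldapsearch -x -LLL -D uid=john,ou=people,dc=example,dc=com -W -b dc=example,dc=com "(uid=john)" userPassword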
TLS and SSL
When authenticating to an OpenLDAP server it is best to do so using an encrypted session. This can
be accomplished using Transport Layer Security (TLS) and/or Secure Sockets Layer (SSL).
The first step in the process is to obtain or create a certificate. Because slapd is compiled using the gnutls library, the certtool utility will be used to create certificates.
1. First, install gnutls-bin by entering the following in a terminal:
sudo apt-get install gnutls-bin
2. Next, create a private key for the Certificate Authority (CA):
sudo sh -c "certtool --generate-privkey > /etc/ssl/private/cakey.pem"
3. Create a /etc/ssl/ca.info details file to self-sign the CA certificate, containing:
cn = Example Company
ca
cert_signing_key
4. Now create the self-signed CA certificate:
sudo certtool --generate-self-signed --load-privkey /etc/ssl/private/cakey.pem \
--template /etc/ssl/ca.info --outfile /etc/ssl/certs/cacert.pem
5. Make a private key for the server:
sudo sh -c "certtool --generate-privkey > /etc/ssl/private/ldap01_slapd_key.pem"
Note: Replace ldap01 in the filename with your server's hostname. Naming the certificate and key for the host and service that will be using them will help keep filenames and paths straight.
6. To sign the server's certificate with the CA, create the /etc/ssl/ldap01.info info file containing:
organization = Example Company
cn = ldap01.example.com
tls_www_server
encryption_key
signing_key
7. Create the server's certificate:
sudo certtool --generate-certificate --load-privkey /etc/ssl/private/ldap01_slapd_key.pem \
--load-ca-certificate /etc/ssl/certs/cacert.pem --load-ca-privkey /etc/ssl/private/cakey.pem \
--template /etc/ssl/ldap01.info --outfile /etc/ssl/certs/ldap01_slapd_cert.pem
Once you have a certificate, key, and CA cert installed, use ldapmodify to add the new
configuration options:
sudo ldapmodify -Y EXTERNAL -H ldapi:///
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
dn: cn=config
add: olcTLSCACertificateFile
olcTLSCACertificateFile: /etc/ssl/certs/cacert.pem
-
add: olcTLSCertificateFile
olcTLSCertificateFile: /etc/ssl/certs/ldap01_slapd_cert.pem
-
add: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/ssl/private/ldap01_slapd_key.pem

modifying entry "cn=config"
Note: Adjust the ldap01_slapd_cert.pem, ldap01_slapd_key.pem, and cacert.pem names if yours are different.
Next, edit /etc/default/slapd and uncomment the SLAPD_SERVICES option:
SLAPD_SERVICES="ldap:/// ldapi:/// ldaps:///"
Now the openldap user needs access to the certificate:
sudo adduser openldap ssl-cert
sudo chgrp ssl-cert /etc/ssl/private/ldap01_slapd_key.pem
sudo chmod g+r /etc/ssl/private/ldap01_slapd_key.pem
Note: If the /etc/ssl/private directory and the key file on your system have different permissions, adjust the commands appropriately.
Finally, restart slapd:
sudo /etc/init.d/slapd restart
The slapd daemon should now be listening for LDAPS connections and be able to use STARTTLS
during authentication.
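A quick way to test both is with the ldapwhoami utility; the first command below exercises STARTTLS (-ZZ requires the TLS handshake to succeed) and the second connects over LDAPS. Both should report anonymous if TLS is working, assuming the hostname used matches the certificate:
ldapwhoami -x -ZZ -H ldap://ldap01.example.com
ldapwhoami -x -H ldaps://ldap01.example.com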
Note: If you run into trouble with the server not starting, check /var/log/syslog. If you see errors like main: TLS init def ctx failed: -1, it is likely there is a configuration problem. Check that the certificate is signed by the authority configured in the files above, and that the ssl-cert group has read permissions on the private key.
TLS Replication
If you have setup Syncrepl between servers, it is prudent to encrypt the replication traffic
using Transport Layer Security (TLS). For details on setting up replication see the section called
LDAP Replication.
Assuming you have followed the above instructions and created a CA certificate and server certificate on the Provider server, follow these instructions to create a certificate and key for the Consumer server.
1. Create a new key for the Consumer server:
mkdir ldap02-ssl
cd ldap02-ssl
certtool --generate-privkey > ldap02_slapd_key.pem
Note: Creating a new directory is not strictly necessary, but it will help keep things organized and make it easier to copy the files to the Consumer server.
2. Next, create an info file, ldap02.info, for the Consumer server, changing the attributes to match your locality and server:
country = US
state = North Carolina
locality = Winston-Salem
organization = Example Company
cn = ldap02.example.com
tls_www_client
encryption_key
signing_key
3. Create the certificate:
sudo certtool --generate-certificate --load-privkey ldap02_slapd_key.pem \
--load-ca-certificate /etc/ssl/certs/cacert.pem --load-ca-privkey /etc/ssl/private/cakey.pem \
--template ldap02.info --outfile ldap02_slapd_cert.pem
4. Copy the cacert.pem to the directory:
cp /etc/ssl/certs/cacert.pem .
5. The only thing left is to copy the ldap02-ssl directory to the Consumer server, then copy ldap02_slapd_cert.pem and cacert.pem to /etc/ssl/certs, and copy ldap02_slapd_key.pem to /etc/ssl/private.
6. Once the files are in place, adjust the cn=config tree by entering:
sudo ldapmodify -Y EXTERNAL -H ldapi:///
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
dn: cn=config
add: olcTLSCACertificateFile
olcTLSCACertificateFile: /etc/ssl/certs/cacert.pem
-
add: olcTLSCertificateFile
olcTLSCertificateFile: /etc/ssl/certs/ldap02_slapd_cert.pem
-
add: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/ssl/private/ldap02_slapd_key.pem

modifying entry "cn=config"
7. As with the Provider, you can now edit /etc/default/slapd and add the ldaps:/// parameter to the SLAPD_SERVICES option.
Now that TLS has been set up on each server, once again modify the Consumer server's cn=config tree by entering the following in a terminal:
sudo ldapmodify -Y EXTERNAL -H ldapi:///
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0

dn: olcDatabase={1}hdb,cn=config
replace: olcSyncrepl
olcSyncrepl: {0}rid=0 provider=ldap://ldap01.example.com bindmethod=simple
  binddn="cn=admin,dc=example,dc=com" credentials=secret
  searchbase="dc=example,dc=com" logbase="cn=accesslog"
  logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
  schemachecking=on type=refreshAndPersist retry="60 +" syncdata=accesslog
  starttls=yes

modifying entry "olcDatabase={1}hdb,cn=config"
If the LDAP server hostname does not match the Fully Qualified Domain Name (FQDN) in the certificate, you may have to edit /etc/ldap/ldap.conf and add the following TLS options:
TLS_CERT /etc/ssl/certs/ldap02_slapd_cert.pem
TLS_KEY /etc/ssl/private/ldap02_slapd_key.pem
TLS_CACERT /etc/ssl/certs/cacert.pem
Finally, restart slapd on each of the servers:
sudo /etc/init.d/slapd restart
LDAP Authentication
Once you have a working LDAP server, the auth-client-config and libnss-ldap packages take the pain out of configuring an Ubuntu client to authenticate using LDAP. To install the packages, from a terminal prompt enter:
sudo apt-get install libnss-ldap
During the install a menu dialog will ask you connection details about your LDAP server.
If you make a mistake when entering your information you can execute the dialog again using:
sudo dpkg-reconfigure ldap-auth-config
The results of the dialog can be seen in /etc/ldap.conf. If your server requires options not covered in the menu, edit this file accordingly.
Now that libnss-ldap is configured, enable the auth-client-config LDAP profile by entering:
sudo auth-client-config -t nss -p lac_ldap
-t: only modifies /etc/nsswitch.conf.
-p: name of the profile to enable, disable, etc.
lac_ldap: the auth-client-config profile that is part of the ldap-auth-config package.
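Assuming the profile applies cleanly, /etc/nsswitch.conf should afterwards contain entries similar to the following (a sketch; your file may list additional services):
passwd: files ldap
group: files ldap
shadow: files ldap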
Using the pam-auth-update utility, configure the system to use LDAP for authentication:
sudo pam-auth-update
From the pam-auth-update menu, choose LDAP and any other authentication mechanisms you
need.
You should now be able to login using user credentials stored in the LDAP directory.
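A quick way to verify without logging out is to query NSS directly; given the john account created earlier, output should be similar to:
getent passwd john
john:x:1000:10000:John Doe:/home/john:/bin/bash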
Note: If you are going to use LDAP to store Samba users, you will need to configure the server to authenticate using LDAP. See the section called Samba and LDAP for details.
User and Group Management
The ldap-utils package comes with multiple utilities to manage the directory, but the long string of options needed can make them a burden to use. The ldapscripts package contains configurable scripts to easily manage LDAP users and groups.
To install the package, from a terminal enter:
sudo apt-get install ldapscripts
Next, edit the config file /etc/ldapscripts/ldapscripts.conf uncommenting and changing the
following to match your environment:
SERVER=localhost
BINDDN='cn=admin,dc=example,dc=com'
BINDPWDFILE="/etc/ldapscripts/ldapscripts.passwd"
SUFFIX='dc=example,dc=com'
GSUFFIX='ou=Groups'
USUFFIX='ou=People'
MSUFFIX='ou=Computers'
GIDSTART=10000
UIDSTART=10000
MIDSTART=10000
Now, create the ldapscripts.passwd file to allow authenticated access to the directory:
sudo sh -c "echo -n 'secret' > /etc/ldapscripts/ldapscripts.passwd"
sudo chmod 400 /etc/ldapscripts/ldapscripts.passwd
Note: Replace secret with the actual password for your LDAP admin user.
The ldapscripts are now ready to help manage your directory. The following are some examples of
how to use the scripts:
Create a new user:
sudo ldapadduser george example
This will create a user with a uid of george and set the user's primary group (gid) to example.
Change a user's password:
sudo ldapsetpasswd george
Changing password for user uid=george,ou=People,dc=example,dc=com
New Password:
New Password (verify):
Delete a user:
sudo ldapdeleteuser george
Add a group:
sudo ldapaddgroup qa
Delete a group:
sudo ldapdeletegroup qa
Add a user to a group:
sudo ldapaddusertogroup george qa
You should now see a memberUid attribute for the qa group with a value of george.
Remove a user from a group:
sudo ldapdeleteuserfromgroup george qa
The memberUid attribute should now be removed from the qa group.
The ldapmodifyuser script allows you to add, remove, or replace a user's attributes. The script
uses the same syntax as the ldapmodify utility. For example:
sudo ldapmodifyuser george
# About to modify the following entry :
dn: uid=george,ou=People,dc=example,dc=com
objectClass: account
objectClass: posixAccount
cn: george
uid: george
uidNumber: 1001
gidNumber: 1001
homeDirectory: /home/george
loginShell: /bin/bash
gecos: george
description: User account
userPassword:: e1NTSEF9eXFsTFcyWlhwWkF1eGUybVdFWHZKRzJVMjFTSG9vcHk=

# Enter your modifications here, end with CTRL-D.
dn: uid=george,ou=People,dc=example,dc=com
replace: gecos
gecos: George Carlin
The user's gecos should now be George Carlin.
Another great feature of ldapscripts is the template system. Templates allow you to customize the attributes of user, group, and machine objects. For example, to enable the user template, edit /etc/ldapscripts/ldapscripts.conf changing:
UTEMPLATE="/etc/ldapscripts/ldapadduser.template"
There are sample templates in the /etc/ldapscripts directory. Copy or rename
the ldapadduser.template.sample file to /etc/ldapscripts/ldapadduser.template:
sudo cp /etc/ldapscripts/ldapadduser.template.sample
/etc/ldapscripts/ldapadduser.template
Edit the new template to add the desired attributes. The following will create new users with an objectClass of inetOrgPerson:
dn: uid=<user>,<usuffix>,<suffix>
objectClass: inetOrgPerson
objectClass: posixAccount
cn: <user>
sn: <ask>
uid: <user>
uidNumber: <uid>
gidNumber: <gid>
homeDirectory: <home>
loginShell: <shell>
gecos: <user>
description: User account
title: Employee
Notice the <ask> option used for the cn value. Using <ask> will configure ldapadduser to prompt
you for the attribute value during user creation.
There are more useful scripts in the package; to see a full list enter: dpkg -L ldapscripts | grep bin
Resources
The OpenLDAP Ubuntu Wiki page has more details.
For more information see the OpenLDAP Home Page.
Though starting to show its age, a great source for in-depth LDAP information is O'Reilly's LDAP System Administration.
Packt's Mastering OpenLDAP is a great reference covering newer versions of OpenLDAP.
For more information on auth-client-config see the man page: man auth-client-config.
For more details regarding the ldapscripts package see the man pages: man ldapscripts, man
ldapadduser, man ldapaddgroup, etc.