
Congratulations to the Debian Community

The Debian Project just released version 11 (aka “bullseye”) of their free operating system. In total, over 6,208 contributors worked on this release and were indispensable in making this launch happen. We would like to thank everyone involved for their combined efforts, hard work, and many hours spent in recent years building this new release that will benefit the entire open source community.

We would also like to acknowledge our in-house Debian developers who contributed to this effort. We really appreciate the work you do on behalf of the community and stand firmly behind your contributions.

What’s New in Debian 11 Bullseye

Debian 11 comes with a number of meaningful changes and enhancements. The new release includes over 13,370 new software packages, for a total of over 57,703 packages on release. Out of these, 35,532 packages have been updated to newer versions, including a kernel update from 4.19 in “buster” to 5.10 in “bullseye”.

Bullseye expands on the capabilities of driverless printing with the Common Unix Printing System (CUPS) and driverless scanning with Scanner Access Now Easy (SANE). While it was already possible to use CUPS for driverless printing with buster, bullseye comes with the package ipp-usb, which allows a USB device to be treated as a network device and thus extends driverless printing capabilities. SANE hooks into this as well once it is set up correctly and the device is connected to a USB port.

As in previous releases, Debian 11 comes with a Debian Edu / Skolelinux version. Debian Edu has been a complete solution for schools for many years. It can provide the entire network for a school, so that after installation only users and machines need to be added, which can be managed easily via the web interface GOsa².

Debian 11 bullseye can be downloaded here:
https://www.debian.org/devel/debian-installer/index.en.html

For more information and greater technical detail on the new Debian 11 release, please refer to the official release notes on Debian.org:
https://www.debian.org/releases/bullseye/amd64/release-notes/

Contributions by Instaclustr Employees

Our Debian roots run deep here. credativ, which was acquired by Instaclustr in March 2021, has always been an active part of the Debian community and has attended every DebConf since 2004. Debian also serves as the operating system at the heart of the Instaclustr Managed Platform.

For the release of Debian 11, our team has taken over various responsibilities in the community. Our contributions include:

  • 90% of the PostgreSQL packaging of the new release
  • Maintenance work on various packages
  • Support as Debian system administrators
  • Contributions to Debian Edu/Skolelinux
  • Development work on kernel images
  • Development work on cloud images
  • Development work for various Debian backports
  • Work on salsa.debian.org

Many of our colleagues have made significant contributions to the current release, including:

  • Adrian Vondendriesch
  • Alexander Wirt (Formorer)
  • Bastian Blank (waldi)
  • Christoph Berg (Myon)
  • Dominik George (natureshadow)
  • Felix Geyer (fgeyer)
  • Martin Zobel-Helas (zobel)
  • Michael Banck (azeem)
  • Michael Meskes (feivel)
  • Noël Köthe (Noel)
  • Sven Bartscher (kritzefitz)

How to Upgrade

Given that Debian 11 bullseye is a major release, we suggest that everyone running Debian 10 buster upgrade. The main steps for an upgrade include:

  1. Make sure to back up any data that should not get lost and prepare for recovery
  2. Remove non-Debian packages and clean up leftover files and old versions
  3. Upgrade to the latest buster point release
  4. Check and prepare your APT source-list files by adding the relevant Internet sources or local mirrors
  5. Upgrade your packages and then upgrade your system (see the sketch below)
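In practice, steps 3 to 5 might translate into something like the following: a minimal sketch of the standard APT workflow, assuming all sources live in /etc/apt/sources.list. Note that bullseye renamed the security suite from buster/updates to bullseye-security, which the simple sed call below does not cover, so check the file afterwards.

# apt update && apt upgrade                            # step 3: latest buster point release
# sed -i 's/buster/bullseye/g' /etc/apt/sources.list   # step 4: switch the suites
# apt update                                           # re-read the bullseye package lists
# apt upgrade --without-new-pkgs                       # step 5: minimal upgrade first...
# apt full-upgrade                                     # ...then the full system upgrade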

You can find a more detailed walkthrough of the upgrade process in the Debian documentation.

All existing credativ customers who are running a Debian-based installation are naturally covered by our service and support and are encouraged to reach out.

If you are interested in upgrading from your old Debian version, or if you have questions regarding your Debian infrastructure, do not hesitate to drop us an email or contact us at info@credativ.de.

Or, you can get started in minutes with open source technologies like Apache Cassandra, Apache Kafka, Redis, and OpenSearch on the Instaclustr Managed Platform. Sign up for a free trial today.

In the preceding article, Two-Factor Authentication with Yubico OTP, we demonstrated how quickly existing services can be extended with two-factor authentication (2FA) using Yubico OTP with the help of the PAM module pam_yubico. The validation service used, the YubiCloud, is provided by Yubico free of charge.

However, the fact that you are bound to an external service provider is not to everyone’s liking: data protection concerns or doubts about the reliability of the cloud service raise the question of whether the required services could also be operated on your own systems. There may also be scenarios in which you cannot access external services at all.

The good news is that there is also the option of hosting the services yourself!

Components

To be able to validate Yubico OTPs on your own system, two components are required: the YubiKey OTP Validation Server and the YubiKey Key Storage Module. Yubico provides the necessary software both in source code and as ready-made binary packages in various Linux distributions.

It should be noted that a large part of the documentation available online is still based on the old Key Storage Module, YK-KSM. The YK-KSM is implemented in PHP5 and is to be regarded as obsolete because it requires functions and libraries that are no longer included or available in current PHP versions.

Validation Server – VAL

The Validation Server implements the Yubico WSAPI for validating Yubico OTPs, which is also used in the YubiCloud. This is a PHP application that requires an RDBMS such as PostgreSQL® or MySQL in addition to the Apache web server to operate.

The PAM module pam_yubico discussed in the previous articles can be configured by specifying a URL so that it uses a different Validation Server, in our case a local one, instead of the YubiCloud.

If a client, for example the PAM module, sends a Yubico OTP to the validation service via the WSAPI, the validation service forwards the OTP to the Key Storage Module and receives the decrypted OTP back from there. Based on the counter values and timestamps, which are compared with the last values stored in the database, the VAL can then decide whether the OTP is valid or not.

Key Storage Module – KSM

The Key Storage Module is used for the secure storage of the shared secrets of the YubiKeys used. The key used for encryption is either stored on a hardware module costing a good €650, or – as in this case – inexpensively in a text file. In contrast to the VAL, the KSM does not require a relational database, but instead uses the file system, by default the folder /var/cache/yubikey-ksm, and stores the shared secrets there in encrypted form in so-called AEAD files.

The KSM used here is implemented in Python and runs as an independent service, which by default listens on port 8002 TCP for connections from localhost and offers a simple REST interface there.

The Validation Server can use this REST interface to send OTPs to be checked to the Key Storage Module, which then uses the Public ID to read the corresponding shared secret from its memory in order to decrypt the OTP and return its content to the VAL.

Installation

Fortunately, there are ready-made packages for both the validation server and the key storage module in most Linux distributions. The following describes the installation and configuration of the services under Debian GNU/Linux Buster.

Key Storage Module – KSM

The KSM can be easily installed in Debian with the package yhsm-yubikey-ksm:

# apt-get install yhsm-yubikey-ksm

Before configuring the newly installed service, the so-called keyfile, which contains the key used to encrypt the key storage, must be created:

# mkdir -p /etc/yubico/yhsm
# touch /etc/yubico/yhsm/keys.json
# chown yhsm-ksmsrv /etc/yubico/yhsm/keys.json
# chmod 400 /etc/yubico/yhsm/keys.json

The keyfile can now be opened with any editor and filled with a key. As the file extension suggests, the keyfile is a JSON file. In the following example, the key, which is located in “Slot” 1, is 000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f.

{
  "1": "000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f"
}

000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f is the hexadecimal representation of a 32-byte (i.e. 256-bit) example key. For productive use, a reasonable key created from random data should of course be used. The program openssl can be used for this:

$ openssl rand -hex 32

To configure the KSM, only a few parameters need to be adjusted in the file /etc/default/yhsm-yubikey-ksm:

YHSM_KSM_ENABLE="true"
YHSM_KSM_DEVICE="/etc/yubico/yhsm/keys.json"
YHSM_KSM_KEYHANDLES="1"

The parameter YHSM_KSM_ENABLE="true" ensures that the KSM is started automatically when the system starts. Instead of the hardware device configured by default, the keyfile just created and the key with the ID 1 from it are used.

Finally, the KSM is restarted with the changed configuration using systemctl restart yhsm-yubikey-ksm.

Validation Server – VAL

As already mentioned, the Validation Server is a web application written in PHP, the operation of which requires a web server and an RDBMS. While the package dependencies prescribe an Apache web server, you have the choice between MySQL, MariaDB and PostgreSQL® for the database. Going by the dependency order of the package, apt would install MySQL here, so to give PostgreSQL® preference, it must be listed explicitly:

# apt-get install yubikey-val postgresql php-pgsql

The configuration of the Validation Server in /etc/yubico/val/ is set by default to the Key Storage Module running locally on the same system, so no further intervention is necessary.

So that the PAM module can authenticate itself later when making requests to the Validation Server, credentials in the form of an ID and a key must still be created:

# ykval-gen-clients
2,cOyFHRvltNYDjx74JE9jOBcdPhI=

This step corresponds to the registration in the YubiCloud described in the previous article. The output of the command consists of two parts: the ID, followed by a comma and the key in Base64 encoding.

If the Validation Service is to be used from several machines, it is recommended to create separate credentials for each machine. To have several ID-key pairs created, ykval-gen-clients is simply called with the desired number:

# ykval-gen-clients 5
3,6WP1q1ohy92G/BNLMNjpHpVeL1Q=
4,InVj6Nbqc9FQN1EgtbsedtuYT9I=
5,p/R/hHx6E3Kf3Qc+671O46laNec=
6,/FRX6YqioHSap+zoM+LkWp88TFU=
7,XxEp4zoHSi9zTDSngvxnGiD4V1A=

To avoid losing track, you should note which credentials were used for which computer. Alternatively, ykval-gen-clients with the switch --notes allows you to create a note:

# ykval-gen-clients --notes=OpenVPN
8,rZKpqc5WcU4OB4Nv551+U3lj2tk=

The program ykval-export-clients outputs all credentials stored in the database, including notes, to the standard output:

# ykval-export-clients
1,1,1619686861,ua//VH5rvFoxrFHGhLZBz/RO3m0=,,,
…
8,1,1619687606,pkodRX1F77Ck7bvS9MzpXE5IfxA=,,OpenVPN,

Here you can see that credentials with ID 1 were already created during the installation of the package. Of course, instead of creating your own ID, you can also read this from the database and use it to set up the PAM module.

PAM

As the last change to the system, the PAM module pam_yubico must be installed and entered in the corresponding service configuration.

# apt-get install libpam-yubico

As in the previous articles, OpenVPN should once again benefit from 2FA with Yubico OTP. For this purpose, the file /etc/pam.d/openvpn is created or adapted:

auth sufficient pam_yubico.so id=2 key=cOyFHRvltNYDjx74JE9jOBcdPhI= urllist=http://localhost/wsapi/2.0/verify authfile=/etc/yubikey_mappings
account required pam_permit.so

The values specified in the above call of ykval-gen-clients or ykval-export-clients are used as values for id and key. The parameter urllist receives the URL of the WSAPI of the validation service, which in this case runs on the same computer.

As with the use of the YubiCloud, an authfile must be specified again this time – a file that contains the mappings of user names to Public IDs. It is created later, after the keys have been generated.

The configuration of the OpenVPN service is carried out as described in the article Two-Factor Authentication for OpenSSH and OpenVPN. On the server side, the supplied OpenVPN-PAM plugin must be loaded in the configuration:

plugin /usr/lib/openvpn/openvpn-plugin-auth-pam.so openvpn

On the client side, only the option auth-user-pass is added to the existing configuration, so that the user is asked for a user name and password (here: OTP) when establishing a connection.

Key Management

So that YubiKeys can be used with your own validation service, they must be programmed with a new key, the shared secret. These keys are created in the KSM, read out from it and then written to the YubiKey.

As the shared secret programmed on the YubiKey at the factory cannot be read out, it is of no use for a self-hosted validation service.

Generation in the KSM

To generate a series of keys in the Key Storage Module, the command yhsm-generate-keys is used:

# yhsm-generate-keys -D /etc/yubico/yhsm/keys.json --key-handle 1 --start-public-id credtivccccc -c 10
output dir : /var/cache/yubikey-ksm/aeads
keys to generate : 10
key handles : {1: '1'}
start public_id : 13412510728192 (0xc32d7f00000)
YHSM device : /etc/yubico/yhsm/keys.json

Generating 10 keys

Done

The above command creates 10 (-c) keys, starting with the ID credtivccccc (--start-public-id) and uses the key with the ID 1 (--key-handle), which is in the file /etc/yubico/yhsm/keys.json (-D) for encryption. The keys are stored as described above under /var/cache/yubikey-ksm/aeads.

The output gives a brief overview of the parameters used, the simple Done indicates the successful creation and storage of the credentials.

Caution: if the above command is called several times, existing keys with the same ID will be overwritten without prompting!

With the help of the command yhsm-decrypt-aead, the keys just created can now be read out from the KSM:

# yhsm-decrypt-aead --aes-key 000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f --format yubikey-csv /var/cache/yubikey-ksm/aeads/
1,credtivccccf,47072c411963,1feff43b2d2b529c697d9db0849c9594,000000000000,,,,,
2,credtivccccc,512a73c09e98,d6e07a6def46cee722be21b7c2f35aec,000000000000,,,,,
3,credtivcccce,b491988426de,a72669341ab2a7d2acecec91c2fa0efb,000000000000,,,,,
4,credtivcccci,fccc5e1dcfcf,b0b14a2454c6d2a54bd2351f09d19d6e,000000000000,,,,,
5,credtivccccb,8a0b3916582f,a031f201920f6204a38b239836486bbf,000000000000,,,,,
6,credtivccccj,b9dd85895291,e04d79d45ff80438c744f2a8deec4a15,000000000000,,,,,
7,credtivccccg,a5213cab8e9c,f20acb5646de4282f21ef12b65c6a082,000000000000,,,,,
8,credtivcccch,73e9c1fa06b9,4c9d121e432a2fbd14b4a5d96f3b9d8f,000000000000,,,,,
9,credtivccccd,0695db026eb8,90779c79b363b7dbe54a9c3012e688e5,000000000000,,,,,
10,credtivcccck,ddd42451acb3,f5803057ea519149041be830509b7b2a,000000000000,,,,,

The AES key created during the setup of the KSM is specified here as --aes-key; the argument --format yubikey-csv ensures that the credentials are output as comma-separated values instead of in raw format. The last argument specifies the storage location of the AEAD in the file system.

Programming the YubiKey

A line in the above output of the command yhsm-decrypt-aead contains the credentials for one YubiKey in several comma-separated fields: in addition to the serial number (field 1), these are the Public ID (field 2), the Private UID (field 3) and the actual AES key (field 4). All other fields are not used in our case.

The entry in line 1 therefore contains the Public ID credtivccccf with the Private UID 47072c411963 and the AES key 1feff43b2d2b529c697d9db0849c9594.

These credentials can now be written to a YubiKey. The program ykpersonalize is a powerful command line tool for configuring YubiKeys and is located in the package yubikey-personalization on Debian.

If there is already a configuration in slot 1 (-1) of the Yubikey that should not be overwritten, you can instead write to slot 2 using -2. The call ykpersonalize -x swaps the contents of slot 1 and slot 2 of a YubiKey.

Unfortunately, the tool ykpersonalize uses different terms for the components of a credential: the Public ID becomes the fixed part and the Private UID becomes the uid. The following call writes the above credentials to slot 1 of a plugged-in YubiKey.

$ ykpersonalize -1 -o fixed=credtivccccf -o uid=47072c411963 -a 1feff43b2d2b529c697d9db0849c9594
Firmware version 5.1.2 Touch level 1287 Program sequence 3

Configuration data to be written to key configuration 1:

fixed: m:credtivccccf
uid: 47072c411963
key: h:1feff43b2d2b529c697d9db0849c9594
acc_code: h:000000000000
ticket_flags: APPEND_CR
config_flags:
extended_flags:

Commit? (y/n) [n]: 

It should be noted that the fixed part and the uid are passed as key-value pairs using -o, while the AES key is passed directly using -a.

If the query Commit? is answered in the affirmative, the displayed configuration is written to the YubiKey, which from then on outputs a Yubico OTP created with the new credentials whenever its button is pressed.
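At this point, the whole chain can also be checked by hand against the validation service. The following is a hypothetical test, assuming the client ID 1 created during the package installation and the local WSAPI URL used above; the nonce is an arbitrary random string of 16 to 40 characters that a genuine response echoes back:

$ OTP="<output of a YubiKey button press>"
$ curl "http://localhost/wsapi/2.0/verify?id=1&nonce=0123456789abcdef&otp=$OTP"

A response containing status=OK confirms that the Validation Server and the Key Storage Module are working together correctly.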

If you want to program several YubiKeys, simply use the next of the generated credentials in each further call of ykpersonalize. All commands and tools involved work with CSV files or stdin/stdout, so recurring processes can be automated nicely with a bash script or similar.
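A minimal sketch of such a script, assuming the yhsm-decrypt-aead output from above has been saved to a file keys.csv (the file name and the pause between keys are our own additions):

#!/bin/bash
# program one YubiKey per CSV line: seqno,public_id,private_uid,aes_key,...
while IFS=, read -r seq public_id private_uid aes_key _; do
    echo "Insert YubiKey #$seq and press Enter"
    read -r < /dev/tty
    # -y answers the Commit? prompt automatically
    ykpersonalize -1 -y -o "fixed=$public_id" -o "uid=$private_uid" -a "$aes_key"
done < keys.csv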

As an alternative to the approach described here, the YubiKey Personalization Tool from the package yubikey-personalization-gui offers the possibility to program several YubiKeys in a row. To do this, activate the option Program Multiple YubiKeys in the GUI under Yubico OTP → Advanced. In order to store the credentials of the YubiKeys programmed this way in the KSM, the log file configuration_log_hmac.csv offered for saving after programming must first be adapted; the credentials contained therein can then be imported into the KSM with the program yhsm-import-keys.

According to the man page, yhsm-import-keys expects a CSV file with the following structure:

# ykksm 1
seqno, public id, private uid, AES key,,,,
…

The log file of the YubiKey Personalization Tool already contains the fields public id, private uid, and AES key in the correct order as fields 4-6. The following awk script log2ksm.awk extracts these fields from the file, prefixes each entry with its line number as a sequence number, and outputs the entries line by line after the mandatory header # ykksm 1:

#!/usr/bin/awk -f

BEGIN {
  FS=","
  printf("# ykksm 1\n")
}

/^Yubico OTP/ {
  printf("%d,%s,%s,%s,,,,
", NR, $4, $5, $6)
}

The command to convert the file configuration_log_hmac.csv and save the result as yubikey_secrets.csv is:

$ ./log2ksm.awk configuration_log_hmac.csv  >  yubikey_secrets.csv

The generated file can then be copied to the machine where the KSM is running, and its entries can be imported into it with the following command:

# yhsm-import-keys -D /etc/yubico/yhsm/keys.json --key-handle 1  <  yubikey_secrets.csv
output dir : /var/cache/yubikey-ksm/aeads
key handles : {1: '1'}
YHSM device : /etc/yubico/yhsm/keys.json


Done

Here too, Done indicates that the credentials have been successfully imported.

PAM

So that PAM can map received Public IDs to user accounts during authentication, the authfile configured above must still be created under /etc/yubikey_mappings. It contains, per line, a username and its assigned YubiKey Public IDs, separated by colons. If the newly created YubiKey with the Public ID credtivccccf is to be used by the user bob, the authfile must contain the following line:

bob:credtivccccf

Mappings for further user accounts are configured accordingly in separate lines.
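For example, a hypothetical authfile assigning one YubiKey to bob and two to alice could look like this; pam_yubico accepts several colon-separated Public IDs per user:

bob:credtivccccf
alice:credtivccccc:credtivcccce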

As an alternative to using an authfile, the mappings can also be stored in an LDAP directory service. A separate article will be dedicated to this variant.

Demo

If all steps have been carried out successfully up to this point, the OpenVPN client will ask for a username and a password, i.e. the OTP, when establishing a connection. While the username still has to be entered manually, a press of the button on the YubiKey is sufficient to enter the OTP and establish the connection. As usual, only asterisks are displayed instead of the individual characters of the OTP.

# openvpn client.conf
...
Enter Auth Username: bob
Enter Auth Password: ********************************************
...

Conclusion

The installation and operation of your own validation service and Key Storage Module involves some effort. The interaction of the components is hard to trace (which complicates troubleshooting), and the available tools are sometimes unintuitive or even inconsistent in their terminology (which complicates understanding).

However, those who do not shy away from the effort and are willing to delve deeper into the subject can ultimately enjoy the full comfort of Yubico OTP while still maintaining control over all components.

Support

If you require support with the configuration or use of two-factor authentication, our Open Source Support Center is at your disposal – if desired, also 24 hours a day, 365 days a year.

The apt.postgresql.org repository originally started with the two architectures amd64 and i386 (64- and 32-bit x86). In September 2016, ppc64el (POWER) was added. Over time, there have been repeated requests for “arm” support, which usually meant the Raspberry Pi. However, these are mostly 32-bit only, and the widespread “armhf” Raspbian port unfortunately targets only ARMv6, an older hardware revision.

Through HUAWEI Cloud Services, an “arm64” build machine has now been made available to the PostgreSQL® community; arm64 is a modern processor architecture that is also suitable for PostgreSQL® servers. We set up the machine and expanded the apt.postgresql.org repository to include this architecture. As supported distributions, we chose Debian buster (stable), bullseye (testing) and sid (unstable) as well as Ubuntu bionic (18.04) and focal (20.04).

The build machine is very powerful: we built all the packages for the new architecture in just a few days. Very few arm-specific problems occurred, which speaks for the stability of the Linux port on this architecture.

There is nothing standing in the way of using PostgreSQL® on arm64, on Debian or Ubuntu.
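To add the repository on an arm64 machine, the usual apt.postgresql.org setup applies unchanged; a hypothetical sources.list entry for Ubuntu focal, following the <codename>-pgdg suite naming used by the repository:

deb http://apt.postgresql.org/pub/repos/apt focal-pgdg main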

In parallel, we have expanded the repository to include support for the new Ubuntu LTS version: focal (20.04).
This distribution can now be used with support for five years until April 2025.

The credativ PostgreSQL® Competence Center is of course always happy to answer questions about the use of PostgreSQL® on arm and other architectures on Debian, Ubuntu and other operating systems. Please contact us!

This article was originally written by Christoph Berg.

Patroni is a clustering solution for PostgreSQL® that is getting more and more popular in the cloud and Kubernetes sector due to its operator pattern and integration with Etcd or Consul. Some time ago we wrote a blog post about the integration of Patroni into Debian. Recently, the vip-manager project, which is closely related to Patroni, has been uploaded to Debian by us. In the following, we present vip-manager and how we integrated it into Debian.

To recap, Patroni uses a distributed consensus store (DCS) for leader election and failover. The current cluster leader periodically updates its leader key in the DCS. As soon as the key cannot be updated by Patroni for whatever reason, it becomes stale. A new leader election is then initiated among the remaining cluster nodes.

PostgreSQL Client-Solutions for High-Availability

From the user’s point of view it needs to be ensured that the application is always connected to the leader, as no write transactions are possible on the read-only standbys. Conventional high-availability solutions like Pacemaker utilize virtual IPs (VIPs) that are moved to the primary node in the case of a failover.

For Patroni, such a mechanism did not exist so far. Usually, HAProxy (or a similar solution) is used which does periodic health-checks on each node’s Patroni REST-API and routes the client requests to the current leader.

An alternative is client-based failover (available since PostgreSQL 10), where all cluster members are configured in the client connection string. After a connection failure, the client tries each remaining cluster member in turn until it reaches a new primary.
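For illustration, a hypothetical libpq connection string for a three-node cluster; since PostgreSQL 10, the parameter target_session_attrs=read-write makes the client skip nodes that only accept read-only sessions:

$ psql "host=pg1,pg2,pg3 port=5432 dbname=app target_session_attrs=read-write"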

vip-manager

A new and comfortable approach to client failover is vip-manager. It is a service written in Go that gets started on all cluster nodes and connects to the DCS. If the local node owns the leader-key, vip-manager starts the configured VIP. In case of a failover, vip-manager removes the VIP on the old leader and the corresponding service on the new leader starts it there. The clients are configured for the VIP and will always connect to the cluster leader.

Debian-Integration of vip-manager

For Debian, the pg_createconfig_patroni program from the Patroni package has been adapted so that it can now create a vip-manager configuration:

pg_createconfig_patroni 11 test --vip=10.0.3.2

Similar to Patroni, we start the service for each instance:

systemctl start vip-manager@11-test

The output of patronictl shows that pg1 is the leader:

+---------+--------+------------+--------+---------+----+-----------+
| Cluster | Member |    Host    |  Role  |  State  | TL | Lag in MB |
+---------+--------+------------+--------+---------+----+-----------+
| 11-test |  pg1   | 10.0.3.247 | Leader | running |  1 |           |
| 11-test |  pg2   | 10.0.3.94  |        | running |  1 |         0 |
| 11-test |  pg3   | 10.0.3.214 |        | running |  1 |         0 |
+---------+--------+------------+--------+---------+----+-----------+

In the journal of pg1 it can be seen that the VIP has been configured:

Jan 19 14:53:38 pg1 vip-manager[9314]: 2020/01/19 14:53:38 IP address 10.0.3.2/24 state is false, desired true
Jan 19 14:53:38 pg1 vip-manager[9314]: 2020/01/19 14:53:38 Configuring address 10.0.3.2/24 on eth0
Jan 19 14:53:38 pg1 vip-manager[9314]: 2020/01/19 14:53:38 IP address 10.0.3.2/24 state is true, desired true

If LXC containers are used, one can also see the VIP in the output of lxc-ls -f:

NAME    STATE   AUTOSTART GROUPS IPV4                 IPV6 UNPRIVILEGED
pg1     RUNNING 0         -      10.0.3.2, 10.0.3.247 -    false
pg2     RUNNING 0         -      10.0.3.94            -    false
pg3     RUNNING 0         -      10.0.3.214           -    false

The vip-manager packages are available for Debian testing (bullseye) and unstable, as well as for the upcoming 20.04 LTS Ubuntu release (focal) in the official repositories. For Debian stable (buster), as well as for Ubuntu 19.04 and 19.10, packages are available at apt.postgresql.org maintained by credativ, along with the updated Patroni packages with vip-manager integration.

Switchover Behaviour

In case of a planned switchover, e.g. pg2 becomes the new leader:

# patronictl -c /etc/patroni/11-test.yml switchover --master pg1 --candidate pg2 --force
Current cluster topology
+---------+--------+------------+--------+---------+----+-----------+
| Cluster | Member |    Host    |  Role  |  State  | TL | Lag in MB |
+---------+--------+------------+--------+---------+----+-----------+
| 11-test |  pg1   | 10.0.3.247 | Leader | running |  1 |           |
| 11-test |  pg2   | 10.0.3.94  |        | running |  1 |         0 |
| 11-test |  pg3   | 10.0.3.214 |        | running |  1 |         0 |
+---------+--------+------------+--------+---------+----+-----------+
2020-01-19 15:35:32.52642 Successfully switched over to "pg2"
+---------+--------+------------+--------+---------+----+-----------+
| Cluster | Member |    Host    |  Role  |  State  | TL | Lag in MB |
+---------+--------+------------+--------+---------+----+-----------+
| 11-test |  pg1   | 10.0.3.247 |        | stopped |    |   unknown |
| 11-test |  pg2   | 10.0.3.94  | Leader | running |  1 |           |
| 11-test |  pg3   | 10.0.3.214 |        | running |  1 |         0 |
+---------+--------+------------+--------+---------+----+-----------+

The VIP has now been moved to the new leader:

NAME    STATE   AUTOSTART GROUPS IPV4                 IPV6 UNPRIVILEGED
pg1     RUNNING 0         -      10.0.3.247          -    false
pg2     RUNNING 0         -      10.0.3.2, 10.0.3.94 -    false
pg3     RUNNING 0         -      10.0.3.214          -    false

This can also be seen in the journals, both from the old leader:

Jan 19 15:35:31 pg1 patroni[9222]: 2020-01-19 15:35:31,634 INFO: manual failover: demoting myself
Jan 19 15:35:31 pg1 patroni[9222]: 2020-01-19 15:35:31,854 INFO: Leader key released
Jan 19 15:35:32 pg1 vip-manager[9314]: 2020/01/19 15:35:32 IP address 10.0.3.2/24 state is true, desired false
Jan 19 15:35:32 pg1 vip-manager[9314]: 2020/01/19 15:35:32 Removing address 10.0.3.2/24 on eth0
Jan 19 15:35:32 pg1 vip-manager[9314]: 2020/01/19 15:35:32 IP address 10.0.3.2/24 state is false, desired false

As well as from the new leader pg2:

Jan 19 15:35:31 pg2 patroni[9229]: 2020-01-19 15:35:31,881 INFO: promoted self to leader by acquiring session lock
Jan 19 15:35:31 pg2 vip-manager[9292]: 2020/01/19 15:35:31 IP address 10.0.3.2/24 state is false, desired true
Jan 19 15:35:31 pg2 vip-manager[9292]: 2020/01/19 15:35:31 Configuring address 10.0.3.2/24 on eth0
Jan 19 15:35:31 pg2 vip-manager[9292]: 2020/01/19 15:35:31 IP address 10.0.3.2/24 state is true, desired true
Jan 19 15:35:32 pg2 patroni[9229]: 2020-01-19 15:35:32,923 INFO: Lock owner: pg2; I am pg2

As one can see, the VIP is moved within one second.

Updated Ansible Playbook

Our Ansible playbook for the automated setup of a three-node cluster on Debian has also been updated and can now configure a VIP if so desired:

# ansible-playbook -i inventory -e vip=10.0.3.2 patroni.yml

Questions and Help

Do you have any questions or need help? Feel free to write to info@credativ.com.

Yesterday, the fourth beta of the upcoming PostgreSQL® major version 12 was released.

Compared to its predecessor PostgreSQL® 11, there are many new features:

  • Performance improvements for indexes: btree indexes now manage space more efficiently. The REINDEX command now also supports CONCURRENTLY, which was previously only possible with new indexes.
  • WITH queries are now inlined into the main query and can thus be optimized much better by the planner. Previously, WITH queries were always executed independently.
  • The native partitioning was further improved. Foreign keys can now also reference partitioned tables. Maintenance commands such as ATTACH PARTITION no longer require an exclusive table lock.
  • The support of page checksums and the tool pg_checksums was further improved, also with substantial cooperation by credativ.
  • It is now possible to integrate additional storage engines. The “zheap” engine, which is still under development and promises more compact data storage with less bloat, will build on this interface.

Of course, PostgreSQL® 12 will be tested using sqlsmith, the SQL “fuzzer” from our colleague Andreas Seltenreich. Numerous bugs in different PostgreSQL® versions have been found with sqlsmith by using randomly generated SQL queries.

Debian and Ubuntu packages for PostgreSQL® 12 are going to be published on apt.postgresql.org with credativ’s help. This work will be handled by our colleague Christoph Berg.

The release of PostgreSQL® 12 is expected in the coming weeks.

Patroni is a PostgreSQL high availability solution with a focus on containers and Kubernetes. Until recently, the available Debian packages had to be configured manually and did not integrate well with the rest of the distribution. For the upcoming Debian 10 “Buster” release, the Patroni packages have been integrated into Debian’s standard PostgreSQL framework by credativ. They now allow for an easy setup of Patroni clusters on Debian or Ubuntu.

Patroni employs a “Distributed Consensus Store” (DCS) like Etcd, Consul or Zookeeper in order to reliably run a leader election and orchestrate automatic failover. It further allows for scheduled switchovers and easy cluster-wide changes to the configuration. Finally, it provides a REST interface that can be used together with HAProxy in order to build a load balancing solution. Due to these advantages Patroni has gradually replaced Pacemaker as the go-to open-source project for PostgreSQL high availability.

However, many of our customers run PostgreSQL on Debian or Ubuntu systems and so far Patroni did not integrate well into those. For example, it does not use the postgresql-common framework and its instances were not displayed in pg_lsclusters output as usual.

Integration into Debian

In collaboration with Patroni lead developer Alexander Kukushkin from Zalando, the Debian Patroni package has been integrated into the postgresql-common framework to a large extent over the last months. This required changes both in Patroni itself and additional programs in the Debian package. The current version 1.5.5 of Patroni contains all these changes and is now available in Debian “Buster” (testing), making it easy to set up Patroni clusters.

The packages are also available on apt.postgresql.org and thus installable on Debian 9 “Stretch” and Ubuntu 18.04 “Bionic Beaver” LTS for any PostgreSQL version from 9.4 to 11.

The most important part of the integration is the automatic generation of a suitable Patroni configuration with the pg_createconfig_patroni command. It is run similar to pg_createcluster with the desired PostgreSQL major version and the instance name as parameters:

pg_createconfig_patroni 11 test

This invocation creates a file /etc/patroni/11-test.yml, using the DCS configuration from /etc/patroni/dcs.yml which has to be adjusted according to the local setup. The rest of the configuration is taken from the template /etc/patroni/config.yml.in which is usable in itself but can be customized by the user according to their needs. Afterwards the Patroni instance is started via systemd similar to regular PostgreSQL instances:

systemctl start patroni@11-test
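The DCS configuration in /etc/patroni/dcs.yml mentioned above might, for a single etcd node, look like the following minimal sketch (the address is an assumption to be adjusted to the local setup):

etcd:
  hosts: 10.0.3.1:2379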

A simple 3-node Patroni cluster can be created and started with the following few commands, where the nodes pg1, pg2 and pg3 are considered to be hostnames and the local file dcs.yml contains the DCS configuration:


for i in pg1 pg2 pg3; do ssh $i 'apt -y install postgresql-common'; done
for i in pg1 pg2 pg3; do ssh $i 'sed -i "s/^#create_main_cluster = true/create_main_cluster = false/" /etc/postgresql-common/createcluster.conf'; done
for i in pg1 pg2 pg3; do ssh $i 'apt -y install patroni postgresql'; done
for i in pg1 pg2 pg3; do scp ./dcs.yml $i:/etc/patroni; done
for i in pg1 pg2 pg3; do ssh $i 'pg_createconfig_patroni 11 test && systemctl start patroni@11-test'; done

Afterwards, you can get the state of the Patroni cluster via

ssh pg1 'patronictl -c /etc/patroni/11-test.yml list'
+---------+--------+------------+--------+---------+----+-----------+
| Cluster | Member |    Host    |  Role  |  State  | TL | Lag in MB |
+---------+--------+------------+--------+---------+----+-----------+
| 11-test |  pg1   | 10.0.3.111 | Leader | running |  1 |           |
| 11-test |  pg2   | 10.0.3.41  |        | stopped |    |   unknown |
| 11-test |  pg3   | 10.0.3.46  |        | stopped |    |   unknown |
+---------+--------+------------+--------+---------+----+-----------+

Leader election has happened and pg1 has become the primary. It created its instance with the Debian-specific pg_createcluster_patroni program that runs pg_createcluster in the background. Then the two other nodes clone from the leader using the pg_clonecluster_patroni program which sets up an instance using pg_createcluster and then runs pg_basebackup from the primary. After that, all nodes are up and running:

+---------+--------+------------+--------+---------+----+-----------+
| Cluster | Member |    Host    |  Role  |  State  | TL | Lag in MB |
+---------+--------+------------+--------+---------+----+-----------+
| 11-test |  pg1   | 10.0.3.111 | Leader | running |  1 |         0 |
| 11-test |  pg2   | 10.0.3.41  |        | running |  1 |         0 |
| 11-test |  pg3   | 10.0.3.46  |        | running |  1 |         0 |
+---------+--------+------------+--------+---------+----+-----------+

The well-known Debian postgresql-common commands work as well:

ssh pg1 'pg_lsclusters'
Ver Cluster Port Status Owner    Data directory                 Log file
11  test    5432 online postgres /var/lib/postgresql/11/test    /var/log/postgresql/postgresql-11-test.log

Failover Behaviour

If the primary is abruptly shutdown, its leader token will expire after a while and Patroni will eventually initiate failover and a new leader election:

+---------+--------+-----------+------+---------+----+-----------+
| Cluster | Member |    Host   | Role |  State  | TL | Lag in MB |
+---------+--------+-----------+------+---------+----+-----------+
| 11-test |  pg2   | 10.0.3.41 |      | running |  1 |         0 |
| 11-test |  pg3   | 10.0.3.46 |      | running |  1 |         0 |
+---------+--------+-----------+------+---------+----+-----------+
[...]
+---------+--------+-----------+--------+---------+----+-----------+
| Cluster | Member |    Host   |  Role  |  State  | TL | Lag in MB |
+---------+--------+-----------+--------+---------+----+-----------+
| 11-test |  pg2   | 10.0.3.41 | Leader | running |  2 |         0 |
| 11-test |  pg3   | 10.0.3.46 |        | running |  1 |         0 |
+---------+--------+-----------+--------+---------+----+-----------+
[...]
+---------+--------+-----------+--------+---------+----+-----------+
| Cluster | Member |    Host   |  Role  |  State  | TL | Lag in MB |
+---------+--------+-----------+--------+---------+----+-----------+
| 11-test |  pg2   | 10.0.3.41 | Leader | running |  2 |         0 |
| 11-test |  pg3   | 10.0.3.46 |        | running |  2 |         0 |
+---------+--------+-----------+--------+---------+----+-----------+

The old primary will rejoin the cluster as standby once it is restarted:

+---------+--------+------------+--------+---------+----+-----------+
| Cluster | Member |    Host    |  Role  |  State  | TL | Lag in MB |
+---------+--------+------------+--------+---------+----+-----------+
| 11-test |  pg1   | 10.0.3.111 |        | running |    |   unknown |
| 11-test |  pg2   | 10.0.3.41  | Leader | running |  2 |         0 |
| 11-test |  pg3   | 10.0.3.46  |        | running |  2 |         0 |
+---------+--------+------------+--------+---------+----+-----------+
[...]
+---------+--------+------------+--------+---------+----+-----------+
| Cluster | Member |    Host    |  Role  |  State  | TL | Lag in MB |
+---------+--------+------------+--------+---------+----+-----------+
| 11-test |  pg1   | 10.0.3.111 |        | running |  2 |         0 |
| 11-test |  pg2   | 10.0.3.41  | Leader | running |  2 |         0 |
| 11-test |  pg3   | 10.0.3.46  |        | running |  2 |         0 |
+---------+--------+------------+--------+---------+----+-----------+

If a clean rejoin is not possible due to additional transactions on the old timeline the old primary gets re-cloned from the current leader. In case the data is too large for a quick re-clone, pg_rewind can be used. In this case a password needs to be set for the postgres user and regular database connections (as opposed to replication connections) need to be allowed between the cluster nodes.
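A hypothetical excerpt of the corresponding Patroni settings; use_pg_rewind belongs to the dynamic configuration seeded from the bootstrap section, while the superuser credentials (the password here is a placeholder) go into each node's YAML:

bootstrap:
  dcs:
    postgresql:
      use_pg_rewind: true

postgresql:
  authentication:
    superuser:
      username: postgres
      password: secretpassword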

Creation of additional Instances

It is also possible to create further clusters with pg_createconfig_patroni, one can either assign a PostgreSQL port explicitly via the --port option, or let pg_createconfig_patroni assign the next free port as is known from pg_createcluster:

for i in pg1 pg2 pg3; do ssh $i 'pg_createconfig_patroni 11 test2 && systemctl start patroni@11-test2'; done
ssh pg1 'patronictl -c /etc/patroni/11-test2.yml list'
+----------+--------+-----------------+--------+---------+----+-----------+
| Cluster  | Member |       Host      |  Role  |  State  | TL | Lag in MB |
+----------+--------+-----------------+--------+---------+----+-----------+
| 11-test2 |  pg1   | 10.0.3.111:5433 | Leader | running |  1 |         0 |
| 11-test2 |  pg2   |  10.0.3.41:5433 |        | running |  1 |         0 |
| 11-test2 |  pg3   |  10.0.3.46:5433 |        | running |  1 |         0 |
+----------+--------+-----------------+--------+---------+----+-----------+

Ansible Playbook

In order to easily deploy a 3-node Patroni cluster we have created an Ansible playbook on Github. It automates the installation and configuration of PostgreSQL and Patroni on the three nodes, as well as the DCS server on a fourth node.

Questions and Help

Do you have any questions or need help? Feel free to write to info@credativ.com.

Last weekend saw the start of DebCamp, the pre-event of DebConf, the largest Debian conference worldwide.

This year's DebConf will take place in Taiwan from July 29th to August 5th.

After taking part in this year's MiniDebConf in Hamburg, we are now also going to Taiwan to attend DebConf, not least because credativ is sponsoring the event this year.

Even though the flights last 16 hours on average, our colleagues' anticipation is already great, and we are very curious what the event has in store for its visitors.

Lectures, Talks and BoFs

Meanwhile the packed lecture program has been published. There will be around 90 presentations in 3 tracks on the following 8 topics:

  • Debian blends
  • Cloud and containers
  • Debian in science
  • Embedded
  • Packaging, policy, and Debian infrastructure
  • Security
  • Social context
  • Systems administration, automation and orchestration

Many of the presentations can even be streamed.

We are especially looking forward to the SPI BoF (Software in the Public Interest – Birds of a Feather) and the DSA BoF (Debian System Administrators – Birds of a Feather), in which our colleague and Debian sysadmin Martin Zobel-Helas participates.

Job Fair

For us at credativ, the Job Fair is of particular importance. Here we meet potential new colleagues and hopefully many visitors interested in our company. If you want to have a chat with our colleagues: Noël, Sven and Martin will be available for you at the Job Fair and on all other days of the event.

In addition to exciting discussions, we of course hope for one or two interesting applications that we will receive afterwards.

The Job Fair takes place one day before DebConf18, on 28 July.

Daytrip and other events

Of course there will also be the “Daytrip” this year, which will show the conference visitors the surroundings and the country. Participants have a whole range of options to choose from. Whether you want to explore the city, go hiking, or hold a Taiwanese tea ceremony, there should be something for everyone.

In addition to the daily breakfast, lunch, coffee break and dinner, there will be a cheese-and-wine party on Monday, 30 July, and a conference dinner on Thursday, 2 August. Our colleagues will attend both events and hopefully have some interesting conversations.

Debian and credativ

The free operating system Debian is one of the most popular Linux distributions and has thousands of users worldwide.

Besides Martin Zobel-Helas, credativ employs many members of the Debian project. Our managing director, Dr. Michael Meskes, was actively involved in Debian even before credativ GmbH was founded in 1999. This has resulted in a close and long-standing bond between credativ, the Debian project, and its community.

This article was originally written by Philip Haas.

Debian 9 “Stretch”, the next version of Debian, is about to be released, and after the full freeze on February 5th everyone is trying their best to fix the last remaining bugs.

Upon entering the final phase of development in February, the testing version was “frozen”, so that no more packages can be added or removed without the approval of the release team.

However, Stretch still has some bugs left which need to be resolved before the release date, especially the so-called release-critical (RC) bugs. For this purpose, Debian developers worldwide host numerous meet-ups.

These meet-ups are a long-standing tradition and are lovingly called “Bug Squashing Parties”. Despite the cute name, these events usually turn out to be some of the most focused, intense and hard-working days in the life cycle of a new Debian version. Pressured by the upcoming release date, everyone gets together to get rid of the nasty release-critical bugs and focus on unfinished packages.

This weekend, from the 17th to the 19th of March, the Debian developers from credativ are hosting a Bug Squashing Party in the German Open Source Support Center in Mönchengladbach.

The Open Source Support Center likely employs the largest number of Debian developers in one place in Europe. credativ GmbH is therefore providing the location and technical infrastructure for everyone who has decided to join the Bug Squashing Party.

We hope that this year’s meeting is going to be as successful as those in previous years. Developers from all neighbouring countries took part in past events, and some even found their future employer.

Coordinating the event are: Martin Zobel-Helas “zobel” (Debian system administrator) and Alexander Wirt “formorer” (Debian Quality Assurance).

If you would like to participate, feel free to sign up!

We are looking forward to your visit.

Here is the announcement on the mailing-list:

https://lists.debian.org/debian-devel-announce/2017/02/msg00006.html

Here is the entry in the Debian wiki:

https://wiki.debian.org/BSP/2017/03/de/credativ

This article was originally written by Philip Haas.

In this blog, we describe the integration of Icinga2 into Graphite and Grafana on Debian.

What is Graphite

Graphite stores performance data over a configurable period. Services can send metrics to Graphite via a defined interface, which are then stored in a structured manner for the desired period. Possible examples of such metrics include CPU utilization or web server access numbers. Graphs can now be generated from the various metrics via Graphite’s integrated web interface. This allows us to detect and observe changes in values over different periods. A good example of such a trend analysis is disk space utilization. With the help of a trend graph, it is easy to see at what rate the space requirement is growing and approximately when a storage replacement will be necessary.

What is Grafana

Although Graphite offers its own web interface, it is not particularly attractive or flexible. This is where Grafana steps in.

Grafana is a frontend for various metric storage systems. For example, it supports Graphite, InfluxDB, and OpenTSDB. Grafana offers an intuitive interface for creating representative graphs from metrics. It also has a variety of functions to optimize the appearance and display of graphs. Subsequently, graphs can be grouped into dashboards. Parameterization of graphs is also possible. This also allows you to display only a graph from a specific host.

Installing Icinga2

At this point, only the installation required for Graphite is described. Current versions of Icinga2 packages for Debian can be obtained directly from the Debmon Project. The Debmon Project, run by official Debian package maintainers, provides current versions of various monitoring tools for Debian releases in a timely manner. To integrate these packages, the following commands are required:

# add debmon
cat  <<EOF  >/etc/apt/sources.list.d/debmon.list
deb http://debmon.org/debmon debmon-jessie main
EOF

# add debmon key
wget -O - http://debmon.org/debmon/repo.key 2>/dev/null | apt-key add -

# update repos
apt-get update

Next, we can install Icinga2:

apt-get install icinga2

Installing Graphite and Graphite-Web

After Icinga2 is installed, Graphite and Graphite-web can also be installed.

# install packages for icinga2 and graphite-web and carbon

apt-get install icinga2 graphite-web graphite-carbon libapache2-mod-wsgi apache2

Configuring Icinga2 with Graphite

Icinga2 must be configured to export all collected metrics to Graphite. The Graphite component that receives this data is called “Carbon”. In our example installation, Carbon runs on the same host as Icinga2 and also uses the default port. For this reason, no further configuration of Icinga2 is necessary; it is sufficient to enable the export.

This is done with the command icinga2 feature enable graphite.
Next, Icinga2 must be restarted: service icinga2 restart

If the Carbon server runs on a different host or a different port, the Icinga2 configuration can be adjusted in the file /etc/icinga2/features-enabled/graphite.conf. Details can be found in the Icinga2 documentation.
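For such a setup, a hypothetical graphite.conf could look like this; the host name is an assumption, and 2003 is Carbon's default plaintext port:

object GraphiteWriter "graphite" {
  host = "graphite.example.com"
  port = 2003
}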

If the configuration was successful, a number of files should appear shortly in /var/lib/graphite/whisper/icinga. If this is not the case, you should check the Icinga2 log file (located at /var/log/icinga2/icinga2.log).

Configuring Graphite-web

Grafana uses Graphite’s web frontend as an interface for the metrics stored by Graphite. For this reason, it is necessary to configure Graphite-web correctly. For performance reasons, we operate Graphite-web as a WSGI module. A number of configuration steps are required for this:

  1. First, we create a user database for Graphite-web. Since we will not have many users, we use sqlite as the backend for our user data at this point. For this purpose, we execute the following commands, which initialize the user database and assign it to the user under which the web frontend runs:
    graphite-manage syncdb
    chown _graphite:_graphite /var/lib/graphite/graphite.db
  2. Next, we activate the WSGI module in Apache: a2enmod wsgi
  3. For simplicity, the web interface should run in its own virtual host and on its own port. To ensure Apache also listens on this port, we add the line “Listen 8000” to the file “/etc/apache2/ports.conf“.
  4. The Graphite Debian package already provides an Apache configuration file that we can use for our purposes, with slight modifications. cp /usr/share/graphite-web/apache2-graphite.conf /etc/apache2/sites-available/graphite.conf To ensure the virtual host also uses port 8000, we must replace the line
    <VirtualHost *:80>
    with
    <VirtualHost *:8000>

  5. Then we activate the new virtual host via a2ensite graphite and restart Apache: systemctl restart apache2
  6. Graphite-web should now be accessible at http://YOURIP:8000/. If this is not the case, the Apache log files under “/var/log/apache2/” could provide valuable information.

Configuring Grafana

Grafana is currently not included in Debian. However, upstream offers an APT repository through which Grafana can be installed. Even though the repository refers to Wheezy, the packages also work on Debian Jessie.

The repository is only accessible via HTTPS. For this reason, HTTPS support for apt must first be installed: apt-get install apt-transport-https

Next, the repository can be integrated.

# add repo (package for wheezy works on jessie)
cat  <<EOF  >/etc/apt/sources.list.d/grafana.list
deb https://packagecloud.io/grafana/stable/debian/ wheezy main
EOF
 
# add key
curl -s https://packagecloud.io/gpg.key | sudo apt-key add -
 
# update repos
apt-get update

Subsequently, the package can be installed: apt-get install grafana. For Grafana to run, the service still needs to be enabled (systemctl enable grafana-server.service) and started (systemctl start grafana-server).

Grafana is now accessible at http://YOURIP:3000/. The default username and password in our example is “admin”. This password should, of course, be replaced with a secure password at the next opportunity.

Next, Grafana must be configured to use Graphite as a data source. For simplicity, the configuration is explained via a screencast.
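As an alternative to the screencast, the data source can also be created via Grafana's HTTP API. A hypothetical call, assuming the default admin credentials and the ports used above:

curl -X POST http://admin:admin@localhost:3000/api/datasources \
  -H 'Content-Type: application/json' \
  -d '{"name":"graphite","type":"graphite","url":"http://localhost:8000","access":"proxy","isDefault":true}'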


After successfully integrating Graphite as a data source, we can create our first graph. There is also a short screencast for this here.


Congratulations, you have now successfully installed and configured Icinga2, Graphite, and Grafana. For all further steps, please refer to the documentation of the respective projects.

Many Debian installations can be greatly simplified with the use of Preseeding and Netboot. Friedrich Weber, a school student on a work experience placement at our German office, has observed the process and captured it in the following howto:

Imagine the following situation: you find yourself with ten to twenty brand-new notebooks and the task of installing Debian on them and customising it to your own taste. It would hardly be great fun to manually perform the Debian installation and configuration on each notebook.

This is where Debian Preseed comes into play. The concept is simple: usually, whoever performs the installation is asked a number of questions during the process (e.g. language, partitioning, packages, boot loader). With Preseed, all of these questions can be answered in advance; only those not covered by the Preseed configuration remain for the Debian installer to ask. In the ideal case, these appear at the outset of the installation and are exactly the questions whose answers differ per target system and which the administrator must deal with manually – only once these have been dealt with can the installation run unattended.

Preseed works with a simple configuration file: preseed.cfg. It contains, as outlined above, the questions which must be answered during installation, in debconf format. Such a file consists of several lines, each of which defines a debconf configuration option – an answer to a question – for example:

d-i debian-installer/locale string de_DE.UTF-8

The first element of such a line is the name of the package being configured (d-i is an abbreviation for the debian-installer), the second element is the name of the option being set, the third element is the type of the option (here a string), and the rest is the value of the option.

In this example, we set the language to German with UTF-8 encoding. You can put lines like this together yourself, or, even simpler, generate them with the tool debconf-get-selections, which outputs the debconf options currently set on the local system.
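A minimal sketch of this approach; the output file name is our own choice, and the --installer switch additionally dumps the answers recorded by the installer where available (the tool requires the package debconf-utils):

$ debconf-get-selections --installer > preseed-draft.cfg
$ debconf-get-selections >> preseed-draft.cfg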

From this output you can choose your desired settings, adjust them if necessary, and copy them into preseed.cfg. Here is an example of a preseed.cfg:

d-i debian-installer/locale string de_DE.UTF-8
d-i debian-installer/keymap select de-latin1
d-i console-keymaps-at/keymap select de
d-i languagechooser/language-name-fb select German
d-i countrychooser/country-name select Germany
d-i console-setup/layoutcode string de_DE
d-i clock-setup/utc boolean true
d-i time/zone string Europe/Berlin
d-i clock-setup/ntp boolean true
d-i clock-setup/ntp-server string ntp1
 
tasksel tasksel/first multiselect standard, desktop, gnome-desktop, laptop

d-i pkgsel/include string openssh-client vim less rsync

In addition to language and timezone settings, tasks and packages to install are also selected with these options. With this file alone the installation will not yet run completely unattended, but it makes a good start.

Now onto the question of where Preseed pulls its configuration from. It is possible to use Preseed with CD and DVD images or USB sticks, but it is generally more comfortable to use a Debian netboot image: essentially an installer that is started across the network and that can also fetch its Preseed configuration from the network.

This boot across the network is implemented with PXE and requires a system that can boot from its network card. The system first obtains an IP address from a DHCP server via broadcast.

The DHCP server transmits not only a suitable IP address, but also the address of a so-called boot server. This boot server is a TFTP server that provides a boot loader, which in turn presents the administrator with the desired Debian installer. At the same time, the Debian installer can be given boot options stating that Preseed should be used and where the Preseed configuration can be found. Here is a snippet of the PXELINUX configuration file pxelinux.cfg/default:

label i386
kernel debian-installer/i386/linux
append vga=normal initrd=debian-installer/i386/initrd.gz netcfg/choose_interface=eth0 domain=example.com locale=de_DE debian-installer/country=DE debian-installer/language=de debian-installer/keymap=de-latin1-nodeadkeys console-keymaps-at/keymap=de-latin1-nodeadkeys auto-install/enable=false preseed/url=http://$server/preseed.cfg DEBCONF_DEBUG=5 -- quiet

When the user types i386, the kernel debian-installer/i386/linux (found on the TFTP server) is downloaded and run, with a whole load of boot options passed along the way. The Debian installer allows debconf options to be provided as boot parameters; among other things, this is how the installer is told where to find the Preseed configuration on the network (preseed/url).

In order to download this Preseed configuration, the network must first be configured, so the necessary network options are handed over as boot parameters as well (an option for the hostname is deliberately omitted here, as every target system has its own hostname). The option auto-install/enable would delay the language setup until after the network configuration, so that those settings could be read from preseed.cfg.

Here it is not necessary, as the language settings are also handed over as kernel options, ensuring that even the network configuration step already runs in German. The examples and configuration excerpts mentioned here are obviously summarised and shortened. Even so, this blog post should have given you a glimpse into the concept of Preseed in connection with netboot. Finally, here is a more complete version of preseed.cfg:

d-i debian-installer/locale string de_DE.UTF-8
d-i debian-installer/keymap select de-latin1
d-i console-keymaps-at/keymap select de
d-i languagechooser/language-name-fb select German
d-i countrychooser/country-name select Germany
d-i console-setup/layoutcode string de_DE

# Network
d-i netcfg/choose_interface select auto
d-i netcfg/get_hostname string debian
d-i netcfg/get_domain string example.com
 
# Package mirror
d-i mirror/protocol string http
d-i mirror/country string manual
d-i mirror/http/hostname string debian.example.com
d-i mirror/http/directory string /debian
d-i mirror/http/proxy string
d-i mirror/suite string lenny
 
# Timezone
d-i clock-setup/utc boolean true
d-i time/zone string Europe/Berlin
d-i clock-setup/ntp boolean true
d-i clock-setup/ntp-server string ntp.example.com
 
# Root-Account
d-i passwd/make-user boolean false
d-i passwd/root-password password secretpassword
d-i passwd/root-password-again password secretpassword
 
# Further APT-Options
d-i apt-setup/non-free boolean false
d-i apt-setup/contrib boolean false
d-i apt-setup/security-updates boolean true
 
d-i apt-setup/local0/source boolean false
d-i apt-setup/local1/source boolean false
d-i apt-setup/local2/source boolean false
 
# Tasks
tasksel tasksel/first multiselect standard, desktop
d-i pkgsel/include string openssh-client vim less rsync
d-i pkgsel/upgrade select safe-upgrade
 
# Popularity-Contest
popularity-contest popularity-contest/participate boolean true

# Command to be executed after the installation. `in-target` means that the following
# command is executed in the installed environment, rather than in the installation environment.
# Here http://$server/skript.sh is downloaded to /tmp, made executable and executed.
d-i preseed/late_command string in-target wget -P /tmp/ http://$server/skript.sh; in-target chmod +x /tmp/skript.sh; in-target /tmp/skript.sh

All Howtos of this blog are grouped together in the Howto category – and if you happen to be looking for Support and Services for Debian you’ve come to the right place at credativ.

This post was originally written by Irenie White.