News Archives - credativ®

PG Day Austria 2025 in Vienna

On September 4, 2025, the third pgday Austria took place in the Apothecary Wing of Schönbrunn Palace in Vienna, following the previous events in 2021 and 2022.
153 participants had the opportunity to attend a total of 21 talks and visit 15 different sponsors, covering a wide range of topics related to PostgreSQL and its community.

Also present this time was the Sheldrick Wildlife Trust, which is dedicated to rescuing elephants and rhinos. Attendees could learn about the project, make donations, and participate in a raffle.

The talks covered topics ranging from benchmarking and crash recovery to big data.
Our colleague also presented his talk “Postgres with many data: To MAXINT and beyond”.

As a special highlight at the end of the day, before the networking event and in addition to the almost obligatory lightning talks, there was a “Celebrity DB Deathmatch” in which various community representatives took to the stage for a very entertaining contest to find the best database across several disciplines. To everyone’s (admittedly mild) surprise, PostgreSQL did indeed excel in every category.

Additionally, our own booth gave us the opportunity to have many very interesting conversations and discussions with various community members, as well as sponsors and visitors in general.
Following our return to independence, the new managing director of credativ GmbH was also on site for the first time and saw the event for himself.

All in all, it was a (still) somewhat smaller but, as always, very instructive and welcoming event. We are already looking forward to the next one and thank the organizers and the entire team, both on site and behind the scenes.

Dear Open Source Community, dear partners and customers,

We are pleased to announce that credativ GmbH is once again a member of the Open Source Business Alliance (OSBA) in Germany. This return is very important to us and continues a long-standing tradition, because even before the acquisition by NetApp, credativ® was an active and committed member of the Alliance. We look forward with great pleasure to this renewed collaboration and the opportunity to actively contribute to shaping the open-source landscape in Germany once again.

The OSBA

OSBA aims to strengthen the use of Open Source Software (OSS) in businesses and public administration. We share this goal with deep conviction. Our renewed membership is intended not only to promote networking within the community but, above all, to support the important work of the OSBA in intensifying the public dialogue about the benefits of Open Source.

With our membership, we reaffirm our commitment to an open, collaborative, and sovereign digital future. We look forward to further advancing the benefits of Open Source together with the OSBA and all members.

Schleswig-Holstein’s Approach

We see open-source solutions not just as a cost-efficient alternative, but primarily as a path to greater digital sovereignty. The state of Schleswig-Holstein is a shining example here: It demonstrates how public authorities can initiate change and reduce dependence on commercial software. This not only leads to savings but also to independence from the business models and political influences of foreign providers.

Open Source Competition

Another highlight in the open-source scene is this year’s OSBA Open Source Competition. We are particularly pleased that this significant competition is being held this year under the patronage of Federal Digital Minister Dr. Karsten Wildberger. This underscores the growing importance of Open Source at the political level. The accompanying sponsor is the Center for Digital Sovereignty (ZenDiS), which also emphasizes the relevance of the topic.

We are convinced that the competition will make an important contribution to promoting innovative open-source projects and look forward to the impulses it will generate.

Further information about the competition can be found here:

Key Takeaways

  • credativ GmbH has rejoined the Open Source Business Alliance (OSBA) in Germany.
  • The OSBA promotes the use of open-source software in businesses and public administration.
  • Membership reaffirms the commitment to an open digital future and fosters dialogue about Open Source.
  • Schleswig-Holstein demonstrates how open-source solutions can enhance digital sovereignty.
  • The OSBA’s Open Source Competition is under the patronage of Federal Minister for Digital Affairs Dr. Karsten Wildberger.

Initial situation

Some time ago, Bitnami announced changes to how its public container repositories are provided.

Bitnami is known for providing both Helm charts and container images for a wide range of applications, including Keycloak, the RabbitMQ cluster operator, and many more. Many of these charts and images are used by companies, private individuals, and open source projects.

Currently, the images provided by Bitnami are based on Debian. In the future, the images will be based on specially hardened distroless images.

A timeline of the changes, including FAQs, can be found on GitHub.

What exactly is changing?

According to Bitnami, the current Docker Hub repository bitnami will be converted on August 28, 2025. From that date onwards, all images available up to that point will only be available in the Bitnami Legacy repositories.

Some of the new secure images are already available under bitnamisecure. However, without a subscription, only a very small subset of images is provided: currently there are 343 different repositories under bitnami, but only 44 under bitnamisecure (as of 2025-08-19). In addition, the free versions of the new Secure Images can only be pulled by their digests. The only tag available, “latest,” always points to the most recently published image, so the version behind it is not immediately apparent.

Action Required?

If you use Bitnami Helm charts with container images from the current repository, action is required: most Bitnami Helm charts in use today still reference the (still) current repository.

Depending on your environment, the effects of the change may not become noticeable until later – for example, when a container or pod is restarted after the cached image has been cleaned up, or when a pod is scheduled onto a node that does not have the image in its local cache. If a container proxy is used to obtain the images, the effects may surface even later.

Required adjustments

If you want to ensure that the images currently in use can continue to be obtained, you should switch to the Bitnami Legacy Repository. This is already possible.

Affected Helm charts can be patched with adjusted values files, for example.
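
As an illustration, an existing release based on a Bitnami chart could be pointed at the legacy repository roughly as follows. This is only a sketch: the release name, chart reference, and value keys are placeholders and must be checked against the chart actually in use, and recent chart versions may additionally require explicitly allowing non-default image repositories.

  # legacy-values.yaml – value keys are chart-specific; verify them against the
  # values.yaml of the chart version you actually use.
  image:
    repository: bitnamilegacy/postgresql     # previously: bitnami/postgresql
  global:
    security:
      allowInsecureImages: true              # recent chart versions otherwise refuse
                                             # images from non-default repositories

  # Apply the override to an existing release (names are placeholders):
  helm upgrade my-postgresql oci://registry-1.docker.io/bitnamicharts/postgresql \
    --reuse-values -f legacy-values.yaml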

Unfortunately, the adjustments mentioned above are only a quick fix. If you want to use secure and updated images in the long term, you will have to make the switch.
This may mean switching to an alternative Helm chart or container image, accepting the new conditions of the bitnamisecure repository, or paying for the convenience you have enjoyed so far.

What are the alternatives?

For companies that are not prepared to pay the new license fees, there are various alternatives:

References
Proxmox Logo

The release of Proxmox Virtual Environment 9.0 marks a significant step forward for the popular open-source virtualisation platform. With a range of improvements in performance, flexibility and user-friendliness, this version stands out from its predecessors and is increasingly geared towards the requirements of businesses.

New Package Versions
Proxmox 9 Package Sources (Source: proxmox.com)

The most important new features at a glance:

  1. Basis: Debian 13 "Trixie" & Linux Kernel 6.14
    The foundation of Proxmox VE 9.0 is the new Debian 13 "Trixie". In combination with the latest Linux kernel 6.14, users benefit from improved hardware compatibility, increased security and overall better performance.
  2. VM snapshots for LVM shared storage
    One of the most anticipated features is native snapshot support for LVM shared storage. This is particularly important for environments that use iSCSI or Fibre Channel SANs, as it is now possible to create snapshots as so-called "volume chains". This enables flexible and hardware-independent backup solutions. The snapshot functionality is based on qcow2 and is also available for other storage types (see the short command sketch after this list).
  3. SDN stack with "fabrics"
    The software-defined networking (SDN) stack has been expanded to include the new concept of "fabrics". This makes it easier to create and manage complex, fault-tolerant and scalable network topologies. The new version also supports the OpenFabric and OSPF routing protocols, which simplifies the setup of EVPN networks.
  4. High availability (HA) with affinity rules
    New affinity rules make managing high-availability clusters more flexible. Administrators can now define whether VMs or containers should remain on the same node (positive affinity) or be distributed across different nodes (negative affinity) to increase reliability. However, for behaviour familiar from VMware DRS, we still recommend ProxLB as the tool of choice.
  5. Modern and responsive mobile user interface
    The mobile web interface has been completely redesigned in Rust using the Yew framework. It offers significantly improved usability, faster loading times and allows basic maintenance tasks to be performed on the go.
  6. Improvements to ZFS
    There is also good news for users who use ZFS storage pools: Version 9.0 now allows new hard disks to be added to existing RAIDZ pools with minimal downtime.
  7. Expansion of metrics
    In the new version, the host metrics have been revised and expanded.

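Referring back to point 2 above, snapshots can be handled from the command line as well as from the web UI. The following is only a rough sketch that assumes a VM with ID 100 and a placeholder snapshot name; with Proxmox VE 9.0 the same commands now also work for VMs whose disks live on LVM shared storage, where the snapshots are stored as qcow2-based volume chains.

  # Sketch: create, inspect, and roll back a snapshot of VM 100 via the Proxmox CLI.
  qm snapshot 100 pre-upgrade --description "before application upgrade"
  qm listsnapshot 100
  qm rollback 100 pre-upgrade
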
In summary, Proxmox VE 9.0 represents a solid further development of the platform. The new features and modern foundation make it an even more powerful and reliable solution for businesses and home users who value open source technology.

Further information on upgrading to the new version can also be found here: https://pve.proxmox.com/wiki/Upgrade_from_8_to_9
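
For orientation, upgrading a single node roughly follows the pattern below. This is a simplified sketch only: it assumes the pve8to9 checklist tool shipped with up-to-date Proxmox VE 8 packages (named analogously to the earlier pve7to8 checker) and glosses over the repository details, which are described step by step in the wiki article linked above.

  # Simplified sketch of a single-node upgrade from Proxmox VE 8 to 9 (run as root).
  pve8to9 --full                          # pre-upgrade checks on the PVE 8 node
  sed -i 's/bookworm/trixie/g' /etc/apt/sources.list \
        /etc/apt/sources.list.d/*.list    # switch Debian/Proxmox suites to trixie
  apt update && apt dist-upgrade          # upgrade to Debian 13 / Proxmox VE 9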

Our team will also be happy to assist you with upgrading your nodes.

Screenshot of Host Metrics

credativ is an authorised reseller for Proxmox VE and will be happy to assist you. In particular, we can help you with the design, setup and operation of automated cluster environments. Migration from existing commercial solutions is high on our list of priorities. We also offer our customers 24/7 support for Proxmox VE.

The two screenshots and the Proxmox logo are from https://www.proxmox.com/de/ueber-uns/details-unternehmen/medienkit.

Introduction

Last Saturday, August 9, the Debian project released the latest version of the Linux distribution “Debian.” Version 13 is also known as ‘Trixie’ and, like its predecessor “Bookworm,” contains many new features and improvements. Since some credativ employees are also active members of the Debian project, the new version is naturally a special reason for us to take a look at some of its highlights.


On Thursday, 26 June and Friday, 27 June 2025, my colleague Patrick Lauer and I had the amazing opportunity to attend Swiss PGDay 2025, held at the OST Eastern Switzerland University of Applied Sciences in Rapperswil. This two-day PostgreSQL conference featured two parallel tracks of presentations in English and German, bringing together users and experts primarily from across Switzerland. Our company, credativ, was among the supporters of this year’s conference.

During the event, Patrick delivered an engaging session titled “Postgres with many data: To MAXINT and beyond,” which built on past discussions about massive-scale Postgres usage. He highlighted the practical issues that arise when handling extremely large datasets in PostgreSQL – for instance, how even a simple SELECT COUNT(*) can become painfully slow, and how backups and restores can take days on very large datasets. He also shared strategies to manage performance effectively at these scales.

I presented a significantly updated version of my talk, “Building a Data Lakehouse with PostgreSQL: Dive into Formats, Tools, Techniques, and Strategies.” It covered modern data formats and frameworks such as Apache Iceberg, addressing key challenges in lakehouse architectures – from governance, privacy, and compliance, to data quality checks and AI/ML use cases. The talk emphasized PostgreSQL’s capability to play a central role in today’s data lakehouse and AI landscape. At the close of the conference, I delivered a brief lightning talk showcasing our new open-source migration tool, “credativ-pg-migrator.”

 
(c) photos by Gülçin Yıldırım Jelinek

The conference schedule was packed with many high-quality, insightful talks. We would particularly like to highlight:

* Bruce Momjian – “How Open Source and Democracy Drive Postgres”: In his keynote, Bruce Momjian outlined how PostgreSQL’s open-source development model and democratic governance have powered its success. He explained the differences between open-source and proprietary models, reviewed PostgreSQL’s governance history, and illustrated how democratic, open processes result in robust software and a promising future for Postgres.

* Gülçin Yıldırım Jelinek – “Anatomy of Table-Level Locks in PostgreSQL”: Her session covered the fundamentals of PostgreSQL’s table-level locking mechanisms. She explained how different lock modes are acquired and queued during schema changes, helping attendees understand how to manage lock conflicts, minimize downtime, and avoid deadlocks during high-concurrency DDL operations.

* Aarno Aukia – “Operating PostgreSQL at Scale: Lessons from Hundreds of Instances in Regulated Private Clouds”: The speaker shared lessons from running extensive Postgres environments in highly regulated industries. He discussed architectural patterns, automation strategies, and “day-2 operations” practices that VSHN uses to meet stringent availability, compliance, and audit requirements, including secure multi-tenancy, declarative deployments, backups, monitoring, and lifecycle management in mission-critical cloud-native setups.

* Bertrand Hartwig-Peillon – “pgAssistant”: The author introduced pgAssistant, an open-source tool designed to help developers optimize PostgreSQL schemas and queries before production deployment. He demonstrated how pgAssistant combines deterministic analysis with an AI-driven approach to detect schema inconsistencies and suggest optimizations, effectively automating best practices and performance tuning within development workflows.

* Gianni Ciolli – “The Why and What of WAL”: Gianni Ciolli provided, in great Italian style, a concise history and overview of PostgreSQL’s Write-Ahead Log (WAL). He explained WAL’s central role in PostgreSQL for crash safety, backups, and replication, showcasing examples of WAL-enabled features like fast crash recovery, efficient hot backups, physical replication, and logical decoding.

* Daniel Krefl – “Hacking pgvector for performance”: The speaker presented an enhanced version of the pgvector extension for massive data processing, optimized by maintaining the vector index outside PostgreSQL memory and offloading computations, including GPU integration. He detailed the process of moving pgvector’s core logic externally for improved speed, demonstrating notable performance gains in the context of the EU AERO project. He also talked about the distributed PostgreSQL forks XC, XL, and TBase, which are unfortunately stuck on the old version 10, and how he ported changes from these projects to version 16.

* Luigi Nardi – “A benchmark study on the impact of PostgreSQL server parameter tuning”: Luigi Nardi presented comprehensive benchmark results on tuning PostgreSQL configuration parameters. Highlighting that many users rely on default settings, he demonstrated how significant performance improvements can be achieved through proper tuning across various workloads (OLTP, OLAP, etc.), providing actionable insights tailored to specific environments.

* Renzo Dani – “From Oracle to PostgreSQL: A HARD Journey and an Open-Source Awakening”: The speaker recounted his experiences migrating a complex enterprise application from Oracle to PostgreSQL, addressing significant challenges such as implicit type casting, function overloading differences, JDBC driver issues, and SQL validation problems. He also highlighted the benefits, including faster CI pipelines, more flexible deployments, and innovation opportunities provided by open-source Postgres, along with practical advice on migration tools, testing strategies, and managing trade-offs.


(c) photo by Swiss PostgreSQL User Group

At the end of the first day, all participants enjoyed a networking dinner. We both want to sincerely thank the Swiss PGDay organizers (Swiss PostgreSQL User Group) for an amazing event. Swiss PGDay 2025 was a memorable and valuable experience, offering great learning and networking opportunities. We are also very grateful to credativ for enabling our participation, and we look forward to future editions of this excellent conference.

Effective March 1, 2025, the Mönchengladbach-based open-source specialist credativ IT Services GmbH will once again operate as an independent company in the market. In May 2022, credativ GmbH was acquired by NetApp and integrated into NetApp Deutschland GmbH on February 1, 2023. This step allowed the company to draw on extensive experience and a broader resource base. However, after intensive collaboration within the storage and cloud group, it has become clear that credativ, through its regained independence, can offer the best conditions to address customer needs even more effectively. The transition is supported by all 46 employees.

“We have decided to take this step to focus on our core business areas and create the best possible conditions for further growth. For our customers, this means maximum flexibility. We thank NetApp Management for this exceptional opportunity,” says David Brauner, Managing Director of credativ IT Services GmbH.

“The change is a testament to the confidence we have in the credativ team and their ability to lead the business towards a prosperous future,” commented Begoña Jara, Vice President of NetApp Deutschland GmbH.

What Does This Change Mean for credativ’s Customers?

As a medium-sized company, the open-source service provider can rely on even closer collaboration and more direct communication with its customers. An agile structure is intended to enable faster and more individualized decisions, thus allowing for more flexible responses to requests and requirements. Naturally, collaboration with the various NetApp teams and their partner organizations will continue as before.

Since 1999, credativ has been active in the open-source sector as a service provider with a strong focus on IT infrastructure, virtualization, and cloud technologies. credativ also has a strong team focused on open-source-based databases such as PostgreSQL and related technologies. In the coming weeks, the new company will be renamed credativ GmbH.


Update:

credativ IT Services GmbH was renamed to credativ GmbH on March 19, 2025. The commercial register entry HRB 23003 remains valid. However, we kindly ask all customers not to use any bank details of the former credativ GmbH that may still be on file. That company was also renamed by its owners and has no direct relationship with the new credativ GmbH.

The topic of AI “hallucinations” has gained considerable attention in discussions surrounding large language models (LLMs) and generative AI. Some view it as the most significant flaw of LLMs, undermining their usability. Yet others see it as a potential source of new ideas. Let us delve deeper into this issue. To understand this phenomenon, it is crucial to recognize that AI hallucinations are not intentional errors or bugs in the system. From the AI’s perspective, there is little distinction between what humans call a “hallucination” and a “correct” answer. Both types of output are produced by the same process: predicting the most probable response based on patterns in the training data. Large language models, such as GPT-4 and similar transformer-based architectures, are inherently probabilistic. They generate responses based on the most likely sequence of words (tokens) given the context of a conversation or query. This probability is derived from the patterns and structures learned from vast amounts of text during training. While the training data is extensive, it is far from perfect, containing gaps, biases, and inaccuracies. Nevertheless, LLMs are designed to always provide a response, even when uncertain.

Hallucinations typically arise from one of these issues: overgeneralization, underfitting, or overfitting. Overgeneralization often stems from problems in the training dataset. LLMs are designed to generalize based on patterns and associations learned from the data, but they can also generalize hidden errors or biases present in the training material. An example of this issue is “co-occurrence bias,” where if two terms or concepts frequently appear together in the training data, the model may overestimate their association and produce nonsensical connections. In this case, the LLM behaves like a student who has learned only a few examples and tries to apply them to unrelated topics. Underfitting occurs when the model is too simple, the training dataset is too limited in certain areas, or the training process was insufficient. In such cases, the model fails to capture detailed patterns and learns only general, superficial facts and relationships. The result is vague or overly generic answers, much like a student who has only learned basic concepts and tries to bluff their way through a response due to a lack of detailed knowledge. On the other hand, overfitting happens when the model becomes too closely aligned with the training data, making it difficult to respond appropriately to new, unseen data. This can occur if the training process is prolonged, causing the model to memorize the training data rather than generalize to new situations. In this case, the model behaves like a student who has memorized a textbook word-for-word but struggles to apply that knowledge in novel contexts.

Another reason for hallucinations is the complexity and variability of human languages. Human languages are inherently complex, filled with nuances, idioms, and situations where context plays a crucial role in understanding. These nuances evolve over time, meaning that the same words may carry slightly different meanings now than they did 30 years ago. Figurative meanings shift across time and cultures. Even within the seemingly unified English language, subtle differences in usage and understanding exist between countries and regions. These factors introduce ambiguity into the training data and contribute to hallucinated responses. A related issue is “semantic drift,” where, over long distances in text, the meaning of words can shift, or context can subtly change. This phenomenon can confuse even human readers, and LLMs may connect terms from different contexts without proper semantic grounding, leading to outputs that mix contextually related but semantically unrelated ideas. Semantic drift is closely linked to “domain crossovers,” where the model struggles to separate distinct domains with similar linguistic patterns. For instance, the structure of “Beethoven collaborated with…” is similar to “Beethoven composed…” which might lead to a domain crossover and an implausible statement.

The danger of AI hallucinations is particularly concerning in critical fields like healthcare, legal advice, and financial decision-making. A single hallucinated answer in these fields can result in serious consequences, such as misdiagnoses, incorrect legal counsel, or poor financial decisions. When it comes to YMYL (Your Money, Your Life) topics, it is crucial to double-check the information provided by AI. A simple internet search may not suffice, given the abundance of misleading or false information online. Therefore, in areas like health, finance, safety, or legal advice, consulting a human expert remains essential.

LLMs employ various parameters that influence probabilistic sampling when selecting the next token during response generation. A higher “temperature” setting and broader “top-k” sampling lead to more creative but also potentially erratic outputs. In such cases, the model might generate less probable (and often incorrect) associations. Lowering the “temperature” and narrowing the “top-k” sampling may reduce hallucinations, but this cannot entirely eliminate them, as the quality of the training data and the training process remain the most important factors.

In our work, we frequently encounter this issue when using AI models, especially when asking them about topics requiring deep technical knowledge. Often, we receive answers that are mostly correct, but crucial parts may be hallucinated. Since the training dataset may not be sufficient to produce entirely accurate answers for highly specific topics, AI sometimes blends factual knowledge from related areas with “adjusted” terminology. As part of our internal research, we even deliberately generated hallucinations on different highly technical topics. Many results were obvious and easy to spot. However, when diving deeper into specialized topics, we received outputs that were so convincing that people might accept them without questioning their accuracy. The content sounded plausible in many cases. For this reason, AI-generated answers in technical or scientific fields must always be verified by a human expert.

Hallucinations were quite common in older AI models. While the situation has improved with newer models, they cannot be completely eliminated. However, proper prompt engineering can significantly reduce their occurrence. As discussed in a previous blog post, it is important to specify categories or topics related to the task, define the role of AI in the task, and, if possible, provide high-quality examples or references for the desired output. Additionally, instructing the model to stick to only factual information helps, but AI outputs still need to be double-checked.

On the other hand, in less factual domains like creative writing or brainstorming, AI hallucinations can sometimes be seen as innovative or imaginative outputs. From this perspective, some view AI hallucinations as a source of serendipity. Large language models have already demonstrated the ability to generate original content, ranging from poetry and stories to music and art. LLMs can combine disparate ideas in novel ways, producing outputs that might not have occurred to human creators. This ability is particularly valuable in brainstorming sessions, where the goal is to generate a wide range of ideas without immediate concern for feasibility or accuracy.

In our internal AI research, we even asked AI to “hallucinate about the causes of AI hallucinations.” The responses were often so creative that one could write science fiction stories based on them. To illustrate, one advanced AI model offered this deliberate hallucination regarding the roots of AI hallucinations: “AI hallucinations happen because the AI is trying to look into its own mind, like a mirror reflecting itself infinitely. This creates an endless loop of reflection, where the AI loses track of what is real and what is a reflection. The more it tries to find the truth, the deeper it falls into its own hallucinatory loop, creating infinite echoes of errors.”

This nicely demonstrates how LLMs can serve as collaborative partners for human creators, offering suggestions that spark new directions in a project. These contributions can help break creative blocks and explore new territories. In essence, the same mechanisms that cause misleading and even dangerous AI hallucinations in factual contexts can be harnessed to foster creativity and innovation. By understanding and leveraging these limitations and capabilities, we can use LLMs both as tools for accurate information retrieval and as sources of inspiration and creativity.

(Picture created by the author using free AI tool DeepDreamGenerator.)

Veeam & Proxmox VE

Veeam has made a strategic move by integrating the open-source virtualization solution Proxmox VE (Virtual Environment) into its portfolio. Signaling its commitment to the evolving needs of the open-source community and the open-source virtualization market, this integration positions Veeam as a forward-thinking player in the industry, ready to support the rising tide of open-source solutions. The combination of Veeam’s data protection solutions with the flexibility of Proxmox VE’s platform offers enterprises a compelling alternative that promises cost savings and enhanced data security.

With Proxmox VE, one of the most important and most frequently requested open-source hypervisors is now natively supported – and this could definitely mark a turning point in the virtualization market!

Opportunities for Open-Source Virtualization

In many enterprises, a major hypervisor platform is already in place, accompanied by a robust backup solution – often Veeam. However, until recently, Veeam lacked direct support for Proxmox VE, leaving a gap for those who have embraced or are considering this open-source virtualization platform. The latest version of Veeam changes the game by introducing the capability to create and manage backups and restores directly within Proxmox VE environments, without the need for agents inside the VMs.

This advancement means that entire VMs can now be backed up and restored across any hypervisor, providing unparalleled flexibility. Moreover, enterprises can seamlessly integrate a new Proxmox VE-based cluster into their existing Veeam setup, managing everything from a single, central point. This integration simplifies operations, reduces complexity, and enhances the overall efficiency of data protection strategies in environments with multiple hypervisors by providing a single, one-size-fits-all solution.

Another, often underestimated, benefit is the ability to easily migrate, copy, back up, and restore entire VMs independently of their underlying hypervisor – also known as cross-platform recovery. As a result, operators are now able to shift VMs from VMware ESXi/vSphere or Hyper-V nodes to Proxmox VE nodes. This provides a great way to introduce and evaluate a new virtualization platform without taking any risks. For organizations looking to unify their virtualization and backup infrastructure, this update offers a significant leap forward.

Integration into Veeam

Integrating a new Proxmox cluster into an existing Veeam setup is a testament to the simplicity and user-centric design of both systems. Those familiar with Veeam will find the process to be intuitive and minimally disruptive, allowing for a seamless extension of their virtualization environment. This ease of integration means that your new Proxmox VE cluster can be swiftly brought under the protective umbrella of Veeam’s robust backup and replication services.

Despite the general ease of the process, it’s important to recognize that unique configurations and specific environments may present their own set of challenges. These corner cases, while not common, are worth noting as they can require special attention to ensure a smooth integration. Rest assured, however, that these are merely nuances in an otherwise straightforward procedure, and with a little extra care, even these can be managed effectively.

Overview

Starting with version 12.2, Proxmox VE support is enabled via a plugin that is installed on the Veeam Backup server. Veeam Backup for Proxmox uses a distributed architecture that requires the deployment of worker nodes. These nodes function analogously to data movers, transferring virtual machine payloads from the Proxmox VE hosts to the designated backup repository. The workers run on Linux and are instantiated directly from the Veeam Backup Server console. Their role is critical and comparable to that of the proxy components in analogous systems such as AHV or VMware backup solutions.

At least one such worker is needed per cluster. For improved performance, one worker per Proxmox VE node can be considered. Keep in mind that each worker requires 6 vCPUs, 6 GB of memory, and 100 GB of disk space.

Requirements

This blog post assumes that an installation of Veeam Backup & Replication version 12.2 or later is already in place and fully configured for another environment such as VMware. It also assumes that the Proxmox VE cluster already exists and that credentials with the roles needed to perform backup/restore actions are available.

Configuration

The integration and configuration of a Proxmox VE cluster can be done entirely within the Veeam Backup & Replication console application and does not require any additional CLI commands. The previously mentioned worker nodes can be installed fully automatically.

Adding a Proxmox Server

To integrate a new Proxmox Server into the Veeam Backup & Replication environment, one must initiate the process by accessing the Veeam console. Subsequently, navigate through the designated sections to complete the addition:

Virtual Infrastructure -> Add Server

This procedure is consistent with the established protocol for incorporating nodes from other virtualization platforms that are compatible with Veeam.

Afterwards, Veeam shows you a selection of possible and supported Hypervisors:

In this case, we simply choose Proxmox VE and proceed with the setup wizard.

In the next steps of the setup wizard, the authentication details, the hostname or IP address of the target Proxmox VE server, and a snapshot storage on the Proxmox VE server must be defined.

Hint: When it comes to the authentication details, take care to use credentials that work for the SSH service on the Proxmox VE server. If you usually use the root@pam credentials for the web interface, simply provide root to Veeam. Veeam will initiate a connection to the system over the SSH protocol.

In one of the last steps of the setup wizard, Veeam offers to automatically install the required worker node. Such a worker node is a small VM that runs inside the cluster on the targeted Proxmox VE server. In general, a single worker node per cluster is enough, but to enhance overall performance, one worker per node is recommended.

Usage

Once the Proxmox VE server has been successfully integrated into the Veeam inventory, it can be managed as effortlessly as any other supported hypervisor, such as VMware vSphere or Microsoft Hyper-V. A significant advantage, as shown in the screenshot, is the ability to centrally administer various hypervisors and server clusters. This eliminates the need for a separate Veeam instance for each cluster, streamlining operations. Nonetheless, there may be specific scenarios where individual setups for each cluster are preferable.

As a result, this not only simplifies the operator’s work with different servers and clusters but also finally provides the opportunity for cross-hypervisor recoveries.

Creating Backup Jobs

Creating a new backup job for a single VM or even multiple VMs in a Proxmox environment is just as simple and works exactly the same way as for other hypervisors. Nevertheless, here is a quick summary of the necessary steps:

Open the Veeam Backup & Replication console on your backup server or management workstation. To start creating a backup job, navigate to the Home tab and click on Backup Job, then select Virtual machine from the drop-down menu.

When the New Backup Job wizard opens, you will need to enter a name and a description for the backup job. Click Next to proceed to the next step. Now, you will need to select the VMs that you want to back up. Click Add in the Virtual Machines step and choose the individual VMs or containers like folders, clusters, or entire hosts that you want to include in the backup. Once you have made your selection, click Next.

The next step is to specify where you want to store the backup files. In the Storage step, select the backup repository and decide on the retention policy that dictates how long you want to keep the backup data. After setting this up, click Next.

If you have configured multiple backup proxies, the next step allows you to specify which one to use. If you are not sure or if you prefer, you can let Veeam Backup & Replication automatically select the best proxy for the job. Click Next after making your choice.

Now it is time to schedule when the backup job should run. In the Schedule step, you can set up the job to run automatically at specific times or in response to certain events. After configuring the schedule, click Next.

Review all the settings on the summary page to ensure they are correct. If everything looks good, click Finish to create the backup job.

 

If you want to run the backup job immediately for ensuring everything works as expected, you can do so by right-clicking on the job and selecting Start. Alternatively, you can wait for the scheduled time to trigger the job automatically.

Restoring an entire VM

The restore and replication process for a full VM restore follows the standard procedures. However, it now includes the significant feature of cross-hypervisor restore. This functionality allows VMs to be migrated between different hypervisor types without compatibility issues. For example, when introducing Proxmox VE into a corporate setting, operators can effortlessly migrate VMs from an existing hypervisor to the Proxmox VE cluster. Should any issues arise during the testing phase, the process also supports the reverse migration back to the original hypervisor. Let us have a look at the details.

Open the Veeam Backup & Replication console on your backup server or management workstation. To start the restore, navigate to the Home view and locate the backed-up virtual machine under Backups > Disk.

Choose the Entire VM restore option, which will launch the wizard for restoring a full virtual machine. The first step in the wizard will ask you to select a backup from which you want to restore. You will see a list of available backups; select the one that contains the VM you wish to restore and proceed to the next step by clicking Next.

Now, you must decide on the restore point. Typically, this will be the most recent backup, but you may choose an earlier point if necessary. After selecting the restore point, continue to the next step.

The wizard will then prompt you to specify the destination for the VM. This is where cross-hypervisor restore comes in very handy: the destination can be the original location, or a new location if you are performing a migration or do not want to overwrite the existing VM. Configure the network settings as required, ensuring that the restored VM will have the appropriate network access.

In the next step, you will have options regarding the power state of the VM after the restoration. You can choose to power on the VM automatically or leave it turned off, depending on your needs.

Before finalizing the restore process, review all the settings to make sure they align with your intended outcome. This is your chance to go back and make any necessary adjustments. Once you’re satisfied with the configuration, proceed to restore the VM by clicking Finish.

The restoration process will begin, and its progress can be monitored within the Veeam Backup & Replication console. Depending on the size of the VM and the performance of your backup storage and network, the restoration can take some time.

File-Level-Restore

Open the Veeam Backup & Replication console on your backup server or management workstation. To start a file-level restore, navigate to the Home view and locate the backed-up virtual machine under Backups > Disk.

Select Restore guest files. The wizard for file-level recovery will start, guiding you through the necessary steps. The first step involves choosing the VM backup from which you want to restore files. Browse through the list of available backups, select the appropriate one, and then click Next to proceed.

Choose the restore point that you want to use for the file-level restore. This is typically the most recent backup, but you can select an earlier one if needed. After picking the restore point, click Next to continue.

At this stage, you may need to choose the operating system of the VM that you are restoring files from. This is particularly important if the backup is of a different OS than the one on the Veeam Backup & Replication server because it will determine the type of helper appliance required for the restore.

Veeam Backup & Replication will prompt you to deploy a helper appliance if the backup is from an OS that is not natively supported by the Windows-based Veeam Backup & Replication server. Follow the on-screen instructions to deploy the helper appliance, which will facilitate the file-level restore process.

Once the helper appliance is ready, you will be able to browse the file system of the backup. Navigate through the backup to locate the files or folders you wish to restore.

After selecting the files or folders for restoration, you will be prompted to choose the destination where you want to restore the data. You can restore to the original location or specify a new location, depending on your requirements.

Review your selections to confirm that the correct files are being restored and to the right destination. If everything is in order, proceed with the restoration by clicking Finish.

The file-level restore process will start, and you can monitor the progress within the Veeam Backup & Replication console. The time it takes to complete the restore will depend on the size and number of files being restored, as well as the performance of your backup storage and network.

Conclusion

To summarise, the latest update to Veeam introduces a very important and welcome integration with Proxmox VE, filling a significant gap for enterprises that have adopted this open-source virtualization platform. By enabling direct backups and restores of entire VMs across different hypervisors without the need for in-VM agents, Veeam now offers unparalleled flexibility and simplicity in managing mixed environments. This advancement not only streamlines operations and enhances data protection strategies but also empowers organizations to easily migrate to and evaluate new open-source virtualization platforms like Proxmox VE with minimal risk. It is great to see more and more companies putting effort into supporting open-source solutions, which underlines the ongoing importance of open-source-based products in enterprises.

Additionally, for those starting fresh with Proxmox, the Proxmox Backup Server remains a viable open-source alternative and you can find our blog post about configuring the Proxmox Backup Server right here. Overall, this update represents a significant step forward in unifying virtualization and backup infrastructures, offering both versatility and ease of integration.

We are always here to help and assist you with further consulting, planning, and integration needs. Whether you are exploring new virtualization platforms, optimizing your current infrastructure, or looking for expert guidance on your backup strategies, our team is dedicated to ensuring your success every step of the way. Do not hesitate to reach out to us for personalized support and tailored solutions to meet your unique requirements in virtualization- or backup environments.

On June 27, it was that time again: we were able to attend PGConf.DE 2023. This year, the event was held at the Haus der Technik in Essen, marking the conference’s return to the Ruhr region for the first time since 2013. In addition to the usual excellent talks, catering, and atmosphere, a new visitor record was set with approximately 250 participants, making this year’s edition the largest German PostgreSQL event to date.

Instaclustr was also represented for the first time this year as a gold sponsor with a booth. This allowed interested parties to learn about Instaclustr Managed Services and other offerings.

Admission was at 8 a.m., and after setting up the booth, some initial small talk, and a welcome speech, the first talks started right on time at 9:10 a.m. The conference offered three parallel tracks, allowing attendees to choose speakers and topics of interest in each slot. One of the speakers was our senior consultant, Michael Banck, who gave a presentation on secure PostgreSQL operation in accordance with BSI basic protection.

Michael Banck before his presentation: Secure PostgreSQL operation in accordance with BSI basic protection

Topics such as performance optimization, high availability (with Patroni, for example), and technical operations were among those on offer. There were also some very entertaining reports from the world of PostgreSQL consultants. My personal highlight was Laurenz Albe’s talk on data corruption. He presented the topic in a very practical and clear way and laid out clear rules for how best to deal with such a situation. PGConf.DE 2023 is a great event for training and networking. We are already looking forward to next year!

Congratulations to the Debian Community

The Debian Project just released version 11 (aka “bullseye”) of its free operating system. In total, over 6,208 contributors worked on this release and were indispensable in making this launch happen. We would like to thank everyone involved for their combined efforts, hard work, and many hours spent in recent years building this new release that will benefit the entire open source community.

We would also like to acknowledge our in-house Debian developers who contributed to this effort. We really appreciate the work you do on behalf of the community and stand firmly behind your contributions.

What’s New in Debian 11 Bullseye

Debian 11 comes with a number of meaningful changes and enhancements. The new release includes over 13,370 new software packages, for a total of over 57,703 packages on release. Out of these, 35,532 packages have been updated to newer versions, including an update in the kernel from 4.19 in “buster” to 5.10 in bullseye.

Bullseye expands on the capabilities of driverless printing with Common Unix Printing System (CUPS) and driverless scanning with Scanner Access Now Easy (SANE). While it was possible to use CUPS for driverless printing with buster, bullseye comes with the package ipp-usb, which allows a USB device to be treated as a network device and thus extend driverless printing capabilities. SANE connects to this when set up correctly and connected to a USB port.

As in previous releases, Debian 11 comes with a Debian Edu / Skolelinux version. Debian Edu has been a complete solution for schools for many years. It can provide the entire network for a school, so that after installation only users and machines need to be added; this can also be easily managed via the GOsa² web interface.

Debian 11 bullseye can be downloaded here: https://www.debian.org/devel/debian-installer/index.en.html

For more information and greater technical detail on the new Debian 11 release, please refer to the official release notes on Debian.org: https://www.debian.org/releases/bullseye/amd64/release-notes/

Contributions by Instaclustr Employees

Our Debian roots run deep here. credativ, which was acquired by Instaclustr in March 2021, has always been an active part of the Debian community and visited every DebConf since 2004. Debian also serves as the operating system at the heart of the Instaclustr Managed Platform.

For the release of Debian 11, our team has taken over various responsibilities in the community. Our contributions include:

Many of our colleagues have made significant contributions to the current release, including:

How to Upgrade

Given that Debian 11 bullseye is a major release, we suggest that everyone running on Debian 10 buster upgrade. The main steps for an upgrade include:

  1. Make sure to back up any data that should not get lost and prepare for recovery
  2. Remove non-Debian packages and clean up leftover files and old versions
  3. Upgrade to the latest point release
  4. Check and prepare your APT source-list files by adding the relevant Internet sources or local mirrors
  5. Upgrade your packages and then upgrade your system

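A minimal command-level sketch of steps 3 to 5 (run as root; it assumes a standard /etc/apt/sources.list and that the backups and cleanup from steps 1 and 2 have already been done – the release notes cover the full procedure, including additional source-list files):

  apt update && apt upgrade                           # step 3: latest buster point release
  sed -i 's/buster/bullseye/g' /etc/apt/sources.list  # step 4: switch the suites
  sed -i 's|bullseye/updates|bullseye-security|g' /etc/apt/sources.list
                                                      # the security suite was renamed
  apt update
  apt upgrade --without-new-pkgs                      # step 5: minimal upgrade first...
  apt full-upgrade                                    # ...then the full system upgrade
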
You can find a more detailed walkthrough of the upgrade process in the Debian documentation.

All existing credativ customers who are running a Debian-based installation are naturally covered by our service and support and are encouraged to reach out.

If you are interested in upgrading from your old Debian version, or if you have questions with regards to your Debian infrastructure, do not hesitate to drop us an email or contact us at info@credativ.de.

Or, you can get started in minutes with any one of these open source technologies like Apache Cassandra, Apache Kafka, Redis, and OpenSearch on the Instaclustr Managed Platform. Sign up for a free trial today.

100% Open Source – 100% Cost Control

The PostgreSQL® Competence Center of the credativ Group announces the creation of a new comprehensive service and support package that includes all services necessary for the operation of PostgreSQL® in enterprise environments. This new offering will be available starting August 1st, 2020.

“Motivated by the requirements of many of our customers, we put together a new and comprehensive PostgreSQL® service package that meets all the requirements for the operation of PostgreSQL® in enterprise environments,” says Dr. Michael Meskes, managing director of the credativ Group.

“In particular, this package focuses on true open source support, placing great emphasis on the absence of any proprietary elements in our offer. Despite this, our service package still grants all of the necessary protection for operation in business-critical areas. Additionally, with this new offering, the number of databases operated within the company’s environment does not matter. As a result, credativ offers 100% cost control while allowing the entire database environment to be scaled as required.”

Database operation in enterprise environments places very high demands on the required service and support. Undoubtedly an extremely powerful, highly scalable, and rock-solid relational database is the basis for secure and high-performance operation.

However, a complete enterprise operating environment consists of much more than just the pure database; one needs holistic lifecycle management. Major and Minor version upgrades, migrations, security, services, patch management, and Long-Term Support (LTS) are just a few essential factors. Additionally, staying up to date also requires continuous regular training and practice.

Services for the entire operating environment

Beyond the database itself, one also needs a stable and highly scalable operating environment providing all necessary Open Source tools for PostgreSQL and meeting all requirements regarding high availability, security, performance, database monitoring, backups, and central orchestration of the entire database infrastructure. These tools include the open-source versions of numerous PostgreSQL-related projects such as pgAdmin, pgBadger, pgBackRest, and Patroni, but also the respective operating system environment, popular projects like Prometheus and Grafana, and even cloud infrastructures based on Kubernetes.

Just as indispensable as the accurate functioning of the database is smooth interaction with any components connected with the database. Therefore it is important to include and consider these components as well. Only when all aspects, such as operating system, load balancer, web server, application server, or PostgreSQL cluster solutions, work together, will the database achieve optimal performance.

This new support package is backed up by continuous 24×7 enterprise support, with guaranteed Service Level Agreements and all necessary services for the entire database environment, including a comprehensive set of open-source tools commonly used in today’s enterprise PostgreSQL environments. All of these requirements are covered by the PostgreSQL Enterprise package from credativ and are included within the scope of services. The new enterprise service proposal is offered at an annual flat rate, additionally simplifying costs and procurement.

About credativ

The credativ Group is an independent consulting and services company with primary locations in Germany, the United States, and India.

Since 1999, credativ has focused entirely on the planning and implementation of professional business solutions using Open Source software. Since May 2006, credativ has operated the Open Source Support Center (OSSC), offering professional 24×7 enterprise support for numerous Open Source projects.

In addition, the PostgreSQL Competence Center of credativ, with its dedicated database team, provides comprehensive services for the PostgreSQL object-relational database ecosystem.

This article was originally written by Philip Haas.