Open Source Archives - credativ®

Migrating VMs from VMware ESXi to Proxmox

In response to Broadcom’s recent changes to VMware’s subscription model, an increasing number of enterprises are reevaluating their virtualization strategies. With heightened concerns over licensing costs and access to features, businesses are turning towards open source solutions for greater flexibility and cost-effectiveness. Proxmox in particular has garnered significant attention: renowned for its robust feature set and open architecture, it offers a compelling platform for organizations seeking to mitigate the impact of proprietary licensing models while retaining comprehensive virtualization capabilities. This trend underscores a broader industry shift towards embracing open source technologies in the virtualization landscape. Proxmox is widely known as an alternative to VMware ESXi, but there are other options as well, such as bhyve, which we covered in an earlier blog post.

Benefits of Open Source Solutions

In the dynamic landscape of modern business, the choice to adopt open source solutions for virtualization presents a strategic advantage for enterprises. With platforms like KVM, Xen and even LXC containers, organizations can capitalize on the absence of license fees, unlocking significant cost savings and redirecting resources towards innovation and growth. This financial flexibility empowers companies to make strategic investments in their IT infrastructure without the burden of proprietary licensing costs. Moreover, open source virtualization promotes collaboration and transparency, allowing businesses to tailor their environments to suit their unique needs and seamlessly integrate with existing systems. Through community-driven development and robust support networks, enterprises gain access to a wealth of expertise and resources, ensuring the reliability, security, and scalability of their virtualized infrastructure. Embracing open source virtualization not only delivers tangible financial benefits but also equips organizations with the agility and adaptability needed to thrive in an ever-evolving digital landscape.

Migrating a VM

Prerequisites

To ensure a smooth migration from VMware ESXi to Proxmox, several prerequisites must be met. SSH access must be enabled on both the VMware ESXi host and the Proxmox host, and administrative access to both systems is required. Establishing SSH connectivity from the Proxmox host to the ESXi host is essential, since it is used for the data transfer during the migration. The Proxmox system or cluster should also be configured in a manner similar to the ESXi setup, especially concerning networking; this includes compatibility with VLANs or VXLANs in more complex setups. Both systems should either run on local storage or have access to shared storage, such as NFS, to facilitate the transfer of the virtual machine data. Lastly, before initiating the migration, verify that the Proxmox system has sufficient space available to accommodate the imported virtual machine, so the transition does not run into storage constraints.
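
The last point can be checked quickly on the Proxmox host before starting, for example (a minimal sketch; storage names such as local-lvm depend on your setup):

# list all configured storages and their free space on the Proxmox host
pvesm status
# check free space on the filesystem that will hold the temporary copy
df -h /tmp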

Activate SSH on ESXi

The SSH server on the ESXi host must be activated so that the content of the virtual machine can be copied over to its new location on the Proxmox server. The copy will later be initiated from the Proxmox server, so the Proxmox system must be able to establish an SSH connection on tcp/22 to the ESXi system.
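
SSH is typically enabled in the ESXi host client under Host → Actions → Services; if console access to the host is available, the service can also be enabled from the ESXi shell. The following sketch assumes shell access and then verifies the connection from the Proxmox side:

# on the ESXi host: enable and start the SSH service
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh

# on the Proxmox host: verify that the ESXi system is reachable on tcp/22
ssh root@esx02-test.gyptazy.ch "vmware -v"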

Find Source Information about VM on ESXi

One of the more challenging parts is finding the location of the directory that holds the virtual machine’s disk. The path can be found within the web UI of the ESXi system.
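
With SSH enabled, the location can also be determined on the ESXi shell; a sketch, assuming the datastore path used in the rsync example below:

# list all registered VMs together with their datastore paths
vim-cmd vmsvc/getallvms
# inspect the datastore directory that holds the VM's files
ls -lh /vmfs/volumes/137b4261-68e88bae-0000-000000000000/pgsql07.gyptazy.ch/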

Create a New Empty VM on Proxmox
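
The target VM can be created in the Proxmox web UI or on the command line with qm. The following is only an illustration; ID, name and resources must be adapted to your environment (the ID 119 is reused in the import step below):

# create an empty VM (without a disk) that will later receive the imported disk
qm create 119 --name pgsql07.gyptazy.ch --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0 --ostype l26 --scsihw virtio-scsi-pci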

Copy VM from ESXi to Proxmox

The content of the virtual machine (VM) will be transferred from the ESXi to the Proxmox system using the open source tool rsync for efficient synchronization and copying. The following commands are executed on the Proxmox system, where we first create a temporary directory to store the VM’s content:

mkdir /tmp/migration_pgsql07.gyptazy.ch
cd /tmp/migration_pgsql07.gyptazy.ch
rsync -avP root@esx02-test.gyptazy.ch:/vmfs/volumes/137b4261-68e88bae-0000-000000000000/pgsql07.gyptazy.ch/* .

Depending on the size of the virtual machine and the network connectivity, this process may take some time.

Import VM in Proxmox

Afterwards, the disk is imported using the qm utility, specifying the VM ID (assigned during the VM creation step), the name of the disk file that has been copied over, and the destination storage on the Proxmox system where the VM disk should be stored:

qm disk import 119 pgsql07.gyptazy.ch.vmdk local-lvm

Depending on the format the VM was created or exported in, there may be multiple disk files, which may also be suffixed with _flat. This procedure needs to be repeated for all available disks.
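
For a VM with several disks, the import is simply repeated per .vmdk file; the imported disks then show up as unused disks and can be attached either in the web UI or with qm (a sketch; file, bus and volume names are examples):

# import an additional disk, if present
qm disk import 119 pgsql07.gyptazy.ch_1.vmdk local-lvm
# attach the imported volumes to the VM
qm set 119 --scsi0 local-lvm:vm-119-disk-0 --scsi1 local-lvm:vm-119-disk-1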

Starting the VM

In the final step, all settings, resources, definitions and customizations of the system should be thoroughly reviewed. Once validated, the VM can be launched, ensuring that all components are correctly configured for operation within the Proxmox environment.
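
On the command line, this final review and start could look like this (a sketch; the boot device depends on how the disk was attached):

# review the resulting VM configuration
qm config 119
# make sure the imported disk is part of the boot order, then start the VM
qm set 119 --boot order=scsi0
qm start 119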

Conclusion

This article only covers one of many possible methods for migrations in simple, standalone setups. In more complex environments involving multiple host nodes and different storage systems, such as Fibre Channel or network storage, there are significant differences and additional considerations. There may also be specific requirements regarding availability and Service Level Agreements (SLAs) to take into account, and these can be very specific to each environment. Feel free to contact us for personalized guidance on your specific migration needs at any time. We are also pleased to offer our support in related open source areas such as virtualization (e.g., OpenStack, VirtualBox) and topics pertaining to cloud migrations.

Addendum

On the 27th of March, Proxmox released their new import wizard (pve-esxi-import-tools), which makes migrations from VMware ESXi instances to a Proxmox environment much easier. In an upcoming blog post we will provide more information about the new tooling, the cases where it is most useful, and the corner cases where the import wizard cannot be used.

This year’s FOSS Backstage took place on 2024-03-04 and 2024-03-05 at bUm in Berlin, Germany. One Debian Developer and employee of NetApp had the opportunity to attend this conference: Bastian Blank.

FOSS Backstage focuses on the non-coding aspects of Open Source projects. Topics like governance, how to structure a project, and how to grow a project were discussed at length. Legal aspects such as license choice and license compliance also came up.

Many startups will eventually show up on the radar of venture capital. While this may make some people wealthy, it is often a turning point for FOSS principles. Melanie Rieback presented the keynote with insights into her successful non-profit security company, a company that really cannot be sold to anyone.

Once in a while, hopefully not too often, it becomes necessary to change the structure of a project, for example when the founder of the project and their company suddenly go away. Ruth Cheesley showed how changing the governance model can work in practice.

Funding is a difficult but important part of every project. Shane Curcuru gave some insights into how projects are currently funded, based on data he collects and publishes.

All presentations have been recorded and should show up on the YouTube channel of the organizers.

This year’s All Systems Go! took place on 2023-09-13 and 2023-09-14 at the nice venue of the Neue Mälzerei in Berlin, Germany. One Debian Developer and employee of NetApp had the opportunity to attend this conference: Bastian Blank.

All Systems Go! focuses on low-level user-space for Linux systems: everything that sits above the kernel but is so fundamental that systems won’t work without it.

A lot of talks happened, ranging from how to use TPM with Linux for measured boot, how to overcome another Y2038 problem, up to just how to actually boot fast. Videos for all talks are kindly provided by the nice people of the C3VOC.

DebConf23 (https://debconf23.debconf.org/) took place from 2023-09-10 to 2023-09-17 in Kochi, India.

Four employees (three Debian developers) from NetApp had the opportunity to participate in the annual event, which is the most important conference in the Debian world: Christoph Senkel, Andrew Lee, Michael Banck and Noël Köthe. 

DebCamp 

What is DebCamp? DebCamp usually takes place during the week before DebConf begins. For participants, it is a hacking session dedicated to Debian contributors focusing on their Debian-related projects, tasks, or problems without interruptions.

DebCamps are largely self-organized since it’s a time for people to work. Some prefer to work individually, while others participate in or organize sprints. Both approaches are encouraged, although it’s recommended to plan your DebCamp week in advance. 

During this DebCamp, the following public sprints took place:

In addition to the organizational part, our colleague Andrew also attended and arranged private sprints during DebCamp: 

It also allows the DebConf committee to work together with the local team to prepare additional details for the conference. During DebCamp, the organization team typically handles the following tasks: 

Conference talks 

The conference itself started on Sunday, 10 September, with an opening session, some organizational matters, GPG keysigning information (the fingerprint was printed on the badge) and a big welcome to everyone on site and around the world.

Most talks of DebConf were live streamed and are available in the video archive. The topics ranged from very technical (e.g., “What’s new in the Linux kernel”) over organizational (e.g., “DebConf committee”) to social (e.g., “Adulting”).

Schedule: https://debconf23.debconf.org/schedule/ 

Videos: https://meetings-archive.debian.net/pub/debian-meetings/2023/DebConf23/ 

Thanks a lot to the volunteer video team for the video transmission coverage.

Lightning Talks 

On the last day of DebConf, the traditional lightning talks were held. One talk in particular stood out: the presentation of extrepo by Wouter Verhelst. At NetApp, we use bookworm-based Debian ThinkPads. However, in a corporate environment, non-packaged software needs to be used from time to time, and extrepo is a very elegant way to solve this problem by providing, maintaining and keeping up to date a list of third-party APT repositories such as Slack or Docker Desktop.

Sadya 

On Tuesday, an incredibly special lunch was offered at DebConf: a traditional Kerala vegetarian banquet (Sadya in Malayalam), which is served on a banana leaf and eaten by hand. It was quite unusual for the European part of the attendees at first, but a wonderful experience once one got into it.

Daytrip 

On Wednesday the daytrip took place, and everybody could choose from five different trips: https://wiki.debian.org/DebConf/23/DayTrip

The houseboat trip was a bus tour to Alappuzha, about 60 km away from the conference venue. It was interesting to see (and hear) the typical bus, car, motorbike and tuk-tuk road traffic in action. During the boat trip the participants socialized and took in the local landscape outside the city.

Unfortunately, there was an accident during one of the daytrip options: Abraham, a local Indian participant, drowned while swimming.

https://debconf23.debconf.org/news/2023-09-14-mourning-abraham/ 

It was a big shock for everybody, and all events, including the traditional formal dinner, were cancelled for Thursday. The funeral with the family was held on Friday morning, and buses were organized so that DebConf attendees had the opportunity to participate and say goodbye.

NetApp internal dinner on Friday 

The NetApp team at DebConf took the chance to go to a local restaurant (“where the locals go to eat”) and enjoyed very tasty food.

DebConf24 

Sunday was the last day of DebConf23. As usual, the upcoming DebConf24 was very briefly presented and there was a call for bids for DebConf25. 

Maybe see you in Haifa, Israel next year. 

https://www.debconf.org/goals.shtml 

 

 

Authors: Andrew Lee, Michael Banck and Noël Köthe 

100% Open Source – 100% Cost Control

The PostgreSQL® Competence Center of the credativ Group announces the creation of a new comprehensive service and support package that includes all services necessary for the operation of PostgreSQL® in enterprise environments. This new offering will be available starting August 1st, 2020.

“Motivated by the requirements of many of our customers, we put together a new and comprehensive PostgreSQL® service package that meets all the requirements for the operation of PostgreSQL® in enterprise environments,” says Dr. Michael Meskes, managing director of the credativ Group.

“In particular, this package focuses on true open source support, placing great emphasis on the absence of any proprietary elements in our offer. Despite this, our service package still grants all of the necessary protection for operation in business-critical areas. Additionally, with this new offering, the number of databases operated within the company’s environment does not matter. As a result, credativ offers 100% cost control while allowing the entire database environment to be scaled as required.”

Database operation in enterprise environments places very high demands on the required service and support. Undoubtedly an extremely powerful, highly scalable, and rock-solid relational database is the basis for secure and high-performance operation.

However, a complete enterprise operating environment consists of much more than just the pure database; one needs holistic lifecycle management. Major and Minor version upgrades, migrations, security, services, patch management, and Long-Term Support (LTS) are just a few essential factors. Additionally, staying up to date also requires continuous regular training and practice.

Services for the entire operating environment

Beyond the database itself, one also needs a stable and highly scalable operating environment providing all necessary Open Source tools for PostgreSQL and meeting all requirements regarding high availability, security, performance, database monitoring, backups, and central orchestration of the entire database infrastructure. These tools include the open-source versions of numerous PostgreSQL related project packages, such as pgAdmin, pgBadger, pgBackrest, Patroni, but also the respective operating system environment and popular projects like Prometheus and Grafana, or even cloud infrastructures based on Kubernetes.

Just as indispensable as the accurate functioning of the database is smooth interaction with any components connected with the database. Therefore it is important to include and consider these components as well. Only when all aspects, such as operating system, load balancer, web server, application server, or PostgreSQL cluster solutions, work together, will the database achieve optimal performance.

This new support package is backed up by continuous 24×7 enterprise support, with guaranteed Service Level Agreements and all necessary services for the entire database environment, including a comprehensive set of open-source tools commonly used in today’s enterprise PostgreSQL environments. All of these requirements are covered by the PostgreSQL Enterprise package from credativ and are included within the scope of services. The new enterprise service proposal is offered at an annual flat rate, additionally simplifying costs and procurement.

About credativ

The credativ Group is an independent consulting and services company with primary locations in Germany, the United States, and India.

Since 1999, credativ has focused entirely on the planning and implementation of professional business solutions using Open Source software. Since May 2006, credativ has operated the Open Source Support Center (OSSC), offering professional 24×7 enterprise support for numerous Open Source projects.

In addition, our PostgreSQL Competence Center, with its dedicated database team, provides comprehensive services for the PostgreSQL object-relational database ecosystem.

This article was originally written by Philip Haas.

In November 1999, 20 years ago, credativ GmbH was founded in Germany, and thus laid the first foundation for the current credativ group.

At that time, Dr. Michael Meskes and Jörg Folz started the business operations in the Technology Centre of Jülich, Germany. Our mission has always been to not only work to live, but also to live to work, because we love the work we do. Our aim is to support widespread use of open source software and to ensure independence from software vendors.

Furthermore, it is very important for us to support and remain active in open source communities. Since 1999 we have continuously taken part in PostgreSQL and Debian events, and supported them financially with sponsorships. Additionally, the development of the Linux operating system has also been a dear and important project of ours. Therefore, we have been a member of the Linux Foundation for over 10 years.

In 2006 we opened our Open Source Support Center. Here, for the first time, our customers had the opportunity to get support for their entire Open Source infrastructure with just one contract. Since then we have expanded it across several locations into a globally operating Open Source Support Center.

Thanks to our healthy and steady growth, credativ had grown to over 35 employees across its worldwide locations by its 10th anniversary.

The founding of credativ international GmbH in 2013 marked another milestone in credativ’s history, as the focus shifted from a local to a global market. We were also able to expand into other countries such as the USA and India.

After 20 years of company history, we have now grown to over 80 employees. credativ is now one of the leading providers of services and support for open source software in enterprise use. We thank our customers, business partners, and employees for the years together.

This article was originally written by Philip Haas.

The PostgreSQL® Global Development Group (PGDG) has released version 12 of the popular free PostgreSQL® database. As our article for Beta 4 has already indicated, a number of new features, improvements and optimizations have been incorporated into the release. These include among others:

Optimized disk space utilization and speed for btree indexes

btree indexes, the default index type in PostgreSQL®, have received several optimizations in PostgreSQL® 12.

btree indexes used to store duplicates (multiple entries with the same key values) in an unsorted order, which resulted in a suboptimal physical representation of these indexes. An optimization now stores such duplicate key values in the same order as they are physically stored in the table. This improves disk space utilization and reduces the effort required to manage the corresponding btree indexes. In addition, indexes with multiple indexed columns use an improved physical representation, so their storage utilization is improved as well. However, to take advantage of this in PostgreSQL® 12, indexes that were carried over to the new version via a binary upgrade with pg_upgrade must be recreated or reindexed.
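
Such a rebuild after a binary upgrade can, for example, be performed for an entire database with the reindexdb utility (a sketch; the database name is an example):

# rebuild all indexes of the database "mydb" so they use the improved btree format
reindexdb --verbose mydb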

Insert operations in btree indexes are also accelerated by improved locking.

Improvements for pg_checksums

credativ has contributed an extension for pg_checksums that allows enabling or disabling block checksums in stopped PostgreSQL® instances. Previously, this could only be done by recreating the physical data representation of the cluster using initdb.
pg_checksums can now also report its progress on the console via the --progress parameter. The corresponding code contributions come from our colleagues Michael Banck and Bernd Helmle.
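
Usage on a stopped instance could look like this (a sketch; the data directory path is an example):

# enable block checksums on a stopped cluster and report the progress
pg_checksums --enable --progress -D /var/lib/postgresql/12/main
# verify the checksums afterwards
pg_checksums --check -D /var/lib/postgresql/12/main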

Optimizer Inlining of Common Table Expressions

Up to and including PostgreSQL® 11, the PostgreSQL® optimizer was unable to inline common table expressions (also called CTEs or WITH queries). If such an expression was used in a query, the CTE was always evaluated and materialized first before the rest of the query was processed. This resulted in expensive execution plans for more complex CTE expressions. The following generic example illustrates this: a join with a CTE expression that filters all even numbers from a numeric column:

WITH t_cte AS (SELECT id FROM foo WHERE id % 2 = 0) SELECT COUNT(*) FROM t_cte JOIN bar USING(id);

In PostgreSQL® 11, using a CTE always leads to a CTE scan that materializes the CTE expression first:

EXPLAIN (ANALYZE, BUFFERS) WITH t_cte AS (SELECT id FROM foo WHERE id % 2 = 0) SELECT COUNT(*) FROM t_cte JOIN bar USING(id) ;
                                                       QUERY PLAN                                                        
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
 Aggregate  (cost=2231.12..2231.14 rows=1 width=8) (actual time=48.684..48.684 rows=1 loops=1)
   Buffers: shared hit=488
   CTE t_cte
     ->  Seq Scan on foo  (cost=0.00..1943.00 rows=500 width=4) (actual time=0.055..17.146 rows=50000 loops=1)
           Filter: ((id % 2) = 0)
           Rows Removed by Filter: 50000
           Buffers: shared hit=443
   ->  Hash Join  (cost=270.00..286.88 rows=500 width=0) (actual time=7.297..47.966 rows=5000 loops=1)
         Hash Cond: (t_cte.id = bar.id)
         Buffers: shared hit=488
         ->  CTE Scan on t_cte  (cost=0.00..10.00 rows=500 width=4) (actual time=0.063..31.158 rows=50000 loops=1)
               Buffers: shared hit=443
         ->  Hash  (cost=145.00..145.00 rows=10000 width=4) (actual time=7.191..7.192 rows=10000 loops=1)
               Buckets: 16384  Batches: 1  Memory Usage: 480kB
               Buffers: shared hit=45
               ->  Seq Scan on bar  (cost=0.00..145.00 rows=10000 width=4) (actual time=0.029..3.031 rows=10000 loops=1)
                     Buffers: shared hit=45
 Planning Time: 0.832 ms
 Execution Time: 50.562 ms
(19 rows)

This plan first materializes the CTE with a sequential scan and the corresponding filter (id % 2 = 0). No functional index is used here, so this scan is comparatively expensive. The result of the CTE is then joined to the table bar with a hash join using the corresponding join condition. With PostgreSQL® 12, the optimizer now has the ability to inline these CTE expressions without prior materialization. The corresponding optimized plan in PostgreSQL® 12 looks like this:

EXPLAIN (ANALYZE, BUFFERS) WITH t_cte AS (SELECT id FROM foo WHERE id % 2 = 0) SELECT COUNT(*) FROM t_cte JOIN bar USING(id) ;
                                                                QUERY PLAN                                                                 
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
 Aggregate  (cost=706.43..706.44 rows=1 width=8) (actual time=9.203..9.203 rows=1 loops=1)
   Buffers: shared hit=148
   ->  Merge Join  (cost=0.71..706.30 rows=50 width=0) (actual time=0.099..8.771 rows=5000 loops=1)
         Merge Cond: (foo.id = bar.id)
         Buffers: shared hit=148
         ->  Index Only Scan using foo_id_idx on foo  (cost=0.29..3550.29 rows=500 width=4) (actual time=0.053..3.490 rows=5001 loops=1)
               Filter: ((id % 2) = 0)
               Rows Removed by Filter: 5001
               Heap Fetches: 10002
               Buffers: shared hit=74
         ->  Index Only Scan using bar_id_idx on bar  (cost=0.29..318.29 rows=10000 width=4) (actual time=0.038..3.186 rows=10000 loops=1)
               Heap Fetches: 10000
               Buffers: shared hit=74
 Planning Time: 0.646 ms
 Execution Time: 9.268 ms
(15 rows)

The advantage of this method is that there is no initial materialization of the CTE expression; instead, the query is executed directly with a join. This works for all non-recursive CTE expressions that have no side effects (a CTE containing write statements has side effects, for example) and that are referenced only once per query. The old behavior of the optimizer can be forced with the WITH ... AS MATERIALIZED ... directive.

Generated Columns

Generated Columns in PostgreSQL® 12 are materialized columns, which calculate a result based on expressions using existing column values. These are stored with the corresponding result values in the tuple. The advantage is that there is no need to create triggers for subsequent calculation of column values. The following simple example illustrates the new functionality using a price table with net and gross prices:

CREATE TABLE preise(netto numeric,
                    brutto numeric GENERATED ALWAYS AS (netto * 1.19) STORED);
 
INSERT INTO preise VALUES(17.30);
INSERT INTO preise VALUES(225);
INSERT INTO preise VALUES(247);
INSERT INTO preise VALUES(19.15);
 
SELECT * FROM preise;
 netto │ brutto
───────┼─────────
 17.30 │ 20.5870
   225 │  267.75
   247 │  293.93
 19.15 │ 22.7885
(4 rows)

The column brutto is calculated directly from the net price. The keyword STORED is mandatory. Indexes can also be created on Generated Columns, but they cannot be part of a primary key. Furthermore, the SQL expression must be immutable, i.e. it must always return the same result for the same input values. Columns declared as Generated Columns cannot be used explicitly in INSERT or UPDATE operations; if a column list is absolutely necessary, the corresponding value can be indirectly referenced with the keyword DEFAULT.

Omission of explicit OID columns

Explicit OID columns have historically been a way to create unique column values so that a table row can be uniquely identified database-wide. For a long time, however, PostgreSQL® has only created these columns when explicitly requested and has considered their functionality obsolete. With PostgreSQL® 12, the possibility to create such columns explicitly is finally removed, which means that the WITH OIDS directive can no longer be specified for tables. System tables that have always referenced OID objects uniquely now return OID values without OID columns having to be specified explicitly in the result set. Especially older software that handled catalog queries carelessly could run into problems with a duplicate column in the output.

Moving recovery.conf to postgresql.conf

Up to and including PostgreSQL® 11, database recovery and streaming replication instances were configured via a separate configuration file recovery.conf.

With PostgreSQL® 12, all configuration work done there moves to postgresql.conf. The recovery.conf file is no longer required, and PostgreSQL® 12 refuses to start as long as this file exists. Whether recovery or a streaming standby is desired is now decided either by a recovery.signal file (for recovery) or by a standby.signal file (for standby systems); the latter has priority if both files are present. The old parameter standby_mode, which used to control this behavior, has been removed.

For automatic deployments of high-availability systems this means a major change. However, the corresponding configuration can now also be performed almost completely with the ALTER SYSTEM command.
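
For a streaming standby, the new setup could look like the following sketch (host name and paths are examples); the parameters themselves keep their names, they simply live in postgresql.conf now:

# parameters that used to live in recovery.conf are now ordinary settings in postgresql.conf:
#   primary_conninfo = 'host=primary.example.com user=replication'
#   restore_command  = 'cp /var/lib/postgresql/archive/%f %p'
# an empty signal file selects the mode (standby.signal wins if both files exist)
touch /var/lib/postgresql/12/main/standby.signal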

REINDEX CONCURRENTLY

With PostgreSQL® 12 there is now a way to recreate indexes with as few locks as possible. This greatly simplifies one of the most common maintenance tasks in very write-intensive databases. Previously, a combination of CREATE INDEX CONCURRENTLY and DROP INDEX CONCURRENTLY had to be used, and care had to be taken to reassign the original index names afterwards if required.
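
The new syntax could be used as follows (a sketch; database, index and table names are examples):

# rebuild a single index without blocking concurrent writes for long
psql -d mydb -c "REINDEX INDEX CONCURRENTLY foo_id_idx"
# or rebuild all indexes of a table at once
psql -d mydb -c "REINDEX TABLE CONCURRENTLY foo"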

The release notes give an even more detailed overview of all new features and above all incompatibilities with previous PostgreSQL® versions.

From today, credativ is part of the OpenChain community and adheres to the specifications published by the OpenChain Project.

“The addition of credativ brings another important Open Source pillar into the OpenChain community and boosts our mission to make Open Source Software compliance easier, more understandable and more transparent,” says Shane Coughlan, OpenChain Program Manager. “credativ is one of the early supporters of Free and Open Source Software in our community. They have engaged at a unique time. We are looking forward to working closely together to channel influence in the European, American and Indian market as we scale OpenChain self-certification.”

“It is an important step for us to join the OpenChain community,” says Dr Michael Meskes, CEO of credativ. “Through our extensive experience with Open Source in business, we work with a lot of different companies and know how to improve upon the positive impact that Open Source provides, not only in technical but also compliance issues. Therefore we would like to further extend the reputation of the OpenChain Project by expressing adherence to their specifications. We think that this leads to more and more companies gaining trust and confidence in Free and Open Source Software. One of the key aspects of the OpenChain certification is, that it builds trust, which, already rooted in the core principles of Free and Open Source Software, thusly provides an essential contribution to the general acceptance of Open Source.“

The OpenChain Project identifies key recommended processes for effective Open Source Software component management. Among other things, these include having a general Free and Open Source Software policy, regular training for all developers and technical software staff, and a definition of what constitutes an acceptable standard for Open Source licenses and for contributions to projects in general. In terms of use cases, the standard covers topics such as the distribution of software in binary or source form, the modification or integration of Open Source Software components, and the interaction of different licenses.

Thus the project builds trust in Free and Open Source Software by making Open Source license compliance simpler and more consistent.

The OpenChain Specification defines a core set of requirements every quality compliance program must satisfy. The OpenChain Curriculum provides the educational foundation for Open Source processes and solutions, whilst meeting a key requirement of the OpenChain Specification. OpenChain Conformance allows organizations to display their adherence to these requirements.

The result is that Open Source license compliance becomes more predictable, understandable and efficient for participants of the software supply chain.

Organizations of all sizes are invited to review the OpenChain Project, to complete the free Online Self-Certification Questionnaire, and to join the OpenChain community of trust.

About credativ:

credativ is an independent consulting and services company operating in Germany, Spain, India, the Netherlands and the USA. Since its founding in 1999, credativ has been offering comprehensive services and technical support for the implementation and operation of Open Source Software in business applications. For more information, visit https://www.credativ.com

About The Linux Foundation:

The Linux Foundation is the organization of choice for the world’s top developers and companies to build ecosystems that accelerate open technology development and commercial adoption. Together with the worldwide open source community, it is solving the hardest technology problems by creating the largest shared technology investment in history. Founded in 2000, The Linux Foundation today provides tools, training and events to scale any open source project, which together deliver an economic impact not achievable by any one company. More information can be found at https://www.linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

OpenChain Blogpost: https://www.openchainproject.org/news/2017/08/07/openchain-welcomes-credativ