Containerization
Containerization has significantly changed the modern IT landscape. While monolithic applications shaped IT in the past, in recent years so-called microservices have played an increasingly important role in companies’ IT infrastructures.
The rise of distributed systems in particular means that applications are no longer developed and operated as a single unit, but instead consist of a large number of independent services and components.
The advantage is obvious: services based on a microservices architecture scale better and tolerate failures more easily, because the individual microservices run independently and communicate with each other via well-defined interfaces, so that the failure of one service does not bring down the entire system. Requirements such as scalability and high availability therefore make the microservices architecture attractive not only for global players such as Netflix, Amazon, or Google, but also for a growing number of large and mid-sized companies. As a result, the need to migrate in-house services and infrastructures to a microservices architecture has risen enormously in recent years, driven by the advantages of this architecture and by end customers’ availability requirements.
Containerization plays a decisive role in implementing a microservices architecture. Applications or components no longer run on physical or virtual servers, but in so-called containers. Instead of providing a full virtual operating system for each application, a container bundles only the application itself and the libraries it needs, while sharing the host’s kernel. What began many years ago with LXC and Docker containers has since evolved into complex architectures based on technologies such as Kubernetes or OpenShift.
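As a rough illustration of this principle, a container image for a small service might be described as follows. This is only a sketch: the base image, package, and binary name are placeholders, not taken from any specific project.

```dockerfile
# A container image contains only a slim userland, the runtime libraries
# the application needs, and the application itself -- no full guest
# operating system as with a virtual machine.
FROM debian:bookworm-slim

# Install just the libraries the application depends on
# (libssl3 is a placeholder example).
RUN apt-get update && apt-get install -y --no-install-recommends \
        libssl3 \
    && rm -rf /var/lib/apt/lists/*

# Copy the (hypothetical) application binary into the image.
COPY my-service /usr/local/bin/my-service

# The container runs a single process: the application.
ENTRYPOINT ["/usr/local/bin/my-service"]
```

Built with `docker build` (or any OCI-compatible tool such as buildah or podman), the resulting image is typically a few tens of megabytes rather than the gigabytes of a full virtual machine disk.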
Taking this step, or dealing with such complex architectures, presents many companies with major challenges. Constantly changing conditions and a steady stream of new projects in this area also make it difficult for companies and their employees to keep pace. It therefore helps to have a strong partner such as credativ at your side, one that has already supported customers successfully in numerous projects, from migrating in-house applications to a microservices architecture to building entire infrastructures based on technologies such as Kubernetes.
Customers benefit not only from our understanding of containerization, the required architecture, and the design of an infrastructure appropriate to the intended use case, but also from our expertise in adjacent areas such as DevOps and open source. Many of the technologies in the container ecosystem are open-source projects, and with our many years of experience with them we support you not only with containerization itself, but also with the many services built on top of it.
In addition, our colleagues continuously further their education at conferences and trade fairs in order to keep pace with rapid change. With this knowledge, we help our customers plan and build container platforms, adapt and integrate build and deployment pipelines, conduct architecture reviews in joint workshops, and assist with support cases and the operation of such infrastructures.
Examples
Below you will find two concrete examples intended to give you a better insight into our past projects:
In past projects, our colleagues have successfully worked with customers to redesign customer-specific applications that were originally built to run on physical servers, and to port them to container platforms. Throughout, they supported the customer’s developers with knowledge of the necessary adjustments and application requirements, enabling them to prepare their applications for operation on a container platform.
The containerization of the applications is handled entirely by our colleagues and implemented in close coordination with the customer. Required build pipelines are agreed in meetings and, if desired, implemented by us independently. Design, planning, and implementation are typically recorded in the customer’s documentation and ticketing systems in order to share the project status openly and transparently with the relevant stakeholders.
During implementation, care is always taken to reuse the customer’s conventions and existing tools, so that the result integrates as consistently as possible into the existing infrastructure, and to introduce new technologies only where necessary. This minimizes the amount of new knowledge required and lowers the onboarding hurdles for the administrators who will operate the platform. New technologies are adopted by weighing actual need, the desire for state-of-the-art tooling, and the onboarding time for employees: even the best container platform is only as useful as the customer’s staff can sensibly use, operate, and fully leverage it.
With technologies such as Cluster API, multi-cluster environments can be created and operated automatically at scale. Clusters can be provisioned on different providers such as Proxmox, VMware vSphere, or hyperscalers (e.g., AWS, Azure, and GCP), and the components required for such a multi-cluster environment can be updated without impacting operations or requiring manual steps. This also makes it easy to implement high availability, for example.
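To make this more concrete, the following is a minimal sketch of what a workload cluster looks like in Cluster API: each cluster is itself just a declarative Kubernetes object, and switching providers largely means swapping the infrastructure reference. The names, namespace, and CIDR block here are illustrative placeholders, and provider API versions vary by provider and release.

```yaml
# Illustrative Cluster API manifest for one workload cluster
# (names and values are placeholders).
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: example-cluster        # hypothetical cluster name
  namespace: clusters
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: example-cluster-control-plane
  infrastructureRef:
    # Point this reference at the provider of your choice,
    # e.g. a vSphere, Proxmox, AWS, or Azure cluster object.
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: VSphereCluster
    name: example-cluster
```

Because whole clusters are described declaratively, creating, scaling, or upgrading many of them becomes a matter of applying and reconciling manifests rather than performing manual installation steps.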
In addition to the Cluster API project, other projects implement such environments automatically, for example kubespray, which is based on Ansible. Using these projects makes it possible to extend existing automation for provisioning multi-cluster environments without introducing new technologies into the company. This is particularly useful when Ansible is already in use in the corporate infrastructure: it reduces the effort required from the customer’s employees and keeps the technology stack to be learned small, which lowers barriers to adoption.
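For teams already familiar with Ansible, a kubespray deployment is driven by an ordinary Ansible inventory. The sketch below shows the general shape of such an inventory; the host names and addresses are placeholders, and group names should be checked against the kubespray version in use.

```yaml
# Illustrative kubespray-style inventory (hosts.yaml); host names
# and IP addresses are hypothetical.
all:
  hosts:
    node1:
      ansible_host: 10.0.0.11
    node2:
      ansible_host: 10.0.0.12
    node3:
      ansible_host: 10.0.0.13
  children:
    kube_control_plane:       # nodes running the Kubernetes control plane
      hosts:
        node1:
    kube_node:                # worker nodes
      hosts:
        node2:
        node3:
    etcd:                     # etcd cluster members
      hosts:
        node1:
```

The cluster is then rolled out with a standard Ansible run such as `ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml`, so existing Ansible conventions, vaults, and CI integrations can be reused as they are.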
In past projects, our employees have successfully worked with customers to design, build, and operate multi-cluster environments. Where desired, the degree of automation was kept as high as possible in order to minimize the effort of building and operating these environments. Nevertheless, we always take a pragmatic approach: it is not worth automating every step, especially steps that are not recurring and only need to be performed once. Thanks to this approach, and because the implementation effort has steadily decreased in recent years, such multi-cluster environments have also become attractive for small and mid-sized companies, while the benefits that a microservices architecture brings to an organization’s own IT infrastructure have steadily increased the demand for them in the corporate context.