Infrastructure Automation: How DN uses Oracle Cloud, Kubernetes and Terraform for the Graphius Project
A deeper look at how we combined Oracle Cloud, Kubernetes and Terraform to deliver a solid infrastructure automation solution for printing company Graphius.
Optimizing the Ordering Process for Custom Print Work
We’re happy to take you on the journey we experienced with Graphius’ printing group Belprinto, a webshop where people can order all kinds of high-quality print work. Books, catalogs, magazines, flyers, leaflets: you name it. Custom prints in all colors and sizes.
The ordering flow consists of multiple steps. At every stage, the visitor can choose between several options. These choices affect the price, which has to be shown to the customer at all times. Technically, this means the system has to calculate dozens of different prices, and calculate them very quickly. In addition, to enable printing, customers need to be able to upload their custom designs, which are often very large files. When the configuration is complete, users can add the order to their shopping basket and either continue shopping or check out.
Quite a complex process. Once the order is fully completed, the information needs to be processed by the backend, sent to the printing department and finally get confirmed to the customer. Read on to discover how we configured and developed this.
An Interplay of Oracle Cloud, Kubernetes and Terraform
It should be clear that optimizing the ordering process involves both backend and frontend work. We were responsible for everything related to the backend; Nascom implemented the frontend. On top of that, D&N was also in charge of the operations side, which means actually running the application.
Even though the scope of this project sounds simple (configuring and ordering), the sheer amount of data was a tremendous challenge. Since we’re a long-term Oracle partner, we decided to use Oracle Cloud for this. Initially we deployed on Compute Classic in Oracle’s Amsterdam datacenter, configuring a Mesos/Marathon cluster. However, this did not work out as we had hoped: we ran into stability problems and had a hard time setting up multiple replicas for failover in a decent way. Most of these problems were caused not by the software or the Oracle Cloud setup itself, but by some tools we had to use because Compute Classic offers only limited platform services. For a while now, Oracle has also had its “new” OCI (short for Oracle Cloud Infrastructure), which it was rolling out in Frankfurt. Since the Amsterdam datacenter was already scheduled to be terminated, we had to move everything to Frankfurt anyway, so we used the opportunity to migrate to OCI right away. This gave us access to many new features: load balancers, API gateways, managed Kubernetes clusters, improved object storage facilities, and many more.
For a while now, we’ve been using Kubernetes clusters. Not only for internal use, but also for mission-critical customer applications. We like Kubernetes because it’s stable and plays nicely with our CI/CD (Continuous Integration, Delivery, and Deployment) policy, in which we treat “infrastructure as code”. This means we don’t make manual changes to infrastructure; instead, we put everything in version-controlled configuration files, which are then deployed automatically by our CI/CD pipeline tool (we use GitLab for this). That covers running the application.
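To give an idea of what “infrastructure as code” looks like at the Kubernetes level, here is a minimal sketch of a Deployment manifest with multiple replicas for failover. All names, image references and values are illustrative, not the actual Graphius configuration:

```yaml
# Hypothetical Deployment manifest; names, image and replica count are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webshop-backend
spec:
  replicas: 3                 # multiple replicas, so one instance failing is not fatal
  selector:
    matchLabels:
      app: webshop-backend
  template:
    metadata:
      labels:
        app: webshop-backend
    spec:
      containers:
        - name: backend
          image: registry.example.com/webshop-backend:1.0.0
          ports:
            - containerPort: 8080
```

Because this file lives in version control, changing the replica count or the image tag is a commit, not a manual operation on a server.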
In addition, we also wanted to automate the environment those applications run in. For this purpose we used Terraform, a tool that lets companies apply “infrastructure as code” to a plethora of platforms. And since public clouds expose their functionality through APIs, they are an ideal fit for Terraform. Oracle provides a Terraform provider (as such a plugin is called) for both Compute Classic and OCI. From the moment the customer gave us the green light to move to OCI, we started writing Terraform configurations. This allowed us to quickly set up the entire infrastructure, ranging from the network configuration to the storage setup and the Kubernetes cluster itself. All of this goes into our source repository and CI/CD pipeline, which means that any change to the platform configuration is deployed automatically. With some safety checks, of course.
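As a rough sketch, a Terraform configuration for such a setup might define the network and the Kubernetes cluster with the OCI provider. All names, OCIDs and version numbers below are placeholders, not the real project configuration:

```hcl
# Hypothetical sketch using the OCI Terraform provider; all names and values are placeholders.
provider "oci" {
  region = "eu-frankfurt-1"
}

# Virtual cloud network for the application
resource "oci_core_vcn" "webshop" {
  compartment_id = var.compartment_ocid
  cidr_block     = "10.0.0.0/16"
  display_name   = "webshop-vcn"
}

# Managed Kubernetes (OKE) cluster inside that network
resource "oci_containerengine_cluster" "webshop" {
  compartment_id     = var.compartment_ocid
  name               = "webshop-cluster"
  kubernetes_version = "v1.26.2"     # example version
  vcn_id             = oci_core_vcn.webshop.id
}
```

Running `terraform plan` against such files shows exactly what would change before anything is touched, which is what makes them safe to manage through a pipeline.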
Any sensitive information, such as passwords, is encrypted, so it isn’t plainly readable to anybody with access to the repository. Every configuration detail is stored in files (either Terraform .tf files or Kubernetes .yaml files), which are committed to our GitLab repository. When something changes, the build pipeline is triggered automatically: it decrypts the sensitive files, runs validity checks and performs a dry run. The user can then inspect the result to see which changes would be applied, and afterwards trigger the actual application of the changes. This gives us additional security, since we can prevent users from applying changes directly from their own machines. We also get a change history, because the repository keeps track of every change ever made to the configuration files. And we get automation, so we don’t have to worry that some essential file lives only on somebody’s computer where others can’t access it.
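The pipeline described above could be sketched in a `.gitlab-ci.yml` along these lines. Stage names, the decryption command and the exact Terraform invocations are illustrative assumptions, not the actual pipeline:

```yaml
# Hypothetical .gitlab-ci.yml sketch; stage names and commands are illustrative.
stages:
  - validate
  - plan
  - apply

validate:
  stage: validate
  script:
    - ./decrypt-secrets.sh               # placeholder for the decryption step
    - terraform init -backend=false
    - terraform validate                 # validity checks on the configuration

plan:
  stage: plan
  script:
    - ./decrypt-secrets.sh
    - terraform init
    - terraform plan -out=planfile       # dry run: shows which changes would be applied
  artifacts:
    paths:
      - planfile

apply:
  stage: apply
  when: manual                           # a user must explicitly trigger the real change
  script:
    - ./decrypt-secrets.sh
    - terraform init
    - terraform apply planfile           # applies exactly the inspected plan
```

The `when: manual` gate on the apply stage is what lets a reviewer inspect the dry run before any change reaches the infrastructure.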
High-Performing and Robust Infrastructure
Taking into consideration all of the changes and setup details described above, we were able to deliver a very stable and robust environment with plenty of safety checks. These allow us to make changes to the customer’s infrastructure with a significantly lower risk of human error causing problems. And with success: since its launch, the application has achieved almost 100% uptime. Mission accomplished!
Intrigued by our way of working? Interested in one of our Infrastructure Automation solutions? Don’t hesitate to get in touch; our team is ready to help you!
Debreuck Neirynck developed a new application for us for efficient and consistent pricing and planning. But even more important is the smooth way in which they implemented it. They provided a very clear delivery plan, and all our employees were well prepared for the changes to come. Absolutely a fine piece of change management.