News and Stories
Read more about us. What is happening in the office, what we are doing outside of the office, what we have achieved during work and more.
Life in IT Operations has changed a lot in recent years. It has come a long way from manually running shell scripts to fully automated DevOps processes, supported by containerized cloud environments. People started to realize that instead of drag-and-drop interfaces, everything can be defined as code, primarily through domain-specific languages (DSLs) like CloudFormation, ARM, or Terraform. This shift led to the term "DevOps" with the implication that, if everything is code, traditional Ops might no longer be needed.
However, developers, accustomed to their familiar programming languages and fast local environments, were often reluctant to adopt unfamiliar DSLs. As a result, Operations teams were pushed to adopt DevOps practices, but with limited success. Today, we're seeing a new trend where programming languages are starting to replace DSLs (e.g. Pulumi, AWS CDK, Dagger), and practices like GitOps and Continuous Deployment are becoming mainstream. The automation of infrastructure has also driven the rise of Microservices, while the Monolith has fallen from grace.
Imagine being tasked with transforming an existing on-premise legacy high-volume e-commerce application into a cloud-based microservices architecture (with 50+ services), all while maintaining the same level of agility and enabling continuous delivery and deployment. If each service is placed in its own Git repository and developed by separate teams, the resulting complexity can quickly become overwhelming, potentially paralyzing the entire delivery process and often leading to project failure. So, how can we tackle this challenge? What strategies can we adopt to manage the complexity effectively? And ultimately, what is the Desired State of IT operations in a microservices world?
The key idea to manage complexity can be traced back to 2004, when Mark Burgess, a theoretical physicist, introduced the concepts of Desired State and Convergent Operators within Promise Theory. Starting with CFEngine and later evolving through systems like Kubernetes (Borg by Google) and GitOps, this idea has shaped modern infrastructure management.
At its core, the approach emphasizes that a complex system can be constructed from simpler, autonomous agents. Each agent operates independently, making promises that may provide value to others. Together, these agents exhibit complex, emergent behaviors. Burgess was inspired by statistical mechanics, where individual atoms, governed by quantum mechanical principles, collectively give rise to complex Emergent Behaviors.
These autonomous agents could be envisioned as self-sufficient, resilient entities capable of adapting quickly to environmental changes or failures. These resilience patterns, or aspects as developers call them, led to the appearance of Service Meshes and Sidecar Proxies in the 2010s. Just as in web development, where popular features introduced by third-party frameworks eventually become part of browser standards, Kubernetes and Service Meshes are evolving in a similar way. This evolution is evident in the emergence of the Gateway API (with constructs like HttpRoute and GatewayClass) and the rise of Sidecarless Meshes. There is also a growing effort to make these agents as environment-agnostic as possible. While achieving this level of abstraction remains a challenge, technologies like WASI (WebAssembly System Interface) offer a promising direction forward.
GitOps principles
- 1. Declarative: a system managed by GitOps must have its Desired State expressed declaratively.
- 2. Versioned and Immutable: desired state is stored in a way that enforces immutability, versioning and retains a complete version history.
- 3. Pulled Automatically: agents automatically pull the desired state declarations from the source.
- 4. Continuously Reconciled: agents continuously observe actual system state and attempt to apply the desired state.
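The fourth principle, continuous reconciliation, can be sketched as a diff-and-apply step: compare the actual state with the desired state pulled from the source, and emit the changes needed to converge. This is a minimal Python illustration of the idea, not how any real controller is implemented:

```python
def reconcile(desired: dict, actual: dict) -> dict:
    """One reconciliation step: return the changes needed to move
    the actual state toward the desired state."""
    changes = {}
    for key, want in desired.items():
        if actual.get(key) != want:
            changes[key] = want
    return changes

# Example: the desired state pulled from Git says the service should run
# 3 replicas of image v2, while the cluster currently runs 2 replicas of v1.
desired = {"image": "shop:v2", "replicas": 3}
actual = {"image": "shop:v1", "replicas": 2}
print(reconcile(desired, actual))  # {'image': 'shop:v2', 'replicas': 3}
```

A real agent runs this step in an endless loop, so drift in the actual state is corrected automatically even when nothing changed in Git.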
It's no coincidence that Git and Kubernetes are a perfect fit with GitOps principles. Kubernetes Custom Resource Definitions (CRDs) serve as ideal representations of the Desired State. Git, with its versioned and immutable commits, provides a reliable source of truth for these CRDs. By sourcing CRDs directly from a Git repository, custom Kubernetes controllers can continuously monitor changes in Git, creating or updating CRDs as needed. These controllers then reconcile the cluster state with the Desired State defined in the CRDs, ensuring the system remains consistent. All of this happens automatically, making GitOps a powerful approach for managing infrastructure and applications declaratively.
In an ideal future, the only responsibility for developers and operators is to update Git repositories. By simply creating a Pull Request, the process of reviewing, quality control, and deployment to production becomes fully automated. This approach promises speed, security, and cost efficiency, allowing bug fixes to reach production in mere minutes. Modern DevOps tools like GitLab, ArgoCD, and Flux are paving the way for this vision.
At Dgital we have built a custom solution leveraging these concepts and tools, enabling canary deployments across more than 50 services. In practice, we had to solve the following challenges:
Git excels at many tasks, including versioning, merging, and auditing. However, frequent automatic updates can often result in conflicts that require manual resolution. As part of implementing GitOps, we had to store certain states directly in CRDs, utilizing their optimistic locking mechanism to manage state consistency.
Editing Kubernetes YAML files directly is not ideal. We found that using frameworks like CDK8s produces better results, but adopting these tools can be challenging for those in daily operations. To address this, we developed an API to facilitate basic editing functions for these files, such as deploying a version, setting Canary to 40%, rolling back, or finalizing deployments. Additionally, we created a custom, user-friendly dashboard to make these API functions even easier to use.
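To give a feel for the shape of such an API, here is a hypothetical sketch: the class, method, and field names below are illustrative only, not our production API, and the state it tracks is reduced to the bare minimum.

```python
class DeploymentApi:
    """Hypothetical facade over generated manifests. Method and field
    names are illustrative, not the actual Dgital API."""

    def __init__(self):
        self.state = {"version": None, "canary_percent": 0, "finalized": False}

    def deploy(self, version: str):
        # Start a new release with no traffic on the canary yet.
        self.state.update(version=version, canary_percent=0, finalized=False)

    def set_canary(self, percent: int):
        if not 0 <= percent <= 100:
            raise ValueError("canary percent must be between 0 and 100")
        self.state["canary_percent"] = percent

    def rollback(self, previous_version: str):
        # Abandon the canary and restore the previous version.
        self.state.update(version=previous_version, canary_percent=0)

    def finalize(self):
        # Route all traffic to the new version and lock the release.
        self.state.update(canary_percent=100, finalized=True)


api = DeploymentApi()
api.deploy("v42")
api.set_canary(40)  # "set Canary to 40%" from the dashboard
```

The dashboard then only needs to call these few operations, so operators never touch the underlying YAML directly.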
Microservices projects often face the challenge of deploying everything at once or requiring developers to manage backward compatibility. These projects are typically stored in one or more monorepos, where handling and testing backward compatibility across multiple services can be extremely difficult, if not impossible.
Monorepo tools like NX, however, can calculate exactly which projects need to be deployed to a given environment. If a tool existed that could "jump" between commits, developers wouldn't need to worry about backward compatibility. Instead, their focus would be solely on maintaining consistency within the repository, which can be ensured through fast, automated tests.
The real challenge arises when implementing Canary or Blue-Green deployments. During a release, traffic is split between the old and new versions, and when only a few services are changed, shared services must be able to differentiate between routing to the new and old versions. We were eventually able to solve this issue by using bucketing through Envoy Lua extensions.
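The bucketing idea itself is simple: a stable hash of some user identifier picks a bucket in [0, 100), and the bucket decides whether the whole request chain goes to the canary. Our production version runs inside Envoy Lua filters; this Python sketch only illustrates the routing decision, and the identifier and bucket count are arbitrary choices.

```python
import hashlib


def route(user_id: str, canary_percent: int) -> str:
    """Map a user to a stable bucket in [0, 100) and route accordingly.
    The same user always lands in the same bucket, so every service in
    the call chain makes the same old-vs-new decision."""
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = int.from_bytes(digest[:4], "big") % 100
    return "canary" if bucket < canary_percent else "stable"


# A given user is routed consistently across repeated requests:
assert route("user-123", 40) == route("user-123", 40)
```

Because the decision is derived from the request rather than stored anywhere, every shared service can recompute it independently and still agree on old versus new.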
Multi-service deployments are often not atomic and can result in user-facing failures that are difficult to diagnose, especially when caused by temporarily unaligned or unavailable services. To manage this complexity, teams typically rely on service meshes like Istio, Linkerd, or Cilium. However, Docker instances can consume significant resources, and service mesh sidecars add even more overhead. While Linkerd offers small and fast sidecars, they are not customizable, whereas Istio provides customization but with sidecars comparable in size to service containers. Fortunately, Linux eBPF capabilities have enabled sidecarless service meshes, with Istio's Ambient mode serving as a prime example of this approach.
Summary
It took several months to set up a working solution that met our original goals. Along the way, we experimented with and discarded many concepts. While this architecture may seem intimidating at first, and developers may initially hesitate to adopt it, the transparency and flexibility it provides in managing infrastructure make it extremely powerful: it can lead to real DevOps, and Ops can transform into Platform Engineers.
As a general guideline, avoid using microservices unless they're absolutely necessary. However, if you can't avoid them, investing in DevOps is essential.
Links:
GitOps: https://medium.com/weaveworks/gitops-operations-by-pull-request-14e8b659b058
GitOps principles: https://opengitops.dev/
NX: https://nx.dev/
Kubernetes Custom Resources: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
AWS CDK: https://aws.amazon.com/cdk/
Cdk8s: https://cdk8s.io/
Flux: https://fluxcd.io/
Istio Ambient mode: https://istio.io/latest/docs/ambient/overview/
We have completed the development of the DeepX IBE Suite, which is not just a booking website but also a fare cache engine and a high-performance middleware layer that provides interfaces to any connected system. DeepX IBE Suite sits between the airline's PSS and its sales channels, handling all required data transformations and custom needs.
This multi-functional solution enables start-up airlines to launch a sales website in only 2 months, saving them from the substantial hassle of developing their own IBE from the ground up.
For mature airlines, the DeepX API and SDK can significantly reduce development and maintenance time in large projects such as PSS version upgrades, travel service provider integrations, kiosk or mobile app introductions, NDC interfaces, and more.
The engine is based on DGITAL's 10+ years of experience in the field and is a result of years of organic airline websites and PSS development for market-leading airlines.
Give us a call, and we will be happy to schedule a demo and present even more features.
We have recently introduced an AWS-based solution to one of our airline partners.
In the fast-paced and competitive airline industry, providing customers with accurate and up-to-date fare information is essential for ensuring a smooth booking experience. Fare caching is a powerful technique that can help airline websites maintain optimal performance and deliver real-time fare data efficiently. This article will explore the benefits and best practices of using fare cache on airline websites, as well as practical examples of its implementation.
Fare caching is the practice of storing flight prices and availability data in a temporary storage to improve the speed and reliability of flight search results on an airline's website. Instead of having to query the airline's reservation system for the latest pricing and availability data every time a customer searches for a flight, the airline stores this data in a cache that can be quickly accessed by the website. This enables airline websites to access and display fare information more quickly, reducing the need for repetitive requests to the PSS and minimizing the risk of slow performance while keeping the information up to date.
Benefits of Using Fare Cache on Airline Websites
Improved performance: By caching fare data, airline websites can deliver information more quickly, resulting in faster load times and a more seamless user experience.
Reduced server load: With fare caching, the number of requests to GDS or other fare sources is significantly reduced, minimizing server load and reducing the risk of downtime or slow performance during peak times.
Enhanced customer experience: Fare cache ensures that customers receive accurate and up-to-date fare information, which is crucial for making informed booking decisions.
Best Practices for Using Fare Cache
Determine an appropriate caching strategy: A fare cache is only effective if the data it contains is accurate and up to date. Airlines should refresh their fare cache regularly so that customers have access to the latest pricing and availability information, while also finding the optimal frequency that still saves system load. The caching duration should therefore strike a balance between data freshness and performance; depending on the airline's booking volume, the refresh frequency can range from every few minutes to once a day.
Implement cache invalidation strategies: Regularly update the fare cache and remove outdated data to maintain accuracy and prevent potential performance issues. This means that when fare prices and availability change, the fare cache should be updated accordingly. This can be done by setting a time limit on how long data can stay in the cache, or by using an event-driven approach that invalidates the cache when specific conditions are met.
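The time-limit approach described above can be sketched as a small TTL cache: each entry remembers when it was stored, and expired entries force a fresh query to the reservation system. This is a minimal illustration only; the route key format and the 10-minute TTL are arbitrary example choices.

```python
import time


class FareCache:
    """Fare entries expire after ttl_seconds; an expired or missing
    entry returns None, signalling a fresh query to the PSS."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.entries = {}  # route key -> (fare, stored_at)

    def put(self, route: str, fare: float):
        self.entries[route] = (fare, time.monotonic())

    def get(self, route: str):
        item = self.entries.get(route)
        if item is None:
            return None
        fare, stored_at = item
        if time.monotonic() - stored_at > self.ttl:
            del self.entries[route]  # expired: invalidate the entry
            return None
        return fare


cache = FareCache(ttl_seconds=600)  # e.g. refresh fares every 10 minutes
cache.put("BUD-LON", 129.99)
```

An event-driven variant would call an explicit invalidation method when fares change, instead of waiting for the TTL to expire.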
Use Multiple Cache Layers: To ensure that the fare cache is responsive and reliable, airlines should use multiple cache layers. The first layer can store the most frequently accessed data, while subsequent layers can store less frequently accessed data. This approach can help to reduce the load on the airline's reservation system while still providing fast and reliable search results for customers.
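One way to picture the layering is a small, fast first layer in front of a larger, colder second layer, with hits in the lower layer promoted upward. This is a simplified sketch under assumed semantics; in practice the layers might be an in-process cache backed by a shared store such as Redis.

```python
class LayeredCache:
    """Look-ups fall through from a bounded, hot first layer (l1) to a
    larger second layer (l2); l2 hits are promoted into l1."""

    def __init__(self, l1_capacity: int):
        self.l1 = {}  # hot entries, bounded size
        self.l2 = {}  # larger, colder store
        self.l1_capacity = l1_capacity

    def put(self, key: str, value):
        self.l2[key] = value

    def get(self, key: str):
        if key in self.l1:
            return self.l1[key]
        value = self.l2.get(key)
        if value is not None:
            if len(self.l1) >= self.l1_capacity:
                # Evict the oldest-inserted l1 entry to make room.
                self.l1.pop(next(iter(self.l1)))
            self.l1[key] = value  # promote the hot entry
        return value
```

Frequently searched routes end up answered entirely from the first layer, while rarely searched ones still avoid a round trip to the reservation system.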
Use smart caching techniques: Implement advanced caching techniques, such as predictive caching, to anticipate user needs and pre-fetch fare data for popular routes or periods.
Monitor cache performance: Regularly analyze cache hit and miss ratios to identify areas for improvement and optimize caching strategies. By monitoring these metrics, airlines can also identify and address any issues with the fare cache before they impact customers.
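Tracking the hit ratio needs nothing more than two counters, as in this sketch of the metric itself (not of any particular monitoring stack):

```python
class CacheStats:
    """Counts cache hits and misses and reports the hit ratio."""

    def __init__(self):
        self.hits = 0
        self.misses = 0

    def record(self, hit: bool):
        if hit:
            self.hits += 1
        else:
            self.misses += 1

    def hit_ratio(self) -> float:
        total = self.hits + self.misses
        return self.hits / total if total else 0.0


stats = CacheStats()
for hit in (True, True, False, True):
    stats.record(hit)
print(stats.hit_ratio())  # 0.75
```

A persistently low ratio suggests the TTL is too short or the cached key space is too wide; a high ratio with stale complaints suggests the opposite.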
Use cases for Fare Cache
Implementing fare cache on an airline website can provide several benefits for both the airline and its customers. Here are some of the key use cases.
Faster Search Results: Fare cache can significantly improve the speed of flight searches on an airline's website. By storing pricing and availability data in a cache, the website can quickly retrieve this information and display it to customers, reducing the time it takes to complete a booking.
Provide Fare Data to Third Parties: Fare cache can be used to provide data to third-party systems like GDSs, metasearch engines, or even ChatGPT, allowing them to search and show accurate prices to their users and generate additional bookings.
Improved Reliability: Fare cache can also improve the reliability of flight search results. By storing data in a cache, the airline can reduce the load on its reservation system, which can help to prevent outages and other performance issues that could impact customers.
Reduced Costs: Implementing fare cache can also help airlines to reduce their costs. By reducing the load on their reservation system, airlines can avoid the need to invest in additional infrastructure to handle peak demand periods. This can help to improve the airline's profitability while still providing high-quality service to customers.
PSS System upgrade to expand multi-channel distribution
Expanding distribution channels is usually on an airline's radar, especially when travellers are more cautious about deciding whether to travel and how much to spend on a leisure holiday. To reach the desired target audience effectively, retail companies need multiple sales channels, because customers have different purchase habits and preferences.
JetSMART announced in April 2022 that it was teaming up with Amadeus to offer its services via the Amadeus Travel Platform. However, to increase the number of sales channels and make the most of the integration, it was a reasonable decision to first upgrade its Passenger Service System (PSS) to the latest version, which scaled the project to the next level.
We supported JetSMART during both projects and delivered the solution successfully, contributing to the growth of its global sales network and giving customers more opportunities to buy tickets, while also modernizing the e-commerce ecosystem with the latest version of the PSS.
Amadeus integration to DEEP has been completed
We are continuously improving our flagship product, the DEEP Travel Reservation System, with provider integrations and functional improvements. Now, we have completed the integration with Amadeus Web Services, which enables DEEP to provide flight deals from over 500 airlines around the world. The integration also makes ticket issuing possible via the API, which allows tour operators to fully automate the flight ticket booking process without any manual interaction. Customers can instantly get their confirmed tickets online. Amadeus also provides ancillary services such as extra bags, extra legroom, and seat maps.
The GDS integration extends DEEP's airline offering, enabling the booking of both dynamic packages and standalone flight tickets. The DEEP Travel Reservation System is a comprehensive online reservation system for tour operators, travel agents, and airlines, managing the booking process of third-party services like accommodation, airline tickets, transfers, insurance, and rental cars, as well as allowing businesses to add their locally contracted travel components.
About Amadeus
Amadeus is one of the top ten travel technology companies in the world, with more than 30 years of experience in the travel industry. Amadeus IT Group is a transaction processor for the global travel and tourism industry, structured around two key related areas: its global distribution system and its IT Solutions business. More about Amadeus here: Amadeus
We are continuously extending the system's features and service provider integrations according to our partners' demands. Hotelbeds, Expedia, and GTA have already been integrated into our dynamic packaging platform, DEEP, and we have recently completed the integration of Go Global Travel.
Go Global is a leading travel technology company offering an innovative accommodation booking platform for B2B clients. Go Global has 13 multilingual offices around the globe which provide 24/7 local-language customer service. With the integrated Go Global API, our clients will be able to offer a seamless travel experience to travellers at the lowest cost across the more than 200,000 hotels and apartments Go Global currently has in its portfolio.