Showing results for tags 'microservices'.

Found 24 results

  1. Feature flags are commonly used constructs and have been around for a while. But in the last few years things have evolved, and feature flags now play a major role in delivering continuous, low-risk releases. In general, when a new feature is not fully developed and we still want to branch off a release from the mainline, we can hide the new feature and toggle it off in production. Another use case is releasing a feature to only a small percentage of users: we set the feature 'on' for one segment or geography and 'off' for the rest of the world. The ability to toggle a feature on and off without a source code change gives developers an extra edge to experiment with conflicting features on live traffic. Let us dive deeper into feature flags and an example implementation in Spring Boot.

Things to consider when introducing a new feature flag:

  • Establish a consistent naming convention across applications, so that the purpose of each feature flag is easily understood by other developers and product teams.
  • Decide where to maintain feature flags:
    • In the application property file: toggle features based on environment. Useful for experimenting in development while keeping features off in production.
    • In a configuration server or vault: imagine you are tired after a late-night release, and your ops team calls you at 4 am to tell you the new feature is creating red alerts everywhere in your monitoring tools. Here the feature toggle comes to your rescue: turn the feature 'off' in the config server and restart only the compute pods.
    • In a database or cache: when flag values are read from a database or an external cache system like Redis, you don't have to redeploy or restart your compute; values are read dynamically from the source at regular intervals, and pods pick up updated values without any restart.

You can also explore open-source or third-party SDKs built for feature flags; a handful of them are already on the market, and they come with additional advantages that help with lifecycle management of feature flags. View the full article
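The toggle-plus-rollout idea in item 1 can be sketched without any framework. Below is a minimal Python illustration of a flag store with a global switch, a segment/geography override, and a deterministic percentage rollout; the flag names, segments, and percentages are invented for the example, and the post's own implementation is in Spring Boot:

```python
import hashlib

class FeatureFlags:
    """Minimal in-memory feature-flag store.

    `flags` maps a flag name to a config dict:
      enabled  - global on/off switch
      segments - set of segment names for which the flag is forced on
      percent  - rollout percentage (0-100) applied to remaining users
    """
    def __init__(self, flags):
        self.flags = flags

    def is_enabled(self, name, user_id="", segment=""):
        cfg = self.flags.get(name)
        if cfg is None:
            return False                      # unknown flags default to off
        if not cfg.get("enabled", False):
            return False                      # globally toggled off
        if segment in cfg.get("segments", set()):
            return True                       # segment/geography override
        # Deterministic percentage rollout: hash the user id into a 0-99 bucket,
        # so a given user always gets the same answer for a given flag.
        bucket = int(hashlib.sha256(f"{name}:{user_id}".encode()).hexdigest(), 16) % 100
        return bucket < cfg.get("percent", 0)

# Hypothetical flag configuration, standing in for a property file or config server.
flags = FeatureFlags({
    "new-checkout": {"enabled": True, "segments": {"emea"}, "percent": 10},
    "dark-mode":    {"enabled": False},
})

print(flags.is_enabled("new-checkout", user_id="u1", segment="emea"))  # True: segment override
print(flags.is_enabled("dark-mode", user_id="u1"))                     # False: toggled off globally
```

Swapping the in-memory dict for values polled from a config server or Redis gives the dynamic-update behavior the post describes, without redeploying.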
  2. In the ever-evolving software delivery landscape, containerization has emerged as a transformative force, reshaping how organizations build, test, deploy, and manage their applications. Whether you are maintaining a monolithic legacy system, navigating the complexities of Service-Oriented Architecture (SOA), or orchestrating your digital strategy around application programming interfaces (APIs), containerization offers a pathway to increased efficiency, resilience, and agility. In this post, we’ll debunk the myth that containerization is solely the domain of microservices by exploring its applicability and advantages across different architectural paradigms.

Containerization across architectures

Although containerization is commonly associated with microservices architecture because of its agility and scalability, its potential extends far beyond, offering compelling benefits to a variety of architectural styles. From the tightly integrated components of monolithic applications to the distributed nature of SOA and the strategic approach of API-led connectivity, containerization stands as a universal tool, adaptable and beneficial across the board. Beyond the immediate benefits of improved resource utilization, faster deployment cycles, and streamlined maintenance, the true value of containerization lies in its ability to ensure consistent application performance across varied environments. This consistency is a cornerstone for reliability and efficiency, pivotal in today’s fast-paced software delivery demands. Here, we will provide examples of how this technology can be a game-changer for your digital strategy, regardless of your adopted style. Through this exploration, we invite technology leaders and executives to broaden their perspective on containerization, seeing it not just as a tool for one architectural approach but as a versatile ally in the quest for digital excellence.

1. Event-driven architecture

Event-driven architecture (EDA) represents a paradigm shift in how software components interact, pivoting around the concept of events — such as state changes or specific action occurrences — as the primary conduit for communication. This architectural style fosters loose coupling, enabling components to operate independently and react asynchronously to events, thereby augmenting system flexibility and agility. EDA’s intrinsic support for scalability, by allowing components to address fluctuating workloads independently, positions it as an ideal candidate for handling dynamic system demands.

Within the context of EDA, containerization emerges as a critical enabler, offering a streamlined approach to encapsulate applications alongside their dependencies. This encapsulation guarantees that each component of an event-driven system functions within a consistent, isolated environment — a crucial factor when managing components with diverse dependency requirements. Containers’ scalability becomes particularly advantageous in EDA, where fluctuating event volumes necessitate dynamic resource allocation. By deploying additional container instances in response to increased event loads, systems maintain high responsiveness levels. Moreover, containerization amplifies the deployment flexibility of event-driven components, ensuring consistent event generation and processing across varied infrastructures (Figure 1). This adaptability facilitates the creation of agile, scalable, and portable architectures, underpinning the deployment and management of event-driven components with a robust, flexible infrastructure. Through containerization, EDA systems achieve enhanced operational efficiency, scalability, and resilience, embodying the principles of modern, agile application delivery.

Figure 1: Event-driven architecture.

2. API-led architecture

API-led connectivity represents a strategic architectural approach focused on the design, development, and management of APIs to foster seamless connectivity and data exchange across various systems, applications, and services within an organization (Figure 2). This methodology champions a modular and scalable framework ideal for the modern digital enterprise. The principles of API-led connectivity — centering on system, process, and experience APIs — naturally harmonize with the benefits of containerization. By encapsulating each API within its container, organizations can achieve unparalleled modularity and scalability. Containers offer an isolated runtime environment for each API, ensuring operational independence and eliminating the risk of cross-API interference. This isolation is critical, as it guarantees that modifications or updates to one API can proceed without adversely affecting others, which is a cornerstone of maintaining a robust API-led ecosystem.

Moreover, the dual advantages of containerization — ensuring consistent execution environments and enabling easy scalability — align perfectly with the goals of API-led connectivity. This combination not only simplifies the deployment and management of APIs across diverse environments but also enhances the resilience and flexibility of the API infrastructure. Together, API-led connectivity and containerization empower organizations to develop, scale, and manage their API ecosystems more effectively, driving efficiency and innovation in application delivery.

Figure 2: API-led architecture.

3. Service-oriented architecture

Service-oriented architecture (SOA) is a design philosophy that emphasizes the use of discrete services within an architecture to provide business functionalities. These services communicate through well-defined interfaces and protocols, enabling interoperability and facilitating the composition of complex applications from independently developed services.
SOA’s focus on modularity and reusability makes it particularly amenable to the benefits offered by containerization. Containerization brings a new dimension of flexibility and efficiency to SOA by encapsulating these services into containers. This encapsulation provides an isolated environment for each service, ensuring consistent execution regardless of the deployment environment. Such isolation is crucial for maintaining the integrity and availability of services, particularly in complex, distributed architectures where services must communicate across different platforms and networks.

Moreover, containerization enhances the scalability and manageability of SOA-based systems. Containers can be dynamically scaled to accommodate varying loads, enabling organizations to respond swiftly to changes in demand. This scalability, combined with the ease of deployment and rollback provided by container orchestration platforms, supports the agile delivery and continuous improvement of services. The integration of containerization with SOA essentially results in a more resilient, scalable, and manageable architecture. It enables organizations to leverage the full potential of SOA by facilitating faster deployment, enhancing performance, and simplifying the lifecycle management of services. Together, SOA and containerization create a powerful framework for building flexible, future-proof applications that can adapt to the evolving needs of the business.

4. Monolithic applications

Contrary to common perceptions, monolithic applications stand to gain significantly from containerization. This technology can encapsulate the full application stack — including the core application, its dependencies, libraries, and runtime environment — within a container. This encapsulation ensures uniformity across various stages of the development lifecycle, from development and testing to production, effectively addressing the infamous ‘it works on my machine’ challenge.
Such consistency streamlines the deployment process and simplifies scaling efforts, which is particularly beneficial for applications that need to adapt quickly to changing demands. Moreover, containerization fosters enhanced collaboration among development teams by standardizing the operational environment, thereby minimizing discrepancies that typically arise from working in divergent development environments. This uniformity is invaluable in accelerating development cycles and improving product reliability.

Perhaps one of the most strategic benefits of containerization for monolithic architectures is the facilitation of a smoother transition to microservices. By containerizing specific components of the monolith, organizations can incrementally decompose their application into more manageable, loosely coupled microservices. This approach not only mitigates the risks associated with a full-scale migration but also allows teams to gradually adapt to microservices’ architectural patterns and principles. Containerization presents a compelling proposition for monolithic applications, offering a pathway to modernization that enhances deployment efficiency, operational consistency, and the flexibility to evolve toward a microservices-oriented architecture. Through this lens, containerization is not just a tool for new applications but a bridge that allows legacy applications to step into the future of software development.

Conclusion

The journey of modern software development, with its myriad architectural paths, is markedly enhanced by the adoption of containerization. This technology transcends architectural boundaries, bringing critical advantages such as isolation, scalability, and portability to the forefront of application delivery. Whether your environment is monolithic, service-oriented, event-driven, or API-led, containerization aligns perfectly with the ethos of modern, distributed, and cloud-native applications.
By embracing the adaptability and transformative potential of containerization, you can open your architectures to a future where agility, efficiency, and resilience are not just aspirations but achievable realities. Begin your transformative journey with Docker Desktop today and redefine what’s possible within the bounds of your existing architectural framework.

Learn more

  • How to Stuff Monolithic Applications Into a Container (DockerCon 2023)
  • Subscribe to the Docker Newsletter.
  • Explore Docker Desktop.
  • Join the Docker community.
  • Skill up with Docker training.

View the full article
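The event-driven section of item 2 describes components that react independently and asynchronously to events. That loose coupling can be sketched in a few lines of Python; the event and component names below are invented for illustration:

```python
from collections import defaultdict

class EventBus:
    """Tiny synchronous publish/subscribe dispatcher.

    Producers emit named events; subscribers register callbacks and are
    invoked independently, so components stay loosely coupled: the producer
    never knows who (if anyone) is listening.
    """
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
audit_log = []

# Two independent components reacting to the same hypothetical event.
bus.subscribe("order.created", lambda e: audit_log.append(f"audit:{e['id']}"))
bus.subscribe("order.created", lambda e: audit_log.append(f"email:{e['id']}"))

bus.publish("order.created", {"id": 42})
print(audit_log)  # ['audit:42', 'email:42']
```

In a containerized EDA deployment, each subscriber would run in its own container and the bus would be a real broker (e.g., Kafka or RabbitMQ), so additional consumer instances can be scaled out as event volume grows.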
  3. Building scalable systems using microservices architecture is a strategic approach to developing complex applications. Microservices allow teams to deploy and scale parts of their application independently, improving agility and reducing the complexity of updates and scaling. This step-by-step guide outlines the process of creating a microservices-based system, complete with detailed examples.

1. Define Your Service Boundaries

Objective: Identify the distinct functionalities within your system that can be broken down into separate, smaller services. View the full article
  4. So far in our series on modern microservices, we have:

  • Built a simple gRPC service
  • Added a REST/HTTP interface exposing the gRPC service RESTfully, with a glimpse of the gRPC plugin universe
  • Introduced Buf.build to simplify plugin management

We are still far from productionizing our service. A production-ready service would (at the very least) need several things: View the full article
  5. In the dynamic world of microservices architecture, efficient service communication is the linchpin that keeps the system running smoothly. To maintain the reliability, security, and performance of your microservices, you need a well-structured service mesh. This dedicated infrastructure layer is designed to cater to service-to-service communication, offering essential features like load balancing, security, monitoring, and resilience. In this comprehensive guide, we’ll delve into the world of service meshes and explore best practices for their effective management within a microservices environment... View the full article
  6. IT teams have been observing applications for their health and performance since the beginning. They observe the telemetry data (logs, metrics, traces) emitted from the application/microservice using various observability tools and make informed decisions regarding scaling, maintaining, or troubleshooting applications in the production environment. If observability is nothing new and there is a plethora of monitoring and observability tools available in the market, why bother with OpenTelemetry? What makes it so special that it is being widely adopted? And most importantly, what is in it for developers, DevOps, and SRE folks? View the full article
  7. Enterprises these days have microservices distributed across a variety of environments — on-prem, cloud, containers, VMs, and more. Applications/services in such a heterogeneous system typically communicate with each other for various purposes, like data sharing. This setup poses multiple security concerns for DevOps folks and architects, the primary one being ensuring proper authentication and establishing trust in service-to-service communication. And that is the tricky part. View the full article
  8. In this blog post, we will explore how ITS improved speed to market, business agility, and performance, by modernizing their air travel search engine. We’ll show how they refactored their monolith application into microservices, using services such as Amazon Elastic Container Service (ECS), Amazon ElastiCache for Redis, and AWS Systems Manager... View the full article
  9. Microservices architecture has become extremely popular in recent years because it allows complex applications to be created as a collection of discrete, independent services, improving scalability, flexibility, and resilience. The distributed nature of microservices, however, presents special difficulties for testing and quality control, so comprehensive testing is essential to guarantee the reliability and scalability of the software. In this thorough guide, we’ll delve into the world of microservices testing and examine its significance, methodologies, and best practices to guarantee the smooth operation of these interconnected parts. View the full article
  10. Microservices architecture has revolutionized modern software development, offering unparalleled agility, scalability, and maintainability. However, effectively implementing microservices necessitates a deep understanding of best practices to harness their full potential while avoiding common pitfalls. In this comprehensive guide, we will delve into the key best practices for microservices, providing detailed insights into each aspect... View the full article
  11. Let's walk through a detailed, step-by-step process, with code, for building a comprehensive API Gateway using YARP in ASP.NET Core. We'll consider a simplified scenario with two microservices, UserService and ProductService; the API Gateway will route requests to these services based on the path. Create two separate ASP.NET Core Web API projects for UserService and ProductService. Use the following commands... View the full article
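Item 11 builds its gateway with YARP in ASP.NET Core, but the core routing idea (match the longest registered path prefix, then forward to that upstream) can be sketched framework-free. The service names and URLs below are hypothetical, not taken from the post:

```python
class ApiGateway:
    """Minimal path-prefix router, mimicking what a reverse proxy like
    YARP does: match the longest registered prefix and rewrite the
    request to the corresponding upstream service."""
    def __init__(self):
        self.routes = {}  # path prefix -> upstream base URL

    def add_route(self, prefix, upstream):
        self.routes[prefix] = upstream

    def resolve(self, path):
        # Longest-prefix match, so "/users/42" picks the "/users" route
        # even if a shorter overlapping prefix is also registered.
        matches = [p for p in self.routes if path.startswith(p)]
        if not matches:
            return None                       # no route: the gateway would return 404
        best = max(matches, key=len)
        return self.routes[best] + path[len(best):]

gw = ApiGateway()
gw.add_route("/users", "http://user-service:8080")        # hypothetical upstreams
gw.add_route("/products", "http://product-service:8080")

print(gw.resolve("/users/42"))      # http://user-service:8080/42
print(gw.resolve("/products/7"))    # http://product-service:8080/7
print(gw.resolve("/orders/1"))      # None (no matching route)
```

A real gateway layers authentication, retries, and load balancing on top of this lookup, but the prefix-to-upstream mapping is the essential mechanism.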
  12. In a Kubernetes environment, a recent pod scheduling failure occurred due to a specific configuration. Which Kubernetes resource type, often associated with node constraints, might have caused this failure, especially if it wasn’t defined correctly?

  • NodeSelector
  • ResourceQuota
  • PriorityClass
  • Taint
  • PodDisruptionBudget

201 people answered the question, and their answers are reflected in the chart below. The […] View the full article
  13. In 2022, we published Let’s Architect! Architecting microservices with containers. We covered integration patterns and some approaches for implementing microservices using containers. In this Let’s Architect! post, we want to drill down into microservices only, focusing on the main challenges that software architects and engineers face while working on large distributed systems structured as a set of independent services. There are many considerations to cover in detail within a broad topic like microservices. We should reflect on the organizational structure, automation pipelines, multi-account strategy, testing, communication, and many other areas. With this post, we dive deep into the topic by analyzing the options for discoverability and connectivity available through Amazon VPC Lattice; then, we focus on architectural patterns for communication, mainly on asynchronous communication, as it fits very well into the paradigm. Finally, we explore how to work with serverless microservices and analyze a case study from Amazon, coming directly from the Amazon Builder’s Library... View the full article
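Item 13 emphasizes asynchronous communication between microservices. A minimal sketch of why it decouples services, using an in-process queue as a stand-in for a real message broker (all names are illustrative):

```python
import queue
import threading

# The queue stands in for a message broker: the producer returns immediately
# after enqueueing, and the consumer service processes messages at its own pace.
broker = queue.Queue()
processed = []

def consumer():
    """A hypothetical downstream service draining the queue."""
    while True:
        msg = broker.get()
        if msg is None:              # sentinel value: shut down cleanly
            break
        processed.append(f"handled:{msg}")
        broker.task_done()

worker = threading.Thread(target=consumer)
worker.start()

# The producer does not wait for (or even know about) the consumer.
for i in range(3):
    broker.put(i)

broker.put(None)                     # signal shutdown
worker.join()
print(processed)  # ['handled:0', 'handled:1', 'handled:2']
```

With a managed broker (e.g., Amazon SQS) in place of the in-process queue, the producer and consumer become independently deployable and scalable services, which is the property the post is after.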
  14. For many companies today, containers and microservices are both becoming a normal part of the industry landscape. According to a global survey put out by Statista in 2021, 19% of enterprise organizations today say they are already utilizing containers to achieve their business goals, while 92% of respondents claim microservices to be a success factor. […] The post How Can Containers Help You Use Microservices in DevOps? appeared first on DevOps.com. View the full article
  15. With rigorous development and pre-production testing, your microservices will perform as they should. However, microservices need to be continuously tested against actual end-user activity to adapt the application to changing preferences and requests. This article will cover five deployment strategies that will help developers and DevOps teams when releasing new features or making changes to […] The post 5 Testing Strategies For Deploying Microservices appeared first on DevOps.com. View the full article
  16. Microservices and microapps are both core components of today's application development. Take a look at what makes them similar, and get started on how to monitor these components effectively. View the full article
  17. GitOps is a term that has become very popular in the last few years and is easily on its way to becoming just as overloaded with myth and mystery as DevOps. In this series of articles, we will present the principles and practices of GitOps, explaining the why and how of the automated processes that aim to deliver secure, high-quality, microservice-based applications quickly and efficiently. In part 1 of the series, we introduced the main concepts of GitOps, together with the open source automation technologies Tekton and ArgoCD. These tools operate on the Red Hat OpenShift platform to deliver a cloud-native continuous integration and continuous delivery process. The first article also gave an indicative structure for the Git repository and ArgoCD applications that can create a secure and audited process for delivery to production. This article will continue the series by explaining how container images produced during the continuous integration phase can be successfully managed and used within the continuous delivery phase. View the full article
  18. In cloud native computing, the applications are expected to be resilient, loosely coupled, scalable, manageable and observable. Because of containerization, there is a proliferation of microservices and they ship quickly. Microservices environments are more dynamic. In such an environment, making applications resilient means deploying the applications in a fault tolerant manner, but it also means […] The post LitmusChaos Enhances Developer Experience for Cloud Native Reliability appeared first on DevOps.com. View the full article
  19. Cortex, a provider of a platform for tracking ownership of microservices, this week announced its platform can now import services from the GitLab continuous integration/continuous delivery (CI/CD) platform. Anish Dhar, Cortex CEO, said the company’s platform is now integrated with more than 30 tools that are regularly employed by DevOps teams, including offerings from GitLab, […] The post Cortex Taps GitLab to Help DevOps Teams Manage Microservices appeared first on DevOps.com. View the full article
  20. Developing software applications that follow microservice architecture patterns has become the de facto standard for greenfield projects. In addition, migrating from monoliths to microservices has become a trend. View the full article
  21. Microservices have recently gained in popularity, but you may be unsure whether this architecture is right for your environment. What’s great is microservices are not necessarily a new beast, as the concepts behind them have been a solid part of software development for decades. Topics such as modular programming, separation of concerns and service-oriented architecture […] The post 6 Advantages of Microservices appeared first on DevOps.com. View the full article
  22. Company Builds on Customer Success and AWS Marketplace Offerings Allowing Direct Purchase of SaaS Offering with an AWS Account Chicago – October 13, 2020 – Instana, a leading provider of real-time application performance management (APM) and Observability solutions for cloud-native microservice applications, today announced that the company has achieved Advanced Technology Partner status in […] The post Instana Achieves Advanced Technology Partner Status in the AWS Partner Network and Membership in the APN Global Startup Program appeared first on DevOps.com. View the full article
  23. Conf42.com: Cloud Native 2021. Are you cloud native? Do you love Kubernetes? Have you gotten rid of the monolith and adopted microservices... and then moved back to the monolith? Is your pet dog named Helm, your pet cat Prometheus, or your goldfish Envoy? If any of this applies to you, we need you! Come and talk to like-minded people about all things cloud and cloud native:

  • running in the cloud
  • adopting Kubernetes and related technologies
  • microservices
  • service meshes
  • lessons learned from production failures

Details: https://www.papercall.io/conf42-cloud-native-2021
  24. Microservices and DevOps are both important trends that have become increasingly valuable for enterprises. These practices are designed to offer better agility and efficiency, and DevOps is a key factor in microservice excellence.

How are microservices relevant in DevOps?

DevOps works toward continuous monitoring, testing, and deployment of software. Microservices are inherently modular, as each is intended to perform a single function. Modular software fits easily into the DevOps structure, so incremental changes can be made without difficulty. A single microservice should be easy to upgrade, build, test, deploy, and monitor, and DevOps fits perfectly into a similar structure. If a project employs a microservices-based structure, DevOps speeds up delivery time and quality simultaneously. Moreover, DevOps practices entail breaking large problems into smaller pieces and tackling them one by one as a team. Microservices are thus relevant in DevOps because they let small teams deliver functional changes to the enterprise’s services.

Microservices are all about enhancing the implementation and collaboration of small teams in a relaxed environment. This less complex environment allows continuous delivery pipelines to maintain a steady flow of deployments. Similarly, containerized microservices enable quicker deployment and built-in functionality, allowing new services to be immediately operational on any system. Automated operation enhances the microservice approach and creates a more adaptable, easily scalable environment where deployments are performed rapidly. Combining DevOps and microservices in development and testing will increase both the output of teams and the quality of the services.

DevOps and microservices in agile development

DevOps and microservices share similar organizational structures and development cultures, as well as a common interest in cloud-based infrastructure and automation. Both value development speed and scalability, which fit naturally into agile development. The adoption of agile methods also led to the evolution of concepts that support microservices: continuous integration (CI) and continuous delivery (CD). CD brings changes to production more quickly by using a quality-focused ideology that speeds up the deployment pipeline.

DevOps and microservices: working for change

A microservices-based architecture inevitably leads to change, which is often well received by those building modern applications. These changes allow productivity to increase at an impressive rate and deliver solutions more rapidly to those demanding flexible, scalable applications. Microservices bring several benefits to DevOps, such as:

  • deployability: increased agility, leading to short build, test, and deploy cycles
  • reliability
  • availability: shorter time to deliver a new version
  • scalability
  • modifiability: more flexibility to consume new frameworks, data sources, and other resources
  • manageability: smaller, more independent teams

Conclusion

Microservices bring more productivity to DevOps by supporting a common toolset that can be used for both development and operations. This toolset establishes common terminology and processes for requirements, dependencies, and problems, making it easier for DevOps teams to operate and fix problems. DevOps and microservices work better when they are used together. The post How can Microservices benefit DevOps? appeared first on DevOps Online. View the full article