Showing results for tags 'microservices'.
-
Data consistency is one of the most important aspects of building and maintaining any application. While multiple architectural patterns exist for building applications, microservices prevails as one of the most widely used software architectures. Microservices architecture enables you to develop e-commerce applications, streaming platforms, and many other applications where different microservices handle […] View the full article
-
There’s been a long debate about whether it’s better to use REST or GraphQL for building microservices. Both technologies have their proponents and critics, but when it comes to the specific needs of microservices architectures, GraphQL emerges as the clear front-runner. Here’s why.
Understanding the RESTful Concerns
While REST has been the go-to API style […] View the full article
-
In a bid to help businesses of all sizes embrace the new AI-driven world, Nvidia has taken the wraps off a new approach to software and microservice access that it says could change everything. The company's Nvidia Inference Microservices, or NIM, offerings look to replace the myriad code and services currently needed to create or run software. Instead, a NIM collates a collection of containerized models and their dependencies into a single package, which can then be distributed and deployed where needed.
NIM-ble
In his keynote speech at the recent Nvidia GTC 2024 event, company CEO Jensen Huang said that the new approach signals a step change for businesses everywhere. "It is unlikely that you'll write it from scratch or write a whole bunch of Python code or anything like that," Huang said. "It is very likely that you assemble a team of AI. This is how we're going to write software in the future."
Huang noted that AI tools and LLMs will likely be a common sight in NIM deployments as companies across the world look to embrace the latest technologies. He gave one example of how Nvidia itself is using a NIM to create an internal chatbot designed to solve common problems encountered when building chips, helping improve knowledge and capabilities across the board.
Nvidia adds that NIMs are built for portability and control, and can be deployed not only in the cloud but also in on-premises data centers and even on local workstations, including its RTX workstations and PCs as well as its DGX and DGX Cloud services. Developers can access AI models through APIs that adhere to the industry standards for each domain, simplifying application development. NIM will be available as part of Nvidia AI Enterprise, the company's new platform and hub for AI services, offering a one-stop shop for businesses to understand and access new tools, with NIM AI use cases spanning LLMs, VLMs, drug discovery, medical imaging, and more. View the full article
-
Feature flags are commonly used constructs and have been around for a while. But in the last few years things have evolved, and feature flags now play a major role in delivering continuous, risk-free releases. In general, when a new feature is not fully developed and we still want to branch off a release from the mainline, we can hide the new feature and toggle it off in production. Another use case is releasing a feature to only a small percentage of users: we set the feature 'on' for one segment or geography and 'off' for the rest of the world. The ability to toggle a feature on and off without a source code change gives developers an extra edge to experiment with conflicting features on live traffic. Let us dive deeper into feature flags and an example implementation in Spring Boot.
Things To Consider When Introducing a New Feature Flag
- Establish a consistent naming convention across applications to make the purpose of each feature flag easily understandable by other developers and product teams.
- Decide where to maintain feature flags:
  - In the application property file: toggle features based on environment. This is useful for experimenting in development while keeping features off in production.
  - In a configuration server or vault: imagine you are tired after a late-night release and your ops team calls you at 4 am to tell you the new feature is creating red alerts everywhere in the monitoring tools. Here the feature toggle comes to your rescue: turn the feature 'off' in the config server and restart only the compute pods.
  - In a database or cache: when flag values are read from a database or an external cache system like Redis, you don't have to redeploy or restart your compute. The values are read dynamically from the source at regular intervals, so pods pick up updated values without a restart.
- You can also explore open-source or third-party SDKs built for feature flags; a handful are already on the market, and they come with additional advantages that help with lifecycle management of feature flags.
View the full article
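To make the property-file option concrete, here is a minimal sketch of a property-backed flag in Spring Boot. The flag name (`features.new-checkout`) and the controller are hypothetical examples, not taken from the article; real projects often wrap this in `@ConditionalOnProperty` or a dedicated flag service instead.

```java
// A minimal sketch of a property-backed feature flag in Spring Boot.
// Assumes a hypothetical flag "features.new-checkout" defined in
// application.yml (e.g. features.new-checkout: false).
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CheckoutController {

    // Injected from the application property file; defaults to 'off'
    // so the new code path stays hidden unless explicitly enabled.
    @Value("${features.new-checkout:false}")
    private boolean newCheckoutEnabled;

    @GetMapping("/checkout")
    public String checkout() {
        if (newCheckoutEnabled) {
            return newCheckoutFlow();   // toggled on for this environment
        }
        return legacyCheckoutFlow();    // default path in production
    }

    private String newCheckoutFlow()    { return "new checkout"; }
    private String legacyCheckoutFlow() { return "legacy checkout"; }
}
```

Flipping `features.new-checkout` to `true` in one environment's property file enables the new path there without any code change.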
-
In the ever-evolving software delivery landscape, containerization has emerged as a transformative force, reshaping how organizations build, test, deploy, and manage their applications. Whether you are maintaining a monolithic legacy system, navigating the complexities of Service-Oriented Architecture (SOA), or orchestrating your digital strategy around application programming interfaces (APIs), containerization offers a pathway to increased efficiency, resilience, and agility. In this post, we’ll debunk the myth that containerization is solely the domain of microservices by exploring its applicability and advantages across different architectural paradigms.
Containerization across architectures
Although containerization is commonly associated with microservices architecture because of its agility and scalability, its potential extends far beyond, offering compelling benefits to a variety of architectural styles. From the tightly integrated components of monolithic applications to the distributed nature of SOA and the strategic approach of API-led connectivity, containerization stands as a universal tool, adaptable and beneficial across the board.
Beyond the immediate benefits of improved resource utilization, faster deployment cycles, and streamlined maintenance, the true value of containerization lies in its ability to ensure consistent application performance across varied environments. This consistency is a cornerstone for reliability and efficiency, pivotal in today’s fast-paced software delivery demands. Here, we will provide examples of how this technology can be a game-changer for your digital strategy, regardless of your adopted style. Through this exploration, we invite technology leaders and executives to broaden their perspective on containerization, seeing it not just as a tool for one architectural approach but as a versatile ally in the quest for digital excellence.
1. Event-driven architecture
Event-driven architecture (EDA) represents a paradigm shift in how software components interact, pivoting around the concept of events — such as state changes or specific action occurrences — as the primary conduit for communication. This architectural style fosters loose coupling, enabling components to operate independently and react asynchronously to events, thereby augmenting system flexibility and agility. EDA’s intrinsic support for scalability, by allowing components to address fluctuating workloads independently, positions it as an ideal candidate for handling dynamic system demands.
Within the context of EDA, containerization emerges as a critical enabler, offering a streamlined approach to encapsulate applications alongside their dependencies. This encapsulation guarantees that each component of an event-driven system functions within a consistent, isolated environment — a crucial factor when managing components with diverse dependency requirements. Containers’ scalability becomes particularly advantageous in EDA, where fluctuating event volumes necessitate dynamic resource allocation. By deploying additional container instances in response to increased event loads, systems maintain high responsiveness levels. Moreover, containerization amplifies the deployment flexibility of event-driven components, ensuring consistent event generation and processing across varied infrastructures (Figure 1). This adaptability facilitates the creation of agile, scalable, and portable architectures, underpinning the deployment and management of event-driven components with a robust, flexible infrastructure. Through containerization, EDA systems achieve enhanced operational efficiency, scalability, and resilience, embodying the principles of modern, agile application delivery.
Figure 1: Event-driven architecture.
2. API-led architecture
API-led connectivity represents a strategic architectural approach focused on the design, development, and management of APIs to foster seamless connectivity and data exchange across various systems, applications, and services within an organization (Figure 2). This methodology champions a modular and scalable framework ideal for the modern digital enterprise. The principles of API-led connectivity — centering on system, process, and experience APIs — naturally harmonize with the benefits of containerization. By encapsulating each API within its own container, organizations can achieve unparalleled modularity and scalability. Containers offer an isolated runtime environment for each API, ensuring operational independence and eliminating the risk of cross-API interference. This isolation is critical, as it guarantees that modifications or updates to one API can proceed without adversely affecting others, which is a cornerstone of maintaining a robust API-led ecosystem.
Moreover, the dual advantages of containerization — ensuring consistent execution environments and enabling easy scalability — align perfectly with the goals of API-led connectivity. This combination not only simplifies the deployment and management of APIs across diverse environments but also enhances the resilience and flexibility of the API infrastructure. Together, API-led connectivity and containerization empower organizations to develop, scale, and manage their API ecosystems more effectively, driving efficiency and innovation in application delivery.
Figure 2: API-led architecture.
3. Service-oriented architecture
Service-oriented architecture (SOA) is a design philosophy that emphasizes the use of discrete services within an architecture to provide business functionalities. These services communicate through well-defined interfaces and protocols, enabling interoperability and facilitating the composition of complex applications from independently developed services. SOA’s focus on modularity and reusability makes it particularly amenable to the benefits offered by containerization.
Containerization brings a new dimension of flexibility and efficiency to SOA by encapsulating these services into containers. This encapsulation provides an isolated environment for each service, ensuring consistent execution regardless of the deployment environment. Such isolation is crucial for maintaining the integrity and availability of services, particularly in complex, distributed architectures where services must communicate across different platforms and networks.
Moreover, containerization enhances the scalability and manageability of SOA-based systems. Containers can be dynamically scaled to accommodate varying loads, enabling organizations to respond swiftly to changes in demand. This scalability, combined with the ease of deployment and rollback provided by container orchestration platforms, supports the agile delivery and continuous improvement of services. The integration of containerization with SOA essentially results in a more resilient, scalable, and manageable architecture. It enables organizations to leverage the full potential of SOA by facilitating faster deployment, enhancing performance, and simplifying the lifecycle management of services. Together, SOA and containerization create a powerful framework for building flexible, future-proof applications that can adapt to the evolving needs of the business.
4. Monolithic applications
Contrary to common perceptions, monolithic applications stand to gain significantly from containerization. This technology can encapsulate the full application stack, including the core application, its dependencies, libraries, and runtime environment, within a container. This encapsulation ensures uniformity across various stages of the development lifecycle, from development and testing to production, effectively addressing the infamous ‘it works on my machine’ challenge. Such consistency streamlines the deployment process and simplifies scaling efforts, which is particularly beneficial for applications that need to adapt quickly to changing demands.
Moreover, containerization fosters enhanced collaboration among development teams by standardizing the operational environment, thereby minimizing discrepancies that typically arise from working in divergent development environments. This uniformity is invaluable in accelerating development cycles and improving product reliability.
Perhaps one of the most strategic benefits of containerization for monolithic architectures is the facilitation of a smoother transition to microservices. By containerizing specific components of the monolith, organizations can incrementally decompose their application into more manageable, loosely coupled microservices. This approach not only mitigates the risks associated with a full-scale migration but also allows teams to gradually adapt to microservices’ architectural patterns and principles.
Containerization presents a compelling proposition for monolithic applications, offering a pathway to modernization that enhances deployment efficiency, operational consistency, and the flexibility to evolve toward a microservices-oriented architecture. Through this lens, containerization is not just a tool for new applications but a bridge that allows legacy applications to step into the future of software development.
Conclusion
The journey of modern software development, with its myriad architectural paths, is markedly enhanced by the adoption of containerization. This technology transcends architectural boundaries, bringing critical advantages such as isolation, scalability, and portability to the forefront of application delivery. Whether your environment is monolithic, service-oriented, event-driven, or API-led, containerization aligns perfectly with the ethos of modern, distributed, and cloud-native applications. By embracing the adaptability and transformative potential of containerization, you can open your architectures to a future where agility, efficiency, and resilience are not just aspirations but achievable realities. Begin your transformative journey with Docker Desktop today and redefine what’s possible within the bounds of your existing architectural framework.
Learn more: How to Stuff Monolithic Applications Into a Container (DockerCon 2023). View the full article
-
Building scalable systems using microservices architecture is a strategic approach to developing complex applications. Microservices allow teams to deploy and scale parts of their application independently, improving agility and reducing the complexity of updates and scaling. This step-by-step guide outlines the process of creating a microservices-based system, complete with detailed examples.
1. Define Your Service Boundaries
Objective: Identify the distinct functionalities within your system that can be broken down into separate, smaller services. View the full article
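As an illustration of the idea (not taken from the guide itself), here is a sketch of two service boundaries expressed as independent Java interfaces; the order/inventory domain and all names are hypothetical.

```java
// A minimal sketch of expressing service boundaries as independent
// contracts, each owning its own data and lifecycle. The domain
// (orders vs. inventory) and method names are hypothetical examples.
public interface OrderService {
    String createOrder(String customerId, String productId, int quantity);
    OrderStatus getOrderStatus(String orderId);

    enum OrderStatus { PENDING, CONFIRMED, SHIPPED, CANCELLED }
}

interface InventoryService {
    // Inventory is a separate boundary: the order service never reads
    // inventory tables directly; it goes through this contract, so the
    // two services can be deployed and scaled independently.
    boolean reserveStock(String productId, int quantity);
    void releaseStock(String productId, int quantity);
}
```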
-
So far in our series on modern microservices, we have built:
- A simple gRPC service
- A REST/HTTP interface exposing the gRPC service RESTfully, with a glimpse of the gRPC plugin universe
- Buf.build integration to simplify plugin management
We are far from productionizing our service. A production-ready service would (at the very least) need several things: View the full article
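As one example of such a production-readiness item, here is a minimal health-checking sketch using grpc-java's built-in health service. This is an illustrative guess at the kind of thing the series means, not its actual code; it assumes the `io.grpc:grpc-services` artifact plus a transport (e.g., `grpc-netty`) on the classpath, and the port is arbitrary.

```java
// A minimal gRPC server exposing the standard grpc.health.v1 health
// service so load balancers and orchestrators can probe it.
import io.grpc.Server;
import io.grpc.ServerBuilder;
import io.grpc.health.v1.HealthCheckResponse.ServingStatus;
import io.grpc.protobuf.services.HealthStatusManager;

public class HealthyGrpcServer {
    public static void main(String[] args) throws Exception {
        HealthStatusManager health = new HealthStatusManager();

        Server server = ServerBuilder.forPort(50051)
                .addService(health.getHealthService()) // standard health API
                // .addService(new MyServiceImpl())    // real service(s) go here
                .build()
                .start();

        // Mark the overall server (empty service name) as serving.
        health.setStatus("", ServingStatus.SERVING);

        server.awaitTermination();
    }
}
```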
-
Technical Architecture
First, let's turn to the architecture, which I'll explain in detail, tier by tier. These components are commonly associated with applications that follow the principles of Domain-Driven Design (DDD), Model-View-Controller (MVC), or similar architectural patterns. Let me cover them one by one: View the full article
-
In the ever-evolving landscape of software architecture, the integration of artificial intelligence (AI) into microservices architecture is becoming increasingly pivotal. This approach offers modularity, scalability, and flexibility, crucial for the dynamic nature of AI applications. In this article, we'll explore 10 key microservice design patterns that are essential for AI development, delving into how they facilitate efficient, robust, and scalable AI solutions.
1. Model as a Service (MaaS)
MaaS treats each AI model as an autonomous service. By exposing AI functionalities through REST or gRPC APIs, MaaS allows for independent scaling and updating of models. This pattern is particularly advantageous in managing multiple AI models, enabling continuous integration and deployment without disrupting the entire system. View the full article
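To ground the MaaS pattern, here is a minimal, self-contained sketch that exposes a toy "model" behind a REST endpoint using only the JDK's built-in HTTP server. The linear scorer stands in for a real trained model, and the endpoint shape is a hypothetical example, not from the article.

```java
// Model-as-a-Service sketch: a stand-in "model" behind a REST endpoint.
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class ModelService {
    // Stand-in for a real model: scores a single numeric feature.
    static double predict(double feature) {
        return 0.42 * feature + 0.1;
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/predict", exchange -> {
            // Expects the feature as a query string, e.g. /predict?x=3.5
            String query = exchange.getRequestURI().getQuery();
            double x = Double.parseDouble(query.split("=")[1]);
            byte[] body = String.format("{\"score\": %.4f}", predict(x))
                                .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start(); // each model version can run as its own service instance
    }
}
```

With the service running, `curl 'http://localhost:8080/predict?x=3.5'` returns a small JSON score; scaling or updating the model then becomes a matter of running or replacing instances of this one service.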
-
Tagged with: microservices, design patterns (and 1 more)
-
In the dynamic world of microservices architecture, efficient service communication is the linchpin that keeps the system running smoothly. To maintain the reliability, security, and performance of your microservices, you need a well-structured service mesh. This dedicated infrastructure layer is designed to cater to service-to-service communication, offering essential features like load balancing, security, monitoring, and resilience. In this comprehensive guide, we’ll delve into the world of service meshes and explore best practices for their effective management within a microservices environment... View the full article
-
IT teams have been observing applications for their health and performance since the beginning. They observe the telemetry data (logs, metrics, traces) emitted by an application or microservice using various observability tools and make informed decisions about scaling, maintaining, or troubleshooting applications in the production environment. If observability is nothing new, and there is already a plethora of monitoring and observability tools on the market, why bother with OpenTelemetry? What makes it special, and why is it being so widely adopted? And most importantly, what is in it for developers, DevOps, and SRE folks? View the full article
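For a flavor of what OpenTelemetry looks like in practice, here is a minimal manual-instrumentation sketch using the OpenTelemetry Java API. It assumes an SDK or the OpenTelemetry Java agent is configured elsewhere (without one, `GlobalOpenTelemetry` returns a no-op implementation), and the service and span names are hypothetical.

```java
// Creating a span around a unit of work with the OpenTelemetry Java API.
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.StatusCode;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

public class CheckoutHandler {
    private final Tracer tracer =
            GlobalOpenTelemetry.getTracer("checkout-service");

    public void processOrder(String orderId) {
        Span span = tracer.spanBuilder("process-order").startSpan();
        try (Scope ignored = span.makeCurrent()) {
            span.setAttribute("order.id", orderId); // searchable in your backend
            // ... business logic; spans created here nest automatically
        } catch (Exception e) {
            span.recordException(e);     // the error shows up on the trace
            span.setStatus(StatusCode.ERROR);
            throw e;
        } finally {
            span.end();                  // the span is exported when it ends
        }
    }
}
```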
-
1 reply
Tagged with: observability, microservices (and 1 more)
-
Enterprises these days have microservices distributed across a variety of environments — on-prem, cloud, containers, VMs, and more. Applications and services in such a heterogeneous system typically communicate with each other for various purposes, like data sharing. This setup poses multiple security concerns for DevOps folks and architects, the primary one being ensuring proper authentication and establishing trust in service-to-service communication. And that is the tricky part. View the full article
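As one deliberately simplified illustration of the trust problem (not the article's approach), here is a sketch of a short-lived HMAC-signed service token in plain Java: the caller attaches it to each request, and the receiver verifies it before trusting the caller. Production systems typically rely on mTLS or a workload-identity framework such as SPIFFE rather than a hand-rolled scheme like this.

```java
// Illustrative only: a short-lived HMAC-signed service-to-service token.
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ServiceToken {
    private static final String ALG = "HmacSHA256";

    // Issue a token binding the caller's identity to an expiry timestamp.
    static String issue(String serviceName, long expiresAtEpochSec,
                        byte[] sharedKey) throws Exception {
        String payload = serviceName + "|" + expiresAtEpochSec;
        Mac mac = Mac.getInstance(ALG);
        mac.init(new SecretKeySpec(sharedKey, ALG));
        String sig = Base64.getUrlEncoder().withoutPadding()
                .encodeToString(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
        return payload + "|" + sig;
    }

    // Verify the signature and expiry before trusting the caller.
    static boolean verify(String token, byte[] sharedKey) throws Exception {
        String[] parts = token.split("\\|");
        if (parts.length != 3) return false;
        String expected = issue(parts[0], Long.parseLong(parts[1]), sharedKey)
                .split("\\|")[2];
        boolean fresh = Long.parseLong(parts[1]) > System.currentTimeMillis() / 1000;
        return fresh && constantTimeEquals(expected, parts[2]);
    }

    // Compare signatures without leaking timing information.
    private static boolean constantTimeEquals(String a, String b) {
        if (a.length() != b.length()) return false;
        int diff = 0;
        for (int i = 0; i < a.length(); i++) diff |= a.charAt(i) ^ b.charAt(i);
        return diff == 0;
    }
}
```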
-
In this blog post, we will explore how ITS improved speed to market, business agility, and performance by modernizing their air travel search engine. We’ll show how they refactored their monolith application into microservices, using services such as Amazon Elastic Container Service (ECS), Amazon ElastiCache for Redis, and AWS Systems Manager... View the full article
-
Tagged with: architecture, search engines (and 1 more)
-
Microservices architecture has become extremely popular in recent years because it allows complex applications to be built as a collection of discrete, independent services, improving scalability, flexibility, and resilience. The increased complexity and distributed nature of microservices, however, present special challenges for testing and quality control, making comprehensive testing essential to guarantee the reliability and scalability of the software. In this thorough guide, we’ll delve into the world of microservices testing and examine its significance, methodologies, and best practices to guarantee the smooth operation of these interconnected parts. View the full article
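As a small taste of what microservices testing can look like, here is a sketch of a black-box integration test using JUnit 5 and the JDK HttpClient. The URL and health endpoint are hypothetical; a real suite might start the service in a container first (e.g., with Testcontainers) rather than assuming it is already running.

```java
// A black-box integration test against a running microservice instance.
import org.junit.jupiter.api.Test;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

class OrderServiceIT {

    private final HttpClient client = HttpClient.newHttpClient();

    @Test
    void healthEndpointReportsUp() throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/actuator/health")) // hypothetical
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // The service should respond and report itself healthy.
        assertEquals(200, response.statusCode());
        assertTrue(response.body().contains("UP"));
    }
}
```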
-
1 reply
Tagged with: architecture, reliability (and 1 more)
-
Microservices architecture has revolutionized modern software development, offering unparalleled agility, scalability, and maintainability. However, effectively implementing microservices necessitates a deep understanding of best practices to harness their full potential while avoiding common pitfalls. In this comprehensive guide, we will delve into the key best practices for microservices, providing detailed insights into each aspect... View the full article
-
Tagged with: best practices, scalability (and 1 more)
-
Let's walk through a detailed, step-by-step process, with code, for building a comprehensive API Gateway using YARP in ASP.NET Core. We'll consider a simplified scenario with two microservices, UserService and ProductService; the API Gateway will route requests to these services based on the path. Create two separate ASP.NET Core Web API projects for UserService and ProductService using the following commands... View the full article
-
Tagged with: api gateway, yarp (and 3 more)
-
In a Kubernetes environment, a recent pod scheduling failure occurred due to a specific configuration. Which Kubernetes resource type, often associated with node constraints, might have caused this failure, especially if it wasn’t defined correctly?
- NodeSelector
- ResourceQuota
- PriorityClass
- Taint
- PodDisruptionBudget
201 people answered the question, and their answers are reflected in the chart below. The […] View the full article
-
In 2022, we published Let’s Architect! Architecting microservices with containers. We covered integration patterns and some approaches for implementing microservices using containers. In this Let’s Architect! post, we want to drill down into microservices only, focusing on the main challenges that software architects and engineers face while working on large distributed systems structured as a set of independent services. There are many considerations to cover in detail within a broad topic like microservices. We should reflect on the organizational structure, automation pipelines, multi-account strategy, testing, communication, and many other areas. With this post we dive deep into the topic by analyzing the options for discoverability and connectivity available through Amazon VPC Lattice; then, we focus on architectural patterns for communication, mainly on asynchronous communication, as it fits very well into the paradigm. Finally, we explore how to work with serverless microservices and analyze a case study from Amazon, coming directly from the Amazon Builders’ Library... View the full article
-
For many companies today, containers and microservices are both becoming a normal part of the industry landscape. According to a global survey put out by Statista in 2021, 19% of enterprise organizations today say they are already utilizing containers to achieve their business goals, while 92% of respondents claim microservices to be a success factor. […] The post How Can Containers Help You Use Microservices in DevOps? appeared first on DevOps.com. View the full article
-
With rigorous development and pre-production testing, your microservices will perform as they should. However, microservices need to be continuously tested against actual end-user activity to adapt the application to changing preferences and requests. This article will cover five deployment strategies that will help developers and DevOps teams when releasing new features or making changes to […] The post 5 Testing Strategies For Deploying Microservices appeared first on DevOps.com. View the full article
-
Tagged with: testing, microservices (and 1 more)
-
Microservices and microapps are both core components of today's application development. Take a look at what makes them similar, and get started on how to monitor these components effectively. View the full article
1 reply
Tagged with: microservices, microapps (and 1 more)
-
GitOps is a term that has become very popular in the last few years and is easily on its way to becoming just as overloaded with myth and mystery as DevOps. In this series of articles, we will present the principles and practices of GitOps, explaining the why and how of the automated processes that aim to deliver secure, high-quality, microservice-based applications quickly and efficiently. In part 1 of the series, we introduced the main concepts of GitOps, together with the open source automation technologies Tekton and ArgoCD. These tools operate on the Red Hat OpenShift platform to deliver a cloud-native continuous integration and continuous delivery process. The first article also gave an indicative structure for the Git repository and ArgoCD applications that can create a secure and audited process for delivery to production. This article continues the series by explaining how container images produced during the continuous integration phase can be successfully managed and used within the continuous delivery phase. View the full article
-
1 reply
Tagged with: gitops, microservices (and 4 more)
-
In cloud native computing, applications are expected to be resilient, loosely coupled, scalable, manageable, and observable. Because of containerization, there is a proliferation of microservices, and they ship quickly. Microservices environments are more dynamic. In such an environment, making applications resilient means deploying the applications in a fault-tolerant manner, but it also means […] The post LitmusChaos Enhances Developer Experience for Cloud Native Reliability appeared first on DevOps.com. View the full article
-
Cortex, a provider of a platform for tracking ownership of microservices, this week announced its platform can now import services from the GitLab continuous integration/continuous delivery (CI/CD) platform. Anish Dhar, Cortex CEO, said the company’s platform is now integrated with more than 30 tools that are regularly employed by DevOps teams, including offerings from GitLab, […] The post Cortex Taps GitLab to Help DevOps Teams Manage Microservices appeared first on DevOps.com. View the full article
-
Tagged with: gitlab, microservices (and 1 more)
-