Search the Community
Showing results for tags 'architecture'.
-
In the fast-paced world of software development, efficiency is paramount, and automating repetitive tasks is key to achieving faster delivery cycles and improved quality. This is where Jenkins comes in: a free, open-source automation server that has become synonymous with continuous integration (CI) and continuous delivery (CD) and that plays a pivotal role in the DevOps world. But have you ever wondered how it all works under the hood? This blog delves into the architecture of Jenkins, breaking down its core components and how they orchestrate the automation magic. View the full article
-
The AsyncAPI specification emerged in response to the growing need for a standardized and comprehensive framework that addresses the challenges of designing and documenting asynchronous APIs. It is a collaborative effort of leading tech companies, open source communities, and individual contributors who actively participated in the creation and evolution of the specification. Various approaches exist for implementing asynchronous interactions and APIs, each tailored to specific use cases and requirements. Despite this diversity, these approaches fundamentally share a common baseline of key concepts. Whether it's messaging queues, event-driven architectures, or other asynchronous paradigms, the overarching principles remain consistent. Leveraging this shared foundation, AsyncAPI taps into a spectrum of techniques, providing developers with a unified understanding of essential concepts. This strategic approach not only fosters interoperability but also enhances flexibility across various asynchronous implementations, delivering significant benefits to developers.

From planning to execution: Design and runtime phases of EDA

Design time and runtime refer to distinct phases in the lifecycle of an event-driven system, each serving a different purpose:

Design time: This phase occurs during the design and development of the event-driven system, where architects and developers plan and structure the system, engaging in activities around:
- Designing event flows
- Schema definition
- Topic or channel design
- Error handling and retry policies
- Security considerations
- Versioning strategies
- Metadata management
- Testing and validation
- Documentation
- Collaboration and communication
- Performance considerations
- Monitoring and observability

The design phase yields assets, including a well-defined and configured messaging infrastructure. This encompasses components such as brokers, queues, topics/channels, schemas, and security settings, all tailored to meet specific requirements. The nature of these assets may vary based on the choice of the messaging system.

Runtime: This phase occurs when the system is in operation, actively processing events based on the design-time configurations and settings and responding to triggers in real time. Activities in this phase include:
- Dynamic event routing
- Concurrency management
- Scalability adjustments
- Load balancing
- Distributed tracing
- Alerting and notification
- Adaptive scaling
- Monitoring and troubleshooting
- Integration with external systems

The output of this phase is the ongoing operation of the messaging platform, with messages being processed, routed, and delivered to subscribers based on the configured settings.

Role of AsyncAPI

AsyncAPI plays a pivotal role in asynchronous API design and documentation. Its significance lies in standardization, providing a common and consistent framework for describing asynchronous APIs. AsyncAPI details crucial aspects such as message formats, channels, and protocols, enabling developers and stakeholders to understand and integrate with asynchronous systems effectively. The AsyncAPI specification serves as more than documentation; it becomes a communication contract, ensuring clarity and consistency in the exchange of messages between different components or services. Furthermore, AsyncAPI facilitates code generation, expediting the development process by offering a starting point for implementing components that adhere to the specified communication patterns.
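As a rough illustration of what such a contract can look like, here is a minimal sketch of an AsyncAPI 2.6.0 document for a single event channel, built as a Python dictionary and emitted as YAML (it assumes the PyYAML package; the channel, message, and payload names are invented for the example):

```python
# Minimal sketch of an AsyncAPI 2.6.0 document built in Python and dumped to YAML.
# The channel name ("user/signedup") and payload fields are illustrative only.
import yaml  # PyYAML

asyncapi_doc = {
    "asyncapi": "2.6.0",
    "info": {"title": "User Signup Events", "version": "1.0.0"},
    "channels": {
        "user/signedup": {
            "subscribe": {
                "message": {
                    "name": "UserSignedUp",
                    "contentType": "application/json",
                    "payload": {
                        "type": "object",
                        "properties": {
                            "userId": {"type": "string"},
                            "signedUpAt": {"type": "string", "format": "date-time"},
                        },
                    },
                }
            }
        }
    },
}

# A CI step might run a lightweight check like this before accepting changes.
for field in ("asyncapi", "info", "channels"):
    assert field in asyncapi_doc, f"missing required field: {field}"

print(yaml.safe_dump(asyncapi_doc, sort_keys=False))
```

A check like the assertion above is the kind of lightweight validation a review workflow could run before merging changes to the document.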
In essence, AsyncAPI helps bridge the gap between design-time decisions and the practical implementation and operation of systems that rely on asynchronous communication.

Bridging the gap

Let's explore a scenario involving the development and consumption of an asynchronous API, coupled with a set of essential requirements:
- Designing an asynchronous API in an event-driven architecture (EDA): define the events, schemas, and publish/subscribe permissions of an EDA service, and expose the service as an asynchronous API
- Generating the AsyncAPI specification: use the AsyncAPI standard to generate a specification of the asynchronous API
- Utilizing GitHub for storage and version control: check the AsyncAPI specification into GitHub, leveraging it as both a storage system and a version control system
- Configuring a GitHub workflow for document review: set up a GitHub Action that reviews pull requests (PRs) related to changes in the AsyncAPI document; if changes are detected, initiate a validation process; upon a successful review and PR approval, merge the changes and synchronize the updated API design with the design time

This workflow ensures that design-time and runtime components remain consistently in sync. The feasibility of this process is grounded in the use of AsyncAPI for the API documentation. Additionally, the AsyncAPI tooling ecosystem supports validation and code generation, which makes it possible to keep the design time and runtime in sync.

Putting the scenario into action

Let us consider Solace Event Portal as the tool for building an asynchronous API and Solace PubSub+ Broker as the messaging system. An event portal is a cloud-based event management tool that helps in designing EDAs. In the design phase, the portal facilitates the creation and definition of messaging structures, channels, and event-driven contracts. Leveraging the capabilities of Solace Event Portal, we model the asynchronous API and share the crucial details, such as message formats, topics, and communication patterns, as an AsyncAPI document. We can further enhance this process by providing REST APIs that allow for dynamic updating of design-time assets, including events, schemas, and permissions. GitHub Actions are employed to import AsyncAPI documents and trigger updates to the design-time assets. The synchronization between design-time and runtime components is made possible by adopting AsyncAPI as the standard for documenting asynchronous APIs. The AsyncAPI tooling ecosystem, encompassing validation and code generation, plays a pivotal role in ensuring the seamless integration of changes. This workflow guarantees that any modification to the AsyncAPI document translates into synchronized adjustments in both design-time and runtime aspects.

Conclusion

Keeping the design time and runtime in sync is essential for a seamless and effective development lifecycle. When the design specifications closely align with the implemented runtime components, the system behaves consistently, reliably, and predictably. The adoption of the AsyncAPI standard is instrumental in achieving seamless integration between the design-time and runtime components of asynchronous APIs in EDAs, and its robust tooling ecosystem ensures a cohesive development lifecycle.
The effectiveness of this approach extends beyond specific tools, offering a versatile and scalable solution for building and maintaining asynchronous APIs in diverse architectural environments.

Author

Post contributed by Giri Venkatesan, Solace. The post Bridging Design and Runtime Gaps: AsyncAPI in Event-Driven Architecture appeared first on Linux.com. View the full article
Tagged with: event-driven, architecture (and 1 more)
-
Streaming data pipelines have become an essential component in modern data-driven organizations. These pipelines enable real-time data ingestion, processing, transformation, and analysis. In this article, we will delve into the architecture and essential details of building a streaming data pipeline.

Data Ingestion

Data ingestion is the first stage of a streaming data pipeline. It involves capturing data from various sources such as Kafka, MQTT, log files, or APIs. Common techniques for data ingestion include... View the full article
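As a hedged sketch of the ingestion stage described above, the snippet below consumes JSON records from a Kafka topic (it assumes the kafka-python package, a broker on localhost:9092, and an invented topic name):

```python
# Minimal sketch of the ingestion stage: consuming records from a Kafka topic.
# Assumes the kafka-python package and a broker at localhost:9092; the topic
# name "clickstream" is illustrative.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "clickstream",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for record in consumer:
    event = record.value  # already deserialized into a dict
    # Hand the event off to the processing/transformation stage of the pipeline.
    print(event)
```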
Tagged with: data streaming, streaming (and 1 more)
-
Author: Frederico Muñoz (SAS Institute)

This is the second interview of a SIG Architecture Spotlight series that will cover the different subprojects. In this blog, we cover the SIG Architecture: Production Readiness subproject, talking with Wojciech Tyczynski (Google), lead of the Production Readiness subproject.

About SIG Architecture and the Production Readiness subproject

Frederico (FSM): Hello Wojciech, could you tell us a bit about yourself, your role, and how you got involved in Kubernetes?

Wojciech Tyczynski (WT): I started contributing to Kubernetes in January 2015. At that time, Google (where I was and still am working) decided to start a Kubernetes team in the Warsaw office (in addition to the already existing teams in California and Seattle). I was lucky enough to be one of the seeding engineers for that team. After two months of onboarding and helping with different tasks across the project towards the 1.0 launch, I took ownership of the scalability area and led Kubernetes to support clusters with 5000 nodes. I'm still involved in SIG Scalability as its Technical Lead. That was the start of a journey, since scalability is such a cross-cutting topic, and I started contributing to many other areas including, over time, SIG Architecture.

FSM: In SIG Architecture, why specifically the Production Readiness subproject? Was it something you had in mind from the start, or was it an unexpected consequence of your initial involvement in scalability?

WT: After reaching that milestone of Kubernetes supporting 5000-node clusters, one of the goals was to ensure that Kubernetes would not degrade its scalability properties over time. While a non-scalable implementation is always fixable, designing non-scalable APIs or contracts is problematic. I was looking for a way to ensure that people are thinking about scalability when they create new features and capabilities, without introducing too much overhead. This is when I joined forces with John Belamaric and David Eads and created the Production Readiness subproject within SIG Architecture. While setting the bar for scalability was only one of a few motivations for it, it ended up fitting quite well. At the same time, I was already involved in the overall reliability of the system internally, so other goals of Production Readiness were also close to my heart.

FSM: To anyone new to how SIG Architecture works, how would you describe the main goals and areas of intervention of the Production Readiness subproject?

WT: The goal of the Production Readiness subproject is to ensure that any feature that is added to Kubernetes can be reliably used in production clusters. This primarily means that those features are observable, scalable, supportable, can always be safely enabled, and, in case of production issues, can also be disabled.

Production readiness and the Kubernetes project

FSM: Architectural consistency being one of the goals of the SIG, is this made more challenging by the distributed and open nature of Kubernetes? Do you feel this impacts the approach that Production Readiness has to take?

WT: The distributed nature of Kubernetes certainly impacts Production Readiness, because it makes thinking about aspects like enablement/disablement or scalability more challenging. To be more precise, when enabling or disabling features that span multiple components, you need to think about version skew between them and design for it. For scalability, changes in one component may actually result in problems for a completely different one, so it requires a good understanding of the whole system, not just individual components. But it's also what makes this project so interesting.

FSM: Those running Kubernetes in production will have their own perspective on things, how do you capture this feedback?

WT: Fortunately, we aren't talking about "them" here, we're talking about "us": all of us are working for companies that are managing large fleets of Kubernetes clusters and we're involved in that too, so we suffer from those problems ourselves. So while we're trying to get feedback (our annual PRR survey is very important for us), it rarely reveals completely new problems; it rather shows the scale of them. And we try to react to it: changes like "Beta APIs off by default" happen in reaction to the data that we observe.

FSM: On the topic of reaction, that made me think of how the Kubernetes Enhancement Proposal (KEP) template has a Production Readiness Review (PRR) section, which is tied to the graduation process. Was this something born out of identified insufficiencies? How would you describe the results?

WT: As mentioned above, the overall goal of the Production Readiness subproject is to ensure that every newly added feature can be reliably used in production. It's not possible to enforce that by a central team; we need to make it everyone's problem. To achieve it, we wanted to ensure that everyone designing a new feature is thinking about safe enablement, scalability, observability, supportability, etc. from the very beginning, which means not when the implementation starts, but during the design. Given that KEPs are effectively Kubernetes design docs, making it part of the KEP template was the way to achieve the goal.

FSM: So, in a way, making sure that feature owners have thought about the implications of their proposal.

WT: Exactly. We already observed that just by forcing feature owners to think through the PRR aspects (by having them fill in the PRR questionnaire), many of the original issues go away. Sure, as PRR approvers we're still catching gaps, but even the initial versions of KEPs are better now than they used to be a couple of years ago in what concerns thinking about productionisation aspects, which is exactly what we wanted to achieve: spreading the culture of thinking about reliability in its widest possible meaning.

FSM: We've been talking about the PRR process, could you describe it for our readers?

WT: The PRR process is fairly simple: we just want to ensure that you think through the productionisation aspects of your feature early enough. If you do your job, it's just a matter of answering some questions in the KEP template and getting approval from a PRR approver (in addition to regular SIG approval). If you didn't think about those aspects earlier, it may require spending more time and potentially revising some decisions, but that's exactly what we need to make the Kubernetes project reliable.

Helping with Production Readiness

FSM: Production Readiness seems to be one area where a good deal of prior exposure is required in order to be an effective contributor. Are there also ways for someone newer to the project to contribute?

WT: PRR approvers have to have a deep understanding of the whole Kubernetes project to catch potential issues. Kubernetes is such a large project now, with so many nuances, that people who are new to the project can simply miss the context, no matter how senior they are. That said, there are many ways that you may implicitly help. Increasing the reliability of particular areas of the project by improving its observability and debuggability, increasing test coverage, and building new kinds of tests (upgrade, downgrade, chaos, etc.) will help us a lot. Note that the PRR subproject is focused on keeping the bar at the design level, but we should also care equally about the implementation. For that, we're relying on individual SIGs and code approvers, so having people there who are aware of productionisation aspects, and who deeply care about them, will help the project a lot.

FSM: Thank you! Any final comments you would like to share with our readers?

WT: I would like to highlight and thank all contributors for their cooperation. While the PRR adds some additional work for them, we see that people care about it, and what's even more encouraging is that with every release the quality of the answers improves, and questions like "do I really need a metric reflecting if my feature works" or "is downgrade really that important" don't really appear anymore. View the full article
-
In the rapidly evolving landscape of the Internet of Things (IoT), achieving seamless interoperability among a myriad of devices and systems is paramount. To tackle this challenge head-on, software-based architectures are emerging as powerful solutions. In this article, we explore the synergy between software-based architecture and the development of interoperability solutions for IoT to provide insights relevant to software developers and data engineers... View the full article
-
From customer interactions on e-commerce platforms to social media trends, and from sensor data in internet of things (IoT) devices to financial market updates, streaming data encompasses a vast array of information. The ability to handle this real-time flow often distinguishes successful organizations from their competitors. Harnessing the potential of streaming data processing offers organizations an opportunity to stay at the forefront of their industries, make data-informed decisions with unprecedented agility, and gain invaluable insights into customer behavior and operational efficiency. AWS provides a foundation for building robust and reliable data pipelines that efficiently transport streaming data, eliminating the intricacies of infrastructure management. This shift empowers engineers to focus their talents and energies on creating business value, rather than spending their time managing infrastructure... View the full article
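For a concrete, if simplified, picture of how a producer hands records to such a pipeline on AWS, here is a sketch using Amazon Kinesis Data Streams through boto3 (the stream name and event fields are invented; credentials and region are assumed to be configured in the environment):

```python
# Hedged sketch of pushing one record into a streaming pipeline on AWS using
# Amazon Kinesis Data Streams via boto3. The stream name "clickstream-events"
# is illustrative; credentials/region are assumed to be configured externally.
import json
import boto3

kinesis = boto3.client("kinesis")

event = {"userId": "u-123", "action": "checkout", "amountUsd": 42.50}

kinesis.put_record(
    StreamName="clickstream-events",
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["userId"],  # controls shard assignment
)
```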
Tagged with: architecture, streaming (and 2 more)
-
Building complex container-based architectures is not very different from programming in terms of applying design best practices and principles. The goal of this article is to present three popular extensibility architectural patterns from a developer's perspective using well-known programming principles. Let's start with the Single Responsibility Principle. According to R. Martin, "A class should have only one reason to change." But classes are abstractions used to simplify real-world problems and represent software components. Hence, a component should have only one reason to change over time. Software services and microservices in particular are also components (runtime components) and should have only one reason to change. Microservices are supposed to be a single deployable unit, meaning they are deployed independently of other components and can have as many instances as needed. View the full article
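To make the principle concrete, here is a small sketch in Python (the class and method names are invented for the example) in which each component has exactly one reason to change, and a thin coordinator composes them:

```python
# A sketch of the Single Responsibility Principle using invented names.

class OrderRepository:
    """Only reason to change: how orders are stored."""
    def save(self, order: dict) -> None:
        print(f"persisting order {order['id']}")


class OrderNotifier:
    """Only reason to change: how customers are notified."""
    def send_confirmation(self, order: dict) -> None:
        print(f"emailing confirmation for order {order['id']}")


class PlaceOrder:
    """Coordinates the two single-purpose components."""
    def __init__(self, repository: OrderRepository, notifier: OrderNotifier):
        self.repository = repository
        self.notifier = notifier

    def execute(self, order: dict) -> None:
        self.repository.save(order)
        self.notifier.send_confirmation(order)


PlaceOrder(OrderRepository(), OrderNotifier()).execute({"id": "o-1"})
```

The same reasoning carries over to runtime components: a service that bundles storage and notification concerns has two reasons to change and is harder to deploy and scale independently.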
Tagged with: k8s, architecture (and 2 more)
-
Event-driven architecture (EDA) and serverless functions are two powerful software patterns and concepts that have become popular in recent years with the rise of cloud-native computing. While one is more of an architecture pattern and the other a deployment or implementation detail, when combined they provide a scalable and efficient solution for modern applications... View the full article
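As a minimal sketch of how the two fit together, the handler below is a serverless function that reacts to a batch of events delivered by an event source such as a queue (the field names follow the SQS-style Records/body shape; everything else is illustrative and error handling is omitted):

```python
# Hedged sketch of the EDA + serverless combination: an AWS Lambda-style handler
# that reacts to events delivered by an event source (SQS-style Records/body).
import json

def handler(event, context):
    # Each record carries one domain event published elsewhere in the system.
    records = event.get("Records", [])
    for record in records:
        payload = json.loads(record["body"])
        print(f"processing event type={payload.get('type')} id={payload.get('id')}")
    return {"processed": len(records)}
```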
-
Connecting hybrid and multicloud workloads - Networking Architecture. View the full article
Tagged with: hybrid cloud, multi-cloud (and 2 more)
-
In this blog post, we will explore how ITS improved speed to market, business agility, and performance, by modernizing their air travel search engine. We’ll show how they refactored their monolith application into microservices, using services such as Amazon Elastic Container Service (ECS), Amazon ElastiCache for Redis, and AWS Systems Manager... View the full article
Tagged with: architecture, search engines (and 1 more)
-
Microservices architecture has become extremely popular in recent years because it allows complex applications to be built as a collection of discrete, independent services, improving scalability, flexibility, and resilience. The increased complexity and distributed nature of microservices, however, present special difficulties for testing and quality control, so comprehensive testing is essential to guarantee the reliability and scalability of the software. In this thorough guide, we'll delve into the world of microservices testing and examine its significance, methodologies, and best practices to guarantee the smooth operation of these interconnected parts. View the full article
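As one small, hedged example of what testing an individual microservice can look like, the snippet below exercises a service's HTTP contract with pytest and requests (the base URL and the /health and /orders endpoints are invented for the illustration; in practice they come from the service's API contract):

```python
# Hedged sketch of black-box tests against a single microservice's HTTP API,
# runnable with pytest. The base URL and endpoints are illustrative only.
import requests

BASE_URL = "http://localhost:8080"

def test_health_endpoint_reports_ok():
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    assert response.status_code == 200
    assert response.json().get("status") == "ok"

def test_create_order_returns_an_id():
    response = requests.post(
        f"{BASE_URL}/orders",
        json={"sku": "ABC-123", "quantity": 1},
        timeout=5,
    )
    assert response.status_code == 201
    assert "id" in response.json()
```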
Tagged with: architecture, reliability (and 1 more)
-
Serverless architecture is becoming increasingly popular for fintech developers and CTOs looking to simplify their tech stack. The technology offers many benefits, including reduced server management complexity and lower costs due to its pay-as-you-go model. But how exactly do you implement serverless technology? In this article, I provide a comprehensive, step-by-step guide to using serverless architecture, with practical tips and real-world use cases. View the full article
-
Whether you're working with an on-premises private cloud, giants like AWS, Azure, and GCP, or exploring hybrid models, cloud-native is the way forward, ensuring your applications are always at their peak performance. So, as we sail into this new era, let's embrace cloud-native and unlock a world of possibilities! In today's blog post, we will delve into the five fundamental principles of cloud-native architecture, as articulated by Tom Grey from Google. Tom's insights provide a comprehensive understanding of the core tenets that underpin cloud-native systems. So, let's dive in and explore these principles in detail... View the full article
-
Another excellent overview from ByteByteGo https://blog.bytebytego.com/p/ep80-explaining-8-popular-network#§ibm-mq-rabbitmq-kafka-pulsar-how-do-message-queue-architectures-evolve
Tagged with: queues, architecture (and 3 more)
-
With the help of event-driven architecture (EDA) and the Open API economy, businesses can keep up with the world and operate in real-time. View the full article
Tagged with: architecture, real-time (and 1 more)
-
As we progress further into the cloud-first age, companies across the globe are shifting their approach to data management and storage. Moving away from legacy on-site systems, we're now seeing more people than ever before using cloud data warehouses and other third-party platforms. The cloud services market is currently growing at a 14.1% CAGR, with the prevalence of cloud technologies increasing every year. As a developer, it's vital to know how to manage and optimize cloud data architecture and get more from the tools you have available to you. Instead of sticking to legacy systems, data engineers and developers should learn how to manage cloud resources and utilize them effectively. In this article, we'll discuss why the shift to the cloud has been so impactful and ways that developers can optimize their cloud data warehousing technology... View the full article
-
Are you tired of hearing tech jargon thrown around without any explanation? Do you want to understand what CloudOps architecture is but don’t know where to start? Look no further, because this blog article is here to help you understand everything you need to know about CloudOps architecture... View the full article
-
We are excited to announce the availability of improved AWS Well-Architected Framework guidance. In this update, we have made changes across all six pillars of the framework: Operational Excellence, Security, Reliability, Performance Efficiency, Cost Optimization, and Sustainability. In this release, we have made the implementation guidance for the new and updated best practices more prescriptive, including enhanced recommendations and steps on reusable architecture patterns targeting specific business outcomes in the Amazon Web Services (AWS) Cloud... View the full article
Tagged with: aws, frameworks (and 2 more)
-
AWS is pleased to announce an update to the AWS Well-Architected Framework, which will provide customers and partners with more prescriptive guidance on building and operating in the cloud, and enable them to stay up-to-date on the latest architectural best practices in a constantly evolving technological landscape. View the full article
-
Through the AWS documentation, books like AWS in Action, or AWS training, you can gain theoretical knowledge. But beyond that, it is very valuable to learn directly from practice. In this series, we inspect real-life AWS architectures. In the second volume of the series, Matt provides insights into platform engineering on AWS... View the full article
-
In 2022, we published Let's Architect! Architecting microservices with containers. We covered integration patterns and some approaches for implementing microservices using containers. In this Let's Architect! post, we want to drill down into microservices only, by focusing on the main challenges that software architects and engineers face while working on large distributed systems structured as a set of independent services. There are many considerations to cover in detail within a broad topic like microservices. We should reflect on the organizational structure, automation pipelines, multi-account strategy, testing, communication, and many other areas. With this post we dive deep into the topic by analyzing the options for discoverability and connectivity available through Amazon VPC Lattice; then, we focus on architectural patterns for communication, mainly on asynchronous communication, as it fits very well into the paradigm. Finally, we explore how to work with serverless microservices and analyze a case study from Amazon, coming directly from the Amazon Builders' Library... View the full article
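To illustrate the asynchronous communication pattern discussed above, here is a hedged sketch using Amazon SQS through boto3, with a producer that publishes and a consumer that polls independently (the queue URL and message fields are invented for the example):

```python
# Hedged sketch of asynchronous communication between microservices: a producer
# publishes a message to an Amazon SQS queue and returns immediately, while a
# consumer service polls the queue on its own schedule. Names are illustrative.
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders-queue"  # example only

# Producer side: fire-and-forget publish.
sqs.send_message(
    QueueUrl=QUEUE_URL,
    MessageBody=json.dumps({"type": "OrderPlaced", "orderId": "o-42"}),
)

# Consumer side (typically a separate service): poll, process, then delete.
response = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=5)
for message in response.get("Messages", []):
    event = json.loads(message["Body"])
    print(f"handling {event['type']} for order {event['orderId']}")
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
```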
-
AWS Well-Architected Tool now features direct access to AWS re:Post, a community-driven questions-and-answers service designed to help AWS customers remove technical roadblocks, accelerate innovation, and enhance operations. AWS re:Post has 40+ topics, including a community specific to AWS Well-Architected. View the full article
Tagged with: re:post, well-architected (and 2 more)
-
The redesigned AWS Architecture Center helps you find the information you need to design and operate reliable, secure, efficient, and cost-effective cloud applications, right from the start. The Architecture Center aggregates best practices, reference architecture deployments, reference architecture diagrams, and more, making it easier for you to discover what’s most important. The new Architecture Center also provides new ways for you to share feedback by voting on proposed guidance, requesting content, and more. View the full article