Showing results for tags 'cloud-native'.

Found 10 results

  1. Developing cloud-native applications involves establishing smooth and efficient communication among diverse components. To kick things off, let's look at how a range of tools, from XML to gRPC, enable and enhance these critical interactions.

     XML (often with SOAP):

       <order>
         <bookID>12345</bookID>
         <quantity>2</quantity>
         <user>JohnDoe</user>
       </order>

     Positives
     • Highly structured: XML's structure ensures consistent data formatting. For instance, with <bookID>12345</bookID>, you're certain that the data between the tags is the book's ID. This reduces ambiguity in data interpretation.
     • Self-descriptive: The tags describe the data. <user>JohnDoe</user> clearly indicates the user's name, making it easier for developers to understand the data without additional documentation.

     Negatives
     • Verbose: For a large order list with thousands of entries, the repeated tags significantly increase the data size. If you had 10,000 orders, that's 10,000 repetitions of <order>, <bookID>, and so on, leading to increased bandwidth usage.
     • Parsing can be slow: For the same 10,000 orders, the system has to walk every start and end tag, consuming more processing time than more concise formats.

     JSON (commonly with REST):

       {
         "order": {
           "bookID": "12345",
           "quantity": 2,
           "user": "JohnDoe"
         }
       }

     Positives
     • Lightweight and easy to read: The format is concise. An array of 10,000 orders carries none of the repetitive tags seen in XML, resulting in smaller payloads.
     • Supported by many languages: In JavaScript, for instance, JSON is natively supported; a simple JSON.parse() call turns a JSON string into a JavaScript object, making integration seamless.

     Negatives
     • Limited type system: In our example, "bookID": "12345" is a string while "quantity": 2 is a number, but JSON offers nothing richer than these basic types (no dates, binary data, or explicit integer sizes), which can lead to type-related bugs or require additional validation.
     • No built-in support for streaming: If you wanted to update book prices in real time, JSON wouldn't support this natively. You'd need workarounds or additional technologies.
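     To make the XML-versus-JSON trade-offs concrete before moving on, here is a minimal Python sketch (not from the original article) that parses the same order from both formats using only the standard library. It shows how XML hands every value back as text, while JSON at least distinguishes strings from numbers.

       import json
       import xml.etree.ElementTree as ET

       xml_payload = "<order><bookID>12345</bookID><quantity>2</quantity><user>JohnDoe</user></order>"
       json_payload = '{"order": {"bookID": "12345", "quantity": 2, "user": "JohnDoe"}}'

       # XML: every value is returned as a string; the caller converts types explicitly.
       root = ET.fromstring(xml_payload)
       xml_order = {
           "bookID": root.findtext("bookID"),
           "quantity": int(root.findtext("quantity")),  # manual conversion
           "user": root.findtext("user"),
       }

       # JSON: strings and numbers come back as distinct types, but any richer
       # schema checks (ID formats, ranges, required fields) still live in your code.
       json_order = json.loads(json_payload)["order"]

       print(xml_order)
       print(json_order)
       print(len(xml_payload), "bytes of XML vs", len(json_payload), "bytes of JSON")

     Both prints yield the same dictionary, and the size comparison hints at why verbosity matters once you multiply it by thousands of orders.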
     GraphQL:

       Query:
       {
         order(id: "5678") {
           bookID
           user
         }
       }

       Response:
       {
         "data": {
           "order": {
             "bookID": "12345",
             "user": "JohnDoe"
           }
         }
       }

     Positives
     • Fetch exactly what you need: A mobile app with limited screen space can request only the necessary fields, like bookID and user, optimizing bandwidth and load times.
     • Single endpoint: Instead of managing multiple endpoints like /orders, /books, and /users, you manage a single GraphQL endpoint, simplifying the backend architecture.

     Negatives
     • Overhead of parsing and processing queries: For each query, the server has to interpret the request and fetch the right data. With millions of requests carrying varied queries, this can strain the server.
     • Might be overkill for simple APIs: If you only need basic CRUD operations, the flexibility of GraphQL may introduce unnecessary complexity.

     gRPC:

       Protocol Buffers definition:

       message OrderRequest {
         string id = 1;
       }

       message OrderResponse {
         string bookID = 1;
         int32 quantity = 2;
       }

       service OrderService {
         rpc GetOrder(OrderRequest) returns (OrderResponse);
       }

     Positives
     • Efficient serialization with Protocol Buffers: If you expanded globally, the compact binary format of Protocol Buffers would save significant bandwidth, especially with large datasets.
     • Supports bi-directional streaming: Imagine a feature where readers chat about a book in real time. gRPC's streaming allows instant message exchanges without constant polling.
     • Strongly typed: With int32 quantity = 2; in the definition, quantity is guaranteed to be an integer, reducing type-related errors.

     Negatives
     • Requires understanding of Protocol Buffers: Your development team needs to learn a new technology, potentially slowing initial development.
     • Might be unfamiliar: A team accustomed to RESTful services may face a learning curve when transitioning to gRPC.

     Let's get to today's topic.

     What is gRPC?
     Imagine you have two computers that want to talk to each other. Just like people speak different languages, computers also need a common language to communicate. gRPC is like a special phone line that lets these computers chat quickly and clearly. In technical terms, gRPC is a tool that helps different parts of a software system communicate. It's designed to be fast, efficient, and secure. Instead of sending wordy messages, gRPC sends compact, speedy notes. This makes things run smoothly, especially when you have lots of computers talking at once in big systems like online shopping sites or video games.

     gRPC, which stands for Google Remote Procedure Call, is an open-source communication framework designed for systems to interact seamlessly. At its core, gRPC is about enabling efficient communication between computer programs, particularly when they're located on different servers or even across global data centers.

     Simplified Guide to gRPC
     Imagine you have two friends: one who knows a secret recipe (let's call them the Chef) and another who wants to learn it (let's call them the Learner). However, there's a catch: they live in different towns. gRPC is like a magical phone that doesn't just let them talk to each other but also allows the Learner to watch and learn the recipe as if they were standing right next to the Chef in the kitchen.

     In the world of computer programs, gRPC does something quite similar. If you've created an app (which we'll think of as the Learner) that needs to use functions or data from a program on another computer (our Chef), gRPC helps them communicate effortlessly. Here's how it works:

     • Defining the menu: First, you tell gRPC about the dishes (or services) you're interested in, along with the ingredients (parameters) needed for each one and what you hope to have on your plate in the end (return types).
     • The Chef prepares: On the server (the Chef's kitchen), the menu is put into action. The server implements those services exactly as described, ready to serve them on request.
     • The magical phone (gRPC): This is where gRPC comes in, acting as the phone line between the Learner and the Chef. It's not just any phone; it's a special one that can transmit tastes, smells, and cooking techniques instantly.
     • Ordering up: The Learner (client) uses a copy of the menu (known as a stub, but it's simpler to think of it as the "client menu") to place an order. This client menu knows all the dishes the Chef can make and how to ask for them.
     • Enjoying the dish: Once the Learner uses the magical phone to request a dish, the Chef prepares it and sends it back over the same connection. To the Learner, it feels like the dish was made right there in their own kitchen.

     In technical terms, gRPC lets different pieces of software on different machines talk to each other as though they were part of the same program. It's a way of making remote procedure calls (RPCs), where the Learner (client) calls a method on the Chef (server) as if it were local. This makes building and connecting distributed applications much simpler and more intuitive.
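     To connect the analogy to real code, here is a hedged Python sketch of the Learner (client) side. It assumes the OrderService definition above lives in a file such as order.proto and has been compiled with grpcio-tools, which by convention would generate the order_pb2 and order_pb2_grpc modules used below; the server address is likewise illustrative.

       import grpc
       import order_pb2        # generated message classes (assumed)
       import order_pb2_grpc   # generated stub classes (assumed)

       def fetch_order(order_id: str) -> None:
           # The channel is the "magical phone line" to the Chef's kitchen (the server).
           with grpc.insecure_channel("localhost:50051") as channel:
               # The stub is the Learner's copy of the menu: it knows every dish
               # (RPC method) the server offers and how to ask for it.
               stub = order_pb2_grpc.OrderServiceStub(channel)
               response = stub.GetOrder(order_pb2.OrderRequest(id=order_id))
               print(f"bookID={response.bookID}, quantity={response.quantity}")

       if __name__ == "__main__":
           fetch_order("5678")  # reads like a local call, but runs on the remote server

     The call site looks like an ordinary function call, which is exactly the point of RPC: the stub hides the serialization, networking, and deserialization.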
     Technical Aspects
     Here's a closer look at gRPC's technical aspects. We'll consider a cloud-native application for a food delivery service: a user wants to order food from a restaurant using this app.

     • Protocol Buffers: To represent an order, instead of a lengthy JSON document, we use a concise Protocol Buffers definition. This ensures that the order details are transmitted efficiently between the user's device and the restaurant's system.

       message FoodOrder {
         string dishName = 1;
         int32 quantity = 2;
         string specialInstructions = 3;
       }

       gRPC uses Protocol Buffers (often shortened to "protobuf") as its primary mechanism for defining services and the structure of data messages. Protobuf is a binary serialization format, making it both smaller and faster than traditional text-based formats like JSON or XML.

     • Streaming capabilities: As the restaurant prepares the order, the user can receive real-time updates on the cooking status using gRPC's streaming. The user gets instant notifications like "Cooking", "Packing", and "Out for Delivery" without constantly polling the server. Since gRPC methods accept message types rather than bare scalars, the order ID travels in a small OrderUpdateRequest message wrapping the orderId string:

       rpc OrderUpdates(OrderUpdateRequest) returns (stream StatusUpdate);

     • Language agnostic: The user's app might be written in Java (for Android) or Swift (for iOS), while the restaurant's system uses Python. Thanks to gRPC's multi-language support, when the user places an order, both systems communicate flawlessly, irrespective of their programming languages.

     • Deadlines/timeouts: Imagine you're exploring new restaurants in the app. You don't want to wait indefinitely for results to load; you expect a prompt response. Here, gRPC's deadline feature plays a crucial role. When the app requests a list of restaurants from the server, it sets a deadline: the app saying, "I can wait this long for a response, but no longer." For example, the app might set a deadline of 3 seconds for fetching the restaurant list. The deadline is communicated to the server, ensuring that the request is either completed in time or terminated with a DEADLINE_EXCEEDED error. This respects the user's time and provides a fail-fast mechanism that lets the app quickly fall back to an alternative, such as displaying a helpful message or trying a different query. In Python, the stub call takes the request plus a timeout:

       response = client.GetRestaurantList(request, timeout=3.0)

       In Java, the stub carries a deadline computed from the current time plus a duration:

       Deadline deadline = Deadline.after(3, TimeUnit.SECONDS);
       List<Restaurant> response = client.withDeadline(deadline).getRestaurantList(request);

       A hedged client-side sketch combining streaming and a deadline appears just after this article summary.

     Closing Remarks
     We've taken a trip through the world of communication tools in cloud-native app development, exploring everything from the structured world of XML and the simplicity of JSON to the flexibility of GraphQL and the efficiency of gRPC. Each of these tools plays a key role in helping our apps talk to each other across the vast expanse of the internet. Diving into gRPC, we find it's more than just a way to send messages: it's a bridge that connects different parts of our digital world, making it easy for them to work together, no matter the language they speak or where they are. To master the fundamentals of Cloud Native and Kubernetes, enroll in our KCNA course at KodeKloud: Explore the KCNA Learning Path. View the full article
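     Here is that sketch: a minimal Python example, under stated assumptions, that consumes the server-streaming OrderUpdates call with a deadline. The delivery_pb2 and delivery_pb2_grpc module names, the DeliveryService service name, and the status field on StatusUpdate are illustrative assumptions that would come from compiling a .proto like the one outlined above.

       import grpc
       import delivery_pb2        # generated messages (assumed)
       import delivery_pb2_grpc   # generated stubs (assumed)

       def watch_order(order_id: str) -> None:
           with grpc.insecure_channel("localhost:50051") as channel:
               stub = delivery_pb2_grpc.DeliveryServiceStub(channel)
               request = delivery_pb2.OrderUpdateRequest(orderId=order_id)
               try:
                   # timeout=30.0 puts a deadline on the whole RPC: it is cancelled
                   # with DEADLINE_EXCEEDED if the stream is still open after 30 s.
                   for update in stub.OrderUpdates(request, timeout=30.0):
                       print("status:", update.status)  # e.g. "Cooking", "Packing"
               except grpc.RpcError as err:
                   if err.code() == grpc.StatusCode.DEADLINE_EXCEEDED:
                       print("Stopped waiting for further updates")
                   else:
                       raise

       if __name__ == "__main__":
           watch_order("order-42")

     The same fail-fast behavior the article describes for fetching restaurants applies here: the client decides up front how long the stream is worth waiting for, and gRPC enforces it on both ends.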
  2. We are racing toward the finish line at KubeCon + CloudNativeCon Europe, March 19 – 22, 2024 in Paris, France. Join the Docker "pit crew" at Booth #J3 for an incredible racing experience, new product demos, and limited-edition SWAG. Meet us at our KubeCon booth, sessions, and events to learn about the latest trends in AI productivity and best practices in cloud-native development with Docker.

     At our KubeCon booth (#J3), we'll show you how building in the cloud accelerates development and simplifies multi-platform builds with a side-by-side demo of Docker Build Cloud. Learn how Docker and Testcontainers Cloud provide a seamless integration within the testing framework to improve the quality and speed of application delivery. It's not all work, though — join us at the booth for our Megennis Motorsport Racing experience and try to beat the best! Take advantage of this opportunity to connect with the Docker team, learn from the experts, and contribute to the ever-evolving cloud-native landscape. Let's shape the future of cloud-native technologies together at KubeCon!

     Deep dive sessions from Docker experts
     Is Your Image Really Distroless? — Docker software engineer Laurent Goderre will dive into the world of "distroless" Docker images on Wednesday, March 20. In this session, Goderre will explain the significance of separating build-time and run-time dependencies to enhance container security and reduce vulnerabilities. He'll also explore strategies for configuring runtime environments without compromising security or functionality. Don't miss this must-attend session for KubeCon attendees keen on fortifying their Docker containers.

     Simplified Inner and Outer Cloud Native Developer Loops — Docker Staff Community Relations Manager Oleg Šelajev and Diagrid Customer Success Engineer Alice Gibbons tackle the challenges of developer productivity in cloud-native development. On Wednesday, March 20, they will present tools and practices to bridge the gap between development and production environments, demonstrating how a unified approach can streamline workflows and boost efficiency across the board.

     Engage, learn, and network at these events
     Security Soiree: Hands-on cloud-native security workshop and party — Join Sysdig, Snyk, and Docker on March 19 for cocktails, team photos, music, prizes, and more at the Security Soiree. Listen to a compelling panel discussion led by industry experts, including Docker's Director of Security, Risk & Trust, Rachel Taylor, followed by an evening of networking and festivities. Get tickets to secure your invitation.

     Docker Meetup at KubeCon: Development & data productivity in the age of AI — Join us at our meetup during KubeCon on March 21 and hear insights from Docker, Pulumi, Tailscale, and New Relic. This networking mixer at Tonton Becton Restaurant promises candid discussions on enhancing developer productivity with the latest AI and data technologies. Reserve your spot now for an evening of casual conversation, drinks, and delicious appetizers.

     See you March 19 – 22 at KubeCon + CloudNativeCon Europe
     We look forward to seeing you in Paris — safe travels and prepare for an unforgettable experience!

     Learn more
     • New to Docker? Create an account.
     • Learn about Docker Build Cloud.
     • Subscribe to the Docker Newsletter.
     • Read about what rolled out in Docker Desktop 4.27, including synchronized file shares, Docker Init GA, a private marketplace for extensions, Moby 25, support for Testcontainers with ECI, Docker Build Cloud, and Docker Debug Beta.

     View the full article
  3. Welcome to our recap of KubeCon Chicago 2023, where GitOps and cloud-native tech took center stage. The Flux team attended this year's KubeCon and presented several talks, from mastering multi-tenancy in Kubernetes with Flux to navigating the complexities of large-scale operations with Argo CD and Flux. We're also including links to the videos for you to watch and learn at your own pace.

     We are thrilled to share two monumental achievements from this year's KubeCon that underscore our commitment and influence in the cloud-native sphere. Firstly, our team was honored with the prestigious "Small but Mighty" award, a testament to our significant impact within the cloud-native community. This recognition is not just an award; it's a symbol of our dedication, innovation, and the tangible difference we've made in this dynamic field. We have more on that below. Equally exhilarating is the widespread adoption of GitOps, a revolutionary term and framework pioneered by Weaveworks. This year, GitOps has "crossed the chasm and cleared the adoption threshold." This marks a pivotal moment in our journey: seeing our brainchild evolve into a cornerstone technology reshaping global cloud-native practices.

     GitOps Goes Mainstream
     Coinciding with KubeCon Chicago was the release of the CNCF's 2023 GitOps Microsurvey report, "Learning on the Job as GitOps Goes Mainstream." The report provided insightful revelations about the state and adoption of GitOps in the cloud-native community. It confirmed that 100% of respondents plan to embrace GitOps within the next 6 months to 2 years. Additionally, 60% of respondents have been seriously using GitOps for over a year, demonstrating its increasing significance in operating cloud-native applications and Kubernetes environments. The survey explored the reasons behind adopting this methodology, the benefits it offers, and the challenges faced by the community, indicating a significant shift toward GitOps in the cloud-native ecosystem. Explore the results in depth and read the commentary by Alexis Richardson, who coined the term "GitOps."

     GitOps Automation with Flux CD Backstage Plugin
     Weaveworks and the Flux CD Backstage plugin were featured in the new Backstage Marketplace launch sponsored by Spotify. Created to enhance the developer experience within Backstage, this plugin offers several features for application developers and platform teams, including understanding, tracking, and viewing Flux CD resources and deployments. Learn all about it here. This latest integration is a testament to Flux CD's large and growing GitOps ecosystem. Many people use the popular GitOps tool without knowing it because it's embedded in other tools such as GitLab, AKS and Azure Arc, Tanzu, and EKS Anywhere.

     Flux CD KubeCon Sessions
     Orchestrating Multi-Tenancy Kubernetes Environments with Flux
     Speaker: Priyanka (Pinky) Ravi, Developer Experience Developer, Weaveworks
     In the realm of modern software development, where quick and seamless delivery is paramount, Flux CD has become a pivotal open-source GitOps toolkit within the Kubernetes framework, aimed at streamlining and controlling deployments. The talk presented by Priyanka Ravi delved into how Flux CD plays a vital role in managing and scaling multi-tenant Kubernetes environments, a key factor in handling intricate application networks. The presentation offered a thorough exploration of Flux CD's functionality, focusing on its solutions for the complexities associated with multi-tenant environments. The audience came away with practical examples and insights, gaining a comprehensive perspective on leveraging Flux CD for secure, efficient, and consistent delivery of software across varied tenant workloads.

     Harnessing Argo & Flux: The Quest to Scale Add-Ons Beyond 10k Clusters
     Speakers: Joaquin Rodriguez, Microsoft, and Priyanka "Pinky" Ravi, Weaveworks
     Flamingo, the Flux Subsystem for Argo (FSA), plays an important role in the GitOps world. Flamingo's purpose is to integrate Flux CD with Argo CD, two prominent tools in the GitOps space, allowing unified and efficient management of GitOps workflows. The session tackled the complexities of managing cluster add-ons across diverse environments such as private clouds, public clouds, and edge computing. It highlighted the challenges faced in large-scale operations, such as inefficiencies, increased costs, and security risks. The speakers delved into how leveraging Argo CD, Flux CD, and Flamingo can effectively scale operations beyond 10,000 clusters, addressing critical aspects like enhanced scale, efficient logging, and comprehensive monitoring. The discussion also covered how Flux and Flamingo fit into the lifecycle management of cluster add-ons at this scale, and the integration of the Argo CD API into a cluster lifecycle management solution. View the full article
  4. Tools and platforms form the backbone of seamless software delivery in the ever-evolving world of Continuous Integration and Continuous Deployment (CI/CD). For years, Jenkins has been the stalwart, powering countless deployment pipelines and standing as the go-to solution for many DevOps professionals. But as the tech landscape shifts towards cloud-native solutions, AWS CodePipeline emerges as a formidable contender. Offering deep integration with the expansive AWS ecosystem and the agility of a cloud-based platform, CodePipeline is redefining the standards of modern deployment processes. This article dives into the transformative power of AWS CodePipeline, exploring its advantages over Jenkins and showing why many are switching to this cloud-native tool.

     Brief Background About CodePipeline and Jenkins
     At its core, AWS CodePipeline is Amazon Web Services' cloud-native continuous integration and continuous delivery service, allowing users to automate the build, test, and deployment phases of their release process. Tailored to the vast AWS ecosystem, CodePipeline leverages other AWS services, making it a seamless choice for teams already integrated with AWS cloud infrastructure. It promises scalability, ease of maintenance, and enhanced security, characteristics inherent to many managed AWS services. On the other side of the spectrum is Jenkins, an open-source automation server with a storied history. Known for its flexibility, Jenkins has garnered immense popularity thanks to its extensive plugin system. It's a tool that has grown with the CI/CD movement, evolving from a humble continuous integration tool into a comprehensive automation platform that can handle everything from build to deployment and more. Together, these two tools represent two distinct eras and philosophies in the CI/CD domain. View the full article
  5. Cloud-native integration platforms have emerged as potent drivers of business transformation, enabling seamless connections between diverse applications and systems and granting enterprises remarkable agility, scalability, and operational efficiency. This blog post looks at the leading cloud-native integration platforms driving significant change in the business arena: by enhancing customer experiences and streamlining internal processes, these platforms can transform modern business operations at their core. Adopting a cloud-native integration platform is a strategic move to meet the evolving demands of the digital era. Beyond connecting diverse systems and applications, these platforms help organizations navigate complexity, respond to change promptly, and scale their capabilities with little friction. View the full article
  6. Here are 5 trends that startups should keep an eye on ... https://www.snowflake.com/blog/five-trends-changing-startup-ecosystem/
  7. Whether you're working with an on-premises private cloud, giants like AWS, Azure, and GCP, or exploring hybrid models, cloud-native is the way forward, ensuring your applications are always at their peak performance. So, as we sail into this new era, let's embrace cloud-native and unlock a world of possibilities! In today's blog post, we will delve into the five fundamental principles of cloud-native architecture, as articulated by Tom Grey from Google. Tom's insights provide a comprehensive understanding of the core tenets that underpin cloud-native systems. So, let's dive in and explore these principles in detail... View the full article
  8. Over the past few years, there has been a surge in demand for Kubernetes and cloud-native architecture skills. This is largely due to the increased adoption of microservice architecture by organizations seeking greater application agility, scalability, and resilience. One sure way of gaining a competitive advantage for opportunities in this field is by attaining the Kubernetes and Cloud Native Associate (KCNA) certification. Based on KodeKloud's Kubernetes and Cloud Native Associate (KCNA) exam preparation course, this blog post covers how Kubernetes works, its components, and its role in the cloud-native world... View the full article
  9. Analyst firm Forrester recently predicted that 2022 “will see big organizations move decisively away from lift-and-shift approaches to the cloud, embracing cloud-native technologies instead.” According to Gartner, more than 85% of enterprises “will embrace a cloud-first principle by 2025 and will not be able to fully execute on their digital strategies without the use of […] The post Moving From Lift-and-Shift to Cloud-Native appeared first on DevOps.com. View the full article
  10. Docker's Peter McKee hosts serverless wizard and open sorcerer Yaron Schneider for a high-octane tour of DAPR (as in Distributed Application Runtime) and how it leverages Docker. A principal software engineer at Microsoft in Redmond, Washington, Schneider co-founded the DAPR project to make developers' lives easier when writing distributed applications. DAPR, which Schneider defines as "essentially a sidecar with APIs," helps developers build event-driven, resilient distributed applications on-premises, in the cloud, or on an edge device. Lightweight and adaptable, it tries to target any type of language or framework, and can help you tackle a host of challenges that come with building microservices while keeping your code platform agnostic. How do I reliably handle state in a distributed environment? How do I deal with network failures when I have a lot of distributed components in my ecosystem? How do I fetch secrets securely? How do I scope down my applications? Tag along as Schneider, who also co-founded KEDA (Kubernetes-based event-driven autoscaling), demos how to "dapr-ize" applications while he and McKee discuss their favorite languages and Schneider's collection of rare Funko Pops! Watch the video (Duration 0:50:36): The post Video: Docker Build: Simplify Cloud-native Development with Docker & DAPR appeared first on Docker Blog. View the full article