Showing results for tags 'json'.

Found 12 results

  1. Developing cloud-native applications involves establishing smooth and efficient communication among diverse components. To kick things off, let's delve into how a range of tools, from XML to gRPC, enable and enhance these critical interactions.

     XML (often with SOAP):

         <order>
           <bookID>12345</bookID>
           <quantity>2</quantity>
           <user>JohnDoe</user>
         </order>

     Positives
     - Highly structured: XML's structure ensures consistent data formatting. For instance, with <bookID>12345</bookID>, you're certain that the data between the tags is the book's ID. This reduces ambiguity in data interpretation.
     - Self-descriptive: The tags describe the data. <user>JohnDoe</user> clearly indicates the user's name, making it easier for developers to understand the data without additional documentation.

     Negatives
     - Verbose: For a large order list with thousands of entries, the repeated tags significantly increase the data size. If you had 10,000 orders, that's 10,000 repetitions of <order>, <bookID>, and so on, leading to increased bandwidth usage.
     - Parsing can be slow: For the same 10,000 orders, the system would need to navigate through each start and end tag, consuming more processing time than more concise formats.

     JSON (commonly with REST):

         {
           "order": {
             "bookID": "12345",
             "quantity": 2,
             "user": "JohnDoe"
           }
         }

     Positives
     - Lightweight and easy to read: The format is concise. If you had an array of 10,000 orders, JSON would handle it without the repetitive tags seen in XML, resulting in smaller data sizes.
     - Supported by many languages: In JavaScript, for instance, JSON is natively supported; a simple JSON.parse() call turns a JSON string into a JavaScript object, making integration seamless.

     Negatives
     - Limited type system: JSON supports only a handful of primitive types (strings, numbers, booleans, null). In our example, "bookID": "12345" is a string even though it looks numeric, and there is no native way to express dates, binary data, or the difference between integers and floats. This can lead to type-related bugs or require additional parsing.
     - No built-in support for streaming: If you wanted to update book prices in real time, JSON wouldn't support this natively. You'd need workarounds or additional technologies.

     GraphQL:

     Query:

         {
           order(id: "5678") {
             bookID
             user
           }
         }

     Response:

         {
           "data": {
             "order": {
               "bookID": "12345",
               "user": "JohnDoe"
             }
           }
         }

     Positives
     - Fetch exactly what you need: If you had a mobile app with limited screen space, you could fetch only the necessary data, like bookID and user, optimizing bandwidth and load times.
     - Single endpoint: Instead of managing multiple endpoints like /orders, /books, and /users, you'd manage a single GraphQL endpoint, simplifying the backend architecture.

     Negatives
     - Overhead of parsing and processing queries: For each query, the server needs to interpret and fetch the right data. With millions of requests carrying varied queries, this could strain the server.
     - Might be overkill for simple APIs: If you only need basic CRUD operations, the flexibility of GraphQL can introduce unnecessary complexity.

     gRPC:

     Protocol Buffers definition:

         message OrderRequest {
           string id = 1;
         }

         message OrderResponse {
           string bookID = 1;
           int32 quantity = 2;
         }

         service OrderService {
           rpc GetOrder(OrderRequest) returns (OrderResponse);
         }

     Positives
     - Efficient serialization with Protocol Buffers: If you expanded globally, the compact binary format of Protocol Buffers would save significant bandwidth, especially with large datasets.
     - Supports bi-directional streaming: Imagine a feature where readers chat about a book in real time. gRPC's streaming would allow instant message exchanges without constant polling.
     - Strongly typed: With int32 quantity = 2;, you're assured that quantity is always an integer, reducing type-related errors.

     Negatives
     - Requires understanding of Protocol Buffers: Your development team would need to learn a new technology, potentially slowing initial development.
     - Might be unfamiliar: If the team is accustomed to RESTful services, transitioning to gRPC introduces a learning curve.

     Let's get to today's topic.

     What is gRPC?

     Imagine you have two computers that want to talk to each other. Just like people speak different languages, computers also need a common language to communicate. gRPC is like a special phone line that lets these computers chat quickly and clearly. In technical terms, gRPC is a tool that helps different parts of a software system communicate. It's designed to be fast, efficient, and secure. Instead of sending wordy messages, gRPC sends compact, speedy notes. This makes things run smoothly, especially when you have lots of computers talking at once in big systems like online shopping sites or video games.

     gRPC, a Remote Procedure Call framework originally developed at Google, is an open-source communication framework designed for systems to interact seamlessly. At its core, gRPC is about enabling efficient communication between computer programs, particularly when they're located on different servers or even across global data centers.

     Simplified Guide to gRPC

     Imagine you have two friends, one who knows a secret recipe (let's call them the Chef) and another who wants to learn it (let's call them the Learner). However, there's a catch: they live in different towns. gRPC is like a magical phone that doesn't just let them talk to each other but also allows the Learner to watch and learn the recipe as if they were standing right next to the Chef in the kitchen. In the world of computer programs, gRPC does something quite similar. If you've created an app (which we'll think of as the Learner) that needs to use functions or data from a program on another computer (our Chef), gRPC helps them communicate effortlessly. Here's how it works:

     1. Defining the Menu: First, you tell gRPC about the dishes (or services) you're interested in, along with the ingredients (parameters) needed for each one and what you hope to have on your plate in the end (return types).
     2. The Chef Prepares: On the server (the Chef's kitchen), the menu is put into action. The server prepares to make those dishes exactly as described, ready to whip them up on request.
     3. The Magical Phone (gRPC): This is where gRPC comes in, acting as the phone line between the Learner and the Chef. It's not just any phone; it's a special one that can transmit tastes, smells, and cooking techniques instantly.
     4. Ordering Up: The Learner (client) uses a copy of the menu (known as a stub, but it's simpler to think of it as just a "client menu") to place an order. This "client menu" knows all the dishes the Chef can make and how to ask for them.
     5. Enjoying the Dish: Once the Learner uses the magical phone to request a dish, the Chef prepares it and sends it back over the same magical connection. To the Learner, it feels like the dish was made right there in their own kitchen.

     In technical terms, gRPC lets different pieces of software on different machines talk to each other as though they were part of the same program. It's a way of making remote procedure calls (RPCs), where the Learner (client) calls a method on the Chef (server) as if it were local. This magic makes building and connecting distributed applications much simpler and more intuitive.
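     To make the stub-and-server relationship concrete, here is a minimal Python sketch using the grpcio package and the OrderService definition shown earlier. It assumes the order_pb2 and order_pb2_grpc modules have already been generated from that .proto file with protoc; the module names, port, and hard-coded values are illustrative assumptions, not from the original article.

         # A sketch of the "Chef" (server) and "Learner" (client) from the analogy.
         # Assumes order_pb2 / order_pb2_grpc were generated by protoc.
         from concurrent import futures
         import grpc
         import order_pb2
         import order_pb2_grpc

         class OrderService(order_pb2_grpc.OrderServiceServicer):
             def GetOrder(self, request, context):
                 # Look the order up by request.id (hard-coded here for brevity).
                 return order_pb2.OrderResponse(bookID="12345", quantity=2)

         def serve():
             # The server registers its implementation and listens for calls.
             server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
             order_pb2_grpc.add_OrderServiceServicer_to_server(OrderService(), server)
             server.add_insecure_port("localhost:50051")
             server.start()
             server.wait_for_termination()

         def fetch_order(order_id: str) -> None:
             # The client calls the remote method through a stub, as if it were local.
             with grpc.insecure_channel("localhost:50051") as channel:
                 stub = order_pb2_grpc.OrderServiceStub(channel)
                 response = stub.GetOrder(order_pb2.OrderRequest(id=order_id))
                 print(response.bookID, response.quantity)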
     Technical Aspects

     Here's a closer look at its technical aspects. We'll consider a cloud-native application for a food delivery service. A user wants to order food from a restaurant using this app.

     Protocol Buffers: To represent an order, instead of a lengthy JSON payload, we use a concise Protocol Buffer definition. This ensures that the order details are transmitted efficiently between the user's device and the restaurant's system.

         message FoodOrder {
           string dishName = 1;
           int32 quantity = 2;
           string specialInstructions = 3;
         }

     gRPC uses Protocol Buffers (often shortened to "protobuf") as its primary mechanism for defining services and the structure of the data messages. Protobuf is a binary serialization format, making it both smaller and faster than traditional text-based formats like JSON or XML.

     Streaming Capabilities: As the restaurant prepares the order, the user can receive real-time updates on the cooking status. This is achieved using gRPC's streaming, so the user gets instant notifications like "Cooking", "Packing", and "Out for Delivery" without constantly asking the server. (Note that RPC parameters must be message types, so the order ID is wrapped in a small request message.)

         message OrderStatusRequest {
           string orderId = 1;
         }

         message StatusUpdate {
           string status = 1;
         }

         rpc OrderUpdates(OrderStatusRequest) returns (stream StatusUpdate);

     Language Agnostic: The user's app might be written in Java (for Android) or Swift (for iOS), while the restaurant's system uses Python. Thanks to gRPC's multi-language support, when the user places an order, both systems communicate flawlessly, irrespective of their programming languages.

     Deadlines/Timeouts: Imagine you're exploring new restaurants on the app. You don't want to wait indefinitely for results to load; you expect a prompt response. Here, gRPC's deadline feature plays a crucial role. When the app requests a list of restaurants from the server, it sets a deadline. This deadline is the app saying, "I can wait this long for a response, but no longer." For example, the app might set a deadline of 3 seconds for fetching the restaurant list. This deadline is communicated to the server, ensuring that the request is either completed in time or terminated with a DEADLINE_EXCEEDED error. This approach respects the user's time, providing a fail-fast mechanism that allows the app to quickly decide on an alternative course of action, such as displaying a helpful message or trying a different query. In some languages, you pass a timeout directly with the call:

         response = client.GetRestaurantList(timeout=3.0)

     In others, you might set a deadline based on the current time plus a duration:

         Deadline deadline = Deadline.after(3, TimeUnit.SECONDS);
         List<Restaurant> response = client.getRestaurantList(deadline);

     Closing Remarks

     We've taken a trip through the world of communication tools in cloud-native app development, exploring everything from the structured world of XML and the simplicity of JSON to the flexibility of GraphQL and the efficiency of gRPC. Each of these tools plays a key role in helping our apps talk to each other in the vast world of the internet. Diving into gRPC, we find it's more than just a way to send messages. It's like a bridge that connects different parts of our digital world, making it easy for them to work together, no matter the language they speak or where they are. To master the fundamentals of Cloud Native and Kubernetes, enroll in our KCNA course at KodeKloud: Explore the KCNA Learning Path. View the full article
  2. Data analytics involves storing, managing, and processing data from different sources and analyzing it thoroughly to develop solutions to business problems. While JSON helps interchange data between different web applications and sources through API connectors, Snowflake helps you analyze that data with its intuitive features. Therefore, JSON Snowflake data migration is crucial […]View the full article
  3. C# is an incredibly powerful, free, and open-source programming language that powers a wide array of applications, including complex video game engines. However, whether you are building a massively complex application or a simple console application, you might come across instances where you need to transmit data. One of the most powerful data interchange formats is JSON. It offers a lightweight and human-readable format while supporting complex data layouts. In this tutorial, we will learn how to use C# tools and features to read and parse the data from a JSON file in a C# application.

     Sample JSON File

     Let us start by setting up the JSON file that we are going to use for demonstration purposes. In our case, the JSON file is as follows:

         {
           "database": {
             "name": "admin",
             "type": "SQL",
             "server": "localhost:5904",
             "creds": {
               "username": "root",
               "password": "mysql"
             }
           }
         }

     This is a basic JSON file with nested values, which lets us demonstrate how to read nested JSON data.

     Installing the Newtonsoft.Json Package

     To quickly and efficiently parse and work with JSON data in C#, we will make use of an external .NET library. In this case, we use the Newtonsoft.Json package to read and work with JSON data. Before using it, we need to ensure that it is installed. We can run the following command in the NuGet Package Manager Console:

         Install-Package Newtonsoft.Json

     This downloads the package and gives you access to its features.

     Read and Deserialize the JSON File

     Once we have everything ready, we can proceed and read the JSON file. We can add the source code as follows:

         using System;
         using System.IO;

         class Program
         {
             static void Main()
             {
                 // Path to the JSON file on disk.
                 string jsonFilePath = "config.json";
                 string jsonString;

                 if (File.Exists(jsonFilePath))
                 {
                     // Read the raw JSON text into a string.
                     jsonString = File.ReadAllText(jsonFilePath);
                     Console.WriteLine(jsonString);
                 }
                 else
                 {
                     Console.WriteLine("File not found!");
                 }
             }
         }

     In the given example code, we start by defining the path to the JSON file. Next, we define a variable to store the raw JSON string that we read. Finally, we use File.ReadAllText to read the JSON file.

     The next step is to deserialize the JSON string. Deserializing converts the JSON text into a valid C# object. We can do this by creating classes that represent the structure of the JSON data from the file. An example is as follows:

         using System;
         using System.IO;
         using Newtonsoft.Json;

         public class Creds
         {
             public string Username { get; set; }
             public string Password { get; set; }
         }

         public class Database
         {
             public string Name { get; set; }
             public string Type { get; set; }
             public string Server { get; set; }
             public Creds Creds { get; set; }
         }

         public class RootObject
         {
             public Database Database { get; set; }
         }

         class Program
         {
             static void Main()
             {
                 string jsonFilePath = "config.json";
                 string jsonString;

                 if (File.Exists(jsonFilePath))
                 {
                     jsonString = File.ReadAllText(jsonFilePath);

                     // Map the JSON structure onto our classes.
                     RootObject dbInfo = JsonConvert.DeserializeObject<RootObject>(jsonString);

                     Console.WriteLine($"Database Name: {dbInfo.Database.Name}");
                     Console.WriteLine($"Type: {dbInfo.Database.Type}");
                     Console.WriteLine($"Server: {dbInfo.Database.Server}");
                     Console.WriteLine($"Username: {dbInfo.Database.Creds.Username}");
                     Console.WriteLine($"Password: {dbInfo.Database.Creds.Password}");
                 }
                 else
                 {
                     Console.WriteLine("File not found!");
                 }
             }
         }

     In the given example, we define three main classes. The first is the "Creds" class, the second is the "Database" class, and lastly, we have the "RootObject" class. Each class maps the structure of the JSON data and the corresponding types. Finally, we use the JsonConvert.DeserializeObject<RootObject> method to convert the JSON string from the file into a C# object.

     Read the JSON Arrays

     As you can guess, real-world JSON data is rarely as simple as the previous example. One of the more complex features that you will encounter in JSON is arrays of nested JSON objects. Let us see how we can handle such a layout in C#. In this case, we are dealing with a JSON file as follows:

         [
           {
             "name": "admin",
             "type": "SQL",
             "server": "localhost:5094",
             "creds": {
               "username": "root",
               "password": "mysql"
             }
           },
           {
             "name": "users",
             "type": "MongoDB",
             "server": "localhost:5095",
             "creds": {
               "username": "root",
               "password": "postgres"
             }
           }
         ]

     With that file in place, we can proceed to deserialize and read it. An example is as follows:

         using Newtonsoft.Json;
         using System;
         using System.Collections.Generic;
         using System.IO;

         public class Creds
         {
             public string Username { get; set; }
             public string Password { get; set; }
         }

         public class Database
         {
             public string Name { get; set; }
             public string Type { get; set; }
             public string Server { get; set; }
             public Creds Creds { get; set; }
         }

         class Program
         {
             static void Main()
             {
                 string jsonFilePath = "config.json";
                 string jsonString;

                 if (File.Exists(jsonFilePath))
                 {
                     jsonString = File.ReadAllText(jsonFilePath);

                     // Deserialize the JSON array into a list of Database objects.
                     List<Database> databases = JsonConvert.DeserializeObject<List<Database>>(jsonString);

                     foreach (var db in databases)
                     {
                         Console.WriteLine($"Database Name: {db.Name}");
                         Console.WriteLine($"Type: {db.Type}");
                         Console.WriteLine($"Server: {db.Server}");
                         Console.WriteLine($"Username: {db.Creds.Username}");
                         Console.WriteLine($"Password: {db.Creds.Password}");
                         Console.WriteLine();
                     }
                 }
                 else
                 {
                     Console.WriteLine("File not found!");
                 }
             }
         }

     You might notice that this program does not differ much from regular JSON parsing. The main difference is that JsonConvert.DeserializeObject<List<Database>> converts the JSON string into a List<Database>, where each object corresponds to one element of the JSON array. The resulting output is as follows:

         Database Name: admin
         Type: SQL
         Server: localhost:5094
         Username: root
         Password: mysql

         Database Name: users
         Type: MongoDB
         Server: localhost:5095
         Username: root
         Password: postgres

     Conclusion

     In this tutorial, we covered the essentials of reading and working with JSON files in a C# application using the Newtonsoft.Json package. View the full article
  4. HashiCorp Terraform provides a couple of functions for working with JSON: jsonencode and jsondecode, which encode and decode JSON. They can be a powerful tool in scenarios where you need to work with JSON data within a Terraform project. This article shows some simple examples […] The article Terraform: How to work with JSON (jsondecode, jsonencode, .tfvars.json) appeared first on Build5Nines. View the full article
  5. Have you ever tried to load a JSON file into BigQuery only to find out the file wasn't in the proper newline-delimited format (each line being an object, not an array) that BigQuery expects? Well, you might still be able to load the file using only BigQuery (no other tools necessary) and a little bit of creativity! View the full article
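     The article's trick uses only BigQuery itself. Purely as context for what "newline-delimited" means, here is a minimal Python sketch (not the article's approach; the file names are illustrative) that turns a JSON array file into the one-object-per-line format BigQuery expects:

         import json

         # Source file holds one big JSON array; destination gets NDJSON:
         # one JSON object per line, no enclosing array.
         with open("data.json") as src, open("data.ndjson", "w") as dst:
             for record in json.load(src):
                 dst.write(json.dumps(record) + "\n")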
  6. Catalog API (CAPI) introduced a new request/response attribute, "DetailsDocument", that accepts and returns a JSON object. As a developer building on top of the APIs, you can send a JSON object in the CAPI StartChangeSet API request and get a JSON object back in the responses of the DescribeEntity and DescribeChangeSet APIs. This capability exists alongside the current experience of sending and receiving a string in the "Details" attribute of the StartChangeSet, DescribeChangeSet, and DescribeEntity APIs, respectively. View the full article
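     As a rough boto3 sketch of what this enables, a change set entry can now carry a JSON object directly instead of a serialized string. The change type, entity type, identifier, and document contents below are illustrative assumptions, not taken from the announcement:

         import boto3

         client = boto3.client("marketplace-catalog", region_name="us-east-1")

         response = client.start_change_set(
             Catalog="AWSMarketplace",
             ChangeSet=[
                 {
                     "ChangeType": "UpdateInformation",  # hypothetical change type
                     "Entity": {
                         "Type": "SaaSProduct@1.0",       # hypothetical entity type
                         "Identifier": "example-entity-id",
                     },
                     # A JSON object, not a JSON-encoded string as with "Details".
                     "DetailsDocument": {
                         "Description": {"ProductTitle": "Example Product"}
                     },
                 }
             ],
         )
         print(response["ChangeSetId"])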
  7. Developers using SAM CLI to author their serverless applications with Lambda functions can now create and use Lambda test events to test their function code. Test events are JSON objects that mock the structure of requests emitted by AWS services to invoke a Lambda function and return an execution result, serving to validate a successful operation or to identify errors. Previously, Lambda test events were only available in the Lambda console. With this launch, developers using SAM CLI can create and access a test event from their AWS account and share it with other team members. View the full article
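     For instance, a test event mocking an Amazon S3 object-created notification is just a JSON document shaped like the payload S3 would send; this abbreviated sketch uses placeholder bucket and key names:

         {
           "Records": [
             {
               "eventSource": "aws:s3",
               "eventName": "ObjectCreated:Put",
               "s3": {
                 "bucket": { "name": "example-bucket" },
                 "object": { "key": "uploads/report.csv" }
               }
             }
           ]
         }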
  8. JSON is a popular data-interchange format that has been widely adopted across the development community. However, JSON has strict formatting rules and does not natively support multi-line strings. This can be quite frustrating when you need flexibility or have to store large blocks of text. Let us discuss a simple workaround for multi-line strings.

     Using a JSON Array

     A creative way of inserting a multi-line string in JSON is to use an array. We can separate each line as an individual element of the array. For example, let us say we have a sample text as shown:

         Lorem ipsum dolor sit amet. Aut eveniet sequi sit sequi dicta vel
         impedit architecto 33 voluptas tempore et optio libero qui molestiae
         possimus. Aut cupiditate animi ut commodi natus ut nesciunt mollitia
         ea ipsam iusto a ipsa odit a laboriosam neque et vero totam.

     To represent this block of text in JSON, we can convert each line into an array element:

         [
           "Lorem ipsum dolor sit amet.",
           "Aut eveniet sequi sit sequi dicta vel impedit architecto,",
           "33 voluptas tempore et optio libero qui molestiae possimus.",
           "Aut cupiditate animi ut commodi natus ut nesciunt mollitia ea,",
           "ipsam iusto a ipsa odit a laboriosam neque et vero totam."
         ]

     Note how each line of the string becomes an array element separated by a comma. You can then use a method in your preferred programming language to reconstruct the original block, as the short sketch after this entry shows.

     Conclusion

     This tutorial provides a simple but creative way of creating a multi-line string in JSON. View the full article
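     A minimal Python sketch of the reconstruction step mentioned above; the variable names are illustrative:

         import json

         # The multi-line text stored as a JSON array, one line per element.
         raw = """[
           "Lorem ipsum dolor sit amet.",
           "Aut eveniet sequi sit sequi dicta vel impedit architecto,",
           "33 voluptas tempore et optio libero qui molestiae possimus."
         ]"""

         lines = json.loads(raw)    # parse the array into a list of strings
         text = "\n".join(lines)    # rebuild the original multi-line block
         print(text)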
  9. Amazon ElastiCache for Redis and Amazon MemoryDB for Redis now support natively storing and accessing data in the JavaScript Object Notation (JSON) format. With this launch, application developers can effortlessly store, fetch, and update their JSON data inside Redis without needing to manage custom code for serialization and deserialization. Using ElastiCache and MemoryDB, you can now efficiently retrieve and update specific portions of a JSON document without manipulating the entire object, which can improve performance and reduce cost. You can also search your JSON document contents using the JSONPath query syntax. View the full article
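     From a client, this looks roughly like the following Python sketch using the redis-py package; the endpoint, key name, and document are illustrative:

         import redis

         # Placeholder ElastiCache/MemoryDB endpoint.
         r = redis.Redis(host="my-cache.example.amazonaws.com", port=6379)

         # Store a whole JSON document at the root path "$".
         r.execute_command("JSON.SET", "order:1", "$",
                           '{"bookID": "12345", "quantity": 2, "user": "JohnDoe"}')

         # Update one field without rewriting the whole object.
         r.execute_command("JSON.SET", "order:1", "$.quantity", "3")

         # Fetch a specific portion using a JSONPath expression.
         print(r.execute_command("JSON.GET", "order:1", "$.user"))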
  10. Amazon Relational Database Service (Amazon RDS) Data API can now return results in a new simplified JSON format that makes it easier to convert the JSON string to an object in your application. Previously, Amazon RDS Data API returned a JSON string as an array of data type and value pairs, which required developers to write custom code to parse the response, extract the values, and manually translate the JSON string into an object. Instead, the new format returns an array of column names and values, which makes it easier for common JSON parsing libraries to convert the response JSON string to an object. The previous JSON format is still supported, and existing applications using Amazon RDS Data API will work unchanged. To learn more about the new format and how to use it, see our documentation. View the full article
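     Illustratively, for a row with columns id and name, the difference looks something like this (shapes simplified and assumed for illustration, not copied from the documentation). The previous format wrapped every value in a type descriptor:

         [[{ "longValue": 1 }, { "stringValue": "Alice" }]]

     while the simplified format pairs column names with plain values, so any common JSON library yields usable objects directly:

         [{ "id": 1, "name": "Alice" }]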
  11. Amazon EventBridge Schema Registry now adds support for JSON Schema, allowing customers to validate, annotate, and manipulate JSON documents conforming to the JSON Schema Draft 4 specification. You now have access to more specifications when creating schemas and can use JSON Schema to create strongly typed events. You can also implement use cases such as client-side validation using a JSON Schema validator before publishing events on the EventBridge bus. View the full article
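     Client-side validation before publishing can be a few lines with Python's jsonschema package, as a sketch; the schema and event below are illustrative:

         from jsonschema import Draft4Validator

         schema = {
             "$schema": "http://json-schema.org/draft-04/schema#",
             "type": "object",
             "properties": {
                 "orderId": {"type": "string"},
                 "amount": {"type": "number"},
             },
             "required": ["orderId", "amount"],
         }

         event = {"orderId": "12345", "amount": 19.99}

         Draft4Validator.check_schema(schema)     # the schema itself is well-formed
         Draft4Validator(schema).validate(event)  # raises ValidationError on mismatch
         print("event is valid; safe to publish to the bus")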
  12. AWS WAF can now natively parse request body JSON content, allowing you to inspect specific keys or values of the JSON content with AWS WAF rules. This capability helps you protect your APIs by checking for valid JSON structure, inspecting the JSON content for common threats against your application, and reducing false positives by inspecting only the keys or values in the JSON content. View the full article
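     In a WAF rule, this surfaces as a JsonBody field to match. A rough sketch of the relevant fragment (the inspected path and scope below are illustrative):

         {
           "FieldToMatch": {
             "JsonBody": {
               "MatchPattern": { "IncludedPaths": ["/order/bookID"] },
               "MatchScope": "VALUE",
               "InvalidFallbackBehavior": "EVALUATE_AS_STRING"
             }
           }
         }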