Showing results for tags 'mongodb'.

  1. March 26, 2024: Today, Canonical announced the release of Charmed MongoDB, an enterprise solution for MongoDB® that comes with advanced automation features, multi-cloud capabilities and comprehensive support. MongoDB® is one of the most widely used databases worldwide. It provides powerful capabilities for scaling, consistency and fault tolerance, making it a popular choice for organisations of all sizes and in various industries. Charmed MongoDB is an enterprise drop-in replacement for the MongoDB® Community version with the advanced features organisations need in their production environment. “As part of our open source data solution portfolio, Charmed MongoDB is designed to meet the demands of modern deployments”, said Cedric Gegout, VP of Product at Canonical. “Organisations can deploy Charmed MongoDB with confidence, knowing they are backed by Canonical’s commitment to performance in any cloud environment, alongside 10 years of support and security maintenance.” Hyper-automated MongoDB®, available on any cloud: the Charmed MongoDB operator deploys and runs MongoDB® on physical machines, virtual machines (VMs) and other cloud and cloud-like environments, including AWS, Azure, OpenStack and VMware. The solution comes with automation features that simplify the deployment, scaling, design and management of MongoDB®, ensuring reliability. In addition to these capabilities, Charmed MongoDB offers enterprise-level features such as high availability, sharding, audit logging, backup and restore, user management, and Transport Layer Security (TLS). Secured and supported for 10 years: for organisations looking for fast security patching against Common Vulnerabilities and Exposures (CVEs), Charmed MongoDB offers comprehensive security maintenance. 
Canonical’s Charmed MongoDB offers a cost-effective subscription model that includes 10 years of security maintenance and 24/7 support, providing the stability and peace of mind necessary for organisations to run MongoDB® in production. Simple pricing per node: Charmed MongoDB is part of Canonical’s data solutions portfolio. Customers purchase 24/7 or weekday enterprise support on a per-node basis through the Ubuntu Pro + Support plan, which covers all applications within the portfolio, including Charmed Kafka and Charmed Spark, as well as Canonical’s AI solutions such as Charmed Kubeflow and Charmed MLflow. This convenient per-node subscription and the lack of software licence fees make Canonical’s offering compelling for organisations looking to run database solutions like MongoDB® with more control over their TCO. Budgeting and financial planning are straightforward and predictable. To get started with Charmed MongoDB, users can refer to the documentation available at Charmhub. For more information about Charmed MongoDB, visit canonical.com/data/mongodb. Canonical is also delighted to offer Charmed MongoDB training in collaboration with Cloudbase Solutions. This program is designed to help individuals get started with Charmed MongoDB through in-person or virtual training. Additional resources: Webinar: MongoDB® for Modern Data Management; Whitepaper: MongoDB® Security and Support; Whitepaper: MongoDB® for enterprise data management; Learn more about Charmed MongoDB Managed Service; Learn more about Data Solutions Advisory at Canonical. Trademark Notice: “MongoDB” is a trademark or registered trademark of MongoDB Inc. Other trademarks are property of their respective owners. Charmed MongoDB is not sponsored, endorsed, or affiliated with MongoDB, Inc. View the full article
  2. MongoDB has made its multi-cloud developer data platform MongoDB Atlas available in six additional cloud regions in Canada, Germany, Israel, Italy, and Poland — now said to be the most widely available developer data platform in the world. With this expansion, MongoDB Atlas is now available in 117 cloud regions across Amazon Web Services (AWS),... Read more » The post MongoDB expands availability of MongoDB Atlas to six more cloud regions appeared first on Cloud Computing News. View the full article
  3. Amazon DocumentDB (with MongoDB compatibility) Elastic Clusters now support readable secondaries, the ability to configure the shard instance count, and the ability to start and stop clusters. These new features help you scale read workloads and improve the usage efficiency of your Elastic Clusters. View the full article
  4. Amazon DocumentDB (with MongoDB compatibility) Elastic Clusters now support automated backups and the ability to copy snapshots. These new features enhance application resilience and help meet the recovery objectives of your Elastic Clusters. View the full article
  5. Amazon DocumentDB (with MongoDB compatibility) announces support for partial indexes. With partial indexes, developers can create an index on a subset of documents that meet a specific filter criterion. By indexing a subset of data, partial indexes can reduce query times and improve performance during index creation and management. View the full article
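Since a partial index only covers documents that satisfy its filter expression, a quick way to build intuition is to check which documents a given filter would admit. A minimal Python sketch of that semantics (the collection, field names, and filter are hypothetical, not from the announcement):

```python
# Sketch: which documents a partial index would cover.
# A partial index with a filter expression like {"status": "active"}
# only indexes matching documents, so the index stays smaller and
# cheaper to maintain than a full index on the same field.

docs = [
    {"_id": 1, "customer": "acme", "status": "active"},
    {"_id": 2, "customer": "globex", "status": "archived"},
    {"_id": 3, "customer": "initech", "status": "active"},
]

def covered_by_partial_index(doc, field="status", value="active"):
    """True if the document satisfies the partial filter expression."""
    return doc.get(field) == value

indexed = [d["_id"] for d in docs if covered_by_partial_index(d)]
print(indexed)  # [1, 3]
```

Queries that include the same predicate can be served by the smaller index; documents outside the filter are simply never indexed.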
  6. This article is about MongoDB in C++. MongoDB is a powerful and widely used database that stores data in a JSON-like format. It is an open-source, document-oriented NoSQL database that offers a flexible approach to storing and managing records. Using the C++ driver, the user can run insert, delete, and update operations against MongoDB. Let’s learn how the MongoDB driver is installed and used in C++ to manage a system’s database, with proper examples for better understanding. How to install the MongoDB driver in C++: the official driver for C++ is the MongoDB C++11 driver (mongocxx), which can be installed on any system with a C++ environment. We must install the driver library and connect the database to our C++ projects using a connection URI string. The driver has built-in connection management that connects to the database on request and reconnects if the connection is lost, and it handles authentication and authorisation of requests made from C++ to the database. Create a MongoDB database on the system: install MongoDB, then open the bin folder inside the MongoDB folder under “C:\Program Files”. Copy the address of the bin folder and add it to the PATH environment variable in Windows to activate the NoSQL MongoDB database. Make sure MongoDB Compass, which provides a user interface, is installed. The database is accessible through localhost on port “27017”. Open the command prompt and run “mongo --version” to show the installed version of MongoDB. Create a new database in MongoDB using cmd: we can easily create a new database by running a single command in the cmd of our system. 
We run the following command: > use mydb. Show all running databases in MongoDB: to show all running databases, run the following command in cmd: > show dbs. To launch the MongoDB server, we just need to follow and fulfil the requirements on the terminal. We can also get the “Mongo” collection in the current default database, “test”, which already contains records. Note that only databases holding some data or records are shown by show dbs. Example: connecting MongoDB in C++: here, we connect the NoSQL MongoDB database to interact with C++. First make sure that the C++ toolchain and MongoDB are active on the system, and that the MongoDB C++ driver library is installed. We include the essential driver headers in our code, such as “mongocxx/client.hpp” and “mongocxx/instance.hpp”. We then create a client using the URI “mongodb://localhost:27017”; if the connection succeeds, we display the message “connected to MongoDB”. A locally running MongoDB is only accessible on port “27017”, as shown in the previous MongoDB screenshot. Maintain CRUD in MongoDB: CRUD (create, read, update, delete) covers the main operations needed in a database management system; we can do little in a database without it. Insert data into the MongoDB database from C++: we can easily add records to any new or existing database. We include the essential MongoDB headers to connect to the database, write the connection code in C++, and then issue an insert query to add records to the database. 
MongoDB provides the C++ driver through the “mongocxx” library, which handles all the C++ operations. Using the library, we create a driver instance and, with the insert_one() method, add data to the NoSQL database. Delete data from the database: at every step, make sure that the MongoDB connection is established and working. We access the database and its collections using mongocxx types such as “mongocxx::database” (aliased here as “db”) and “mongocxx::collection” (aliased as “colle”). Then create a filter describing the document you want to delete, specifying the criteria for deletion, and pass the filter to the delete_one() or delete_many() method to remove the record from the database. Update records in the database: an update changes existing records; we can update a record using the update_one() method defined on the MongoDB C++ driver’s collection type. Conclusion: the usage of NoSQL MongoDB is increasing rapidly because of its high efficiency and performance, and MongoDB provides a dedicated driver for the C++ language. With its help, users can easily add, delete, update, and list records, collections, and databases without storage issues, as MongoDB manages its own storage and integrates with C++ through its special-purpose libraries. Hopefully, this article is helpful and easy to learn from. Remember to use smart techniques and databases to build new programs and applications that make the system more reliable. View the full article
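The insert_one()/delete_one()/update_one() flow described in the article targets the mongocxx C++ driver against a live server. As a driver-free illustration of the same CRUD semantics, here is a rough Python sketch against an in-memory stand-in for a collection (all document contents are invented for illustration):

```python
# In-memory stand-in for a MongoDB collection: a list of dict documents.
collection = []

def insert_one(doc):
    """Append a copy of the document, like a driver's insert_one()."""
    collection.append(dict(doc))

def delete_one(filt):
    """Remove the first document matching every key/value in filt."""
    for i, doc in enumerate(collection):
        if all(doc.get(k) == v for k, v in filt.items()):
            del collection[i]
            return True
    return False

def update_one(filt, new_values):
    """Apply a $set-style update to the first matching document."""
    for doc in collection:
        if all(doc.get(k) == v for k, v in filt.items()):
            doc.update(new_values)
            return True
    return False

insert_one({"name": "Alice", "role": "admin"})
insert_one({"name": "Bob", "role": "user"})
update_one({"name": "Bob"}, {"role": "admin"})   # update
delete_one({"name": "Alice"})                     # delete
print(collection)  # [{'name': 'Bob', 'role': 'admin'}]
```

The real driver adds connection handling, BSON encoding, and server round-trips, but the filter-then-act pattern is the same.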
  7. Amazon DocumentDB (with MongoDB compatibility) now supports text search, making it easy to run text search queries on extensive string data using a native text index. You can now perform text searches for specific terms or phrases on large string data using the $text and $search operators, assign different significance levels to the indexed fields using weights, and sort the search results based on relevance using the $meta operator. View the full article
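A way to get intuition for the field weights mentioned above: a document's relevance score grows with term matches, scaled by the weight of the field they occur in. A simplified Python sketch of that idea (the weights, documents, and scoring formula are illustrative, not DocumentDB's actual implementation):

```python
# Rough sketch of weighted text scoring: each indexed field has a
# weight, and a document's score sums matches of the search terms,
# scaled by the weight of the field they appear in.
weights = {"title": 3, "body": 1}  # hypothetical index weights

docs = [
    {"_id": 1, "title": "mongodb text search", "body": "search large strings"},
    {"_id": 2, "title": "backups", "body": "text search on snapshots"},
]

def score(doc, terms):
    total = 0
    for field, w in weights.items():
        words = doc.get(field, "").split()
        total += w * sum(words.count(t) for t in terms)
    return total

# Sorting by score is what $meta: "textScore" sorting achieves.
ranked = sorted(docs, key=lambda d: score(d, ["search"]), reverse=True)
print([d["_id"] for d in ranked])  # [1, 2]
```

Document 1 wins because its match sits in the heavily weighted title field, mirroring how weighted text indexes bias relevance.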
  8. Amazon DocumentDB (with MongoDB compatibility) now adds support for maintenance notifications to give users visibility into scheduled maintenance activities on their Amazon DocumentDB clusters. Users can now receive near-real-time notifications of scheduled maintenance activities through health events in the AWS Health Dashboard (AHD) in the AWS console and through email. View the full article
  9. Financial institutions handle vast amounts of sensitive and confidential data, including customer information, transaction details, and regulatory compliance records. A trusted database ensures the security and privacy of this sensitive information, protecting it from unauthorised access, breaches, or cyber threats. MongoDB is an ideal fit, and it’s one of the most widely used databases in the financial services industry. It provides a sturdy, adaptable and trustworthy foundation, and it can safeguard sensitive customer data while facilitating swift responses to rapidly evolving situations. This security and stability can be enhanced even further with Charmed MongoDB. MongoDB use cases: Customer Data Management: within financial services, MongoDB is often used as the database for customer data management. MongoDB can store and manage customer profiles, account information and transaction histories. Its flexibility allows for handling diverse data types related to customer records, and it can easily accommodate changes in customer data structures as regulations and business requirements evolve. Its document-oriented model allows for flexible and dynamic schemas, adapting to diverse customer information. Online Banking: another area where MongoDB offers a robust database platform is online banking, which covers any web or mobile banking apps. These applications often need to store and retrieve customer account information, transaction history, and payment data as fast as possible. MongoDB can provide the back-end data storage for these applications, supporting fast and secure access. Its scalability ensures a seamless user experience, and its flexible data model accommodates the evolving requirements of digital banking platforms. Fraud detection: the financial sector is particularly susceptible to fraudulent activities due to factors that make it an attractive target for malicious actors. 
Some key reasons include high-stakes transactions, insider threats, data breaches, and more. Organisations in this sector rely on fraud detection systems, which in turn require a suitable database. MongoDB can store transaction data, user behaviour patterns, and other information that is essential for fraud detection. In addition, it can handle high volumes of data and perform real-time analytics – enabling rapid detection of fraudulent activities even for businesses operating at massive scale. Explore how MongoDB can support your projects in the financial, telecommunications and automotive industries in our recent guide: MongoDB® for enterprise data management. Canonical services for the financial sector: Canonical offers comprehensive financial services (FinServ) open source solutions and expertise. From hybrid multi-cloud strategy to Kubernetes, we can help you accelerate innovation, drive business agility and reduce costs. Contact us for your MongoDB and database needs. Canonical for your MongoDB journey: with Charmed MongoDB – an enhanced, open source, fully compatible drop-in replacement for the MongoDB Community Edition with advanced enterprise features – Canonical can support you at every stage of your MongoDB journey (from Day 0 to Day 2). Our services include: Design: Proof of Concept (PoC) – we work with you to build a PoC before you invest in infrastructure for advanced use cases, allowing you to assess your return on investment carefully. Data solutions training – train your team to use our data solutions, from understanding all features to customisation options, deployment and admin tasks. Deployment: data and/or infrastructure migration – migration from an existing infrastructure or database to the new Canonical Charmed MongoDB solution. MongoDB deployment – deploy, set up and configure Charmed MongoDB in your production environment using the Charmed MongoDB Machine or Kubernetes operator. 
Maintenance, management and support: 24x7x365 support and industry-leading SLAs – get hands-on monitoring and support for your MongoDB application, with a guaranteed SLA. Management of MongoDB in the production environment – self-manage or outsource the life-cycle management of your mission-critical database application to Canonical’s experts. Further reading: MongoDB® Toolkit: A guide to MongoDB security and support; Charmed MongoDB: the operator you need for managing your document database; Running MongoDB on Kubernetes. Trademark Notice: “MongoDB” is a trademark or registered trademark of MongoDB Inc. Other trademarks are property of their respective owners. Charmed MongoDB is not sponsored, endorsed, or affiliated with MongoDB, Inc. View the full article
  10. Many organizations are opting to run MongoDB in the AWS cloud to gain improved scalability and reliability for their MongoDB deployment. View the full article
  11. This post was co-authored by Patrick McCluskey, Vice President at MongoDB. As we welcome the era of AI, it’s important to remember that data is the fuel that powers AI. With this understanding, it’s clear why our goal is to make Azure the best destination for data. In Azure, customers benefit from a comprehensive portfolio of database products including relational, non-relational, open source, and caching solutions. We have also established deep partnerships, like the one we have with MongoDB Inc., to enable digital transformation using their databases as managed offerings in Azure. MongoDB is a leading data platform company that gives developers an intuitive way to model their data. We’ve partnered with MongoDB for years, but this year we deepened our partnership significantly, culminating in a multiyear strategic partnership agreement. We’re incredibly proud of the work we’ve done together to make Azure a great place to run MongoDB Atlas. In the past six months alone, MongoDB has become one of our top-performing Azure Marketplace partners, driven by the adoption of the MongoDB Atlas on Azure pay-as-you-go self-service offering. Microsoft’s mission is to empower everyone to achieve more, and we know that our customers like using MongoDB to build applications. In year one of our strategic partnership, we collaborated with MongoDB to make it even easier for our joint customers to do more with Microsoft services and MongoDB Atlas on Azure. We’ve enabled developers to use MongoDB Atlas in more than 40 Azure regions globally, with our most recent new location in Doha, Qatar, which we announced last month at our Ignite conference. And we know it’s not just about the data center; it’s also critically important to make it easy for developers to get started with MongoDB Atlas on Azure. 
GitHub Copilot makes it easy to build MongoDB applications on Azure thanks to its proficiency in making code suggestions, and we’re working together to further improve GitHub Copilot’s performance using MongoDB schema, among other things. See how MongoDB and GitHub Copilot come together in this video. We’ve already seen customers reaping the benefits of our strategic partnership. For example, our joint work with Temenos helped enable their banking customers to reach record-high scale. And in another case, Mural, a collaborative intelligence company, shared their experience of building with MongoDB Atlas and Microsoft Azure to help their customers collaborate better and smarter. MongoDB at Microsoft Ignite 2023: we continue to make investments to improve the customer experience of running MongoDB Atlas on Azure. In November, at Microsoft Ignite 2023, Microsoft and MongoDB announced three significant integrations: Microsoft Semantic Kernel, Microsoft Fabric, and Entity Framework (EF) Core. Let’s look at how customers can benefit from each one. Semantic Kernel is an open source SDK that enables combining AI services like OpenAI, Azure OpenAI, and Hugging Face with programming languages like C# and Python. At Ignite, MongoDB announced native support for MongoDB Atlas Vector Search in Semantic Kernel. MongoDB Atlas Vector Search allows customers to integrate operational data and vectors in a unified and fully managed platform. Now, customers can use Semantic Kernel to incorporate Atlas Vector Search in applications. This enables, for example, using Atlas Vector Search for retrieval-augmented generation (RAG) in their work with large language models (LLMs), thereby reducing the risk of AI hallucinations, among other benefits. Microsoft Fabric can reshape how your teams work with data by bringing everyone together on a single, AI-powered platform built for the era of AI. 
MongoDB Atlas is the operational data layer for many applications; these customers use MongoDB Atlas to store data from internal enterprise applications, customer-facing services, and third-party APIs across multiple channels. With connectors for Microsoft Fabric pipelines and Dataflow Gen2, our customers can now combine MongoDB Atlas data with relational data from traditional applications and unstructured data from sources like logs, clickstreams, and more. At Microsoft Ignite, we saw exciting announcements making this integration seamless and easy to use for MongoDB customers. During the first keynote, Microsoft announced that Microsoft Fabric is now generally available, and shared Mirroring, a new frictionless way to add and manage existing cloud data warehouses and databases, like MongoDB, in Fabric. Now, MongoDB customers can replicate a snapshot of their database to OneLake, and OneLake will automatically keep this replica in sync in near real time. You can read more on how to unlock the value of data in MongoDB Atlas with the intelligent analytics of Microsoft Fabric here. Millions of developers depend on C# to write their applications, and a large percentage of these use Entity Framework (EF) Core, a lightweight, extensible, open source and cross-platform version of the popular Entity Framework data access technology. MongoDB announced that the MongoDB Provider for EF Core is now available in Public Preview. This makes it possible for developers using EF Core to build C#/.NET applications with MongoDB while continuing to use their preferred APIs and design patterns. In each case, we’ve worked closely with MongoDB to ensure developers, data engineers, and data scientists can easily connect their MongoDB data to Microsoft services. Image of Satya Nadella, Microsoft Chairman and CEO, presenting at Microsoft Ignite. A year of strengthened collaboration: these new integrations follow a banner year of collaboration between Microsoft and MongoDB. 
Beyond Microsoft Ignite, we’ve shared a lot of excellent developer news. The MongoDB for VS Code extension was made generally available in August 2023; VS Code is the world’s most popular integrated development environment (IDE), and developers downloaded the MongoDB extension over 1 million times during its public preview. This free, downloadable extension makes it easy for developers to build applications and manage data in MongoDB directly from VS Code. MongoDB integrated with a range of services across the Microsoft Intelligent Data Platform (MIDP), including: Azure Synapse Analytics, to make it easier to analyze operational data; Microsoft Purview, so users can connect to and safeguard MongoDB data; and Power BI, making it possible for data analysts to natively transform, analyze, and share dashboards that incorporate live MongoDB Atlas data. Data Federation: Atlas Data Federation can now be deployed in Microsoft Azure and supports Microsoft Azure Blob Storage in private preview. Microsoft AI and cloud partners help customers achieve more, working together on the Microsoft Intelligent Data Platform, which spans Databases, Analytics, AI, and Governance. We also jointly published tutorials and more covering: building serverless functions with MongoDB Atlas and Azure Functions using .NET and C#, NodeJS, or Java; creating MongoDB applications using Azure App Service and NodeJS, Python, C#, or Java; building Flask and MongoDB applications with Azure Container Apps; developing IoT data hubs for smart manufacturing with MongoDB Atlas and Azure IoT; and connecting MongoDB Atlas and Azure Data Studio to allow Azure customers to work with their data stored in Atlas on Azure alongside data stored in other Azure data services. It has been a great year for Microsoft and MongoDB, together making it easier for organizations of all sizes to do more with their data. 
Learn more about MongoDB Atlas on Azure Begin using MongoDB Atlas on Azure through the Azure Marketplace at no cost. Read more about MongoDB Atlas on Azure. Learn how to migrate your existing environment to MongoDB Atlas. The post Key customer benefits of the Microsoft and MongoDB expanded partnership appeared first on Microsoft Azure Blog. View the full article
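Atlas Vector Search, mentioned in the Semantic Kernel integration above, ranks stored vectors by similarity to a query vector; cosine similarity is one common measure. A minimal sketch with toy embeddings (a real deployment stores model-generated vectors and queries them through the Atlas $vectorSearch stage):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy document embeddings; real systems store model-generated vectors.
docs = {
    "doc_a": [1.0, 0.0, 0.5],
    "doc_b": [0.0, 1.0, 0.0],
}
query = [1.0, 0.1, 0.4]

# Rank documents by similarity to the query vector.
best = max(docs, key=lambda k: cosine(docs[k], query))
print(best)  # doc_a
```

In a RAG pipeline, the top-ranked documents are passed to the LLM as grounding context, which is how vector search helps reduce hallucinations.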
  12. What is MongoDB? MongoDB is a popular open-source NoSQL database management system designed to handle unstructured or semi-structured data. It falls under the category of document-oriented databases, and it uses a flexible, JSON-like format called BSON (Binary JSON) to store data. MongoDB is known for its scalability, flexibility, and ability to handle large amounts of data with dynamic schemas. Key Features of MongoDB: Document-Oriented: MongoDB stores data in flexible, JSON-like documents, allowing for the representation of complex hierarchical relationships and data structures. Schema-less: Unlike traditional relational databases, MongoDB is schema-less, meaning that each document in a collection can have a different structure. This flexibility is particularly useful when dealing with evolving or dynamic data. Scalability: MongoDB is designed to scale horizontally by sharding data across multiple servers. This enables it to handle large amounts of data and high levels of traffic. Indexes: MongoDB supports the creation of indexes on fields, improving the performance of queries by facilitating faster data retrieval. Query Language: MongoDB uses a powerful query language that supports rich queries, including filtering, sorting, and aggregation. Queries are expressed using a JSON-like syntax. Aggregation Framework: MongoDB provides a flexible and powerful aggregation framework that allows for the processing and transformation of data within the database itself. Geospatial Indexing: MongoDB supports geospatial indexing, making it suitable for applications that require location-based queries. Automatic Sharding: MongoDB can automatically distribute data across multiple servers to achieve horizontal scalability and better performance. Replication: MongoDB supports replica sets, which provide data redundancy and high availability. In the event of a node failure, another node can take over to ensure continuous service. 
GridFS: MongoDB includes GridFS, a specification for storing and retrieving large files such as images, videos, and audio files. What are the top use cases of MongoDB? Top Use Cases of MongoDB: Content Management Systems (CMS): MongoDB is well-suited for content management systems where content can have varying structures. Its flexibility allows for easy adaptation to changing content requirements. Real-Time Analytics: MongoDB’s ability to handle large volumes of data and its support for complex queries make it suitable for real-time analytics applications, providing businesses with insights into their data. Catalogs and Product Data: E-commerce platforms benefit from MongoDB when managing product catalogs and handling diverse product data with different attributes. Mobile Applications: MongoDB is commonly used as a backend database for mobile applications. Its support for flexible schemas and JSON-like documents aligns well with the needs of mobile app development. Internet of Things (IoT): MongoDB is suitable for handling the massive amounts of data generated by IoT devices. Its scalability and ability to store diverse types of data make it a good fit for IoT applications. Social Media Platforms: Social media platforms leverage MongoDB for storing user profiles, social connections, and activity data. Its scalability is crucial for handling large user bases and frequent updates. Log and Event Data Storage: MongoDB’s ability to handle high write throughput makes it well-suited for storing log and event data generated by applications and systems. User Data Management: MongoDB is used for managing user data in applications where user profiles, preferences, and activity logs need to be stored and queried. Location-Based Services: Applications that rely on geospatial data, such as mapping and location-based services, benefit from MongoDB’s support for geospatial indexing and queries. 
Data Hub and Aggregation: MongoDB serves as a central data hub for aggregating and analyzing data from various sources. Its aggregation framework is valuable for processing and transforming data within the database. Enterprise Content Management: MongoDB can be used in enterprise content management systems to handle diverse types of documents, facilitate versioning, and support collaboration among users. Healthcare Applications: MongoDB is utilized in healthcare applications for managing patient data, medical records, and other healthcare-related information. Its flexibility accommodates the diverse nature of healthcare data. MongoDB’s flexibility and scalability make it suitable for a wide range of applications, especially those dealing with dynamic, rapidly evolving, or large-scale data. It has become a popular choice in the development community for various use cases. What are the features of MongoDB? Features of MongoDB: Document-Oriented: MongoDB stores data in BSON (Binary JSON) documents, which are flexible, hierarchical, and allow for the representation of complex data structures. Schema-less: MongoDB is schema-less, allowing each document within a collection to have a different structure. This flexibility is well-suited for applications with evolving or dynamic data models. Dynamic Schemas: MongoDB’s dynamic schemas accommodate changes in data structures without requiring a predefined schema. New fields can be added to documents on the fly. Scalability: MongoDB is designed for horizontal scalability, allowing it to distribute data across multiple servers through sharding. This enables the system to handle large amounts of data and high traffic. Query Language: MongoDB uses a rich query language that supports a variety of queries, including filtering, sorting, and aggregation. Queries are expressed in a JSON-like syntax. Indexes: MongoDB supports the creation of indexes on fields to improve query performance. 
Indexes can be created on single fields or compound indexes on multiple fields. Aggregation Framework: MongoDB provides a powerful aggregation framework that allows for data processing and transformation within the database. It supports pipeline stages for filtering, grouping, sorting, and projecting data. Geospatial Indexing: MongoDB includes geospatial indexing, making it suitable for applications that require location-based queries, such as mapping and geolocation services. Replication: MongoDB supports replica sets for high availability and data redundancy. Replica sets consist of primary and secondary nodes, and automatic failover ensures continuous service in the event of a node failure. Automatic Sharding: MongoDB can automatically distribute data across multiple shards to achieve horizontal scaling. Sharding is beneficial for balancing the load and improving performance. GridFS: MongoDB’s GridFS specification allows for the storage and retrieval of large files, such as images, videos, and audio files, making it suitable for handling large binary data. Security Features: MongoDB provides security features, including authentication, authorization, role-based access control, and encryption. It allows administrators to define access control policies. Text Search: MongoDB includes a full-text search feature that allows users to perform complex text searches on string content within documents. Capped Collections: MongoDB supports capped collections, which are fixed-size collections with a circular structure. Once a collection reaches its size limit, older documents are automatically replaced by new ones. Journaling: MongoDB uses journaling to provide durability and crash recovery. Write operations are first recorded in a journal before being applied to the data files. What is the workflow of MongoDB? Workflow of MongoDB: Installation: Install MongoDB on a server or a cluster of servers. 
MongoDB supports various operating systems, and installation steps may vary depending on the platform.
Configuration: Configure MongoDB settings, such as storage options, network settings, and security configurations, based on the requirements of the application.
Start MongoDB Server: Start the MongoDB server process. This may involve starting multiple instances if setting up a replica set or a sharded cluster.
Connect to MongoDB: Connect to MongoDB using a client application or a driver that is compatible with the programming language of choice (e.g., MongoDB drivers for Node.js, Python, Java, etc.).
Data Modeling: Design the data model by defining collections and documents based on the application’s data requirements. Decide on the structure of documents and the relationships between them.
Insert Data: Insert data into MongoDB by creating documents and adding them to collections. MongoDB automatically creates collections and databases if they don’t exist.
Query Data: Query MongoDB to retrieve data based on specific criteria. Use the query language to filter, sort, and project data as needed.
Indexes and Optimization: Create indexes on fields to improve query performance. Analyze and optimize queries to ensure efficient data retrieval.
Aggregation: Use the aggregation framework to perform complex data processing, transformation, and analysis within the database. Utilize aggregation pipeline stages for various operations.
Replication Setup: If high availability is required, set up a replica set by configuring multiple MongoDB nodes. Monitor the replica set for health and automatic failover.
Sharding (Optional): If horizontal scaling is needed, set up sharding to distribute data across multiple shards. Configure and monitor the sharded cluster for balanced data distribution.
Security Configuration: Implement security features such as authentication, authorization, and encryption. Define user roles and access control policies to secure the MongoDB deployment.
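The Insert Data and Query Data steps above can be pictured without a running server. The following is a minimal plain-Python sketch (illustrative only; this is not the mongo shell or any driver API) in which a collection is just a list of dicts and a find() helper matches documents by equality. It also shows the schema-less point from earlier: two documents with different fields coexist in the same collection.

```python
# Illustrative only: a "collection" modelled as a list of dicts,
# mimicking MongoDB's schema-less documents. All names are hypothetical.
collection = []

# Two documents with different structures coexist in one collection,
# with no migration needed for the new "skills" field.
collection.append({"_id": 1, "name": "Smith", "age": 24})
collection.append({"_id": 2, "name": "Joe", "skills": ["python", "go"]})

def find(coll, query):
    """Return documents whose fields equal every key/value in `query`,
    loosely mimicking find({key: value})."""
    return [doc for doc in coll if all(doc.get(k) == v for k, v in query.items())]

matches = find(collection, {"name": "Joe"})
```

Running find with the filter {"name": "Joe"} returns only the second document, untouched fields and all.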
Backup and Restore: Implement regular backup procedures to ensure data integrity and disaster recovery. MongoDB provides tools for backing up and restoring data.
Monitoring and Optimization: Monitor the MongoDB deployment using tools or built-in features to identify performance bottlenecks, resource usage, and potential issues. Optimize configurations as needed.
Scale as Needed: Scale the MongoDB deployment by adding more nodes, adjusting sharding configurations, or making other changes based on changing requirements.
The workflow of MongoDB involves setting up and configuring the database, designing the data model, interacting with data through queries and updates, and ensuring the ongoing performance, availability, and security of the MongoDB deployment.

How MongoDB Works & Architecture?

MongoDB is a popular open-source NoSQL database system known for its flexibility, scalability, and performance. Here’s a breakdown of how it works and its architecture:
1. Data Model:
Document-oriented: Unlike relational databases (RDBMS), MongoDB stores data in documents, JSON-like structures with key-value pairs.
Flexible schema: Documents can have different structures and can evolve over time without affecting existing data.
Nested documents: Documents can contain other documents, enabling complex data relationships.
Collections: Documents are grouped into collections, similar to tables in RDBMS.
2. Storage Engine:
BSON (Binary JSON): Documents are stored in a binary format called BSON for efficient storage and retrieval.
WiredTiger: MongoDB uses the WiredTiger storage engine for high performance and scalability.
Replication: MongoDB supports replication for data redundancy and fault tolerance.
3. Query Language:
MongoDB Query Language (MQL): MQL is used to query and manipulate data in MongoDB.
JSON-like syntax: MQL uses a syntax similar to JSON, making it easy to learn and use.
Rich query operators: MQL supports a wide range of operators for filtering, sorting, and aggregation.
4. Architecture:
Client-server architecture: MongoDB consists of a client application that interacts with a server instance.
Drivers: Drivers are available for various programming languages to connect to MongoDB and execute queries.
Replication: MongoDB can be configured for replication, allowing multiple servers to maintain copies of the data for improved availability and scalability.
Sharding: MongoDB can be sharded, splitting data across multiple servers to handle large datasets efficiently.
5. Benefits:
Flexibility: The document-oriented data model allows for flexible and dynamic data structures.
Scalability: MongoDB can handle large datasets and high volumes of data.
Performance: MongoDB is known for its fast read and write speeds.
High availability: Replication and sharding ensure data availability and fault tolerance.
Open-source: MongoDB is available for free under an open-source license.
Points to Remember:
MongoDB is a powerful NoSQL database system suitable for various applications.
Its flexible data model, scalability, and performance make it a popular choice for modern applications.
Utilize available resources and tutorials to learn MQL, understand MongoDB architecture, and leverage its capabilities effectively.
By understanding how MongoDB works and its architecture, you can make informed decisions about using it for your projects and develop efficient data-driven applications.

How to Install and Configure MongoDB?

Installing and configuring MongoDB involves downloading the appropriate installation package, setting up the database server, and configuring its options. Following is a step-by-step guide:
1. Download MongoDB:
Navigate to the official MongoDB download page.
Select the appropriate platform: Choose the correct version based on your operating system (OS) and architecture.
Download the desired package: You can choose between DEB, RPM, or tar archive formats.
2. Install MongoDB:
Follow the installation instructions for your chosen package format:
DEB and RPM: Use the package manager on your system (e.g., apt-get, yum) to install the downloaded package.
tar archive: Extract the downloaded archive and follow the instructions in the README file.
Create the data directory: MongoDB stores its data in a dedicated directory. Create this directory with appropriate permissions for the MongoDB user.
Start the MongoDB server: Run the mongod command to start the MongoDB server.
3. Configure MongoDB:
Edit the MongoDB configuration file: This file typically resides in /etc/mongod.conf or /data/db/mongod.cfg depending on your installation method.
Configure options: Modify settings like the port number, data directory, security options, and access control rules.
Create an administrative user: Create a user with administrative privileges for managing the database. You can do this using the mongo shell or a dedicated management tool.
4. Access and Manage MongoDB:
Connect to the MongoDB server: Use the mongo shell to connect to the server and start interacting with the database.
Create and manage databases: In the mongo shell, use <db> switches to a database (MongoDB creates it when you first store data in it), and db.dropDatabase() deletes the current database.
Create and manage collections: Use methods like db.createCollection(), insertOne(), updateOne(), deleteOne(), and find() to manage data within collections.
Utilize available tools: Explore MongoDB Compass and other management tools for a user-friendly interface and advanced functionality.
By following these steps and referring to specific platform guides, you can successfully install, configure, and manage your MongoDB database, paving the way for data-driven applications and efficient information storage.

Fundamental Tutorials of MongoDB: Getting Started Step by Step

Let’s have a look at some step-by-step fundamental tutorials to get you started with MongoDB:
1.
Installation and Setup:
Download MongoDB: Visit the official MongoDB download page and select the appropriate platform and version.
Install MongoDB: Follow the installation instructions for your chosen platform (e.g., Debian/Ubuntu, macOS, Windows).
Start the MongoDB server: Run mongod in your terminal to start the server.
Connect to the MongoDB server: Use mongo in your terminal to connect to the server and start interacting with the database.
2. Creating and Managing Databases:
Create a database: Run use <dbName>; MongoDB creates the database when you first store data in it.
Switch to a database: Use the use command to switch to a specific database for further operations.
List databases: Use the show dbs command to list all available databases.
Drop a database: Use the db.dropDatabase() command to delete the current database.
3. Creating and Managing Collections:
Create a collection: Use the db.createCollection() command to create a new collection within the active database.
Insert documents: Use insertOne() or insertMany() to add new documents to a collection.
Query documents: Use find() with various filters and options to retrieve specific documents.
Update documents: Use updateOne() or updateMany() to modify existing documents.
Delete documents: Use deleteOne() or deleteMany() to remove documents from a collection.
Drop a collection: Use db.collectionName.drop() to delete a collection.
4. Basic Data Manipulation:
Insert single document: Use db.collectionName.insertOne({key1: value1, key2: value2}) to insert a single document.
Insert multiple documents: Use db.collectionName.insertMany([{doc1}, {doc2}]) to insert multiple documents.
Find all documents: Use db.collectionName.find({}) to retrieve all documents in a collection.
Find documents with specific criteria: Use db.collectionName.find({key: value}) to find documents based on specific filters.
Update documents with specific criteria: Use db.collectionName.updateOne({key: value}, {$set: {updatedKey: updatedValue}}) to update the first document matching the filter.
Delete documents with specific criteria: Use db.collectionName.deleteMany({key: value}) to delete documents matching the filter.
5. Exploring Data and Aggregation:
Count documents: Use db.collectionName.countDocuments({}) to count the total number of documents in a collection.
Sort documents: Use db.collectionName.find().sort({key: 1}) for ascending order, or .sort({key: -1}) for descending order, on a specific key.
Aggregate data: Use aggregation pipeline stages like $match, $project, and $group to perform complex data analysis and calculations.
6. Advanced Topics:
Authentication and Security: Learn how to set up user accounts and manage access control for your database.
Replication and Sharding: Explore techniques for data redundancy, high availability, and scaling your MongoDB infrastructure.
MongoDB Compass: Utilize this GUI tool for managing your database, visualizing data, and performing various operations.
MongoDB Drivers: Explore drivers available for various programming languages to interact with your MongoDB database from your applications.
Important Tips:
Practice is key! Experiment with different commands and explore various functionalities to get comfortable with MongoDB.
Start with the basics and gradually progress to more advanced topics.
Don’t hesitate to seek help and ask questions within the online community.
Refer to the extensive documentation and resources available for learning and troubleshooting.
By following these steps and engaging with the available resources, you can acquire the fundamental knowledge and skills needed to effectively interact with and manage your MongoDB database. This paves the way for developing data-driven applications and harnessing the power of MongoDB for your projects. The post What is MongoDB and use cases of MongoDB? appeared first on DevOpsSchool.com. View the full article
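As a footnote to the tutorial above: the sort and count semantics from section 5 can be emulated in a few lines of plain Python to make them concrete. This is a conceptual sketch only, not the MongoDB API; the helper names are hypothetical.

```python
# A plain-Python sketch of the sort/count semantics described in
# section 5 of the tutorial. Purely illustrative, not the MongoDB API.
students = [
    {"_id": 1, "name": "Smith", "marks": 9},
    {"_id": 2, "name": "Joe", "marks": 3},
    {"_id": 3, "name": "Ann", "marks": 7},
]

def count_documents(coll):
    # Mirrors countDocuments({}): the total number of documents.
    return len(coll)

def sort_documents(coll, key, direction):
    # Mirrors find().sort({key: 1}) / find().sort({key: -1}):
    # direction 1 sorts ascending, -1 sorts descending.
    return sorted(coll, key=lambda doc: doc[key], reverse=(direction == -1))

total = count_documents(students)
by_marks_desc = sort_documents(students, "marks", -1)
```

Sorting by marks with direction -1 orders the toy collection from highest to lowest marks, just as the shell's sort({marks: -1}) would.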
  13. In the ever-evolving landscape of database technology, MongoDB stands out as the unrivalled leader in document databases, and it is the first-choice database solution for organisations across industries. Its pivotal role in the technological infrastructure of countless enterprises underscores its status as a mission-critical asset. As we navigate the dynamic demands of business operations, enterprises are setting their sights on running MongoDB on Infrastructure as a Service (IaaS) and Kubernetes (K8s). This strategic move is a gateway to unlocking the containerisation, virtualisation, and orchestration benefits for MongoDB instances. The result? A streamlined approach to MongoDB management and scalability that fortifies the resilience of the database. However, achieving these benefits is a highly complex undertaking. To make the most of MongoDB on IaaS and K8s, you need to be able to operate and manage it in a production environment on any infrastructure, and you need a way to automate repeatable operational work. This is where operators come in. An operator is an application containing code that takes over automated database management tasks. Picture it as your technological virtuoso, orchestrating a grand performance that includes setting up high availability, implementing robust security measures like transport layer security (TLS), automating database deployment, configuring initial user management, and even handling the backup and restore operations. With a primary mission of simplifying the MongoDB experience, an operator is your backstage pass to a world where MongoDB isn’t just a database – it’s a seamlessly operated database powerhouse. Today, I am happy to announce that we are launching the new Charmed MongoDB operator that can run in Kubernetes (K8s) and Virtual Machines (VM) as a beta. The operator is available to everyone for free so you can secure and automate your MongoDB databases’ deployment and maintenance across private and public clouds. 
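The core idea of an operator, code that repeatedly drives the actual state of a deployment toward a desired state, can be sketched in a few lines of Python. This is purely illustrative: the reconcile-loop names below are hypothetical and are not Charmed MongoDB internals.

```python
# Illustrative reconcile loop: the essence of a software operator.
# Names (reconcile, desired_replicas, actual_replicas) are hypothetical.
def reconcile(desired_replicas, actual_replicas):
    """Return the actions needed to move the actual state toward the
    desired state, the way an operator would on each reconciliation pass."""
    actions = []
    while actual_replicas < desired_replicas:
        actions.append("start-replica")  # scale up toward the target
        actual_replicas += 1
    while actual_replicas > desired_replicas:
        actions.append("stop-replica")   # scale down toward the target
        actual_replicas -= 1
    return actions

actions = reconcile(desired_replicas=3, actual_replicas=1)
```

A real operator runs this kind of loop continuously, and its actions are far richer: TLS setup, user management, backups and restores, as the post describes.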
Why use Charmed MongoDB
Charmed MongoDB is an enhanced, fully-compatible drop-in replacement for MongoDB Community Edition with advanced MongoDB features. It simplifies the deployment, scaling, design and management of MongoDB in production in a reliable way. These enterprise features in the operator are available for free to use.
Database operations features
MongoDB user management
Database high availability with replication
Easy-to-use application integration
Secure communications with TLS
Database backup and restore
Database observability
Run MongoDB on any cloud
The Charmed MongoDB operator deploys and runs MongoDB on physical and virtual machines (VMs) and other cloud and cloud-like environments, including AWS, Azure, OpenStack and VMWare. Charmed MongoDB is hosted in Ubuntu. The operator is based on Juju, an open source orchestration engine for software operators that enables the deployment, integration and lifecycle management of applications at any scale on any infrastructure. To support applications running in Kubernetes, Canonical also maintains two CNCF-certified Kubernetes distributions: Charmed Kubernetes and MicroK8s, which help simplify and accelerate the deployment of Kubernetes.
Enterprise security and support
The MongoDB Community version doesn’t guarantee support for database Common Vulnerabilities and Exposures (CVE) patching, making it unsuitable for enterprise use cases. With Charmed MongoDB, Canonical offers 10 years of security maintenance alongside 24/7 support through a cost-effective, per-node subscription model – delivering the stability and peace of mind that organisations need to run MongoDB in production. We recently published a whitepaper that focuses on improving database security posture and streamlining operations with MongoDB. Our goal is to make it simple for anyone and everyone to operate MongoDB in both development and production environments in a secure and supportable manner.
Try the beta today
To get started, you just need to be running Ubuntu OS, meet the minimum system requirements, and be familiar with basic terminal commands and MongoDB concepts such as replication and users. You can set up your environment using Juju.
Simple deployment of Charmed MongoDB in your Ubuntu VM:
juju deploy mongodb --channel 6/beta
Simple deployment of Charmed MongoDB for K8s:
juju deploy mongodb-k8s --channel 6/beta
Learn to use Charmed MongoDB using these tutorials for the K8s operator and IaaS operator. You can also check out the GitHub pages for feature requests and filing bugs for the K8s operator and IaaS operator.
Stay tuned for more
Charmed MongoDB is a continuously developing project; we’re constantly adding rich new features. So, be on the lookout for updates and enhancements in our future blog posts. Sign up for the Canonical Charmed MongoDB beta program to get early access to our projects and to help shape Canonical’s data products as they get introduced to the world. You can also contact us to learn more.
Further Reading
Running MongoDB on Kubernetes
A guide to MongoDB security and support
What is NoSQL and what are database operators?
How to secure your database
Trademark Notice
“MongoDB” is a trademark or registered trademark of MongoDB Inc. Other trademarks are property of their respective owners. Charmed MongoDB is not sponsored, endorsed, or affiliated with MongoDB, Inc. View the full article
  14. Today, we are announcing the general availability of vector search for Amazon DocumentDB (with MongoDB compatibility), a new built-in capability that lets you store, index, and search millions of vectors with millisecond response times within your document database. Vector search is an emerging technique used in machine learning (ML) to find similar data points to given data by comparing their vector representations using distance or similarity metrics. Vectors are numerical representations of unstructured data created from large language models (LLMs) hosted in Amazon Bedrock, Amazon SageMaker, and other open source or proprietary ML services. This approach is useful in creating generative artificial intelligence (AI) applications, such as intuitive search, product recommendation, personalization, and chatbots using the Retrieval Augmented Generation (RAG) approach. For example, if your data set contained individual documents for movies, you could semantically search for movies similar to Titanic based on shared context such as “boats”, “tragedy”, or “movies based on true stories” instead of simply matching keywords. With vector search for Amazon DocumentDB, you can effectively search the database based on nuanced meaning and context without spending time and cost to manage a separate vector database infrastructure. You also benefit from the fully managed, scalable, secure, and highly available JSON-based document database that Amazon DocumentDB provides.
Getting started with vector search on Amazon DocumentDB
The vector search feature is available on your Amazon DocumentDB 5.0 instance-based clusters. To implement a vector search application, you generate vectors using embedding models for fields inside your document and store the vectors side by side with your source data inside Amazon DocumentDB. Next, you create a vector index on a vector field that will help retrieve similar vectors, and can then search the Amazon DocumentDB database using semantic search.
Finally, user-submitted queries are converted to vectors using the same embedding model to get semantically similar documents and return them to the client. Let’s look at how to implement a simple semantic search application using vector search on Amazon DocumentDB.
Step 1. Create vector embeddings using the Amazon Titan Embeddings model
Let’s use the Amazon Titan Embeddings model to create an embedding vector. The Amazon Titan Embeddings model is available in Amazon Bedrock, a serverless generative AI service. You can easily access it using a single API and without managing any infrastructure.

prompt = "I love dog and cat."
response = bedrock_runtime.invoke_model(
    body=json.dumps({"inputText": prompt}),
    modelId='amazon.titan-embed-text-v1',
    accept='application/json',
    contentType='application/json'
)
response_body = json.loads(response['body'].read())
embedding = response_body.get('embedding')

The returned vector embedding will look similar to this:

[0.82421875, -0.6953125, -0.115722656, 0.87890625, 0.05883789, -0.020385742, 0.32421875, -0.00078201294, -0.40234375, 0.44140625, ...]

Step 2. Insert vector embeddings and create a vector index
You can add generated vector embeddings using the insertMany([{},...,{}]) operation with a list of the documents that you want added to your collection in Amazon DocumentDB.

db.collection.insertMany([
    {sentence: "I love a dog and cat.", vectorField: [0.82421875, -0.6953125,...]},
    {sentence: "My dog is very cute.", vectorField: [0.05883789, -0.020385742,...]},
    {sentence: "I write with a pen.", vectorField: [-0.020385742, 0.32421875,...]},
    ...
]);

You can create a vector index using the createIndex command. Amazon DocumentDB performs an approximate nearest neighbor (ANN) search using the inverted file with flat compression (IVFFLAT) vector index. The feature supports three distance metrics: euclidean, cosine, and inner product. We will use the euclidean distance, a measure of the straight-line distance between two points in space.
The smaller the euclidean distance, the closer the vectors are to each other.

db.collection.createIndex(
    { vectorField: "vector" },
    {
        "name": "index name",
        "vectorOptions": {
            "dimensions": 100,         // the number of vector data dimensions
            "similarity": "euclidean", // or cosine and dotProduct
            "lists": 100
        }
    }
);

Step 3. Search vector embeddings from Amazon DocumentDB
You can now search for similar vectors within your documents using a new aggregation pipeline operator within $search. The example code to search “I like pets” is as follows:

db.collection.aggregate([{
    $search: {
        "vectorSearch": {
            "vector": [0.82421875, -0.6953125,...], // search for "I like pets"
            "path": "vectorField",
            "k": 5,
            "similarity": "euclidean", // or cosine and dotProduct
            "probes": 1 // the number of clusters for vector search
        }
    }
}]);

This returns search results such as “I love a dog and cat.”, which is semantically similar. To learn more, see the Amazon DocumentDB documentation. To see a more practical example—a semantic movie search with Amazon DocumentDB—find the Python source code and data sets in the GitHub repository.
Now available
Vector search for Amazon DocumentDB is now available at no additional cost to all customers using Amazon DocumentDB 5.0 instance-based clusters in all AWS Regions where Amazon DocumentDB is available. Standard compute, I/O, storage, and backup charges will apply as you store, index, and search vector embeddings on Amazon DocumentDB. To learn more, see the Amazon DocumentDB documentation and send feedback to AWS re:Post for Amazon DocumentDB or through your usual AWS Support contacts. — Channy View the full article
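As a footnote to the distance metrics above: euclidean distance and cosine similarity are straightforward to compute by hand, and a tiny exact nearest-neighbour search makes the idea concrete. This pure-Python sketch uses made-up 2-dimensional vectors for illustration; the real feature works on high-dimensional embeddings and an approximate IVFFLAT index rather than the brute-force scan shown here.

```python
import math

def euclidean(a, b):
    # Straight-line distance: smaller means the vectors are closer.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_similarity(a, b):
    # Angle-based similarity: 1.0 means the vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "documents" with hypothetical 2-d embeddings (illustrative only).
docs = [
    ("I love a dog and cat.", [0.9, 0.1]),
    ("My dog is very cute.", [0.8, 0.2]),
    ("I write with a pen.", [0.1, 0.9]),
]

query = [0.88, 0.12]  # stands in for the embedding of "I like pets"
nearest = min(docs, key=lambda doc: euclidean(query, doc[1]))
```

With these toy vectors, the pet-related sentences sit close to the query while the pen sentence is far away, which is the effect the semantic search example above relies on.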
  15. StatefulSet is a Kubernetes workload API specifically used for managing stateful applications. This is a comprehensive guide to setting up and using StatefulSets, where we look at the following topics:
What is a StatefulSet, and when should you use it?
Example: Setting up and running MongoDB as a StatefulSet
Limitations of StatefulSets and what to watch out for
Best practices while implementing StatefulSets
Stateless and Stateful Applications
Let's start with distinguishing stateless and stateful applications. A stateless application is one in which every request is treated as a new, isolated transaction, independent of any previous transactions. It does not store session-specific data between requests, either on the client side or the server side. View the full article
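The stateless/stateful distinction above can be shown in a few lines of Python (an illustration, not Kubernetes code): a stateless handler returns the same output for the same input on every call, while a stateful object carries data between calls, which is exactly the data a StatefulSet-managed database must persist.

```python
# Stateless vs stateful, illustrated with hypothetical names.
def stateless_greet(name):
    # No memory of previous requests: same input, same output, every time.
    return f"hello {name}"

class StatefulSession:
    # Carries data between requests, the kind of state a stateful
    # application (like a database) must keep and protect.
    def __init__(self):
        self.visits = 0

    def visit(self):
        self.visits += 1
        return self.visits

session = StatefulSession()
first, second = session.visit(), session.visit()
```

The stateless function can be replicated freely behind a load balancer; the stateful session cannot, because each replica would hold a different visit count. That is the problem StatefulSets exist to manage.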
  16. Insurance companies have seen a tremendous shift in modernization. Traditionally known for the use of legacy systems, leading carriers are modernizing their infrastructure... View the full article
  17. MongoDB and AWS extended their existing alliance to provide examples of curated code to train the Amazon CodeWhisperer generative AI tool. View the full article
  18. Running MongoDB on Kubernetes
Containers are a lightweight, portable, and consistent way to package applications and their dependencies. Containers provide an isolated environment, ensuring an application runs reliably across different environments. Enterprises and tech-savvy individuals are using container technologies because of their benefits. However, with the rise in container usage, container orchestration tools have become necessary to manage clusters.
Kubernetes, or K8s for short, is the best-known container orchestrator and has grown into a feature-rich cloud-native platform. Kubernetes helps manage the lifecycle of containers, particularly in large, dynamic environments. It automates the deployment, networking, scaling, and availability of containerised workloads and services. Running containers – lightweight and usually ephemeral by nature – in small numbers is easy to do manually. However, managing them at scale in production environments can be a significant challenge without a container orchestration platform’s automation.
On the database front, organisations want to build and run scalable database applications in public, private and hybrid environments. This is why containerised databases like MongoDB can run in Kubernetes and benefit from portability, helping teams minimise vendor lock-in and gain DevOps friendliness, scalability and cost-effectiveness.
Why run MongoDB on Kubernetes?
Running MongoDB on Kubernetes can be complex yet valuable, as it allows you to containerise and orchestrate MongoDB instances for scalability, resilience, and easier management. Here are some of the benefits:
Scalability: Kubernetes makes it easier to scale MongoDB horizontally by adding or removing replicas of your database instances. This allows you to dynamically handle increased workloads and traffic as your application grows or as usage spikes.
High availability: Kubernetes provides features like replica sets and StatefulSets that ensure high availability for your MongoDB instances. In case of node failures, Kubernetes can automatically reschedule pods to maintain the desired number of replicas, helping to prevent downtime.
Orchestration: Kubernetes abstracts the underlying infrastructure, making it easier to manage and orchestrate your MongoDB deployment. You can define your MongoDB resources, including pods, services, and storage, in declarative configuration files.
Running MongoDB in Kubernetes can be very beneficial, but what exactly do you need to have a production-ready database running in Kubernetes? There can be multiple considerations besides those listed above, like security, deployment readiness, backup and restore, monitoring, and more.
Ubuntu as your host for MongoDB on Kubernetes
Canonical is currently maintaining a project called Charmed MongoDB, a K8s operator that contains code that takes over automated tasks to manage MongoDB hosted on Ubuntu. The K8s operator is also called a K8s charm: business logic encapsulated in reusable software packages that automate every aspect of an application’s life, in this case for MongoDB. The operator is based on Juju, an open source orchestration engine for software operators that enables the deployment, integration and lifecycle management of applications at any scale on any infrastructure. To support applications running in Kubernetes, Canonical also maintains two other CNCF-certified Kubernetes distributions: Charmed Kubernetes and MicroK8s, which help simplify and accelerate the deployment of Kubernetes.
Get started
The Charmed MongoDB K8s operator delivers automated operations management from day 0 to day 2 for MongoDB. Charmed MongoDB is a continuously developing project; we constantly offer richer features over time. To get started, you need an environment running the Ubuntu OS with a minimum amount of CPU, storage and RAM.
You must also be familiar with basic terminal commands and MongoDB concepts such as replication and users. Afterwards, you need to set up your environment using MicroK8s and Juju. You can then use Charmed MongoDB to manage the following operations:
Deploy MongoDB using a single command.
Access the admin database directly.
Add high availability with replication.
Change the admin password.
Automatically create MongoDB users via Juju relations.
Enable secure communications with TLS.
Learn to use Charmed MongoDB (K8s operator) on your machine.
Conclusion
Running database clusters in public, private and hybrid environments provides multiple benefits. Kubernetes provides the additional advantages of portability, reduced vendor lock-in, DevOps friendliness, scalability and cost-effectiveness. While there are many advantages to running MongoDB in Kubernetes, it’s important to note that managing a distributed database in a containerised environment comes with its challenges, and careful planning, monitoring, and optimisation are required for a successful deployment. Additionally, it helps to stay up-to-date with best practices and evolving technologies to make the most of this approach. Canonical offers security patching, support, advisory and managed services for databases like MongoDB so you can seamlessly deploy and run your database in Kubernetes. Contact us to learn more.
Trademark Notice
“MongoDB” is a trademark or registered trademark of MongoDB Inc. Other trademarks are property of their respective owners. Charmed MongoDB is not sponsored, endorsed, or affiliated with MongoDB, Inc. View the full article
  19. Eliminate any MongoDB collection with ease using Java. Our concise and effective steps will help you through the process, making sure that you get the job done in no time. No more struggling with tedious and time-consuming tasks. Trust our solution to deliver the assertive approach you need to remove your collection quickly and efficiently. In this video tutorial series, you will gain a deep understanding of Java Integration with MongoDB. Java integration with MongoDB allows developers to leverage the power of MongoDB, a popular NoSQL database, in their Java applications. MongoDB is designed to handle large amounts of unstructured data and provides scalability and flexibility. View the full article
  20. Amazon DocumentDB (with MongoDB compatibility) now supports an in-place major version upgrade (MVU) from Amazon DocumentDB versions 3.6 and 4.0 to version 5.0. Instead of backing up and restoring the database to the new version or relying on database migration tools, you can perform an in-place major version upgrade with a few clicks using the AWS console, the latest AWS Software Development Kit (SDK), or Command Line Interface (CLI). With in-place MVU, no new clusters are created in the process and you can continue using the existing cluster endpoints for your applications. View the full article
  21. MongoDB MapReduce is used to process large-scale data in MongoDB. MongoDB supports the mapReduce database command for MapReduce computation. The first phase includes the map action, and the second phase includes the reduce action. The map operation deals with each document, producing interim results as key-value pairs. One document is processed at a time by the map function. The reduce operation, on the other hand, takes a key and an array of values and performs aggregation or computation on the values to produce the final result.
Create a Collection to Perform the MapReduce Operation
Let’s get started by performing the mapReduce() function on a specific collection. Here, we switch to the “student” database, where we can insert some documents into a “student” collection to perform the mapReduce() function.

>use student

Insert the Documents into the Collection to Perform the MapReduce Operation
Now that our “student” collection is created, we can insert the documents at the same time using the insertMany() function. The query is mentioned in the following, where we insert two documents that contain the same fields but with different values:

db.student.insertMany([
   { "_id": 1, "name": "Smith", "age": 24, "marks": 9, "status": "Pass" },
   { "_id": 2, "name": "Joe", "age": 21, "marks": 3, "status": "Pass" }
])

The output here shows that the documents are inserted with their IDs accordingly into the “student” collection:
Apply the MapReduce Operation to Get the Sum of the Specified Field
Now, let’s use the mapReduce() function to get the sum of the marks field and store the result in a collection that we set in the query.
Here's the mapReduce() function query that is applied to the "student" collection: var map=function(){ emit(this.name,this.marks)}; var reduce=function(name,marks){ return Array.sum(marks);}; db.student.mapReduce(map,reduce,{out :"result"}); db.result.find() We define the "map" variable using the var modifier and assign it an anonymous function, so called because no name is given to it. Inside the body of that function, we call the emit() function, a built-in function that MongoDB provides for map-reduce operations; it generates the key-value pairs for the reduce phase. Here, we emit each student's name as the key and their marks as the value. Next, we create the "reduce" variable, whose function receives the key (the student name) and an array of marks values as input and returns the total marks for each student using the Array.sum() function. Lastly, we call the mapReduce() operation on the "student" collection, store the output in the "result" collection, and then display that collection with the find() method. The output shows the "name" field and the marks of each student as a key-value pair produced by the mapReduce() function: Apply the MapReduce Operation to Get the Average of the Specified Field Next is an example of performing the mapReduce() function to get the average of the marks, grouped by the "age" field. The code is the same as in the previous example, but here we use the Array.avg() function. var map=function(){ emit(this.age,this.marks)}; var reduce=function(age,marks){ return Array.avg(marks);}; db.student.mapReduce(map,reduce,{out :"output"}); db.output.find() Here, we define the map variable to map the "age" field to the "marks" field. 
For this, we employ an anonymous function() that takes no parameters; inside its body, the emit function emits the "age" and "marks" fields as the key-value pair for the reduce function. Then, we create the reduce function, whose function() takes the "age" and "marks" fields as arguments. These parameters are passed by MongoDB during the reduce phase when it groups the key-value pairs generated by the map phase by their keys. The function body contains a return statement that uses the Array.avg() function to calculate the average of the values in the "marks" array, which holds all the values associated with a particular key. Finally, the result for each age group is stored in the "output" collection. The following output is retrieved as key-value pairs of the age and the average of the marks after the execution of the mapReduce operation: Apply the MapReduce Operation to Get the Even_Id Field Moreover, we have a query that uses the mapReduce() function to emit only every second document, i.e., those with even field values. db.student.mapReduce( function () { odd_counter++; var id= this._id; delete this._id; if ( odd_counter % i == 0 ) emit(id, this ); }, function() {}, { "scope": { "odd_counter": 0, "i": 2 }, "out": { "inline": 1 } } ) As seen in the query, we initiate the mapReduce operation on the "student" collection. We define a map function with no parameters and, inside it, increment the odd_counter variable, which lives in the scope defined for the map function. Then, we store the current document's "_id" in an "id" variable; every record in the collection has a specific identification number in its "_id" field. 
Next, we delete the "_id" field from the current document to exclude the original "_id" from the emitted documents. The "if" condition then checks whether the counter is evenly divisible by "i". When the condition is true, the emit() function emits a new key-value pair: the key is the original "id" of the document, and the value is the current document without its "_id" field. This means that only every second document scanned (those where the counter is a multiple of 2) is emitted. Next comes an empty reduce function(), since each key carries only a single value and no reduction is needed. After that, we set the scope for the map function, which defines the variables accessible within it. In this case, we initialize two variables: odd_counter with an initial value of 0 and "i" with an initial value of 2. After the mapReduce() operation, the results in the output contain the even "_id" values according to the specified condition: Conclusion The mapReduce() function is covered with example illustrations in this article. The examples include the sum and the average of values to get grouped output from the mapReduce() function. We then performed the mapReduce() function to display alternate documents. We can now use the mapReduce() function to handle large-scale data in MongoDB. Note, however, that it is not always the most efficient option for large-scale data processing, and mapReduce has been deprecated since MongoDB 5.0 in favor of the aggregation pipeline. View the full article
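Since the two mapReduce phases can be hard to follow at first, here is a minimal plain-JavaScript sketch, runnable in Node.js outside MongoDB, that mimics the sum-by-name example above. The in-memory `students` array and the explicit grouping step are illustrative stand-ins for the collection and for the grouping MongoDB performs between the two phases:

```javascript
// In-memory stand-in for the "student" collection used above.
const students = [
  { _id: 1, name: "Smith", age: 24, marks: 9, status: "Pass" },
  { _id: 2, name: "Joe", age: 21, marks: 3, status: "Pass" },
];

// Map phase: emit one (name, marks) key-value pair per document.
const emitted = students.map(doc => [doc.name, doc.marks]);

// Grouping step: MongoDB collects all values emitted under the same key.
const grouped = {};
for (const [key, value] of emitted) {
  (grouped[key] = grouped[key] || []).push(value);
}

// Reduce phase: sum the array of marks for each name.
const result = Object.entries(grouped).map(([name, marks]) => ({
  _id: name,
  value: marks.reduce((a, b) => a + b, 0),
}));

console.log(result);
// [ { _id: 'Smith', value: 9 }, { _id: 'Joe', value: 3 } ]
```

With more than one document per name, the reduce step would sum all of that name's marks, which is exactly what Array.sum() does in the shell query.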
  22. When querying documents that contain arrays, including nested or sub-fields, the $elemMatch operator in MongoDB is a potent utility. It lets you declare a set of conditions that at least one element of an array field in a document must meet. This is very helpful when you filter or extract documents from a complicated data set using a particular set of constraints, and whenever you have embedded documents in an array and need to find the documents whose elements meet certain criteria. Therefore, we will discuss its use with the help of query code examples. Create Collection To start with this guide, we need a collection with some nested fields. Therefore, we create a new collection named "Movie" that contains data about movie types and the main actors that suit each type. We add a total of 4 records with the fields _id, type (of the movie), and "actor". The "actor" field in every document contains an array of two embedded records, each with 2 fields inside, i.e. name and age. This way, the collection holds the data of 2 actors for each type of movie. The insertMany() function has been executed to insert these records, and the acknowledgment has been displayed by MongoDB. db.Movie.insertMany([ { "_id": 1, "type": "Comedy", "actor": [ { "name": "Ken", "age": 40 }, { "name": "Cillian", "age": 40 } ] }, { "_id": 2, "type": "Fiction", "actor": [ { "name": "Ken", "age": 35 }, { "name": "Robert", "age": 29 } ] }, { "_id": 3, "type": "Suspense", "actor": [ { "name": "Cillian", "age": 30 }, { "name": "Ema", "age": 32 } ] }, { "_id": 4, "type": "Melo-Drama", "actor": [ { "name": "R.D", "age": 44 }, { "name": "William", "age": 37 } ] } ]); Although the acknowledgment has been displayed, we should verify that the records we added are correct. For this, MongoDB's find() function is handy to check the collection records. 
The output of the Movie collection shows all four records on the MongoDB shell screen. db.Movie.find() Example 01: elemMatch on Single Field Let's move on to applying the $elemMatch operator to the collection fields, starting with a single-field constraint. The $elemMatch operator is applied to the sub-field "name" of the "actor" field, with the condition that the name "Ken" must appear in every output record. The $elemMatch operator is applied with the help of the find() function. This query searches the whole collection for the name "Ken" in the "actor" field and returns the matching records. A total of 2 records are found where the "name" field of an "actor" element is "Ken". It doesn't matter that other elements of the "actor" array carry different names like "Cillian" and "Robert". db.Movie.find({ "actor": { $elemMatch: { "name": "Ken" } } }) We applied the $elemMatch operator to the "name" field in the above illustration. We can also apply it to other fields like "age". Here, you can match the exact value of the "age" field or use comparison operators, i.e. greater than, less than. We use the $gt (greater than) operator to search for the records in the "Movie" collection where the "actor" array holds a value greater than "35" in its "age" sub-field. The returned result contains two records whose "age" sub-field holds values greater than 35. db.Movie.find({ "actor": { $elemMatch: { "age": { $gt: 35 } } } }) Example 02: elemMatch on Multiple Fields The above illustration depicts the use of the $elemMatch operator on a single field of an array field in a straightforward way. But you can also apply MongoDB's $elemMatch operator to multiple fields of the array elements. Therefore, we will use it in a new query containing the find() function. 
This find() function is applied to the array-type "actor" field of the "Movie" collection, and the $elemMatch operator is applied to both the "name" and "age" fields. Note that this time the $elemMatch operator restricts the search with strict constraints: the name is set to "Cillian" and the "age" field must hold a value greater than "30", via the $gt (greater than) operator. Only those records of the Movie collection are output where a single embedded document of the "actor" array has both the name "Cillian" and an age greater than 30. If no single array element satisfies both conditions, the record is ignored completely. Therefore, 3 of the collection records are ignored, even where they contain an element that satisfies one of the conditions, and only a single record (_id: 1) is displayed on our shell screen. This record contains the name "Cillian" in one of its elements, while an age of "40" appears in both embedded records. db.Movie.find({ "actor": { $elemMatch: { "name": "Cillian", "age": { $gt: 30 } } } }) Example 03: elemMatch on Multiple Conditions You can also try the $elemMatch operator on the same field with multiple conditions. For instance, we apply it to the "name" field embedded in the "actor" field of the "Movie" collection. This time, we search for records with the names "Ken" and "Cillian"; any record with the value "Ken" or "Cillian" in the name field should be output. For this, we apply the $elemMatch operator in two expressions. The output of this query displays two records, i.e., one with both specified values and the other with one matching value. 
db.Movie.find({ "actor": { $elemMatch: { "name": "Ken" } }, "actor": { $elemMatch: { "name": "Cillian" } } }) Note that when the same key appears twice in a query document, the later condition overwrites the earlier one, so this query effectively filters only on "Cillian"; to genuinely combine conditions, use the $and or $or logical operators in queries holding the $elemMatch operator. The illustration below shows the use of the $or operator in a query with the same conditions on the "name" field. The result returns three records. db.Movie.find({ "actor": { $elemMatch: { $or: [ { "name": "Ken" }, { "name": "Cillian" } ] } } }) Conclusion The robust $elemMatch search operator in MongoDB may be employed to locate documents that include arrays and to impose specific requirements on the elements of those arrays. In the query examples provided in this guide, the $elemMatch operation ensures that the same array element satisfies all provided constraints when searching embedded records inside arrays. View the full article
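To make the matching semantics concrete, the following plain-JavaScript sketch (not MongoDB code; the `elemMatch` helper name is our own) contrasts $elemMatch-style matching, where one array element must satisfy all conditions, with per-condition matching, where different elements may satisfy different conditions:

```javascript
// In-memory stand-in for the "Movie" collection used above.
const movies = [
  { _id: 1, type: "Comedy", actor: [{ name: "Ken", age: 40 }, { name: "Cillian", age: 40 }] },
  { _id: 2, type: "Fiction", actor: [{ name: "Ken", age: 35 }, { name: "Robert", age: 29 }] },
  { _id: 3, type: "Suspense", actor: [{ name: "Cillian", age: 30 }, { name: "Ema", age: 32 }] },
  { _id: 4, type: "Melo-Drama", actor: [{ name: "R.D", age: 44 }, { name: "William", age: 37 }] },
];

// $elemMatch semantics: at least ONE array element satisfies ALL conditions.
const elemMatch = (docs, field, predicate) =>
  docs.filter(doc => doc[field].some(predicate));

// Like: { actor: { $elemMatch: { name: "Cillian", age: { $gt: 30 } } } }
const strict = elemMatch(movies, "actor", a => a.name === "Cillian" && a.age > 30);
console.log(strict.map(d => d._id)); // [ 1 ] — only _id 1 has one element meeting both

// Without $elemMatch, DIFFERENT elements may satisfy the two conditions:
const loose = movies.filter(doc =>
  doc.actor.some(a => a.name === "Cillian") && doc.actor.some(a => a.age > 30));
console.log(loose.map(d => d._id)); // [ 1, 3 ]
```

Record _id: 3 illustrates the difference: its "Cillian" element has age 30 (not greater than 30), but another element ("Ema", 32) satisfies the age condition, so it matches only the loose query.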
  23. In MongoDB, particular fields may be eliminated from the documents in a collection using the $unset operator. It enables you to selectively delete the fields that you no longer require or wish to erase. This operator is beneficial when you want to alter your data model or delete out-of-date data from your records. This guide will be helpful material for those who want to learn the basics of using the $unset operator. Create a Collection To unset the fields of a collection, we should have some data in the collection. Therefore, we already created a collection entitled "Data" and added a total of five records to it. Those five records are displayed on the MongoDB shell via a "find" function query. The records contain the unique "_id" field. We also have a string-type "name" field along with an array-type "score" field. Across the records, the "score" field is variously null, empty, an array, a string, or missing entirely. test> db.Data.find({}) Example 1: Let's say you want to update certain records of the "Data" collection, for example by deleting some of their fields. For this, you use MongoDB's $unset operator within an update command. For instance, we want to unset the "name" and "score" fields in the record of the "Data" collection where the "score" field is "null". As we need to update a single record, we employ MongoDB's "updateOne" function on the "Data" collection. The first argument of the "updateOne" function identifies the record you want to update by specifying a value of one of its fields; in our case, we match the record where the "score" field is "null". The other parameter of the updateOne() function uses the $unset operator on the fields to be removed. The value given for each field (the empty inverted commas) is irrelevant, as $unset removes the field regardless of it. In this case, we unset the "name" and "score" fields as displayed. The query execution returns the acknowledgement, i.e. 
one record is updated. db.Data.updateOne( { score: null }, { $unset: { name: "", score: "" } } ) After the successful update of one record, we check the "Data" collection again via the "find" method. The output displays that all the records are untouched except the targeted one, which has only the unique identifier "_id" left because all its other fields were unset using MongoDB's $unset operator. test> db.Data.find({}) Example 2: We saw a single-record field update using the $unset operator in the previous example, but we also need to learn about unsetting nested fields. For this, we need nested field records in our database collection. So, we create a "Teacher" collection and insert multiple records into it at once, containing the nested "sub" field and the array-type "shift" field. The records are added successfully as per the output. db.Teacher.insertMany([ { "name" : "John", "pay" : 799, "sub" : { "math" : 4, "comp" : 9.5, "phy" : 17 },"shift":["mor","eve"]}, { "name" : "Ana", "pay" : 899, "sub" : { "math" : 22, "comp" : 10.5, "phy" : 15 },"shift":["mor","eve", "night"]}, { "name" : "Sam", "pay" : 899, "sub" : { "math" : 12, "comp" : 9, "phy" : 14 },"shift":["mor","eve"]}, { "name" : "Paul", "pay" : 699, "sub" : { "math" : 18, "comp" : 8, "phy" : 12 },"shift":["mor","eve", "night"]}, { "name" : "Zoni", "pay" : 599, "sub" : { "math" : 14, "comp" : 4, "phy" : 16 },"shift":["mor","eve"]}, ]) After adding these records, we confirm that they were added correctly via the "find" function applied to the "Teacher" collection. The output of this method displays all five records separately in the form of documents along with their fields: name, pay, sub, and shift. test> db.Teacher.find({}) In this illustration, we will show you how to unset an embedded field of any record in the collection. For this, we choose the record where the "name" field has the "Sam" value. 
The method of using the $unset operator is the same as in the first illustration, via MongoDB's updateOne() function. Within the $unset operator, we map the "pay" field to empty inverted commas. Then, to unset the nested field, we specify the embedded "math" field along with its parent field using dot notation, i.e. "sub.math". In this way, the embedded field is unset. db.Teacher.updateOne( { name: "Sam" }, { $unset: { pay: "", "sub.math": "" } } ) After executing the previous query successfully, we use MongoDB's findOne() function to display only the specific record of the "Teacher" collection where the "name" is "Sam", which is the record we just updated. The output image displays the single record with its updated fields: the "pay" field is removed after applying $unset, and the "sub" field no longer has an embedded "math" value. test> db.Teacher.findOne({name: "Sam"}, {}) Example 3: After reviewing the examples of unsetting single nested and non-nested fields, we will demonstrate how to update multiple records simultaneously. Let's say you want to update the array-type "shift" field of the "Teacher" collection. First, we display only the "shift" field of the "Teacher" collection. This field holds a different number of array values per record, i.e. 2 or 3. test> db.Teacher.find({}, {shift:1}) As arrays store their values by index, we use the index number of a value in the "shift" field to unset it. Also, to unset the "shift" field across all the collection records, we use the "updateMany()" function in the "db" command. Because we update all the records of the collection, there is no need to specify a condition in the first argument. Within the second argument, we apply the $unset operator to index "0" of the "shift" array field using dot notation, i.e. "shift.0". The update query executes successfully as per the output. 
db.Teacher.updateMany({}, { $unset: { "shift.0": "" } }) Let's check the "Teacher" collection records for only the array-type "shift" field via the "find" method as used in the following image. The output displays index "0" of the "shift" field as "null": $unset always sets an array element to "null" rather than removing it. This is because, in arrays, the values are stored at specific positions by index, so removing a value doesn't remove its place. You can still set a new value for that position, or remove the null entries entirely with a follow-up $pull operation, e.g. db.Teacher.updateMany({}, { $pull: { shift: null } }). test> db.Teacher.find({}, {shift:1}) Conclusion This guide covered the different methods of using the $unset operator on database collections in MongoDB. We discussed the simplest examples of $unset to remove a single independent field, a nested field value, or an array value via its index. View the full article
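The effect of the $unset operations above can be mirrored in plain JavaScript on a single in-memory document. This sketch (illustrative only, not driver code) shows the top-level, nested, and array-index cases, plus a $pull-style cleanup of the null slot:

```javascript
// In-memory stand-in for one "Teacher" document used above.
const teacher = {
  name: "Sam",
  pay: 899,
  sub: { math: 12, comp: 9, phy: 14 },
  shift: ["mor", "eve"],
};

// { $unset: { pay: "", "sub.math": "" } } — a top-level and a nested field are removed.
delete teacher.pay;
delete teacher.sub.math;

// { $unset: { "shift.0": "" } } — on an array, the slot becomes null, not removed.
teacher.shift[0] = null;

console.log(teacher);
// { name: 'Sam', sub: { comp: 9, phy: 14 }, shift: [ null, 'eve' ] }

// A follow-up { $pull: { shift: null } } drops the null slots entirely:
teacher.shift = teacher.shift.filter(v => v !== null);
console.log(teacher.shift); // [ 'eve' ]
```

The key takeaway matches the shell behavior: fields disappear, but array positions only become null until they are pulled.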
  24. MongoDB authentication is a security feature that helps prevent unwanted manipulation of MongoDB databases. Possible security breaches are avoided since it makes sure that only verified and approved individuals can communicate with the database. In MongoDB, authentication entails confirming a user's identity and any related rights before allowing them to carry out a given activity on the database. Without authentication, anybody with access to the MongoDB instance could read, alter, or delete information without limitation, constituting a serious security concern. Therefore, this article elaborates on the method of setting up MongoDB authentication. Enable Authentication in MongoDB Configuration File Starting with the first step of this guide, you must ensure that MongoDB has been successfully installed and configured on your system; without this step, authentication is not possible. After a successful installation, navigate to the "bin" folder inside the MongoDB installation folder and locate the "mongod.cfg" file. This file contains the settings for configuring the MongoDB server on your system, so it usually requires administrative rights to alter or update. Open the mongod.cfg file with administrative rights, look for the "security" option, and make sure that its authorization is enabled; if not, set it to "enabled". Check Data Directory Path You must verify that the data directory D:\data\db\ already exists on your system. If it doesn't exist, create it. Note that the data directory will be used to store the database information from now on. Specify Data Directory Path If the data directory is already set up, check that the MongoDB configuration file (mongod.cfg) contains the proper path. In the configuration file, look for the storage section and adjust the dbPath parameter to the appropriate path. 
In our case, the correct path is "D:\data\db\". Save this file and close it for now. Restarting the MongoDB server after applying edits to the configuration file makes the changes effective. Create Admin User Now, it's time to connect to your MongoDB instance without authentication (if you can) and generate a new administrative user with the required privileges. For this, we employ the MongoDB Shell (mongosh), which is downloaded separately from the MongoDB site. The "zip" archive contains the shell binaries; double-click mongosh.exe, the basic MongoDB shell, to open it. After opening, the shell asks for the MongoDB connection string; pressing "Enter" accepts the default and connects to the local instance. It is recommended to create the user in the "admin" database, which carries more rights than any other database, so switch to it with the "use admin" command. After successfully entering the "admin" database, it's time to create a new admin user in it. For the creation of an administrative user, MongoDB provides the createUser() function to employ within the "db" instruction. It takes three parameters. The "user" parameter holds the name of the user to be created, i.e., replace <adminUser1> with the username of your choice. The "pwd" parameter holds the password for the user, i.e., admin1234; you can set the password according to your preference. The "roles" array specifies the rights of the admin user. To grant full access to "adminuser1", we set its role to "root" and specify the db as "admin". The acknowledgment { ok: 1 } indicates the successful creation of the admin user. db.createUser({ user: "adminuser1", pwd: "admin1234", roles: [{ role: "root", db: "admin" }] }) Restart MongoDB After successfully creating a MongoDB admin user, it is recommended to restart the MongoDB server to properly apply the authentication settings. 
To do this, exit the MongoDB shell, open your system's Command Prompt, navigate to the "bin" folder located in the MongoDB installation folder, and restart the server from there. MongoDB Authentication Now, it's time to authenticate and connect to your MongoDB using the admin user generated in the previous step. To authenticate, the connection query string should start with the keyword "mongosh" followed by the "--authenticationDatabase" parameter, which specifies the authentication database, i.e. "admin". Along with that, the "-u" parameter specifies the username for user authentication in the "admin" database; in our case, it's "adminuser1". The last parameter, "-p", prompts for the password for user authentication. After executing the connection string, the system will prompt you to enter the password, which you must enter correctly. If the password and username are correct, you will successfully authenticate and access the MongoDB shell, as shown in the image. mongosh --authenticationDatabase admin -u adminuser1 -p Create Additional Users and Roles After successfully connecting to the MongoDB test database as the user "adminuser1", you are now able to create users and assign different roles to them. This is not limited to the working database; you can also create users for other databases. For instance, we use the createUser() function to create another user named "james" in the new database "myDatabase". We set this user's password to "james@123" and limit its role to read and write access for that specific database. db.createUser({ user: "james", pwd: "james@123", roles: [{ role: "readWrite", db: "myDatabase" }] }) Test Authentication It's high time to disconnect from your MongoDB server for a while and reconnect using the newly created user's credentials, to guarantee that the authentication was successful. 
Using the "admin" database, we authenticated "peter", but the authentication for "james" is unsuccessful there because that user was created using the "test" database. After switching to the "test" database, the "james" user authenticates successfully, as shown in the attached image. db.auth('peter', 'peter01') db.auth('james', 'james@123') use test Conclusion This guide has demonstrated a clear and concise method to set up MongoDB authentication. To illustrate the process, we provided several query examples, such as creating an administrative user, assigning it a password and rights, and then using it to authenticate against databases. View the full article
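Pulling together the configuration steps from this guide, the relevant portion of the Windows mongod.cfg might look like the following sketch; the dbPath is the example path used above, so adjust it to your system:

```yaml
# mongod.cfg (excerpt): values match the example in this guide
storage:
  dbPath: D:\data\db

security:
  authorization: enabled
```

Remember to restart the mongod service after saving this file so that the authorization setting takes effect.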