From 1cb3de7dd0e275cf50b8d62a7bacbb9d01516bf9 Mon Sep 17 00:00:00 2001 From: Melanie Ballard Date: Tue, 10 Jun 2025 16:05:54 -0400 Subject: [PATCH 1/7] DOCSP-50497 redo file deletion and new pull req with correct c# reference --- source/aggregation.txt | 329 +++++++++---------------- source/aggregation/pipeline-stages.txt | 239 ++++++++++++++++++ 2 files changed, 361 insertions(+), 207 deletions(-) create mode 100644 source/aggregation/pipeline-stages.txt diff --git a/source/aggregation.txt b/source/aggregation.txt index c4f2504e4..874ab0ddc 100644 --- a/source/aggregation.txt +++ b/source/aggregation.txt @@ -14,245 +14,160 @@ Aggregation .. contents:: On this page :local: + :backlinks: none :depth: 2 :class: singlecol .. toctree:: + Pipeline Stages Filtered Subset Group & Total Unpack Arrays & Group One-to-One Join Multi-Field Join - + .. _nodejs-aggregation-overview: Overview -------- -In this guide, you can learn how to use **aggregation operations** in -the MongoDB Node.js driver. +In this guide, you can learn how to use the MongoDB Node.js Driver to perform +**aggregation operations**. -Aggregation operations are expressions you can use to produce reduced -and summarized results in MongoDB. MongoDB's aggregation framework -allows you to create a pipeline that consists of one or more stages, -each of which performs a specific operation on your data. +Aggregation operations process data in your MongoDB collections and return +computed results. The MongoDB Aggregation framework is modeled on the +concept of data processing pipelines. Documents enter a pipeline comprised of one or +more stages, and this pipeline transforms the documents into an aggregated result. + +To learn more about the aggregation stages supported by the Node.js Driver, see :ref:`Aggregation Stages <>`. +.. todo-- add link here Analogy ~~~~~~~ -You can think of the aggregation pipeline as similar to an automobile factory. 
-Automobile manufacturing requires the use of assembly stations organized
-into assembly lines. Each station has specialized tools, such as
-drills and welders. The factory transforms and
-assembles the initial parts and materials into finished products.
-
-The **aggregation pipeline** is the assembly line, **aggregation
-stages** are the assembly stations, and **expression operators** are the
-specialized tools.
-
-Comparing Aggregation and Query Operations
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Using query operations, such as the ``find()`` method, you can perform the following actions:
-
-- Select *which documents* to return
-- Select *which fields* to return
-- Sort the results
-
-Using aggregation operations, you can perform the following actions:
-
-- Perform all query operations
-- Rename fields
-- Calculate fields
-- Summarize data
-- Group values
-
-Aggregation operations have some :manual:`limitations `:
-
-- Returned documents must not violate the :manual:`BSON-document size limit `
+The aggregation pipeline is similar to an automobile factory assembly line. An
+assembly line has stations with specialized tools that are used to perform
+specific tasks. For example, when building a car, the assembly line begins with
+a frame. As the car frame moves through the assembly line, each station adds a
+new part. The factory transforms and assembles the initial parts, resulting in
+finished cars.
+
+The *aggregation pipeline* is the assembly line, the *aggregation stages* are
+the assembly stations, and the *expression operators* are the specialized tools.
+
+Compare Aggregation and Find Operations
+---------------------------------------
+
+The following table lists the different tasks you can perform with find
+operations compared to what you can achieve with aggregation
+operations. The aggregation framework provides expanded functionality
+that allows you to transform and manipulate your data.
+
+.. 
list-table:: + :header-rows: 1 + :widths: 50 50 + + * - Find Operations + - Aggregation Operations + + * - | Select *certain* documents to return + | Select *which* fields to return + | Sort the results + | Limit the results + | Count the results + - | Select *certain* documents to return + | Select *which* fields to return + | Sort the results + | Limit the results + | Count the results + | Group the results + | Rename fields + | Compute new fields + | Summarize data + | Connect and merge data sets + +Server Limitations +------------------ + +Consider the following :manual:`limitations ` when performing aggregation operations: + +- Returned documents must not violate the :manual:`BSON document size limit ` of 16 megabytes. -- Pipeline stages have a memory limit of 100 megabytes by default. You can exceed this - limit by setting the ``allowDiskUse`` property of ``AggregateOptions`` to ``true``. See - the `AggregateOptions API documentation <{+api+}/interfaces/AggregateOptions.html>`__ - for more details. - -.. important:: $graphLookup exception - - The :manual:`$graphLookup - ` stage has a strict - memory limit of 100 megabytes and will ignore ``allowDiskUse``. - -References -~~~~~~~~~~ - -To view a full list of expression operators, see :manual:`Aggregation -Operators ` in the Server manual. - -To learn about assembling an aggregation pipeline and view examples, see -:manual:`Aggregation Pipeline ` in the -Server manual. - -To learn more about creating pipeline stages, see :manual:`Aggregation -Stages ` in the Server manual. - -Runnable Examples ------------------ - -The example uses sample data about restaurants. The following code -inserts data into the ``restaurants`` collection of the ``aggregation`` -database: - -.. literalinclude:: /code-snippets/aggregation/agg.js - :start-after: begin data insertion - :end-before: end data insertion - :language: javascript - :dedent: - -.. 
tip:: - - For more information on connecting to your MongoDB deployment, see the :doc:`Connection Guide `. +- Pipeline stages have a memory limit of 100 megabytes by default. If required, + you can exceed this limit by enabling the `AllowDiskUse + `__ + property of the ``AggregateOptions`` object that you pass to the + ``Aggregate()`` method. +.. Aggregation Example -~~~~~~~~~~~~~~~~~~~ - +------------------- +.. To perform an aggregation, pass a list of aggregation stages to the ``collection.aggregate()`` method. - -In the example, the aggregation pipeline uses the following aggregation stages: - -- A :manual:`$match ` stage to filter for documents whose - ``categories`` array field contains the element ``Bakery``. - -- A :manual:`$group ` stage to group the matching documents by the ``stars`` - field, accumulating a count of documents for each distinct value of ``stars``. - +.. +.. note:: + .. + This example uses the ``sample_restaurants.restaurants`` collection + from the :atlas:`Atlas sample datasets `. To learn how to create a + free MongoDB Atlas cluster and load the sample datasets, see the :ref:`Get Started ` guide. +.. +The following code example produces a count of the number of bakeries in each borough +of New York City. To do so, the aggregation pipeline uses the following aggregation stages: +.. +- A :manual:`$match ` stage to filter + for documents whose ``cuisine`` field contains the element ``Bakery``. +.. +- A :manual:`$group ` stage to group the + matching documents by the ``borough`` field, accumulating a count of documents + for each distinct value in the ``borough`` field. +.. .. literalinclude:: /code-snippets/aggregation/agg.js :start-after: begin aggregation :end-before: end aggregation :language: javascript :dedent: - +.. This example produces the following output: - +.. .. 
code-block:: json :copyable: false - - { _id: 4, count: 2 } - { _id: 3, count: 1 } - { _id: 5, count: 1 } - -For more information, see the `aggregate() API documentation <{+api+}/classes/Collection.html#aggregate>`__. - -.. _node-aggregation-tutorials-landing: -.. _node-aggregation-tutorials: - -Aggregation Tutorials ---------------------- - -Aggregation tutorials provide detailed explanations of common -aggregation tasks in a step-by-step format. The tutorials are adapted -from examples in the `Practical MongoDB Aggregations book -`__ by Paul Done. - -Each tutorial includes the following sections: - -- **Introduction**, which describes the purpose and common use cases of the - aggregation type. This section also describes the example and desired - outcome that the tutorial demonstrates. - -- **Before You Get Started**, which describes the necessary databases, - collections, and sample data that you must have before building the - aggregation pipeline and performing the aggregation. - -- **Tutorial**, which describes how to build and run the aggregation - pipeline. This section describes each stage of the completed - aggregation tutorial, and then explains how to run and interpret the - output of the aggregation. - -At the end of each aggregation tutorial, you can find a link to a fully -runnable Node.js code file that you can run in your environment. - -.. tip:: - - To learn more about performing aggregations, see the - :ref:`node-aggregation` guide. - -.. _node-agg-tutorial-template-app: - -Aggregation Template App -~~~~~~~~~~~~~~~~~~~~~~~~ - -Before you begin following an aggregation tutorial, you must set up a -new Node.js app. You can use this app to connect to a MongoDB -deployment, insert sample data into MongoDB, and run the aggregation -pipeline in each tutorial. - -.. 
tip:: - - To learn how to install the driver and connect to MongoDB, - see the :ref:`node-get-started-download-and-install` and - :ref:`node-get-started-create-deployment` steps of the - Quick Start guide. - -Once you install the driver, create a file called -``agg_tutorial.js``. Paste the following code in this file to create an -app template for the aggregation tutorials: - -.. literalinclude:: /includes/aggregation/template-app.js - :language: javascript - :copyable: true - -.. important:: - - In the preceding code, read the code comments to find the sections of - the code that you must modify for the tutorial you are following. - - If you attempt to run the code without making any changes, you will - encounter a connection error. - -For every tutorial, you must replace the connection string placeholder with -your deployment's connection string. - -.. tip:: - - To learn how to locate your deployment's connection string, see the - :ref:`node-get-started-connection-string` step of the Quick Start guide. - -For example, if your connection string is -``"mongodb+srv://mongodb-example:27017"``, your connection string assignment resembles -the following: - -.. code-block:: javascript - :copyable: false - - const uri = "mongodb+srv://mongodb-example:27017"; - -To run the completed file after you modify the template for a -tutorial, run the following command in your shell: - -.. code-block:: bash - - node agg_tutorial.js - -Available Tutorials -~~~~~~~~~~~~~~~~~~~ - -- :ref:`node-aggregation-filtered-subset` -- :ref:`node-aggregation-group-total` -- :ref:`node-aggregation-arrays` -- :ref:`node-aggregation-one-to-one` -- :ref:`node-aggregation-multi-field` - -Additional Examples -~~~~~~~~~~~~~~~~~~~ - -To view step-by-step explanations of common aggregation tasks, see the -:ref:`node-aggregation-tutorials-landing`. - -You can find another aggregation pipeline example in the `Aggregation -Framework with Node.js Tutorial -`_ -blog post on the MongoDB website. +.. 
+   { _id: 'Bronx', count: 71 }
+   { _id: 'Brooklyn', count: 173 }
+   { _id: 'Staten Island', count: 20 }
+   { _id: 'Missing', count: 2 }
+   { _id: 'Manhattan', count: 221 }
+   { _id: 'Queens', count: 204 }
+
+Additional information
+----------------------
+
+To view a full list of expression operators, see
+:manual:`Aggregation Operators `.
+
+..
+To learn more about assembling an aggregation pipeline and view examples, see
+:manual:`Aggregation Pipeline `.
+..
+To learn more about creating pipeline stages and view examples, see
+:manual:`Aggregation Stages `.
+
+To learn about explaining MongoDB aggregation operations, see
+:manual:`Explain Results ` and
+:manual:`Query Plans `.
+
+..
+API Documentation
+~~~~~~~~~~~~~~~~~
+..
+For more information about the aggregation operations discussed in this guide, see the
+following API documentation:
+..
+- `Collection() `__
+- `aggregate() `__
+- `AggregateOptions `__
+.. try to find $match and $group api links
..
\ No newline at end of file
diff --git a/source/aggregation/pipeline-stages.txt b/source/aggregation/pipeline-stages.txt
new file mode 100644
index 000000000..8bebad2f2
--- /dev/null
+++ b/source/aggregation/pipeline-stages.txt
@@ -0,0 +1,239 @@
+.. _node-aggregation-pipeline-stages:
+
+===========================
+Aggregation Pipeline Stages
+===========================
+
+.. contents:: On this page
+   :local:
+   :backlinks: none
+   :depth: 2
+   :class: singlecol
+
+.. facet::
+   :name: genre
+   :values: reference
+
+.. meta::
+   :keywords: node.js, code example, transform, pipeline
+   :description: Learn the different possible stages of the aggregation pipeline in the {+driver-short+}.
+
+Overview
+------------
+
+On this page, you can learn how to create an aggregation pipeline and pipeline
+stages by using methods in the Node.js Driver.
+
+Build an Aggregation Pipeline
+-----------------------------
+
+You can use the {+driver-short+} to build an aggregation pipeline by INSERT
+HERE. 
See the following sections to learn more about each of these approaches. + +.. to do these sections + +Aggregation Stage Methods +------------------------- + +The following table lists the stages in the aggregation pipeline. The methods +retain the same name when used in Node.js. To learn more about an aggregation +stage and see a code example in a Node.js application, follow the link from the +stage name to its reference page in the {+mdb-server+} manual. + +.. list-table:: + :header-rows: 1 + :widths: 50 50 + + * - Aggregation Stage + - Description + + * - :manual:`$bucket ` + - Categorizes incoming documents into groups, called buckets, + based on a specified expression and bucket boundaries. + + * - :manual:`$bucketAuto ` + - Categorizes incoming documents into a specific number of + groups, called buckets, based on a specified expression. + Bucket boundaries are automatically determined in an attempt + to evenly distribute the documents into the specified number + of buckets. + + * - :manual:`$changeStream ` + - Returns a change stream cursor for the + collection. This stage can occur only once in an aggregation + pipeline and it must occur as the first stage. + + * - :manual:`$changeStreamSplitLargeEvent ` + - Splits large change stream events that exceed 16 MB into smaller fragments returned + in a change stream cursor. + + You can use ``$changeStreamSplitLargeEvent`` only in a ``$changeStream`` pipeline, and + it must be the final stage in the pipeline. + + * - :manual:`$count ` + - Returns a count of the number of documents at this stage of + the aggregation pipeline. + + * - :manual:`$densify ` + - Creates new documents in a sequence of documents where certain values in a field are missing. + + * - :manual:`$documents ` + - Returns literal documents from input expressions. + + * - :manual:`$facet ` + - Processes multiple aggregation pipelines + within a single stage on the same set + of input documents. 
Enables the creation of multi-faceted + aggregations capable of characterizing data across multiple + dimensions, or facets, in a single stage. + + * - :manual:`$geoNear ` + - Returns documents in order of nearest to farthest from a + specified point. This method adds a field to output documents + that contains the distance from the specified point. + + * - :manual:`$graphLookup ` + - Performs a recursive search on a collection. This method adds + a new array field to each output document that contains the traversal + results of the recursive search for that document. + + * - :manual:`$group ` + - Groups input documents by a specified identifier expression + and applies the accumulator expressions, if specified, to + each group. Consumes all input documents and outputs one + document per each distinct group. The output documents + contain only the identifier field and, if specified, accumulated + fields. + + * - :manual:`$limit ` + - Passes the first *n* documents unmodified to the pipeline, + where *n* is the specified limit. For each input document, + outputs either one document (for the first *n* documents) or + zero documents (after the first *n* documents). + + * - :manual:`$lookup ` + - Performs a left outer join to another collection in the + *same* database to filter in documents from the "joined" + collection for processing. + + * - :manual:`$match ` + - Filters the document stream to allow only matching documents + to pass unmodified into the next pipeline stage. + For each input document, outputs either one document (a match) or zero + documents (no match). + + * - :manual:`$merge ` + - Writes the resulting documents of the aggregation pipeline to + a collection. The stage can incorporate (insert new + documents, merge documents, replace documents, keep existing + documents, fail the operation, process documents with a + custom update pipeline) the results into an output + collection. To use this stage, it must be + the last stage in the pipeline. 
+ + * - :manual:`$out ` + - Writes the resulting documents of the aggregation pipeline to + a collection. To use this stage, it must be + the last stage in the pipeline. + + * - :manual:`$project ` + - Reshapes each document in the stream, such as by adding new + fields or removing existing fields. For each input document, + outputs one document. + + * - :manual:`$rankFusion ` + - Uses a rank fusion algorithm to combine results from a Vector Search + query and an Atlas Search query. + + * - :manual:`$replaceRoot ` + - Replaces a document with the specified embedded document. The + operation replaces all existing fields in the input document, + including the ``_id`` field. Specify a document embedded in + the input document to promote the embedded document to the + top level. + + The ``$replaceWith`` stage is an alias for the ``$replaceRoot`` stage. + + * - :manual:`$replaceWith ` + - Replaces a document with the specified embedded document. + The operation replaces all existing fields in the input document, including + the ``_id`` field. Specify a document embedded in the input document to promote + the embedded document to the top level. + + The ``$replaceWith`` stage is an alias for the ``$replaceRoot`` stage. + + * - :manual:`$sample ` + - Randomly selects the specified number of documents from its + input. + + * - :manual:`$search ` + - Performs a full-text search of the field or fields in an + :atlas:`Atlas ` + collection. + + This stage is available only for MongoDB Atlas clusters, and is not + available for self-managed deployments. To learn more, see + :atlas:`Atlas Search Aggregation Pipeline Stages + ` in the Atlas documentation. + + * - :manual:`$searchMeta ` + - Returns different types of metadata result documents for the + :atlas:`Atlas Search ` query against an + :atlas:`Atlas ` + collection. + + This stage is available only for MongoDB Atlas clusters, + and is not available for self-managed deployments. 
To learn
+       more, see :atlas:`Atlas Search Aggregation Pipeline Stages
+       ` in the Atlas documentation.
+
+   * - :manual:`$set `
+     - Adds new fields to documents. Like the ``$project`` stage,
+       this stage reshapes each
+       document in the stream by adding new fields to
+       output documents that contain both the existing fields
+       from the input documents and the newly added fields.
+
+   * - :manual:`$setWindowFields `
+     - Groups documents into windows and applies one or more
+       operators to the documents in each window.
+
+   * - :manual:`$skip `
+     - Skips the first *n* documents, where *n* is the specified skip
+       number, and passes the remaining documents unmodified to the
+       pipeline. For each input document, outputs either zero
+       documents (for the first *n* documents) or one document
+       (after the first *n* documents).
+
+   * - :manual:`$sort `
+     - Reorders the document stream by a specified sort key. The documents remain unmodified.
+       For each input document, outputs one document.
+
+   * - :manual:`$sortByCount `
+     - Groups incoming documents based on the value of a specified
+       expression, then computes the count of documents in each
+       distinct group.
+
+   * - :manual:`$unionWith `
+     - Combines pipeline results from two collections into a single
+       result set.
+
+   * - :manual:`$unwind `
+     - Deconstructs an array field from the input documents to
+       output a document for *each* element. Each output document
+       replaces the array with an element value. For each input
+       document, outputs *n* documents, where *n* is the number of
+       array elements. *n* can be zero for an empty array.
+
+   * - :manual:`$vectorSearch `
+     - Performs an :abbr:`ANN (Approximate Nearest Neighbor)` or
+       :abbr:`ENN (Exact Nearest Neighbor)` search on a
+       vector in the specified field of an
+       :atlas:`Atlas ` collection.
+
+       This stage is available only for MongoDB Atlas clusters, and is not
+       available for self-managed deployments. To learn more, see
+       :ref:`Atlas Vector Search `. 
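The stages in the preceding table are combined into an ordered array of stage
documents that you pass to ``collection.aggregate()``. The following minimal
sketch shows that shape in plain JavaScript; the ``cuisine`` and ``borough``
field names are hypothetical, and no MongoDB connection is made here:

```javascript
// Each pipeline stage is a plain document whose single top-level key
// names the stage; the array order is the order the stages run in.
const pipeline = [
  // $match filters the stream to matching documents
  { $match: { cuisine: "Bakery" } },
  // $group accumulates one output document per distinct borough value
  { $group: { _id: "$borough", count: { $sum: 1 } } },
  // $sort reorders the grouped documents by count, descending
  { $sort: { count: -1 } },
  // $limit passes along only the first three documents
  { $limit: 3 },
];

// Inspect the stage names in pipeline order
console.log(pipeline.map((stage) => Object.keys(stage)[0]));
// [ '$match', '$group', '$sort', '$limit' ]
```

In a real application, the array would be passed as
``collection.aggregate(pipeline)``, which returns a cursor over the aggregated
results.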
+ + + From 2c34f85e8e987c113d9b2f5a43180b714f3e632d Mon Sep 17 00:00:00 2001 From: Melanie Ballard Date: Tue, 10 Jun 2025 16:21:27 -0400 Subject: [PATCH 2/7] DOCSP-50497 fixing pages, removing comments, and finishing outline --- source/aggregation.txt | 63 +------------------------- source/aggregation/pipeline-stages.txt | 51 +++++++++++++++++++-- 2 files changed, 49 insertions(+), 65 deletions(-) diff --git a/source/aggregation.txt b/source/aggregation.txt index 874ab0ddc..e9b3ccf51 100644 --- a/source/aggregation.txt +++ b/source/aggregation.txt @@ -41,8 +41,7 @@ computed results. The MongoDB Aggregation framework is modeled on the concept of data processing pipelines. Documents enter a pipeline comprised of one or more stages, and this pipeline transforms the documents into an aggregated result. -To learn more about the aggregation stages supported by the Node.js Driver, see :ref:`Aggregation Stages <>`. -.. todo-- add link here +To learn more about the aggregation stages supported by the Node.js Driver, see :ref:`Aggregation Stages `. Analogy ~~~~~~~ @@ -102,72 +101,12 @@ Consider the following :manual:`limitations property of the ``AggregateOptions`` object that you pass to the ``Aggregate()`` method. -.. -Aggregation Example -------------------- -.. -To perform an aggregation, pass a list of aggregation stages to the -``collection.aggregate()`` method. -.. -.. note:: - .. - This example uses the ``sample_restaurants.restaurants`` collection - from the :atlas:`Atlas sample datasets `. To learn how to create a - free MongoDB Atlas cluster and load the sample datasets, see the :ref:`Get Started ` guide. -.. -The following code example produces a count of the number of bakeries in each borough -of New York City. To do so, the aggregation pipeline uses the following aggregation stages: -.. -- A :manual:`$match ` stage to filter - for documents whose ``cuisine`` field contains the element ``Bakery``. -.. 
-- A :manual:`$group ` stage to group the - matching documents by the ``borough`` field, accumulating a count of documents - for each distinct value in the ``borough`` field. -.. -.. literalinclude:: /code-snippets/aggregation/agg.js - :start-after: begin aggregation - :end-before: end aggregation - :language: javascript - :dedent: -.. -This example produces the following output: -.. -.. code-block:: json - :copyable: false -.. - { _id = 'Bronx', count = 71 } - { _id = 'Brooklyn', count = 173 } - { _id = 'Staten Island', count = 20 } - { _id = 'Missing', count = 2 } - { _id = 'Manhattan', count = 221 } - { _id = 'Queens', count = 204 } - Additional information ---------------------- To view a full list of expression operators, see :manual:`Aggregation Operators `. -.. -To learn more about assembling an aggregation pipeline and view examples, see -:manual:`Aggregation Pipeline `. -.. -To learn more about creating pipeline stages and view examples, see -:manual:`Aggregation Stages `. - To learn about explaining MongoDB aggregation operations, see :manual:`Explain Results ` and :manual:`Query Plans `. - -.. -API Documentation -~~~~~~~~~~~~~~~~~ -.. -For more information about the aggregation operations discussed in this guide, see the -following API documentation: -.. -- `Collection() `__ -- `aggregate() `__ -- `AggregateOptions `__ -.. try to find $match and $group api links .. \ No newline at end of file diff --git a/source/aggregation/pipeline-stages.txt b/source/aggregation/pipeline-stages.txt index 8bebad2f2..0f7945f22 100644 --- a/source/aggregation/pipeline-stages.txt +++ b/source/aggregation/pipeline-stages.txt @@ -30,7 +30,39 @@ Build an Aggregation Pipeline You can use the {+driver-short+} to build an aggregation pipeline by INSERT HERE. See the following sections to learn more about each of these approaches. -.. to do these sections +.. note:: + .. 
+    This example uses the ``sample_restaurants.restaurants`` collection
+    from the :atlas:`Atlas sample datasets `. To learn how to create a
+    free MongoDB Atlas cluster and load the sample datasets, see the :ref:`Get Started ` guide.
+
+The following code example produces a count of the number of bakeries in each borough
+of New York City. To do so, the aggregation pipeline uses the following aggregation stages:
+
+- A :manual:`$match ` stage to filter
+  for documents whose ``cuisine`` field contains the element ``Bakery``.
+- A :manual:`$group ` stage to group the
+  matching documents by the ``borough`` field, accumulating a count of documents
+  for each distinct value in the ``borough`` field.
+
+.. literalinclude:: /code-snippets/aggregation/agg.js
+   :start-after: begin aggregation
+   :end-before: end aggregation
+   :language: javascript
+   :dedent:
+
+This example produces the following output:
+
+.. code-block:: json
+   :copyable: false
+
+   { _id: 'Bronx', count: 71 }
+   { _id: 'Brooklyn', count: 173 }
+   { _id: 'Staten Island', count: 20 }
+   { _id: 'Missing', count: 2 }
+   { _id: 'Manhattan', count: 221 }
+   { _id: 'Queens', count: 204 }
+

 Aggregation Stage Methods
 -------------------------
@@ -42,9 +74,9 @@ stage name to its reference page in the {+mdb-server+} manual.
 
 .. list-table::
    :header-rows: 1
-   :widths: 50 50
+   :widths: 20 80
 
-   * - Aggregation Stage
+   * - Stage
      - Description
 
@@ -235,5 +267,18 @@ stage name to its reference page in the {+mdb-server+} manual.
    available for self-managed deployments. To learn more, see
    :ref:`Atlas Vector Search `.
 
+API Documentation
+~~~~~~~~~~~~~~~~~
+
+To learn more about assembling an aggregation pipeline, see :manual:`Aggregation
+Pipeline ` in the MongoDB Server manual.
+
+To learn more about creating pipeline stages, see :manual:`Aggregation Stages
+` in the MongoDB Server manual. 
+For more information about the methods and classes used on this page, see the +following API documentation: +- `Collection() `__ +- `aggregate() `__ +- `AggregateOptions `__ From 5275d65f38bd172a000568e57bdd9c4aed6d6e41 Mon Sep 17 00:00:00 2001 From: Melanie Ballard Date: Fri, 13 Jun 2025 13:42:54 -0400 Subject: [PATCH 3/7] DOCSP-50497 define framework with no concrete example, save for server pages --- source/aggregation/pipeline-stages.txt | 46 +++++++------------------ source/code-snippets/aggregation/agg.js | 17 ++------- 2 files changed, 15 insertions(+), 48 deletions(-) diff --git a/source/aggregation/pipeline-stages.txt b/source/aggregation/pipeline-stages.txt index 0f7945f22..495bb1800 100644 --- a/source/aggregation/pipeline-stages.txt +++ b/source/aggregation/pipeline-stages.txt @@ -22,53 +22,33 @@ Overview ------------ On this page, you can learn how to create an aggregation pipeline and pipeline -stages by using methods in the Node.js Driver. +stages by using methods in the {+driver-short+}. Build an Aggregation Pipeline ----------------------------- -You can use the {+driver-short+} to build an aggregation pipeline by INSERT -HERE. See the following sections to learn more about each of these approaches. +You can use the {+driver-short+} to build an aggregation pipeline by adding +aggregation stages and operations to the aggregation framework. See the +following code to learn how to format the framework. -.. note:: - .. - This example uses the ``sample_restaurants.restaurants`` collection - from the :atlas:`Atlas sample datasets `. To learn how to create a - free MongoDB Atlas cluster and load the sample datasets, see the :ref:`Get Started ` guide. -The following code example produces a count of the number of bakeries in each borough -of New York City. To do so, the aggregation pipeline uses the following aggregation stages: +.. 
code-block:: javascript
+
+   // Defines the aggregation pipeline
+   const pipeline = [
+     { $match: { ... } },
+     { $group: { ... } }
+   ];
+
+   // Executes the aggregation pipeline and returns a cursor
+   const results = coll.aggregate(pipeline);

 Aggregation Stage Methods
 -------------------------
 
 The following table lists the stages in the aggregation pipeline. The methods
-retain the same name when used in Node.js. To learn more about an aggregation
+are written as they appear in the table when used in Node.js. To learn more about an aggregation
 stage and see a code example in a Node.js application, follow the link from the
 stage name to its reference page in the {+mdb-server+} manual. 
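To make the behavior of the ``$match`` and ``$group`` stages concrete, the
following plain-JavaScript sketch reproduces their semantics over a handful of
in-memory documents. This is an illustration of the stage semantics only, not
driver code, and the sample documents are hypothetical:

```javascript
// Hypothetical in-memory documents standing in for a collection
const docs = [
  { borough: "Brooklyn", cuisine: "Bakery" },
  { borough: "Queens", cuisine: "Bakery" },
  { borough: "Brooklyn", cuisine: "Bakery" },
  { borough: "Queens", cuisine: "Seafood" },
];

// $match: keep only documents whose cuisine field equals "Bakery"
const matched = docs.filter((doc) => doc.cuisine === "Bakery");

// $group: one result per distinct borough, counting documents with $sum: 1
const counts = {};
for (const doc of matched) {
  counts[doc.borough] = (counts[doc.borough] ?? 0) + 1;
}

console.log(counts);
// { Brooklyn: 2, Queens: 1 }
```

A real pipeline performs both steps server-side in a single round trip, which
is the main reason to prefer aggregation over fetching and post-processing
documents in application code.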
diff --git a/source/code-snippets/aggregation/agg.js b/source/code-snippets/aggregation/agg.js
index 62fff7cc3..baa68a079 100644
--- a/source/code-snippets/aggregation/agg.js
+++ b/source/code-snippets/aggregation/agg.js
@@ -10,24 +10,11 @@ async function run() {
     const db = client.db("aggregation");
     const coll = db.collection("restaurants");
 
-    // Create sample documents
-    const docs = [
-      { stars: 3, categories: ["Bakery", "Sandwiches"], name: "Rising Sun Bakery" },
-      { stars: 4, categories: ["Bakery", "Cafe", "Bar"], name: "Cafe au Late" },
-      { stars: 5, categories: ["Coffee", "Bakery"], name: "Liz's Coffee Bar" },
-      { stars: 3, categories: ["Steak", "Seafood"], name: "Oak Steakhouse" },
-      { stars: 4, categories: ["Bakery", "Dessert"], name: "Petit Cookie" },
-    ];
-
-    // Insert documents into the restaurants collection
-    const result = await coll.insertMany(docs);
-    // end data insertion
-
     // begin aggregation
     // Define an aggregation pipeline with a match stage and a group stage
     const pipeline = [
-      { $match: { categories: "Bakery" } },
-      { $group: { _id: "$stars", count: { $sum: 1 } } }
+      { $match: { cuisine: "Bakery" } },
+      { $group: { _id: "$borough", count: { $sum: 1 } } }
     ];
 
     // Execute the aggregation

From af35eb90f38cbb8aa7248b9b3463553f757baf65 Mon Sep 17 00:00:00 2001
From: Michael Morisi
Date: Tue, 10 Jun 2025 15:53:02 -0400
Subject: [PATCH 4/7] DOCSP-46858: Add custom AWS credential documentation
 (#1160)

---
 .../authentication/aws-custom-credentials.js | 29 +++++++++++
 source/security/authentication/aws-iam.txt   | 49 ++++++++++++-------
 2 files changed, 61 insertions(+), 17 deletions(-)
 create mode 100644 source/code-snippets/authentication/aws-custom-credentials.js

diff --git a/source/code-snippets/authentication/aws-custom-credentials.js b/source/code-snippets/authentication/aws-custom-credentials.js
new file mode 100644
index 000000000..7b4e012fb
--- /dev/null
+++ b/source/code-snippets/authentication/aws-custom-credentials.js
@@ -0,0 +1,29 @@
+{
+  // start-custom-credentials
+  const { MongoClient } = require('mongodb');
+  const { fromNodeProviderChain } = require('@aws-sdk/credential-providers');
+
+  const client = new MongoClient('?authMechanism=MONGODB-AWS', {
+    authMechanismProperties: {
+      AWS_CREDENTIAL_PROVIDER: fromNodeProviderChain()
+    }
+  });
+  // end-custom-credentials
+}
+
+{
+  // start-custom-credentials-function
+  const { MongoClient } = require('mongodb');
+
+  const client = new MongoClient('?authMechanism=MONGODB-AWS', {
+    authMechanismProperties: {
+      AWS_CREDENTIAL_PROVIDER: async () => {
+        return {
+          accessKeyId: process.env.ACCESS_KEY_ID,
+          secretAccessKey: process.env.SECRET_ACCESS_KEY
+        }
+      }
+    }
+  });
+  // end-custom-credentials-function
+}
\ No newline at end of file

diff --git a/source/security/authentication/aws-iam.txt b/source/security/authentication/aws-iam.txt
index d647633ff..168c7171e 100644
--- a/source/security/authentication/aws-iam.txt
+++ b/source/security/authentication/aws-iam.txt
@@ -153,23 +153,38 @@ The driver checks for your credentials in the following sources in order:
 .. literalinclude:: /code-snippets/authentication/aws-env-variable.js
    :language: javascript
 
-.. important:: Retrieval of AWS Credentials
-
-   Starting in MongoDB version 4.11, when you install the optional
-   ``aws-sdk/credential-providers`` dependency, the driver uses the AWS SDK
-   to retrieve credentials from the environment. As a result, if you
-   have a shared AWS credentials file or config file, the driver will
-   use those credentials by default.
-
-   You can override this behavior by performing one of the following
-   actions:
-
-   - Set ``AWS_SHARED_CREDENTIALS_FILE`` variable in your shell to point
-     to your credentials file.
-   - Set the equivalent environment variable in your application to point
-     to your credentials file.
-   - Create an AWS profile for your MongoDB credentials and set the
-     ``AWS_PROFILE`` environment variable to that profile name.
+Specifying AWS Credentials
+--------------------------
+
+When you install the optional ``aws-sdk/credential-providers`` dependency, the driver
+retrieves credentials in a priority order defined by the AWS SDK. If you have a shared AWS
+credentials file or config file, the driver uses those credentials by default.
+
+.. tip::
+
+   To learn more about how the ``aws-sdk/credential-providers`` dependency retrieves
+   credentials, see the `AWS SDK documentation `__.
+
+To manually specify the AWS credentials to retrieve, you can set the ``AWS_CREDENTIAL_PROVIDER``
+property to a defined credential provider from the AWS SDK. The following example passes a provider chain
+from the AWS SDK to the AWS authentication mechanism:
+
+.. literalinclude:: /code-snippets/authentication/aws-custom-credentials.js
+   :language: javascript
+   :start-after: // start-custom-credentials
+   :end-before: // end-custom-credentials
+   :dedent:
+
+To use a custom provider, you can pass any asynchronous function that returns your credentials
+to the ``AWS_CREDENTIAL_PROVIDER`` authentication mechanism property. The following example shows how to pass
+a custom provider function that fetches credentials from environment variables to the
+AWS authentication mechanism:
+
+.. literalinclude:: /code-snippets/authentication/aws-custom-credentials.js
+   :language: javascript
+   :start-after: // start-custom-credentials-function
+   :end-before: // end-custom-credentials-function
+   :dedent:
 
 API Documentation
 -----------------

From a1ee2c02af1bcd30cd722abe6400a04b8157afb3 Mon Sep 17 00:00:00 2001
From: Melanie Ballard
Date: Wed, 18 Jun 2025 12:01:00 -0400
Subject: [PATCH 5/7] DOCSP-35201 Transaction Retry Logic in Core API (#1164)

* DOCSP-35201 add section linking to solutions for transaction retry logic in node Core API

* DOCSP-35201 adding more details for retry logic

* DOCSP-35201 edit for clarity, add list of functions

* DOCSP-35201 restructuring for clarity and brevity

* DOCSP-35201 fixing links

* DOCSP-35201 remove double error in title

* DOCSP-35201 fix sentences

* DOCSP-35201 style guide changes/wording

* DOCSP-35201 add API to title

* Update source/crud/transactions.txt

Co-authored-by: Mike Woofter <108414937+mongoKart@users.noreply.github.com>

* DOCSP-35201 grammar changes

---------

Co-authored-by: Mike Woofter <108414937+mongoKart@users.noreply.github.com>
---
 source/crud/transactions.txt | 56 +++++++++++++++++++++++++-----------
 1 file changed, 40 insertions(+), 16 deletions(-)

diff --git a/source/crud/transactions.txt b/source/crud/transactions.txt
index 4a82050d4..5a18b9c0e 100644
--- a/source/crud/transactions.txt
+++ b/source/crud/transactions.txt
@@ -304,19 +304,43 @@ the ``startTransaction()`` method:
 Transaction Errors
 ------------------
 
-If you are using the Core API to perform a transaction, you must incorporate
-error-handling logic into your application for the following errors:
-
-- ``TransientTransactionError``: Raised if a write operation errors
-  before the driver commits the transaction. To learn more about this error, see the
-  :manual:`TransientTransactionError description
-  ` on
-  the Driver API page in the Server manual.
-- ``UnknownTransactionCommitResult``: Raised if the commit operation
-  encounters an error. To learn more about this error, see the
-  :manual:`UnknownTransactionCommitResult description
-  ` on
-  the Driver API page in the Server manual.
-
-The Convenient Transaction API incorporates retry logic for these error
-types, so the driver retries the transaction until there is a successful commit.
\ No newline at end of file
+Because MongoDB transactions are :website:`ACID compliant
+`, the driver might produce errors during operation
+to ensure your data remains consistent. If the following errors occur, your
+application must retry the transaction:
+
+- ``TransientTransactionError``: Raised if a write operation encounters an error
+  before the driver commits the transaction. To learn more about this error
+  type, see the :manual:`TransientTransactionError
+  description ` on
+  the Drivers API page in the Server manual.
+- ``UnknownTransactionCommitResult``: Raised if the commit operation encounters
+  an error. To learn more about this error type, see the
+  :manual:`UnknownTransactionCommitResult
+  description `
+  on the Drivers API page in the Server manual.
+
+The following sections describe how to handle these errors when using different APIs.
+
+Convenient Transaction API Error Handling
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The Convenient Transaction API incorporates retry logic for these error types.
+The driver automatically retries the transaction until there is a successful
+commit.
+
+Core API Error Handling
+~~~~~~~~~~~~~~~~~~~~~~~
+
+If you are using the Core API to perform a transaction, you must add the following
+error-handling functions to your application:
+
+- A function that retries the entire transaction when the driver encounters a
+  ``TransientTransactionError``
+- A function that retries the commit operation when the driver encounters an
+  ``UnknownTransactionCommitResult``
+
+These functions must run until there is a successful commit or a different
+error. For an example of this retry logic, see the :manual:`Core API section
+` on the Drivers API page in the
+Server manual.
\ No newline at end of file

From 40bf70f909670f991d4941bbed9ffb903c2df4b5 Mon Sep 17 00:00:00 2001
From: Melanie Ballard
Date: Mon, 23 Jun 2025 09:30:47 -0400
Subject: [PATCH 6/7] DOCSP-50497 update aggregation table, delete files moved
 to server, and spell check

---
 source/aggregation.txt                  |  10 +-
 source/aggregation/filtered-subset.txt  | 195 ----------
 source/aggregation/group-total.txt      | 229 -----------
 source/aggregation/multi-field-join.txt | 260 -------------
 source/aggregation/one-to-one-join.txt  | 229 -----------
 source/aggregation/pipeline-stages.txt  | 485 ++++++++++++++----------
 source/aggregation/unpack-arrays.txt    | 210 ----------
 7 files changed, 282 insertions(+), 1336 deletions(-)
 delete mode 100644 source/aggregation/filtered-subset.txt
 delete mode 100644 source/aggregation/group-total.txt
 delete mode 100644 source/aggregation/multi-field-join.txt
 delete mode 100644 source/aggregation/one-to-one-join.txt
 delete mode 100644 source/aggregation/unpack-arrays.txt

diff --git a/source/aggregation.txt b/source/aggregation.txt
index e9b3ccf51..5d38ad9ef 100644
--- a/source/aggregation.txt
+++ b/source/aggregation.txt
@@ -47,14 +47,14 @@ Analogy
 ~~~~~~~
 
 The aggregation pipeline is similar to an automobile factory assembly line. An
-assembly lines has stations with specialized tools that are used to perform
+assembly line has stations with specialized tools that are used to perform
 specific tasks. For example, when building a car, the assembly line begins with
-a frame. As the car frame moves though the assembly line, each station adds a
-new part. The factory transforms and assembles the initial parts, resulting in
-finished cars.
+a frame. As the car frame moves through the assembly line, each station assembles
+a separate part. The result is a transformed final product, the finished car.
The *aggregation pipeline* is the assembly line, the *aggregation stages* are -the assembly stations, and the *expression operators* are the specialized tools. +the assembly stations, the *expression operators* are the specialized tools, and +the *aggregated result* is the finished product. Compare Aggregation and Find Operations --------------------------------------- diff --git a/source/aggregation/filtered-subset.txt b/source/aggregation/filtered-subset.txt deleted file mode 100644 index 9549a15eb..000000000 --- a/source/aggregation/filtered-subset.txt +++ /dev/null @@ -1,195 +0,0 @@ -.. _node-aggregation-filtered-subset: - -=============== -Filtered Subset -=============== - -.. contents:: On this page - :local: - :backlinks: none - :depth: 2 - :class: singlecol - -.. facet:: - :name: genre - :values: tutorial - -.. meta:: - :keywords: code example, node.js, sort, limit, aggregation - :description: Learn to use the MongoDB Node.js Driver to create an aggregation pipeline that filters, sorts, and formats a subset of documents in a MongoDB collection. - -Introduction ------------- - -In this tutorial, you can learn how to use the {+driver-short+} to -construct an aggregation pipeline, perform the -aggregation on a collection, and print the results by completing and -running a sample app. This aggregation performs the following operations: - -- Matches a subset of documents by a field value -- Formats result documents - -.. tip:: - - You can also query for a subset of documents in a collection by using the - Query API. To learn how to specify a query, see the - :ref:`Read Operations guides `. - -Aggregation Task Summary -~~~~~~~~~~~~~~~~~~~~~~~~ - -This tutorial demonstrates how to query a collection for a specific -subset of documents in a collection. The results contain -documents that describe the three youngest people who are engineers. - -This example uses one collection, ``persons``, which contains -documents describing people. 
Each document includes a person's name, -date of birth, vocation, and other details. - -Before You Get Started ----------------------- - -Before you start this tutorial, complete the -:ref:`node-agg-tutorial-template-app` instructions to set up a working -Node.js application. - -After you set up the app, access the ``persons`` collection by adding the -following code to the application: - -.. literalinclude:: /includes/aggregation/filtered-subset.js - :language: javascript - :copyable: true - :start-after: start-collection - :end-before: end-collection - :dedent: - -Delete any existing data in the collections and insert sample data into -the ``persons`` collection as shown in the following code: - -.. literalinclude:: /includes/aggregation/filtered-subset.js - :language: javascript - :copyable: true - :start-after: start-insert-persons - :end-before: end-insert-persons - :dedent: - -Tutorial --------- - -.. procedure:: - :style: connected - - .. step:: Add a match stage for people who are engineers - - First, add a :manual:`$match - ` stage that finds documents in which - the value of the ``vocation`` field is ``"ENGINEER"``: - - .. literalinclude:: /includes/aggregation/filtered-subset.js - :language: javascript - :copyable: true - :start-after: start-match - :end-before: end-match - :dedent: - - .. step:: Add a sort stage to sort from youngest to oldest - - Next, add a :manual:`$sort - ` stage that sorts the - documents in descending order by the ``dateofbirth`` field to - list the youngest people first: - - .. literalinclude:: /includes/aggregation/filtered-subset.js - :language: javascript - :copyable: true - :start-after: start-sort - :end-before: end-sort - :dedent: - - .. step:: Add a limit stage to see only three results - - Next, add a :manual:`$limit ` - stage to the pipeline to output only the first three documents in - the results. - - .. 
literalinclude:: /includes/aggregation/filtered-subset.js - :language: javascript - :copyable: true - :start-after: start-limit - :end-before: end-limit - :dedent: - - .. step:: Add an unset stage to remove unneeded fields - - Finally, add an :manual:`$unset - ` stage. The - ``$unset`` stage removes unnecessary fields from the result documents: - - .. literalinclude:: /includes/aggregation/filtered-subset.js - :language: javascript - :copyable: true - :start-after: start-unset - :end-before: end-unset - :dedent: - - .. tip:: - - Use the ``$unset`` operator instead of ``$project`` to avoid - modifying the aggregation pipeline if documents with - different fields are added to the collection. - - .. step:: Run the aggregation pipeline - - Add the following code to the end of your application to perform - the aggregation on the ``persons`` collection: - - .. literalinclude:: /includes/aggregation/filtered-subset.js - :language: javascript - :copyable: true - :start-after: start-run-agg - :end-before: end-run-agg - :dedent: - - Finally, run the following command in your shell to start your - application: - - .. code-block:: bash - - node agg_tutorial.js - - .. step:: Interpret results - - The aggregated result contains three documents. The documents - represent the three youngest people with the vocation of ``"ENGINEER"``, - ordered from youngest to oldest. The results omit the ``_id`` and ``address`` - fields. - - .. 
code-block:: javascript - :copyable: false - - { - person_id: '7363626383', - firstname: 'Carl', - lastname: 'Simmons', - dateofbirth: 1998-12-26T13:13:55.000Z, - vocation: 'ENGINEER' - } - { - person_id: '1723338115', - firstname: 'Olive', - lastname: 'Ranieri', - dateofbirth: 1985-05-12T23:14:30.000Z, - gender: 'FEMALE', - vocation: 'ENGINEER' - } - { - person_id: '6392529400', - firstname: 'Elise', - lastname: 'Smith', - dateofbirth: 1972-01-13T09:32:07.000Z, - vocation: 'ENGINEER' - } - -To view the complete code for this tutorial, see the `Completed Filtered Subset App -`__ -on GitHub. \ No newline at end of file diff --git a/source/aggregation/group-total.txt b/source/aggregation/group-total.txt deleted file mode 100644 index 9bc66a784..000000000 --- a/source/aggregation/group-total.txt +++ /dev/null @@ -1,229 +0,0 @@ -.. _node-aggregation-group-total: - -=============== -Group and Total -=============== - -.. contents:: On this page - :local: - :backlinks: none - :depth: 2 - :class: singlecol - -.. facet:: - :name: genre - :values: tutorial - -.. meta:: - :keywords: code example, node.js, analyze, aggregation - :description: Learn to use the MongoDB Node.js Driver to construct an aggregation pipeline that groups and analyzes data. - -Introduction ------------- - -In this tutorial, you can learn how to use the {+driver-short+} to -construct an aggregation pipeline, perform the -aggregation on a collection, and print the results by completing and -running a sample app. This aggregation performs the following operations: - -- Matches a subset of documents by a field value -- Groups documents by common field values -- Adds computed fields to each result document - -Aggregation Task Summary -~~~~~~~~~~~~~~~~~~~~~~~~ - -This tutorial demonstrates how to group and analyze customer order data. The -results show the list of customers who purchased items in 2020 and -includes each customer's order history for 2020. 
- -This example uses one collection, ``orders``, which contains documents -describing individual product orders. Since each order can correspond to -only one customer, the order documents are grouped by the -``customer_id`` field, which contains customer email addresses. - -Before You Get Started ----------------------- - -Before you start this tutorial, complete the -:ref:`node-agg-tutorial-template-app` instructions to set up a working -Node.js application. - -After you set up the app, access the ``orders`` collection by adding the -following code to the application: - -.. literalinclude:: /includes/aggregation/group-total.js - :language: javascript - :copyable: true - :start-after: start-coll - :end-before: end-coll - :dedent: - -Delete any existing data and insert sample data into -the ``orders`` collection as shown in the following code: - -.. literalinclude:: /includes/aggregation/group-total.js - :language: javascript - :copyable: true - :start-after: start-insert-orders - :end-before: end-insert-orders - :dedent: - -Tutorial --------- - -.. procedure:: - :style: connected - - .. step:: Add a match stage for orders in 2020 - - First, add a :manual:`$match - ` stage that matches - orders placed in 2020: - - .. literalinclude:: /includes/aggregation/group-total.js - :language: javascript - :copyable: true - :start-after: start-match - :end-before: end-match - :dedent: - - .. step:: Add a sort stage to sort by order date - - Next, add a :manual:`$sort - ` stage to set an - ascending sort on the ``orderdate`` field to surface the earliest - 2020 purchase for each customer in the next stage: - - .. literalinclude:: /includes/aggregation/group-total.js - :language: javascript - :copyable: true - :start-after: start-sort1 - :end-before: end-sort1 - :dedent: - - .. step:: Add a group stage to group by email address - - Add a :manual:`$group - ` stage to group - orders by the value of the ``customer_id`` field. 
In this - stage, add aggregation operations that create the - following fields in the result documents: - - - ``first_purchase_date``: the date of the customer's first purchase - - ``total_value``: the total value of all the customer's purchases - - ``total_orders``: the total number of the customer's purchases - - ``orders``: the list of all the customer's purchases, - including the date and value of each purchase - - .. literalinclude:: /includes/aggregation/group-total.js - :language: javascript - :copyable: true - :start-after: start-group - :end-before: end-group - :dedent: - - .. step:: Add a sort stage to sort by first order date - - Next, add another :manual:`$sort - ` stage to set an - ascending sort on the ``first_purchase_date`` field: - - .. literalinclude:: /includes/aggregation/group-total.js - :language: javascript - :copyable: true - :start-after: start-sort2 - :end-before: end-sort2 - :dedent: - - .. step:: Add a set stage to display the email address - - Add a :manual:`$set - ` stage to recreate the - ``customer_id`` field from the values in the ``_id`` field - that were set during the ``$group`` stage: - - .. literalinclude:: /includes/aggregation/group-total.js - :language: javascript - :copyable: true - :start-after: start-set - :end-before: end-set - :dedent: - - .. step:: Add an unset stage to remove unneeded fields - - Finally, add an :manual:`$unset - ` stage. The - ``$unset`` stage removes the ``_id`` field from the result - documents: - - .. literalinclude:: /includes/aggregation/group-total.js - :language: javascript - :copyable: true - :start-after: start-unset - :end-before: end-unset - :dedent: - - .. step:: Run the aggregation pipeline - - Add the following code to the end of your application to perform - the aggregation on the ``orders`` collection: - - .. 
literalinclude:: /includes/aggregation/group-total.js - :language: javascript - :copyable: true - :start-after: start-run-agg - :end-before: end-run-agg - :dedent: - - Finally, run the following command in your shell to start your - application: - - .. code-block:: bash - - node agg_tutorial.js - - .. step:: Interpret results - - The aggregation returns the following summary of customers' orders - from 2020: - - .. code-block:: javascript - :copyable: false - - { - first_purchase_date: 2020-01-01T08:25:37.000Z, - total_value: 63, - total_orders: 1, - orders: [ { orderdate: 2020-01-01T08:25:37.000Z, value: 63 } ], - customer_id: 'oranieri@warmmail.com' - } - { - first_purchase_date: 2020-01-13T09:32:07.000Z, - total_value: 436, - total_orders: 4, - orders: [ - { orderdate: 2020-01-13T09:32:07.000Z, value: 99 }, - { orderdate: 2020-05-30T08:35:52.000Z, value: 231 }, - { orderdate: 2020-10-03T13:49:44.000Z, value: 102 }, - { orderdate: 2020-12-26T08:55:46.000Z, value: 4 } - ], - customer_id: 'elise_smith@myemail.com' - } - { - first_purchase_date: 2020-08-18T23:04:48.000Z, - total_value: 191, - total_orders: 2, - orders: [ - { orderdate: 2020-08-18T23:04:48.000Z, value: 4 }, - { orderdate: 2020-11-23T22:56:53.000Z, value: 187 } - ], - customer_id: 'tj@wheresmyemail.com' - } - - The result documents contain details from all the orders from - a given customer, grouped by the customer's email address. - -To view the complete code for this tutorial, see the `Completed Group and Total App -`__ -on GitHub. diff --git a/source/aggregation/multi-field-join.txt b/source/aggregation/multi-field-join.txt deleted file mode 100644 index 605676177..000000000 --- a/source/aggregation/multi-field-join.txt +++ /dev/null @@ -1,260 +0,0 @@ -.. _node-aggregation-multi-field: - -================ -Multi-Field Join -================ - -.. contents:: On this page - :local: - :backlinks: none - :depth: 2 - :class: singlecol - -.. facet:: - :name: genre - :values: tutorial - -.. 
meta:: - :keywords: code example, node.js, lookup, aggregation - :description: Learn to perform a multi-field join using the MongoDB Node.js Driver to combine data from two collections in an aggregation pipeline. - -Introduction ------------- - -In this tutorial, you can learn how to use the {+driver-short+} to -construct an aggregation pipeline, perform the -aggregation on a collection, and print the results by completing and -running a sample app. - -This aggregation performs a multi-field join. A multi-field join occurs when there are -multiple corresponding fields in the documents of two collections that you use to -match documents together. The aggregation matches these documents on the -field values and combines information from both into one document. - -.. tip:: One-to-many Joins - - A one-to-many join is a variety of a multi-field join. When you - perform a one-to-many join, you select one field from a document that - matches a field value in multiple documents on the other side of the - join. To learn more about these data relationships, - see the Wikipedia entries about :wikipedia:`One-to-many (data model) - ` and - :wikipedia:`Many-to-many (data model) - `. - -Aggregation Task Summary -~~~~~~~~~~~~~~~~~~~~~~~~ - -This tutorial demonstrates how to combine data from a collection that -describes product information with another collection that describes -customer orders. The results show a list of products ordered in 2020 -that also contains details about each order. - -This example uses two collections: - -- ``products``, which contains documents describing the products that - a shop sells -- ``orders``, which contains documents describing individual orders - for products in a shop - -An order can only contain one product, so the aggregation uses a -multi-field join to match a product document to documents representing orders of -that product. 
The collections are joined by the ``name`` and -``variation`` fields in documents in the ``products`` collection, corresponding -to the ``product_name`` and ``product_variation`` fields in documents in -the ``orders`` collection. - -Before You Get Started ----------------------- - -Before you start this tutorial, complete the -:ref:`node-agg-tutorial-template-app` instructions to set up a working -Node.js application. - -After you set up the app, access the ``products`` and ``orders`` -collections by adding the following code to the application: - -.. literalinclude:: /includes/aggregation/multi-field-join.js - :language: javascript - :copyable: true - :start-after: start-colls - :end-before: end-colls - :dedent: - -Delete any existing data and insert sample data into -the ``products`` collection as shown in the following code: - -.. literalinclude:: /includes/aggregation/multi-field-join.js - :language: javascript - :copyable: true - :start-after: start-insert-products - :end-before: end-insert-products - :dedent: - -Delete any existing data and insert sample data into -the ``orders`` collection as shown in the following code: - -.. literalinclude:: /includes/aggregation/multi-field-join.js - :language: javascript - :copyable: true - :start-after: start-insert-orders - :end-before: end-insert-orders - :dedent: - -Tutorial --------- - -.. procedure:: - :style: connected - - .. step:: Add a lookup stage to link the collections and import fields - - The first stage of the pipeline is a :manual:`$lookup - ` stage to join the - ``orders`` collection to the ``products`` collection by two - fields in each collection. The lookup stage contains an - embedded pipeline to configure the join. - - Within the embedded pipeline, add a :manual:`$match - ` stage to match the - values of two fields on each side of the join. Note that the following - code uses aliases for the ``name`` and ``variation`` fields - set when :ref:`creating the $lookup stage `: - - .. 
literalinclude:: /includes/aggregation/multi-field-join.js - :language: javascript - :copyable: true - :start-after: start-embedded-pl-match1 - :end-before: end-embedded-pl-match1 - :dedent: - - Within the embedded pipeline, add another :manual:`$match - ` stage to match - orders placed in 2020: - - .. literalinclude:: /includes/aggregation/multi-field-join.js - :language: javascript - :copyable: true - :start-after: start-embedded-pl-match2 - :end-before: end-embedded-pl-match2 - :dedent: - - Within the embedded pipeline, add an :manual:`$unset - ` stage to remove - unneeded fields from the ``orders`` collection side of the join: - - .. literalinclude:: /includes/aggregation/multi-field-join.js - :language: javascript - :copyable: true - :start-after: start-embedded-pl-unset - :end-before: end-embedded-pl-unset - :dedent: - - .. _node-multi-field-agg-lookup-stage: - - After the embedded pipeline is completed, add the - ``$lookup`` stage to the main aggregation pipeline. - Configure this stage to store the processed lookup fields in - an array field called ``orders``: - - .. literalinclude:: /includes/aggregation/multi-field-join.js - :language: javascript - :copyable: true - :start-after: start-lookup - :end-before: end-lookup - :dedent: - - .. step:: Add a match stage for products ordered in 2020 - - Next, add a :manual:`$match - ` stage to only show - products for which there is at least one order in 2020, - based on the ``orders`` array calculated in the previous step: - - .. literalinclude:: /includes/aggregation/multi-field-join.js - :language: javascript - :copyable: true - :start-after: start-match - :end-before: end-match - :dedent: - - .. step:: Add an unset stage to remove unneeded fields - - Finally, add an :manual:`$unset - ` stage. The - ``$unset`` stage removes the ``_id`` and ``description`` - fields from the result documents: - - .. 
literalinclude:: /includes/aggregation/multi-field-join.js - :language: javascript - :copyable: true - :start-after: start-unset - :end-before: end-unset - :dedent: - - .. step:: Run the aggregation pipeline - - Add the following code to the end of your application to perform - the aggregation on the ``products`` collection: - - .. literalinclude:: /includes/aggregation/multi-field-join.js - :language: javascript - :copyable: true - :start-after: start-run-agg - :end-before: end-run-agg - :dedent: - - Finally, run the following command in your shell to start your - application: - - .. code-block:: bash - - node agg_tutorial.js - - .. step:: Interpret results - - The aggregated result contains two documents. The documents - represent products for which there were orders placed in 2020. - Each document contains an ``orders`` array field that lists details - about each order for that product: - - .. code-block:: javascript - :copyable: false - - { - name: 'Asus Laptop', - variation: 'Standard Display', - category: 'ELECTRONICS', - orders: [ - { - customer_id: 'elise_smith@myemail.com', - orderdate: 2020-05-30T08:35:52.000Z, - value: 431.43 - }, - { - customer_id: 'jjones@tepidmail.com', - orderdate: 2020-12-26T08:55:46.000Z, - value: 429.65 - } - ] - } - { - name: 'Morphy Richards Food Mixer', - variation: 'Deluxe', - category: 'KITCHENWARE', - orders: [ - { - customer_id: 'oranieri@warmmail.com', - orderdate: 2020-01-01T08:25:37.000Z, - value: 63.13 - } - ] - } - - The result documents contain details from documents in the - ``orders`` collection and the ``products`` collection, joined by - the product names and variations. - -To view the complete code for this tutorial, see the `Completed Multi-field Join App -`__ -on GitHub. diff --git a/source/aggregation/one-to-one-join.txt b/source/aggregation/one-to-one-join.txt deleted file mode 100644 index e0b944a58..000000000 --- a/source/aggregation/one-to-one-join.txt +++ /dev/null @@ -1,229 +0,0 @@ -.. 
_node-aggregation-one-to-one: - -=============== -One-to-One Join -=============== - -.. contents:: On this page - :local: - :backlinks: none - :depth: 2 - :class: singlecol - -.. facet:: - :name: genre - :values: tutorial - -.. meta:: - :keywords: code example, node.js, lookup, aggregation - :description: Learn to perform a one-to-one join using the MongoDB Node.js Driver to combine data from two collections in an aggregation pipeline. - -Introduction ------------- - -In this tutorial, you can learn how to use the {+driver-short+} to -construct an aggregation pipeline, perform the -aggregation on a collection, and print the results by completing and -running a sample app. - -This aggregation performs a one-to-one join. A one-to-one join occurs -when a document in one collection has a field value that matches a -single document in another collection that has the same field value. The -aggregation matches these documents on the field value and combines -information from both sources into one result. - -.. tip:: - - A one-to-one join does not require the documents to have a - one-to-one relationship. To learn more about this data relationship, - see the Wikipedia entry about :wikipedia:`One-to-one (data model) - `. - -Aggregation Task Summary -~~~~~~~~~~~~~~~~~~~~~~~~ - -This tutorial demonstrates how to combine data from a collection that -describes product information with another collection that describes -customer orders. The results show a list of all orders placed in 2020 that -includes the product details associated with each order. - -This example uses two collections: - -- ``orders``: contains documents describing individual orders - for products in a shop -- ``products``: contains documents describing the products that - a shop sells - -An order can only contain one product, so the aggregation uses a -one-to-one join to match an order document to the document for the -product. 
The collections are joined by a field called ``product_id`` -that exists in documents in both collections. - -Before You Get Started ----------------------- - -Before you start this tutorial, complete the -:ref:`node-agg-tutorial-template-app` instructions to set up a working -Node.js application. - -After you set up the app, access the ``orders`` and ``products`` -collections by adding the following code to the application: - -.. literalinclude:: /includes/aggregation/one-to-one-join.js - :language: javascript - :copyable: true - :start-after: start-colls - :end-before: end-colls - :dedent: - -Delete any existing data and insert sample data into -the ``orders`` collection as shown in the following code: - -.. literalinclude:: /includes/aggregation/one-to-one-join.js - :language: javascript - :copyable: true - :start-after: start-insert-orders - :end-before: end-insert-orders - :dedent: - -Delete any existing data and insert sample data into -the ``products`` collection as shown in the following code: - -.. literalinclude:: /includes/aggregation/one-to-one-join.js - :language: javascript - :copyable: true - :start-after: start-insert-products - :end-before: end-insert-products - :dedent: - -Tutorial --------- - -.. procedure:: - :style: connected - - .. step:: Add a match stage for orders in 2020 - - Add a :manual:`$match - ` stage that matches - orders placed in 2020: - - .. literalinclude:: /includes/aggregation/one-to-one-join.js - :language: javascript - :copyable: true - :start-after: start-match - :end-before: end-match - :dedent: - - .. step:: Add a lookup stage to link the collections - - Next, add a :manual:`$lookup - ` stage. The - ``$lookup`` stage joins the ``product_id`` field in the ``orders`` - collection to the ``id`` field in the ``products`` collection: - - .. literalinclude:: /includes/aggregation/one-to-one-join.js - :language: javascript - :copyable: true - :start-after: start-lookup - :end-before: end-lookup - :dedent: - - .. 
step:: Add set stages to create new document fields - - Next, add two :manual:`$set ` - stages to the pipeline. - - The first ``$set`` stage sets the ``product_mapping`` field - to the first element in the ``product_mapping`` object - created in the previous ``$lookup`` stage. - - The second ``$set`` stage creates two new fields, ``product_name`` - and ``product_category``, from the values in the - ``product_mapping`` object field: - - .. literalinclude:: /includes/aggregation/one-to-one-join.js - :language: javascript - :copyable: true - :start-after: start-set - :end-before: end-set - :dedent: - - .. tip:: - - Because this is a one-to-one join, the ``$lookup`` stage - adds only one array element to the input document. The pipeline - uses the :manual:`$first ` - operator to retrieve the data from this element. - - .. step:: Add an unset stage to remove unneeded fields - - Finally, add an :manual:`$unset - ` stage. The - ``$unset`` stage removes unnecessary fields from the document: - - .. literalinclude:: /includes/aggregation/one-to-one-join.js - :language: javascript - :copyable: true - :start-after: start-unset - :end-before: end-unset - :dedent: - - .. step:: Run the aggregation pipeline - - Add the following code to the end of your application to perform - the aggregation on the ``orders`` collection: - - .. literalinclude:: /includes/aggregation/one-to-one-join.js - :language: javascript - :copyable: true - :start-after: start-run-agg - :end-before: end-run-agg - :dedent: - - Finally, run the following command in your shell to start your - application: - - .. code-block:: bash - - node agg_tutorial.js - - .. step:: Interpret results - - The aggregated result contains three documents. The documents - represent customer orders that occurred in 2020, with the - ``product_name`` and ``product_category`` of the ordered product: - - .. 
code-block:: javascript - :copyable: false - - { - customer_id: 'elise_smith@myemail.com', - orderdate: 2020-05-30T08:35:52.000Z, - value: 431.43, - product_name: 'Asus Laptop', - product_category: 'ELECTRONICS' - } - { - customer_id: 'oranieri@warmmail.com', - orderdate: 2020-01-01T08:25:37.000Z, - value: 63.13, - product_name: 'Morphy Richardds Food Mixer', - product_category: 'KITCHENWARE' - } - { - customer_id: 'jjones@tepidmail.com', - orderdate: 2020-12-26T08:55:46.000Z, - value: 429.65, - product_name: 'Asus Laptop', - product_category: 'ELECTRONICS' - } - - The result consists of documents that contain fields from - documents in the ``orders`` collection and the ``products`` - collection, joined by matching the ``product_id`` field present in - each original document. - -To view the complete code for this tutorial, see the `Completed One-to-one Join App -`__ -on GitHub. diff --git a/source/aggregation/pipeline-stages.txt b/source/aggregation/pipeline-stages.txt index 495bb1800..1ff27e6e9 100644 --- a/source/aggregation/pipeline-stages.txt +++ b/source/aggregation/pipeline-stages.txt @@ -27,225 +27,294 @@ stages by using methods in the {+driver-short+}. Build an Aggregation Pipeline ----------------------------- -You can use the {+driver-short+} to build an aggregation pipeline by adding -aggregation stages and operations to the aggregation framework. See the -following code to learn how to format the framework. +You can use the {+driver-short+} to build an aggregation pipeline by creating a +pipeline variable or passing aggregation stages directly into the aggregation +method. See the following examples to learn more about each of these approaches. +.. tabs:: -.. code-block:: javascript + .. tab:: Create a Pipeline + :tabid: pipeline-definition - // Defines the aggregation pipeline - const pipeline = [ - { $match: { ... } }, - { $group: { ... } } - ]; + .. 
code-block:: javascript - // Executes the aggregation pipeline - const results = coll.aggregate(pipeline); + // Defines the aggregation pipeline + const pipeline = [ + { $match: { ... } }, + { $group: { ... } } + ]; + + // Executes the aggregation pipeline + const results = collection.aggregate(pipeline); + + .. tab:: Direct Aggregation + :tabid: pipeline-direct + + .. code-block:: javascript + + // Defines and executes the aggregation pipeline + collection.aggregate([ + { $match: { ... } }, + { $group: { ... } } + ]); Aggregation Stage Methods ------------------------- -The following table lists the stages in the aggregation pipeline. The methods -are formatted as they are listed when used in Node.js. To learn more about an aggregation -stage and see a code example in a Node.js application, follow the link from the -stage name to its reference page in the {+mdb-server+} manual. +The following table lists the stages in the aggregation pipeline. The stages are +formatted as they are listed when used in Node.js unless noted otherwise in the +description column. To learn more about an aggregation stage and see a code +example in a Node.js application, follow the link from the stage name to its +reference page in the {+mdb-server+} manual. .. list-table:: - :header-rows: 1 - :widths: 20 80 - - * - Stage - - Description - - * - :manual:`$bucket ` - - Categorizes incoming documents into groups, called buckets, - based on a specified expression and bucket boundaries. - - * - :manual:`$bucketAuto ` - - Categorizes incoming documents into a specific number of - groups, called buckets, based on a specified expression. - Bucket boundaries are automatically determined in an attempt - to evenly distribute the documents into the specified number - of buckets. - - * - :manual:`$changeStream ` - - Returns a change stream cursor for the - collection. This stage can occur only once in an aggregation - pipeline and it must occur as the first stage. 
- - * - :manual:`$changeStreamSplitLargeEvent ` - - Splits large change stream events that exceed 16 MB into smaller fragments returned - in a change stream cursor. - - You can use ``$changeStreamSplitLargeEvent`` only in a ``$changeStream`` pipeline, and - it must be the final stage in the pipeline. - - * - :manual:`$count ` - - Returns a count of the number of documents at this stage of - the aggregation pipeline. - - * - :manual:`$densify ` - - Creates new documents in a sequence of documents where certain values in a field are missing. - - * - :manual:`$documents ` - - Returns literal documents from input expressions. - - * - :manual:`$facet ` - - Processes multiple aggregation pipelines - within a single stage on the same set - of input documents. Enables the creation of multi-faceted - aggregations capable of characterizing data across multiple - dimensions, or facets, in a single stage. - - * - :manual:`$geoNear ` - - Returns documents in order of nearest to farthest from a - specified point. This method adds a field to output documents - that contains the distance from the specified point. - - * - :manual:`$graphLookup ` - - Performs a recursive search on a collection. This method adds - a new array field to each output document that contains the traversal - results of the recursive search for that document. - - * - :manual:`$group ` - - Groups input documents by a specified identifier expression - and applies the accumulator expressions, if specified, to - each group. Consumes all input documents and outputs one - document per each distinct group. The output documents - contain only the identifier field and, if specified, accumulated - fields. - - * - :manual:`$limit ` - - Passes the first *n* documents unmodified to the pipeline, - where *n* is the specified limit. For each input document, - outputs either one document (for the first *n* documents) or - zero documents (after the first *n* documents). 
- - * - :manual:`$lookup ` - - Performs a left outer join to another collection in the - *same* database to filter in documents from the "joined" - collection for processing. - - * - :manual:`$match ` - - Filters the document stream to allow only matching documents - to pass unmodified into the next pipeline stage. - For each input document, outputs either one document (a match) or zero - documents (no match). - - * - :manual:`$merge ` - - Writes the resulting documents of the aggregation pipeline to - a collection. The stage can incorporate (insert new - documents, merge documents, replace documents, keep existing - documents, fail the operation, process documents with a - custom update pipeline) the results into an output - collection. To use this stage, it must be - the last stage in the pipeline. - - * - :manual:`$out ` - - Writes the resulting documents of the aggregation pipeline to - a collection. To use this stage, it must be - the last stage in the pipeline. - - * - :manual:`$project ` - - Reshapes each document in the stream, such as by adding new - fields or removing existing fields. For each input document, - outputs one document. - - * - :manual:`$rankFusion ` - - Uses a rank fusion algorithm to combine results from a Vector Search - query and an Atlas Search query. - - * - :manual:`$replaceRoot ` - - Replaces a document with the specified embedded document. The - operation replaces all existing fields in the input document, - including the ``_id`` field. Specify a document embedded in - the input document to promote the embedded document to the - top level. - - The ``$replaceWith`` stage is an alias for the ``$replaceRoot`` stage. - - * - :manual:`$replaceWith ` - - Replaces a document with the specified embedded document. - The operation replaces all existing fields in the input document, including - the ``_id`` field. Specify a document embedded in the input document to promote - the embedded document to the top level. 
- - The ``$replaceWith`` stage is an alias for the ``$replaceRoot`` stage. - - * - :manual:`$sample ` - - Randomly selects the specified number of documents from its - input. - - * - :manual:`$search ` - - Performs a full-text search of the field or fields in an - :atlas:`Atlas ` - collection. - - This stage is available only for MongoDB Atlas clusters, and is not - available for self-managed deployments. To learn more, see - :atlas:`Atlas Search Aggregation Pipeline Stages - ` in the Atlas documentation. - - * - :manual:`$searchMeta ` - - Returns different types of metadata result documents for the - :atlas:`Atlas Search ` query against an - :atlas:`Atlas ` - collection. - - This stage is available only for MongoDB Atlas clusters, - and is not available for self-managed deployments. To learn - more, see :atlas:`Atlas Search Aggregation Pipeline Stages - ` in the Atlas documentation. - - * - :manual:`$set ` - - Adds new fields to documents. Like the ``Project()`` method, - this method reshapes each - document in the stream by adding new fields to - output documents that contain both the existing fields - from the input documents and the newly added fields. - - * - :manual:`$setWindowFields ` - - Groups documents into windows and applies one or more - operators to the documents in each window. - - * - :manual:`$skip ` - - Skips the first *n* documents, where *n* is the specified skip - number, and passes the remaining documents unmodified to the - pipeline. For each input document, outputs either zero - documents (for the first *n* documents) or one document (if - after the first *n* documents). - - * - :manual:`$sort ` - - Reorders the document stream by a specified sort key. The documents remain unmodified. - For each input document, outputs one document. - - * - :manual:`$sortByCount ` - - Groups incoming documents based on the value of a specified - expression, then computes the count of documents in each - distinct group. 
-
-   * - :manual:`$unionWith `
-     - Combines pipeline results from two collections into a single
-       result set.
-
-   * - :manual:`$unwind `
-     - Deconstructs an array field from the input documents to
-       output a document for *each* element. Each output document
-       replaces the array with an element value. For each input
-       document, outputs *n* Documents, where *n* is the number of
-       array elements. *n* can be zero for an empty array.
-
-   * - :manual:`$vectorSearch `
-     - Performs an :abbr:`ANN (Approximate Nearest Neighbor)` or
-       :abbr:`ENN (Exact Nearest Neighbor)` search on a
-       vector in the specified field of an
-       :atlas:`Atlas ` collection.
-
-       This stage is available only for MongoDB Atlas clusters, and is not
-       available for self-managed deployments. To learn more, see
-       :ref:`Atlas Vector Search `.
+   :header-rows: 1
+   :widths: 20 80
+
+   * - Stage
+     - Description
+
+   * - :manual:`$addFields `
+     - Adds new fields to documents. Outputs documents that contain both the
+       existing fields from the input documents and the newly added fields.
+       ``$set`` is an alias for ``$addFields``.
+
+   * - :manual:`$bucket `
+     - Categorizes incoming documents into groups, called buckets,
+       based on a specified expression and bucket boundaries.
+
+   * - :manual:`$bucketAuto `
+     - Categorizes incoming documents into a specific number of
+       groups, called buckets, based on a specified expression.
+       Bucket boundaries are automatically determined in an attempt
+       to evenly distribute the documents into the specified number
+       of buckets.
+
+   * - :manual:`$changeStream `
+     - Returns a change stream cursor for the collection.
+
+       Instead of being passed to the ``.aggregate()`` method,
+       ``$changeStream`` uses the ``.watch()`` method on a ``collection``
+       object.
+
+   * - :manual:`$changeStreamSplitLargeEvent `
+     - Splits large change stream events that exceed 16 MB into smaller fragments returned
+       in a change stream cursor.
+ + Instead of being passed to the ``.aggregate()`` method, + ``$changeStreamSplitLargeEvent`` uses the ``.watch()`` method on a + ``collection`` object. + + * - :manual:`$collStats ` + - Returns statistics regarding a collection or view. + + * - :manual:`$count ` + - Returns a count of the number of documents at this stage of + the aggregation pipeline. + + * - :manual:`$currentOp ` + - Returns a stream of documents containing information on active and/or + dormant operations as well as inactive sessions that are holding locks as + part of a transaction. + + * - :manual:`$densify ` + - Creates new documents in a sequence of documents where certain values in a field are missing. + + * - :manual:`$documents ` + - Returns literal documents from input expressions. + + * - :manual:`$facet ` + - Processes multiple aggregation pipelines + within a single stage on the same set + of input documents. Enables the creation of multi-faceted + aggregations capable of characterizing data across multiple + dimensions, or facets, in a single stage. + + * - :manual:`$geoNear ` + - Returns documents in order of nearest to farthest from a + specified point. This method adds a field to output documents + that contains the distance from the specified point. + + * - :manual:`$graphLookup ` + - Performs a recursive search on a collection. This method adds + a new array field to each output document that contains the traversal + results of the recursive search for that document. + + * - :manual:`$group ` + - Groups input documents by a specified identifier expression + and applies the accumulator expressions, if specified, to + each group. Consumes all input documents and outputs one + document per each distinct group. The output documents + contain only the identifier field and, if specified, accumulated + fields. + + * - :manual:`$indexStats ` + - Returns statistics regarding the use of each index for the collection. 
+ + * - :manual:`$limit ` + - Passes the first *n* documents unmodified to the pipeline, + where *n* is the specified limit. For each input document, + outputs either one document (for the first *n* documents) or + zero documents (after the first *n* documents). + + * - :manual:`$listSampledQueries ` + - Lists sampled queries for all collections or a specific collection. + + * - :manual:`$listSearchIndexes ` + - Returns information about existing :ref:`Atlas Search indexes + ` on a specified collection. + + * - :manual:`$listSessions ` + - Lists all sessions that have been active long enough to propagate to the + ``system.sessions`` collection. + + * - :manual:`$lookup ` + - Performs a left outer join to another collection in the + *same* database to filter in documents from the "joined" + collection for processing. + + * - :manual:`$match ` + - Filters the document stream to allow only matching documents + to pass unmodified into the next pipeline stage. + For each input document, outputs either one document (a match) or zero + documents (no match). + + * - :manual:`$merge ` + - Writes the resulting documents of the aggregation pipeline to + a collection. The stage can incorporate (insert new + documents, merge documents, replace documents, keep existing + documents, fail the operation, process documents with a + custom update pipeline) the results into an output + collection. To use this stage, it must be + the last stage in the pipeline. + + * - :manual:`$out ` + - Writes the resulting documents of the aggregation pipeline to + a collection. To use this stage, it must be + the last stage in the pipeline. + + * - :manual:`$planCacheStats ` + - Returns :manual:`plan cache + ` information + for a collection. + + * - :manual:`$project ` + - Reshapes each document in the stream, such as by adding new + fields or removing existing fields. For each input document, + outputs one document. 
+ + * - :manual:`$querySettings ` + - Returns query settings previously added with :manual:`setQuerySettings `. + + *New in version 8.0.* + + * - :manual:`$queryStats ` + - Returns runtime statistics for recorded queries. + + * - :manual:`$redact ` + - Reshapes each document in the stream by restricting the content for each + document based on information stored in the documents themselves. + Incorporates the functionality of :manual:`$project + ` and :manual:`$match + `. Can be used to implement field + level redaction. For each input document, outputs either one or zero + documents. + + * - :manual:`$replaceRoot ` + - Replaces a document with the specified embedded document. The + operation replaces all existing fields in the input document, + including the ``_id`` field. Specify a document embedded in + the input document to promote the embedded document to the + top level. + + The ``$replaceWith`` stage is an alias for the ``$replaceRoot`` stage. + + * - :manual:`$replaceWith ` + - Replaces a document with the specified embedded document. + The operation replaces all existing fields in the input document, including + the ``_id`` field. Specify a document embedded in the input document to promote + the embedded document to the top level. + + The ``$replaceWith`` stage is an alias for the ``$replaceRoot`` stage. + + * - :manual:`$sample ` + - Randomly selects the specified number of documents from its + input. + + * - :manual:`$search ` + - Performs a full-text search of the field or fields in an + :atlas:`Atlas ` + collection. + + This stage is available only for MongoDB Atlas clusters, and is not + available for self-managed deployments. To learn more, see + :atlas:`Atlas Search Aggregation Pipeline Stages + ` in the Atlas documentation. + + * - :manual:`$searchMeta ` + - Returns different types of metadata result documents for the + :atlas:`Atlas Search ` query against an + :atlas:`Atlas ` + collection. 
+ + This stage is available only for MongoDB Atlas clusters, + and is not available for self-managed deployments. To learn + more, see :atlas:`Atlas Search Aggregation Pipeline Stages + ` in the Atlas documentation. + + * - :manual:`$set ` + - Adds new fields to documents. Like the ``Project()`` method, + this method reshapes each + document in the stream by adding new fields to + output documents that contain both the existing fields + from the input documents and the newly added fields. + + * - :manual:`$setWindowFields ` + - Groups documents into windows and applies one or more + operators to the documents in each window. + + * - :manual:`$skip ` + - Skips the first *n* documents, where *n* is the specified skip + number, and passes the remaining documents unmodified to the + pipeline. For each input document, outputs either zero + documents (for the first *n* documents) or one document (if + after the first *n* documents). + + * - :manual:`$sort ` + - Reorders the document stream by a specified sort key. The documents remain unmodified. + For each input document, outputs one document. + + * - :manual:`$sortByCount ` + - Groups incoming documents based on the value of a specified + expression, then computes the count of documents in each + distinct group. + + * - :manual:`$unionWith ` + - Combines pipeline results from two collections into a single + result set. + + * - :manual:`$unset ` + - Removes/excludes fields from documents. + + ``$unset`` is an alias for ``$project`` that removes fields. + + * - :manual:`$unwind ` + - Deconstructs an array field from the input documents to + output a document for *each* element. Each output document + replaces the array with an element value. For each input + document, outputs *n* Documents, where *n* is the number of + array elements. *n* can be zero for an empty array. 
+ + * - :manual:`$vectorSearch ` + - Performs an :abbr:`ANN (Approximate Nearest Neighbor)` or + :abbr:`ENN (Exact Nearest Neighbor)` search on a + vector in the specified field of an + :atlas:`Atlas ` collection. + + This stage is available only for MongoDB Atlas clusters, and is not + available for self-managed deployments. To learn more, see + :ref:`Atlas Vector Search `. API Documentation ~~~~~~~~~~~~~~~~~ diff --git a/source/aggregation/unpack-arrays.txt b/source/aggregation/unpack-arrays.txt deleted file mode 100644 index 392183452..000000000 --- a/source/aggregation/unpack-arrays.txt +++ /dev/null @@ -1,210 +0,0 @@ -.. _node-aggregation-arrays: - -======================= -Unpack Arrays and Group -======================= - -.. contents:: On this page - :local: - :backlinks: none - :depth: 2 - :class: singlecol - -.. facet:: - :name: genre - :values: tutorial - -.. meta:: - :keywords: code example, node.js, analyze, array - :description: Learn to use the MongoDB Node.js Driver to create an aggregation pipeline that unpacks arrays, filters, groups, and computes fields in MongoDB. - -Introduction ------------- - -In this tutorial, you can learn how to use the {+driver-short+} to -construct an aggregation pipeline, perform the -aggregation on a collection, and print the results by completing and -running a sample app. This aggregation performs the following operations: - -- Unwinds an array field into separate documents -- Matches a subset of documents by a field value -- Groups documents by common field values -- Adds computed fields to each result document - -Aggregation Task Summary -~~~~~~~~~~~~~~~~~~~~~~~~ - -This tutorial demonstrates how to create insights from customer order -data. The results show the list of products ordered that cost more than -$15, and each document contains the number of units sold and the total -sale value for each product. - -This example uses one collection, ``orders``, which contains documents -describing product orders. 
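To make the array structure concrete, here is a hypothetical ``orders`` document and a plain-JavaScript emulation of what ``$unwind`` does to its ``products`` array. The field names mirror the ones this tutorial references (``customer_id``, ``products.prod_id``, ``products.price``); the real seed data is inserted by the tutorial's included snippets:

```javascript
// Hypothetical orders document, shaped after the fields this tutorial
// references (customer_id, orderdate, products.prod_id, products.price).
// The tutorial's actual seed data comes from its literalinclude snippets.
const order = {
  customer_id: "elise_smith@myemail.com",
  orderdate: new Date("2020-05-30T08:35:52Z"),
  products: [
    { prod_id: "abc12345", name: "Asus Laptop", price: 431 },
    { prod_id: "def45678", name: "Karcher Hose Set", price: 22 },
  ],
};

// $unwind's effect, emulated in plain JavaScript: each array element
// becomes its own output document, with the array field replaced by
// that single element.
const unwound = [order].flatMap((doc) =>
  doc.products.map((p) => ({ ...doc, products: p }))
);

console.log(unwound.length); // 2: one output document per product
```

After this emulated unwind, each document carries a single ``products`` subdocument, which is what lets later stages match on ``products.price`` and group on ``products.prod_id``.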
Since each order contains multiple products, -the first step of the aggregation is unpacking the ``products`` array -into individual product order documents. - -Before You Get Started ----------------------- - -Before you start this tutorial, complete the -:ref:`node-agg-tutorial-template-app` instructions to set up a working -Node.js application. - -After you set up the app, access the ``orders`` collection by adding the -following code to the application: - -.. literalinclude:: /includes/aggregation/unpack-arrays.js - :language: javascript - :copyable: true - :start-after: start-coll - :end-before: end-coll - :dedent: - -Delete any existing data and insert sample data into -the ``orders`` collection as shown in the following code: - -.. literalinclude:: /includes/aggregation/unpack-arrays.js - :language: javascript - :copyable: true - :start-after: start-insert-orders - :end-before: end-insert-orders - :dedent: - -Tutorial --------- - -.. procedure:: - :style: connected - - .. step:: Add an unwind stage to unpack the array of product orders - - First, add an :manual:`$unwind - ` stage to separate the - entries in the ``products`` array into individual documents: - - .. literalinclude:: /includes/aggregation/unpack-arrays.js - :language: javascript - :copyable: true - :start-after: start-unwind - :end-before: end-unwind - :dedent: - - .. step:: Add a match stage for products that cost more than $15 - - Next, add a :manual:`$match - ` stage that matches - products with a ``products.price`` value greater than ``15``: - - .. literalinclude:: /includes/aggregation/unpack-arrays.js - :language: javascript - :copyable: true - :start-after: start-match - :end-before: end-match - :dedent: - - .. step:: Add a group stage to group by product type - - Add a :manual:`$group - ` stage to group - orders by the value of the ``prod_id`` field. 
In this - stage, add aggregation operations that create the - following fields in the result documents: - - - ``product``: the product name - - ``total_value``: the total value of all the sales of the product - - ``quantity``: the number of orders for the product - - .. literalinclude:: /includes/aggregation/unpack-arrays.js - :language: javascript - :copyable: true - :start-after: start-group - :end-before: end-group - :dedent: - - .. step:: Add a set stage to display the product ID - - Add a :manual:`$set - ` stage to recreate the - ``product_id`` field from the values in the ``_id`` field - that were set during the ``$group`` stage: - - .. literalinclude:: /includes/aggregation/unpack-arrays.js - :language: javascript - :copyable: true - :start-after: start-set - :end-before: end-set - :dedent: - - .. step:: Add an unset stage to remove unneeded fields - - Finally, add an :manual:`$unset - ` stage. The - ``$unset`` stage removes the ``_id`` field from the result - documents: - - .. literalinclude:: /includes/aggregation/unpack-arrays.js - :language: javascript - :copyable: true - :start-after: start-unset - :end-before: end-unset - :dedent: - - .. step:: Run the aggregation pipeline - - Add the following code to the end of your application to perform - the aggregation on the ``orders`` collection: - - .. literalinclude:: /includes/aggregation/unpack-arrays.js - :language: javascript - :copyable: true - :start-after: start-run-agg - :end-before: end-run-agg - :dedent: - - Finally, run the following command in your shell to start your - application: - - .. code-block:: bash - - node agg_tutorial.js - - .. step:: Interpret results - - The aggregation returns the following summary of customers' orders - from 2020: - - .. 
code-block:: javascript - :copyable: false - - { - product: 'Asus Laptop', - total_value: 860, - quantity: 2, - product_id: 'abc12345' - } - { - product: 'Morphy Richards Food Mixer', - total_value: 431, - quantity: 1, - product_id: 'pqr88223' - } - { - product: 'Russell Hobbs Chrome Kettle', - total_value: 16, - quantity: 1, - product_id: 'xyz11228' - } - { - product: 'Karcher Hose Set', - total_value: 66, - quantity: 3, - product_id: 'def45678' - } - - The result documents contain details about the total value and - quantity of orders for products that cost more than $15. - -To view the complete code for this tutorial, see the `Completed Unpack Arrays App -`__ -on GitHub. From dfe3245dd4e654fcc32b06bf32d5e205f7330249 Mon Sep 17 00:00:00 2001 From: Melanie Ballard Date: Mon, 23 Jun 2025 09:51:17 -0400 Subject: [PATCH 7/7] DOCSP-50497 fix table and code directives --- source/aggregation/pipeline-stages.txt | 487 +++++++++++++------------ 1 file changed, 244 insertions(+), 243 deletions(-) diff --git a/source/aggregation/pipeline-stages.txt b/source/aggregation/pipeline-stages.txt index 1ff27e6e9..6640dd6c7 100644 --- a/source/aggregation/pipeline-stages.txt +++ b/source/aggregation/pipeline-stages.txt @@ -50,7 +50,7 @@ method. See the following examples to learn more about each of these approaches. .. tab:: Direct Aggregation :tabid: pipeline-direct - .. code-block:: javascript + .. code-block:: javascript // Defines and executes the aggregation pipeline collection.aggregate([ @@ -68,253 +68,254 @@ description column. To learn more about an aggregation stage and see a code example in a Node.js application, follow the link from the stage name to its reference page in the {+mdb-server+} manual. + .. list-table:: :header-rows: 1 :widths: 20 80 - * - Stage - - Description - - * - :manual:`$addFields ` - * - Adds new fields to documents. Outputs documents that contain both the - existing fields from the input documents and the newly added fields. 
-       ``$set`` is an alias for ``$addFields``.
-
-   * - :manual:`$bucket `
-     - Categorizes incoming documents into groups, called buckets,
-       based on a specified expression and bucket boundaries.
-
-   * - :manual:`$bucketAuto `
-     - Categorizes incoming documents into a specific number of
-       groups, called buckets, based on a specified expression.
-       Bucket boundaries are automatically determined in an attempt
-       to evenly distribute the documents into the specified number
-       of buckets.
-
-   * - :manual:`$changeStream `
-     - Returns a change stream cursor for the collection.
-
-       Instead of being passed to the ``.aggregate()`` method,,
-       ``$changeStream`` uses the ``.watch()`` method on a ``collection``
-       object.
-
-   * - :manual:`$changeStreamSplitLargeEvent `
-     - Splits large change stream events that exceed 16 MB into smaller fragments returned
-       in a change stream cursor.
-
-       Instead of being passed to the ``.aggregate()`` method,
-       ``$changeStreamSplitLargeEvent`` uses the ``.watch()`` method on a
-       ``collection`` object.
-
-   * - :manual:`$collStats `
-     - Returns statistics regarding a collection or view.
-
-   * - :manual:`$count `
-     - Returns a count of the number of documents at this stage of
-       the aggregation pipeline.
-
-   * - :manual:`$currentOp `
-     - Returns a stream of documents containing information on active and/or
-       dormant operations as well as inactive sessions that are holding locks as
-       part of a transaction.
-
-   * - :manual:`$densify `
-     - Creates new documents in a sequence of documents where certain values in a field are missing.
-
-   * - :manual:`$documents `
-     - Returns literal documents from input expressions.
-
-   * - :manual:`$facet `
-     - Processes multiple aggregation pipelines
-       within a single stage on the same set
-       of input documents. Enables the creation of multi-faceted
-       aggregations capable of characterizing data across multiple
-       dimensions, or facets, in a single stage.
-
-   * - :manual:`$geoNear `
-     - Returns documents in order of nearest to farthest from a
-       specified point. This method adds a field to output documents
-       that contains the distance from the specified point.
-
-   * - :manual:`$graphLookup `
-     - Performs a recursive search on a collection. This method adds
-       a new array field to each output document that contains the traversal
-       results of the recursive search for that document.
-
-   * - :manual:`$group `
-     - Groups input documents by a specified identifier expression
-       and applies the accumulator expressions, if specified, to
-       each group. Consumes all input documents and outputs one
-       document per each distinct group. The output documents
-       contain only the identifier field and, if specified, accumulated
-       fields.
-
-   * - :manual:`$indexStats `
-     - Returns statistics regarding the use of each index for the collection.
-
-   * - :manual:`$limit `
-     - Passes the first *n* documents unmodified to the pipeline,
-       where *n* is the specified limit. For each input document,
-       outputs either one document (for the first *n* documents) or
-       zero documents (after the first *n* documents).
-
-   * - :manual:`$listSampledQueries `
-     - Lists sampled queries for all collections or a specific collection.
-
-   * - :manual:`$listSearchIndexes `
-     - Returns information about existing :ref:`Atlas Search indexes
-       ` on a specified collection.
-
-   * - :manual:`$listSessions `
-     - Lists all sessions that have been active long enough to propagate to the
-       ``system.sessions`` collection.
-
-   * - :manual:`$lookup `
-     - Performs a left outer join to another collection in the
-       *same* database to filter in documents from the "joined"
-       collection for processing.
-
-   * - :manual:`$match `
-     - Filters the document stream to allow only matching documents
-       to pass unmodified into the next pipeline stage.
-       For each input document, outputs either one document (a match) or zero
-       documents (no match).
-
-   * - :manual:`$merge `
-     - Writes the resulting documents of the aggregation pipeline to
-       a collection. The stage can incorporate (insert new
-       documents, merge documents, replace documents, keep existing
-       documents, fail the operation, process documents with a
-       custom update pipeline) the results into an output
-       collection. To use this stage, it must be
-       the last stage in the pipeline.
-
-   * - :manual:`$out `
-     - Writes the resulting documents of the aggregation pipeline to
-       a collection. To use this stage, it must be
-       the last stage in the pipeline.
-
-   * - :manual:`$planCacheStats `
-     - Returns :manual:`plan cache
-       ` information
-       for a collection.
-
-   * - :manual:`$project `
-     - Reshapes each document in the stream, such as by adding new
-       fields or removing existing fields. For each input document,
-       outputs one document.
-
-   * - :manual:`$querySettings `
-     - Returns query settings previously added with :manual:`setQuerySettings `.
-
-       *New in version 8.0.*
-
-   * - :manual:`$queryStats `
-     - Returns runtime statistics for recorded queries.
-
-   * - :manual:`$redact `
-     - Reshapes each document in the stream by restricting the content for each
-       document based on information stored in the documents themselves.
-       Incorporates the functionality of :manual:`$project
-       ` and :manual:`$match
-       `. Can be used to implement field
-       level redaction. For each input document, outputs either one or zero
-       documents.
-
-   * - :manual:`$replaceRoot `
-     - Replaces a document with the specified embedded document. The
-       operation replaces all existing fields in the input document,
-       including the ``_id`` field. Specify a document embedded in
-       the input document to promote the embedded document to the
-       top level.
-
-       The ``$replaceWith`` stage is an alias for the ``$replaceRoot`` stage.
-
-   * - :manual:`$replaceWith `
-     - Replaces a document with the specified embedded document.
-       The operation replaces all existing fields in the input document, including
-       the ``_id`` field. Specify a document embedded in the input document to promote
-       the embedded document to the top level.
-
-       The ``$replaceWith`` stage is an alias for the ``$replaceRoot`` stage.
-
-   * - :manual:`$sample `
-     - Randomly selects the specified number of documents from its
-       input.
-
-   * - :manual:`$search `
-     - Performs a full-text search of the field or fields in an
-       :atlas:`Atlas `
-       collection.
-
-       This stage is available only for MongoDB Atlas clusters, and is not
-       available for self-managed deployments. To learn more, see
-       :atlas:`Atlas Search Aggregation Pipeline Stages
-       ` in the Atlas documentation.
-
-   * - :manual:`$searchMeta `
-     - Returns different types of metadata result documents for the
-       :atlas:`Atlas Search ` query against an
-       :atlas:`Atlas `
-       collection.
-
-       This stage is available only for MongoDB Atlas clusters,
-       and is not available for self-managed deployments. To learn
-       more, see :atlas:`Atlas Search Aggregation Pipeline Stages
-       ` in the Atlas documentation.
-
-   * - :manual:`$set `
-     - Adds new fields to documents. Like the ``Project()`` method,
-       this method reshapes each
-       document in the stream by adding new fields to
-       output documents that contain both the existing fields
-       from the input documents and the newly added fields.
-
-   * - :manual:`$setWindowFields `
-     - Groups documents into windows and applies one or more
-       operators to the documents in each window.
-
-   * - :manual:`$skip `
-     - Skips the first *n* documents, where *n* is the specified skip
-       number, and passes the remaining documents unmodified to the
-       pipeline. For each input document, outputs either zero
-       documents (for the first *n* documents) or one document (if
-       after the first *n* documents).
-
-   * - :manual:`$sort `
-     - Reorders the document stream by a specified sort key. The documents remain unmodified.
-       For each input document, outputs one document.
-
-   * - :manual:`$sortByCount `
-     - Groups incoming documents based on the value of a specified
-       expression, then computes the count of documents in each
-       distinct group.
-
-   * - :manual:`$unionWith `
-     - Combines pipeline results from two collections into a single
-       result set.
-
-   * - :manual:`$unset `
-     - Removes/excludes fields from documents.
+   * - Stage
+     - Description
+
+   * - :manual:`$addFields `
+     - Adds new fields to documents. Outputs documents that contain both the
+       existing fields from the input documents and the newly added fields.
+       ``$set`` is an alias for ``$addFields``.
+
+   * - :manual:`$bucket `
+     - Categorizes incoming documents into groups, called buckets,
+       based on a specified expression and bucket boundaries.
+
+   * - :manual:`$bucketAuto `
+     - Categorizes incoming documents into a specific number of
+       groups, called buckets, based on a specified expression.
+       Bucket boundaries are automatically determined in an attempt
+       to evenly distribute the documents into the specified number
+       of buckets.
+
+   * - :manual:`$changeStream `
+     - Returns a change stream cursor for the collection.
-       ``$unset`` is an alias for ``$project`` that removes fields.
-
-   * - :manual:`$unwind `
-     - Deconstructs an array field from the input documents to
-       output a document for *each* element. Each output document
-       replaces the array with an element value. For each input
-       document, outputs *n* Documents, where *n* is the number of
-       array elements. *n* can be zero for an empty array.
-
-   * - :manual:`$vectorSearch `
-     - Performs an :abbr:`ANN (Approximate Nearest Neighbor)` or
-       :abbr:`ENN (Exact Nearest Neighbor)` search on a
-       vector in the specified field of an
-       :atlas:`Atlas ` collection.
-
-       This stage is available only for MongoDB Atlas clusters, and is not
-       available for self-managed deployments. To learn more, see
-       :ref:`Atlas Vector Search `.
+
+       Instead of being passed to the ``.aggregate()`` method,
+       ``$changeStream`` uses the ``.watch()`` method on a ``collection``
+       object.
+
+   * - :manual:`$changeStreamSplitLargeEvent `
+     - Splits large change stream events that exceed 16 MB into smaller fragments returned
+       in a change stream cursor.
+
+       Instead of being passed to the ``.aggregate()`` method,
+       ``$changeStreamSplitLargeEvent`` uses the ``.watch()`` method on a
+       ``collection`` object.
+
+   * - :manual:`$collStats `
+     - Returns statistics regarding a collection or view.
+
+   * - :manual:`$count `
+     - Returns a count of the number of documents at this stage of
+       the aggregation pipeline.
+
+   * - :manual:`$currentOp `
+     - Returns a stream of documents containing information on active and/or
+       dormant operations as well as inactive sessions that are holding locks as
+       part of a transaction.
+
+   * - :manual:`$densify `
+     - Creates new documents in a sequence of documents where certain values in a field are missing.
+
+   * - :manual:`$documents `
+     - Returns literal documents from input expressions.
+
+   * - :manual:`$facet `
+     - Processes multiple aggregation pipelines
+       within a single stage on the same set
+       of input documents. Enables the creation of multi-faceted
+       aggregations capable of characterizing data across multiple
+       dimensions, or facets, in a single stage.
+
+   * - :manual:`$geoNear `
+     - Returns documents in order of nearest to farthest from a
+       specified point. This method adds a field to output documents
+       that contains the distance from the specified point.
+
+   * - :manual:`$graphLookup `
+     - Performs a recursive search on a collection. This method adds
+       a new array field to each output document that contains the traversal
+       results of the recursive search for that document.
+
+   * - :manual:`$group `
+     - Groups input documents by a specified identifier expression
+       and applies the accumulator expressions, if specified, to
+       each group. Consumes all input documents and outputs one
+       document per each distinct group. The output documents
+       contain only the identifier field and, if specified, accumulated
+       fields.
+
+   * - :manual:`$indexStats `
+     - Returns statistics regarding the use of each index for the collection.
+
+   * - :manual:`$limit `
+     - Passes the first *n* documents unmodified to the pipeline,
+       where *n* is the specified limit. For each input document,
+       outputs either one document (for the first *n* documents) or
+       zero documents (after the first *n* documents).
+
+   * - :manual:`$listSampledQueries `
+     - Lists sampled queries for all collections or a specific collection.
+
+   * - :manual:`$listSearchIndexes `
+     - Returns information about existing :ref:`Atlas Search indexes
+       ` on a specified collection.
+
+   * - :manual:`$listSessions `
+     - Lists all sessions that have been active long enough to propagate to the
+       ``system.sessions`` collection.
+
+   * - :manual:`$lookup `
+     - Performs a left outer join to another collection in the
+       *same* database to filter in documents from the "joined"
+       collection for processing.
+
+   * - :manual:`$match `
+     - Filters the document stream to allow only matching documents
+       to pass unmodified into the next pipeline stage.
+       For each input document, outputs either one document (a match) or zero
+       documents (no match).
+
+   * - :manual:`$merge `
+     - Writes the resulting documents of the aggregation pipeline to
+       a collection. The stage can incorporate (insert new
+       documents, merge documents, replace documents, keep existing
+       documents, fail the operation, process documents with a
+       custom update pipeline) the results into an output
+       collection. To use this stage, it must be
+       the last stage in the pipeline.
+
+   * - :manual:`$out `
+     - Writes the resulting documents of the aggregation pipeline to
+       a collection. To use this stage, it must be
+       the last stage in the pipeline.
+
+   * - :manual:`$planCacheStats `
+     - Returns :manual:`plan cache
+       ` information
+       for a collection.
+
+   * - :manual:`$project `
+     - Reshapes each document in the stream, such as by adding new
+       fields or removing existing fields. For each input document,
+       outputs one document.
+
+   * - :manual:`$querySettings `
+     - Returns query settings previously added with :manual:`setQuerySettings `.
+
+       *New in version 8.0.*
+
+   * - :manual:`$queryStats `
+     - Returns runtime statistics for recorded queries.
+
+   * - :manual:`$redact `
+     - Reshapes each document in the stream by restricting the content for each
+       document based on information stored in the documents themselves.
+       Incorporates the functionality of :manual:`$project
+       ` and :manual:`$match
+       `. Can be used to implement field
+       level redaction. For each input document, outputs either one or zero
+       documents.
+
+   * - :manual:`$replaceRoot `
+     - Replaces a document with the specified embedded document. The
+       operation replaces all existing fields in the input document,
+       including the ``_id`` field. Specify a document embedded in
+       the input document to promote the embedded document to the
+       top level.
+
+       The ``$replaceWith`` stage is an alias for the ``$replaceRoot`` stage.
+
+   * - :manual:`$replaceWith `
+     - Replaces a document with the specified embedded document.
+       The operation replaces all existing fields in the input document, including
+       the ``_id`` field. Specify a document embedded in the input document to promote
+       the embedded document to the top level.
+
+       The ``$replaceWith`` stage is an alias for the ``$replaceRoot`` stage.
+
+   * - :manual:`$sample `
+     - Randomly selects the specified number of documents from its
+       input.
+
+   * - :manual:`$search `
+     - Performs a full-text search of the field or fields in an
+       :atlas:`Atlas `
+       collection.
+
+       This stage is available only for MongoDB Atlas clusters, and is not
+       available for self-managed deployments. To learn more, see
+       :atlas:`Atlas Search Aggregation Pipeline Stages
+       ` in the Atlas documentation.
+
+   * - :manual:`$searchMeta `
+     - Returns different types of metadata result documents for the
+       :atlas:`Atlas Search ` query against an
+       :atlas:`Atlas `
+       collection.
+
+       This stage is available only for MongoDB Atlas clusters,
+       and is not available for self-managed deployments. To learn
+       more, see :atlas:`Atlas Search Aggregation Pipeline Stages
+       ` in the Atlas documentation.
+
+   * - :manual:`$set `
+     - Adds new fields to documents. Like the ``$project`` stage,
+       this stage reshapes each
+       document in the stream by adding new fields to
+       output documents that contain both the existing fields
+       from the input documents and the newly added fields.
+
+   * - :manual:`$setWindowFields `
+     - Groups documents into windows and applies one or more
+       operators to the documents in each window.
+
+   * - :manual:`$skip `
+     - Skips the first *n* documents, where *n* is the specified skip
+       number, and passes the remaining documents unmodified to the
+       pipeline. For each input document, outputs either zero
+       documents (for the first *n* documents) or one document
+       (after the first *n* documents).
+
+   * - :manual:`$sort `
+     - Reorders the document stream by a specified sort key. The documents remain unmodified.
+       For each input document, outputs one document.
+
+   * - :manual:`$sortByCount `
+     - Groups incoming documents based on the value of a specified
+       expression, then computes the count of documents in each
+       distinct group.
+
+   * - :manual:`$unionWith `
+     - Combines pipeline results from two collections into a single
+       result set.
+
+   * - :manual:`$unset `
+     - Removes/excludes fields from documents.
+
+       ``$unset`` is an alias for ``$project`` that removes fields.
+
+   * - :manual:`$unwind `
+     - Deconstructs an array field from the input documents to
+       output a document for *each* element. Each output document
+       replaces the array with an element value.
For each input + document, outputs *n* Documents, where *n* is the number of + array elements. *n* can be zero for an empty array. + + * - :manual:`$vectorSearch ` + - Performs an :abbr:`ANN (Approximate Nearest Neighbor)` or + :abbr:`ENN (Exact Nearest Neighbor)` search on a + vector in the specified field of an + :atlas:`Atlas ` collection. + + This stage is available only for MongoDB Atlas clusters, and is not + available for self-managed deployments. To learn more, see + :ref:`Atlas Vector Search `. API Documentation ~~~~~~~~~~~~~~~~~