collection – Collection level operations

Collection level utilities for Mongo.

pymongo.ASCENDING
Ascending sort order.
pymongo.DESCENDING
Descending sort order.
pymongo.GEO2D

Index specifier for a 2-dimensional geospatial index.

New in version 1.5.1.

Note

Geo-spatial indexing requires server version >= 1.3.3.
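
For example, a two-dimensional geospatial index could be declared like this (the collection and field names are only illustrative):

>>> db.places.create_index([("loc", pymongo.GEO2D)])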

class pymongo.collection.Collection(database, name, options=None, create=False, **kwargs)

Get / create a Mongo collection.

Raises TypeError if name is not an instance of basestring. Raises InvalidName if name is not a valid collection name. Any additional keyword arguments will be used as options passed to the create command. See create_collection() for valid options.

If create is True or additional keyword arguments are present a create command will be sent. Otherwise, a create command will not be sent and the collection will be created implicitly on first use.

Parameters:
  • database: the database to get a collection from
  • name: the name of the collection to get
  • options: DEPRECATED dictionary of collection options
  • create (optional): if True, force collection creation even without options being set
  • **kwargs (optional): additional keyword arguments will be passed as options for the create collection command

Changed in version 1.5: deprecating options in favor of kwargs

New in version 1.5: the create parameter

See general MongoDB documentation

collections
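
Collections are normally obtained through attribute or item access on a Database rather than by calling this constructor directly. As a rough sketch (the capped and size options shown are just illustrative create_collection() options):

>>> collection = db.test      # created implicitly on first use
>>> from pymongo.collection import Collection
>>> log = Collection(db, "log", create=True, capped=True, size=100000)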

count()

Get the number of documents in this collection.

To get the number of documents matching a specific query use pymongo.cursor.Cursor.count().
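
For example (assuming db.test already contains some documents):

>>> db.test.count()                           # all documents
>>> db.test.find({"hello": "world"}).count()  # documents matching a query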

create_index(key_or_list, deprecated_unique=None, ttl=300, **kwargs)

Creates an index on this collection.

Takes either a single key or a list of (key, direction) pairs. The key(s) must be an instance of basestring, and the directions must be one of (ASCENDING, DESCENDING, GEO2D). Returns the name of the created index.

To create a single key index on the key 'mike' we just use a string argument:

>>> my_collection.create_index("mike")

For a compound index on 'mike' descending and 'eliot' ascending we need to use a list of tuples:

>>> my_collection.create_index([("mike", pymongo.DESCENDING),
...                             ("eliot", pymongo.ASCENDING)])

All optional index creation parameters should be passed as keyword arguments to this method. Valid options include:

  • name: custom name to use for this index - if none is given, a name will be generated
  • unique: should this index guarantee uniqueness?
  • dropDups or drop_dups: should we drop duplicates during index creation when creating a unique index?
  • min: minimum value for keys in a GEO2D index
  • max: maximum value for keys in a GEO2D index

Parameters:
  • key_or_list: a single key or a list of (key, direction) pairs specifying the index to create
  • deprecated_unique: DEPRECATED - use unique as a kwarg
  • ttl (optional): time window (in seconds) during which this index will be recognized by subsequent calls to ensure_index() - see documentation for ensure_index() for details
  • kwargs (optional): any additional index creation options (see the above list) should be passed as keyword arguments

Changed in version 1.5.1: Accept kwargs to support all index creation options.

New in version 1.5: The name parameter.

See also

ensure_index()

See general MongoDB documentation

indexes
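
As an illustration of the keyword-argument options, a unique index with a custom name might be created like this (the key and index names are only examples):

>>> my_collection.create_index("user_id", unique=True, dropDups=True,
...                            name="unique_user_id")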

database

The Database that this Collection is a part of.

Changed in version 1.3: database is now a property rather than a method. The database() method is deprecated.

distinct(key)

Get a list of distinct values for key among all documents in this collection.

Raises TypeError if key is not an instance of basestring.

To get the distinct values for a key in the result set of a query use pymongo.cursor.Cursor.distinct().

Parameters:
  • key: name of key for which we want to get the distinct values

Note

Requires server version >= 1.1.0

New in version 1.1.1.
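
For example (the documents and result shown are illustrative):

>>> db.test.insert([{"x": 1}, {"x": 2}, {"x": 2}])
>>> db.test.distinct("x")
[1, 2]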

drop_index(index_or_name)

Drops the specified index on this collection.

Can be used on non-existent collections or collections with no indexes. Raises OperationFailure on an error. index_or_name can be either an index name (as returned by create_index()), or an index specifier (as passed to create_index()). An index specifier should be a list of (key, direction) pairs. Raises TypeError if index_or_name is not an instance of (str, unicode, list).

Warning

If a custom name was used on index creation (by passing the name parameter to create_index() or ensure_index()) the index must be dropped by name.

Parameters:
  • index_or_name: index (or name of index) to drop
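
For example, an index can be dropped by the name returned from create_index(), or by passing the same specifier that was used to create it:

>>> name = db.test.create_index([("mike", pymongo.DESCENDING)])
>>> db.test.drop_index(name)                            # drop by name
>>> db.test.create_index([("mike", pymongo.DESCENDING)])
>>> db.test.drop_index([("mike", pymongo.DESCENDING)])  # drop by specifier
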
drop_indexes()

Drops all indexes on this collection.

Can be used on non-existent collections or collections with no indexes. Raises OperationFailure on an error.

ensure_index(key_or_list, deprecated_unique=None, ttl=300, **kwargs)

Ensures that an index exists on this collection.

Takes either a single key or a list of (key, direction) pairs. The key(s) must be an instance of basestring, and the direction(s) must be one of (ASCENDING, DESCENDING, GEO2D). See create_index() for a detailed example.

Unlike create_index(), which attempts to create an index unconditionally, ensure_index() takes advantage of some caching within the driver such that it only attempts to create indexes that might not already exist. When an index is created (or ensured) by PyMongo it is “remembered” for ttl seconds. Repeated calls to ensure_index() within that time limit will be lightweight - they will not attempt to actually create the index.

Care must be taken when the database is being accessed through multiple connections at once. If an index is created using PyMongo and then deleted using another connection any call to ensure_index() within the cache window will fail to re-create the missing index.

Returns the name of the created index if an index is actually created. Returns None if the index already exists.

All optional index creation parameters should be passed as keyword arguments to this method. Valid options include:

  • name: custom name to use for this index - if none is given, a name will be generated
  • unique: should this index guarantee uniqueness?
  • dropDups or drop_dups: should we drop duplicates during index creation when creating a unique index?
  • min: minimum value for keys in a GEO2D index
  • max: maximum value for keys in a GEO2D index

Parameters:
  • key_or_list: a single key or a list of (key, direction) pairs specifying the index to create
  • deprecated_unique: DEPRECATED - use unique as a kwarg
  • ttl (optional): time window (in seconds) during which this index will be recognized by subsequent calls to ensure_index()
  • kwargs (optional): any additional index creation options (see the above list) should be passed as keyword arguments

Changed in version 1.5.1: Accept kwargs to support all index creation options.

New in version 1.5: The name parameter.

See also

create_index()
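
A minimal sketch of the caching behavior (return values are described above rather than shown literally):

>>> db.test.ensure_index("mike")  # creates the index and returns its name
>>> db.test.ensure_index("mike")  # within ttl seconds: no-op, returns None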

find(spec=None, fields=None, skip=0, limit=0, timeout=True, snapshot=False, tailable=False, _sock=None, _must_use_master=False, _is_command=False)

Query the database.

The spec argument is a prototype document that all results must match. For example:

>>> db.test.find({"hello": "world"})

only matches documents that have a key “hello” with value “world”. Matches can have other keys in addition to “hello”. The fields argument is used to specify a subset of fields that should be included in the result documents. By limiting results to a certain subset of fields you can cut down on network traffic and decoding time.

Raises TypeError if any of the arguments are of improper type. Returns an instance of Cursor corresponding to this query.

Parameters:
  • spec (optional): a SON object specifying elements which must be present for a document to be included in the result set
  • fields (optional): a list of field names that should be returned in the result set (“_id” will always be included)
  • skip (optional): the number of documents to omit (from the start of the result set) when returning the results
  • limit (optional): the maximum number of results to return
  • timeout (optional): if True, any returned cursor will be subject to the normal timeout behavior of the mongod process. Otherwise, the returned cursor will never time out at the server. Care should be taken to ensure that cursors with timeout turned off are properly closed.
  • snapshot (optional): if True, snapshot mode will be used for this query. Snapshot mode assures no duplicates are returned, or objects missed, which were present at both the start and end of the query’s execution. For details, see the snapshot documentation.
  • tailable (optional): the result of this find call will be a tailable cursor - tailable cursors aren't closed when the last data is retrieved but are kept open, and the cursor's location marks the final document's position. If more data is received, iteration of the cursor will continue from the last document received. For details, see the tailable cursor documentation.

New in version 1.1: The tailable parameter.

See general MongoDB documentation

find
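
A slightly fuller example combining some of the optional parameters (the field names and values are only illustrative):

>>> cursor = db.test.find({"hello": "world"},
...                       fields=["hello"], skip=20, limit=10)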

find_one(spec_or_object_id=None, fields=None, _sock=None, _must_use_master=False, _is_command=False)

Get a single object from the database.

Raises TypeError if the argument is of an improper type. Returns a single SON object, or None if no result is found.

Parameters:
  • spec_or_object_id (optional): a SON object specifying elements which must be present for a document to be returned OR an instance of ObjectId to be used as the value for an _id query
  • fields (optional): a list of field names that should be included in the returned document (“_id” will always be included)
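
For example, a document can be fetched by an arbitrary query or directly by its _id (values shown are illustrative):

>>> db.test.find_one({"hello": "world"})
>>> doc_id = db.test.insert({"hello": "mongo"})
>>> db.test.find_one(doc_id)  # look up by _id using the returned ObjectId
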
full_name

The full name of this Collection.

The full name is of the form database_name.collection_name.

Changed in version 1.3: full_name is now a property rather than a method. The full_name() method is deprecated.

group(key, condition, initial, reduce, finalize=None, command=True)

Perform a query similar to an SQL group by operation.

Returns an array of grouped items.

The key parameter can be:

  • None to use the entire document as a key.
  • A list of keys (each a basestring) to group by.
  • A basestring or Code instance containing a JavaScript function to be applied to each document, returning the key to group by.

Parameters:
  • key: fields to group by (see above description)
  • condition: specification of rows to be considered (as a find() query specification)
  • initial: initial value of the aggregation counter object
  • reduce: aggregation function as a JavaScript string
  • finalize: function to be called on each object in output list.
  • command (optional): DEPRECATED if True, run the group as a command instead of in an eval - this option is deprecated and will be removed in favor of running all groups as commands

Changed in version 1.4: The key argument can now be None or a JavaScript function, in addition to a list of keys.

Changed in version 1.3: The command argument now defaults to True and is deprecated.
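
A sketch of a simple count-per-key grouping (the key name and the JavaScript reduce function are only illustrative):

>>> db.test.group(key=["a"],
...               condition={},
...               initial={"count": 0},
...               reduce="function (obj, prev) { prev.count++; }")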

index_information()

Get information on this collection’s indexes.

Returns a dictionary where the keys are index names (as returned by create_index()) and the values are lists of (key, direction) pairs specifying the index (as passed to create_index()).
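
For example, after creating an index on "x" the result will look something like this (the exact contents depend on which indexes exist; _id_ is the default index created by the server):

>>> db.test.create_index("x")
>>> db.test.index_information()
{u'_id_': [(u'_id', 1)], u'x_1': [(u'x', 1)]}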

insert(doc_or_docs, manipulate=True, safe=False, check_keys=True)

Insert a document or documents into this collection.

If manipulate is set the document(s) are manipulated using any SONManipulators that have been added to this database. Returns the _id of the inserted document or a list of _ids of the inserted documents. If a document does not already contain an '_id' field, one will be added. If safe is True then the insert will be checked for errors, raising OperationFailure if one occurred. Safe inserts wait for a response from the database, while normal inserts do not.

Parameters:
  • doc_or_docs: a SON object or list of SON objects to be inserted
  • manipulate (optional): manipulate the documents before inserting?
  • safe (optional): check that the insert succeeded?
  • check_keys (optional): check if keys start with ‘$’ or contain ‘.’, raising pymongo.errors.InvalidName in either case

Changed in version 1.1: Bulk insert works with any iterable

See general MongoDB documentation

insert
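
For example, a single document or an iterable of documents can be inserted; with safe=True the driver waits for, and checks, the database's response:

>>> db.test.insert({"x": 1})
ObjectId('...')
>>> db.test.insert([{"x": 2}, {"x": 3}], safe=True)
[ObjectId('...'), ObjectId('...')]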

map_reduce(map, reduce, full_response=False, **kwargs)

Perform a map/reduce operation on this collection.

If full_response is False (default) returns a Collection instance containing the results of the operation. Otherwise, returns the full response from the server to the map reduce command.

Parameters:
  • map: map function (as a JavaScript string)

  • reduce: reduce function (as a JavaScript string)

  • full_response (optional): if True, return full response to this command - otherwise just return the result collection

  • **kwargs (optional): additional arguments to the map reduce command may be passed as keyword arguments to this helper method, e.g.:

    >>> db.test.map_reduce(map, reduce, limit=2)
    

Note

Requires server version >= 1.1.1

New in version 1.2.

See general MongoDB documentation

mapreduce

name

The name of this Collection.

Changed in version 1.3: name is now a property rather than a method. The name() method is deprecated.

options()

Get the options set on this collection.

Returns a dictionary of options and their values - see create_collection() for more information on the possible options. Returns an empty dictionary if the collection has not been created yet.

remove(spec_or_object_id=None, safe=False)

Remove a document or documents from this collection.

Warning

Calls to remove() should be performed with care, as removed data cannot be restored.

Raises TypeError if spec_or_object_id is not an instance of (dict, ObjectId). If safe is True then the remove operation will be checked for errors, raising OperationFailure if one occurred. Safe removes wait for a response from the database, while normal removes do not.

If no spec_or_object_id is given all documents in this collection will be removed. This is not equivalent to calling drop_collection(), however, as indexes will not be removed.

If safe is True returns the response to the lastError command. Otherwise, returns None.

Parameters:
  • spec_or_object_id (optional): a dict or SON instance specifying which documents should be removed; or an instance of ObjectId specifying the value of the _id field for the document to be removed
  • safe (optional): check that the remove succeeded?

Changed in version 1.4: Return the response to lastError if safe is True.

Changed in version 1.2: The spec_or_object_id parameter is now optional. If it is not specified all documents in the collection will be removed.

New in version 1.1: The safe parameter.

See general MongoDB documentation

remove
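
For example (remember that calling remove() with no spec removes every document in the collection):

>>> db.test.remove({"x": 1})             # remove documents matching a spec
>>> db.test.remove({"x": 2}, safe=True)  # also returns the lastError response
>>> db.test.remove()                     # remove *all* documents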

rename(new_name)

Rename this collection.

If operating in auth mode, client must be authorized as an admin to perform this operation. Raises TypeError if new_name is not an instance of basestring. Raises InvalidName if new_name is not a valid collection name.

Parameters:
  • new_name: new name for this collection
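
For example (the new name is only illustrative):

>>> db.test.rename("test_archive")
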
save(to_save, manipulate=True, safe=False)

Save a document in this collection.

If to_save already has an ‘_id’ then an update (upsert) operation is performed and any existing document with that _id is overwritten. Otherwise an ‘_id’ will be added to to_save and an insert operation is performed. Returns the _id of the saved document.

Raises TypeError if to_save is not an instance of dict. If safe is True then the save will be checked for errors, raising OperationFailure if one occurred. Safe saves wait for a response from the database, while normal saves do not.

Parameters:
  • to_save: the SON object to be saved
  • manipulate (optional): manipulate the SON object before saving it
  • safe (optional): check that the save succeeded?

See general MongoDB documentation

insert
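
A typical save() round trip looks something like this (field names and values are illustrative):

>>> doc = db.test.find_one({"hello": "world"})
>>> doc["hello"] = "mongo"
>>> db.test.save(doc)  # doc already has an _id, so this performs an upsert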

update(spec, document, upsert=False, manipulate=False, safe=False, multi=False)

Update a document or documents in this collection.

Raises TypeError if either spec or document is not an instance of dict or upsert is not an instance of bool. If safe is True then the update will be checked for errors, raising OperationFailure if one occurred. Safe updates require a response from the database, while normal updates do not - thus, setting safe to True will negatively impact performance.

There are many useful update modifiers which can be used when performing updates. For example, here we use the "$set" modifier to modify some fields in a matching document:

>>> db.test.insert({"x": "y", "a": "b"})
ObjectId('...')
>>> list(db.test.find())
[{u'a': u'b', u'x': u'y', u'_id': ObjectId('...')}]
>>> db.test.update({"x": "y"}, {"$set": {"a": "c"}})
>>> list(db.test.find())
[{u'a': u'c', u'x': u'y', u'_id': ObjectId('...')}]

If safe is True returns the response to the lastError command. Otherwise, returns None.

Parameters:
  • spec: a dict or SON instance specifying elements which must be present for a document to be updated
  • document: a dict or SON instance specifying the document to be used for the update or (in the case of an upsert) insert - see docs on MongoDB update modifiers
  • upsert (optional): perform an upsert if True
  • manipulate (optional): manipulate the document before updating? If True all instances of SONManipulator added to this Database will be applied to the document before performing the update.
  • safe (optional): check that the update succeeded?
  • multi (optional): update all documents that match spec, rather than just the first matching document. The default value for multi is currently False, but this might eventually change to True. It is recommended that you specify this argument explicitly for all update operations in order to prepare your code for that change.

Changed in version 1.4: Return the response to lastError if safe is True.

New in version 1.1.1: The multi parameter.

See general MongoDB documentation

update
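
As a further sketch, upsert and multi can be combined with the modifier syntax shown above (field names and values are only illustrative):

>>> db.test.update({"x": "y"}, {"$set": {"a": "d"}}, multi=True)
>>> db.test.update({"name": "joe"}, {"name": "joe", "count": 1}, upsert=True)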
