Releases: pinecone-io/pinecone-python-client

Release v7.0.2

28 May 17:26

This small bugfix release includes the following fixes:

  • Windows users should now be able to install without seeing the readline error reported in #502. See #503 for details on the root cause and fix.
  • We have added a new multi-platform installation testing workflow to catch future issues like the above Windows problem.
  • While running these new tests for the first time, we discovered that a dependency required for the Assistant functionality, pinecone-plugin-assistant, was not being included correctly. The plugin had inadvertently been added as a dev dependency rather than a regular dependency, so our integration tests for that functionality passed even though the published artifact did not include it. We have corrected this, and assistant functions should now work without installing anything additional.

Release v7.0.1

21 May 19:51

This small bugfix release fixes:

  • Broken autocompletion / intellisense for inference functions. See #498 for details.
  • Missing type information for Exception classes that was inadvertently removed when setting up the package-level .pyi file. 

Release v7.0.0

20 May 19:31

Upgrading from 6.x to 7.x

The v7 release of the Pinecone Python SDK has been published as pinecone to PyPI.

There are no intentional breaking changes between v6 and v7 of the SDK. The major version bump reflects the move from calling the 2025-01 version of the underlying API to the 2025-04 version.

Some internals of the client have been reorganized or moved, but we've made an effort to alias everything and show warning messages when appropriate. If you experience any unexpected breaking changes that cause you friction while upgrading, let us know and we'll try to smooth it out.

Note

The official SDK package was renamed from pinecone-client to pinecone beginning in version 5.1.0.
Please remove pinecone-client from your project dependencies and add pinecone instead to get
the latest updates if upgrading from earlier versions.

What's new in 7.x

New Features:

  • Pinecone Assistant: The assistant plugin is now bundled by default, so you can start using it without installing anything additional (see the sketch after this list).
  • Inference API: List/view models from the model gallery via API
  • Backups:
    • Create backup from serverless index
    • Create serverless index from backup
    • List/view backups
    • List/view backup restore jobs
  • Bring Your Own Cloud (BYOC):
    • Create, list, describe, and delete BYOC indexes

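As a quick illustration of the bundled assistant functionality, here is a minimal sketch. The pc.assistant.create_assistant and pc.assistant.list_assistants calls reflect the assistant plugin's documented API; treat the exact argument names as assumptions and check the Assistant docs.

from pinecone import Pinecone

pc = Pinecone()

# Create an assistant (method and argument names per the assistant plugin docs;
# this is a sketch, not a definitive reference)
assistant = pc.assistant.create_assistant(
    assistant_name='example-assistant',
    instructions='Answer questions concisely.'
)

# List existing assistants
for a in pc.assistant.list_assistants():
    print(a.name)
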
Other useful improvements:

  • ~70% faster client instantiation time thanks to extensive refactoring to implement lazy loading. This means your app won't waste time loading code for features you're not using.
  • Retries with exponential backoff are now enabled by default for REST calls (implemented for both urllib3 and aiohttp).
  • We're following PEP 561 and adding a py.typed marker file to indicate inline type information is present in the package. We're still working toward reaching full coverage with our type hints, but including this file allows some tools to find the inline definitions we have already implemented.
  • @yorickvP contributed a fix for PineconeGrpcFuture objects blocking during construction

Backups for Serverless Indexes

You can create backups from your Serverless indexes and use these backups to create new indexes. Some fields such as record_count are initially empty but will be populated by the time a backup is ready for use.

from pinecone import Pinecone

pc = Pinecone()

index_name = 'example-index'
if not pc.has_index(name=index_name):
    raise Exception('An index must exist before backing it up')

backup = pc.create_backup(
    index_name=index_name,
    backup_name='example-backup',
    description='testing out backups'
)
# {
#     "backup_id": "4698a618-7e56-4a44-93bc-fc8f1371aa36",
#     "source_index_name": "example-index",
#     "source_index_id": "ec6fd44c-ab45-4873-97f3-f6b44b67e9bc",
#     "status": "Initializing",
#     "cloud": "aws",
#     "region": "us-east-1",
#     "tags": {},
#     "name": "example-backup",
#     "description": "testing out backups",
#     "dimension": null,
#     "record_count": null,
#     "namespace_count": null,
#     "size_bytes": null,
#     "created_at": "2025-05-16T18:44:28.480671533Z"
# }

Check the status of a backup

from pinecone import Pinecone

pc = Pinecone()

pc.describe_backup(backup_id='4698a618-7e56-4a44-93bc-fc8f1371aa36')
# {
#     "backup_id": "4698a618-7e56-4a44-93bc-fc8f1371aa36",
#     "source_index_name": "example-index",
#     "source_index_id": "ec6fd44c-ab45-4873-97f3-f6b44b67e9bc",
#     "status": "Ready",
#     "cloud": "aws",
#     "region": "us-east-1",
#     "tags": {},
#     "name": "example-backup",
#     "description": "testing out backups",
#     "dimension": 768,
#     "record_count": 1000,
#     "namespace_count": 1,
#     "size_bytes": 289656,
#     "created_at": "2025-05-16T18:44:28.480691Z"
# }

You can use list_backups to see all of your backups and their current status. If you have a large number of backups, results will be paginated. You can control the pagination with optional parameters for limit and pagination_token.

from pinecone import Pinecone

pc = Pinecone()

# All backups
pc.list_backups()

# Only backups associated with a particular index
pc.list_backups(index_name='my-index')
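
If you have many backups, you can pass the optional limit and pagination_token parameters mentioned above. A minimal sketch (the placeholder token string is illustrative only):

# Fetch a limited page of results
first_page = pc.list_backups(limit=10)

# Fetch the next page by passing the token returned with the previous page
# (replace the placeholder with the real token from the prior response)
next_page = pc.list_backups(limit=10, pagination_token='TOKEN_FROM_PREVIOUS_PAGE')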

To create an index from a backup, use create_index_from_backup.

from pinecone import Pinecone

pc = Pinecone()

pc.create_index_from_backup(
    name='index-from-backup',
    backup_id='4698a618-7e56-4a44-93bc-fc8f1371aa36',
    deletion_protection="disabled",
    tags={'env': 'testing'},
)

Under the hood, a restore job is created to handle taking data from your backup and loading it into the newly created serverless index. You can check the status of pending restore jobs with pc.list_restore_jobs() or pc.describe_restore_job().
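
For example, a hedged sketch of checking on restore jobs (the job_id keyword below is an assumption; check the SDK reference for the exact parameter name):

from pinecone import Pinecone

pc = Pinecone()

# List all restore jobs, including those still in progress
pc.list_restore_jobs()

# Check a specific restore job (keyword name is an assumption)
pc.describe_restore_job(job_id='RESTORE_JOB_ID')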

Explore and discover models available in our Inference API

You can now fetch a dynamic list of models supported by the Inference API.

from pinecone import Pinecone

pc = Pinecone()

# List all models
models = pc.inference.list_models()

# List models, with model type filtering
models = pc.inference.list_models(type="embed")
models = pc.inference.list_models(type="rerank")

# List models, with vector type filtering
models = pc.inference.list_models(vector_type="dense")
models = pc.inference.list_models(vector_type="sparse")

# List models, with both type and vector type filtering
models = pc.inference.list_models(type="rerank", vector_type="dense")

Or, if you know the name of a model, you can fetch the details for just that model.

pc.inference.get_model(model_name='pinecone-rerank-v0')
# {
#     "model": "pinecone-rerank-v0",
#     "short_description": "A state of the art reranking model that out-performs competitors on widely accepted benchmarks. It can handle chunks up to 512 tokens (1-2 paragraphs)",
#     "type": "rerank",
#     "supported_parameters": [
#         {
#             "parameter": "truncate",
#             "type": "one_of",
#             "value_type": "string",
#             "required": false,
#             "default": "END",
#             "allowed_values": [
#                 "END",
#                 "NONE"
#             ]
#         }
#     ],
#     "modality": "text",
#     "max_sequence_length": 512,
#     "max_batch_size": 100,
#     "provider_name": "Pinecone",
#     "supported_metrics": []
# }

Client support for BYOC (Bring Your Own Cloud)

If you are using our BYOC offering, you can now create indexes as well as list and describe the indexes you have created in your own cloud.

from pinecone import Pinecone, ByocSpec

pc = Pinecone()

pc.create_index(
    name='example-byoc-index',
    dimension=768,
    metric='cosine',
    spec=ByocSpec(environment='my-private-env'),
    tags={
        'env': 'testing'
    },
    deletion_protection='enabled'
)
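
Listing and describing BYOC indexes works through the same control plane methods used for other index types; a brief sketch:

# Describe the BYOC index to see its host and status
desc = pc.describe_index(name='example-byoc-index')
print(desc.host)

# BYOC indexes appear alongside your other indexes
for index_model in pc.list_indexes():
    print(index_model.name)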

Release v6.0.2

13 Mar 21:42

Fixes

  • [Fix] Error when fetching sparse vector by id over grpc by @jhamon in #467

Full Changelog: v6.0.1...v6.0.2

Release v6.0.1

21 Feb 19:26

This release contains a small fix to correct an incompatibility between the 6.0.0 pinecone release and pinecone-plugin-assistant. While working toward improving type coverage of the SDK, some attributes of the internal Configuration class were erroneously removed even though they are still needed by the plugin to load correctly.

The 6.0.0 pinecone SDK should now work with all versions of pinecone-plugin-assistant except for 1.1.1, which errors when used with 6.0.0.

Thanks @avi1mizrahi for contributing the fix.

Release v6.0.0

07 Feb 16:00

What's new in this release?

Indexes with Integrated Inference

This release adds a new create_index_for_model method as well as new upsert_records and search methods. Together, these methods provide a way for you to easily store your data and let us manage the process of creating embeddings. To learn about available models, see the Model Gallery.

Note: If you were previously using the preview versions of this functionality via the pinecone-plugin-records package, you will need to uninstall that package in order to use the v6 pinecone release.

from pinecone import (
    Pinecone,
    CloudProvider,
    AwsRegion,
    EmbedModel,
    IndexEmbed,
)

# 1. Instantiate the Pinecone client
pc = Pinecone(api_key="<<PINECONE_API_KEY>>")

# 2. Create an index configured for use with a particular model
index_config = pc.create_index_for_model(
    name="my-model-index",
    cloud=CloudProvider.AWS,
    region=AwsRegion.US_EAST_1,
    embed=IndexEmbed(
        model=EmbedModel.Multilingual_E5_Large,
        field_map={"text": "my_text_field"}
    )
)

# 3. Instantiate an Index client
idx = pc.Index(host=index_config.host)

# 4. Upsert records
idx.upsert_records(
    namespace="my-namespace",
    records=[
        {
            "_id": "test1",
            "my_text_field": "Apple is a popular fruit known for its sweetness and crisp texture.",
        },
        {
            "_id": "test2",
            "my_text_field": "The tech company Apple is known for its innovative products like the iPhone.",
        },
        {
            "_id": "test3",
            "my_text_field": "Many people enjoy eating apples as a healthy snack.",
        },
        {
            "_id": "test4",
            "my_text_field": "Apple Inc. has revolutionized the tech industry with its sleek designs and user-friendly interfaces.",
        },
        {
            "_id": "test5",
            "my_text_field": "An apple a day keeps the doctor away, as the saying goes.",
        },
        {
            "_id": "test6",
            "my_text_field": "Apple Computer Company was founded on April 1, 1976, by Steve Jobs, Steve Wozniak, and Ronald Wayne as a partnership.",
        },
    ],
)

# 5. Search for similar records
from pinecone import SearchQuery, SearchRerank, RerankModel

response = idx.search_records(
    namespace="my-namespace",
    query=SearchQuery(
        inputs={
            "text": "Apple corporation",
        },
        top_k=3
    ),
    rerank=SearchRerank(
        model=RerankModel.Bge_Reranker_V2_M3,
        rank_fields=["my_text_field"],
        top_n=3,
    ),
)

Call the Inference API

You can now interact with Pinecone's Inference API without the need to install any extra plugins.

Note: If you were previously using the preview versions of this functionality via the pinecone-plugin-inference package, you will need to uninstall that package.

from pinecone import Pinecone

pc = Pinecone(api_key="<<PINECONE_API_KEY>>")

inputs = ["Who created the first computer?"]
outputs = pc.inference.embed(
    model="multilingual-e5-large", 
    inputs=inputs, parameters={"input_type": "passage", "truncate": "END"}
)
print(outputs)
#  EmbeddingsList(
#      model='multilingual-e5-large',
#      data=[
#          {'values': [0.1, ...., 0.2]},
#        ],
#      usage={'total_tokens': 6}
#  )

New client variants with support for asyncio

The v6 Python SDK introduces new client variants, PineconeAsyncio and IndexAsyncio, which provide async methods for use with asyncio. This should unblock those who wish to use Pinecone with modern async web frameworks such as FastAPI, Quart, Sanic, etc. Those trying to onboard to Pinecone and upsert large amounts of data should significantly benefit from the efficiency of running many upserts in parallel.

To use these, you will need to install pinecone[asyncio], which pulls in an extra dependency on aiohttp. See notes on installation.

More documentation and information on how to use these asyncio clients will follow soon.

import asyncio

from pinecone import (
    PineconeAsyncio,
    IndexEmbed,
    CloudProvider,
    AwsRegion,
    EmbedModel
)

async def main():
    index_name = "book-search"
    async with PineconeAsyncio() as pc:
        if not await pc.has_index(index_name):
            desc = await pc.create_index_for_model(
                name=index_name,
                cloud=CloudProvider.AWS,
                region=AwsRegion.US_EAST_1,
                embed=IndexEmbed(
                    model=EmbedModel.Multilingual_E5_Large,
                    metric="cosine",
                    field_map={
                        "text": "description",
                    },
                )
            )

asyncio.run(main())

Interactions with a deployed index are done via the IndexAsyncio class, which can be instantiated using helper methods on either Pinecone or PineconeAsyncio:

import asyncio
from pinecone import Pinecone

async def main():
    pc = Pinecone(api_key='<<PINECONE_API_KEY>>')    
    async with pc.IndexAsyncio(host="book-search-dojoi3u.svc.aped-4627-b74a.pinecone.io") as idx:
        await idx.upsert_records(
            namespace="books-records",
            records=[
                {
                    "id": "1",
                    "title": "The Great Gatsby",
                    "author": "F. Scott Fitzgerald",
                    "description": "The story of the mysteriously wealthy Jay Gatsby and his love for the beautiful Daisy Buchanan.",
                    "year": 1925,
                },
                {
                    "id": "2",
                    "title": "To Kill a Mockingbird",
                    "author": "Harper Lee",
                    "description": "A young girl comes of age in the segregated American South and witnesses her father's courageous defense of an innocent black man.",
                    "year": 1960,
                },
                {
                    "id": "3",
                    "title": "1984",
                    "author": "George Orwell",
                    "description": "In a dystopian future, a totalitarian regime exercises absolute control through pervasive surveillance and propaganda.",
                    "year": 1949,
                },
            ]
        )


asyncio.run(main())
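
The same kind of helper is available when you are already working inside a PineconeAsyncio context. A small sketch, assuming the helper carries the same IndexAsyncio name on the async client and using describe_index_stats as a placeholder operation:

import asyncio
from pinecone import PineconeAsyncio

async def main():
    async with PineconeAsyncio() as pc:
        # Assumption: the IndexAsyncio helper is also exposed on PineconeAsyncio
        async with pc.IndexAsyncio(host="book-search-dojoi3u.svc.aped-4627-b74a.pinecone.io") as idx:
            stats = await idx.describe_index_stats()
            print(stats)

asyncio.run(main())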

Organize your indexes with tags

Tags are key-value pairs you can attach to indexes to better understand, organize, and identify your resources. Tags are flexible and can be tailored to your needs, but some common use cases for them might be to label an index with the relevant deployment environment, application, team, or owner.

Tags can be set during index creation by passing an optional dictionary with the tags keyword argument to the create_index and create_index_for_model methods. Here's an example demonstrating how tags can be passed to create_index.

from pinecone import (
    Pinecone,
    ServerlessSpec,
    CloudProvider,
    GcpRegion,
    Metric
)

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

pc.create_index(
    name='my-index',
    dimension=1536,
    metric=Metric.COSINE,
    spec=ServerlessSpec(
        cloud=CloudProvider.GCP,
        region=GcpRegion.US_CENTRAL1
    ),
    tags={
        "environment": "testing",
        "owner": "jsmith",
    }
)

See the Pinecone documentation for more about how to add, modify, or remove tags.
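
For example, a hedged sketch of updating tags on an existing index (this assumes configure_index accepts a tags argument, as described in the tags documentation):

from pinecone import Pinecone

pc = Pinecone(api_key='<<PINECONE_API_KEY>>')

# Add or update tags on an existing index; per the tags documentation, setting a
# tag's value to an empty string removes it (treat this as a sketch)
pc.configure_index(
    name='my-index',
    tags={'environment': 'production'}
)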

Sparse indexes early access support

Sparse indexes are currently in early access. This release will allow those with early access to create sparse indexes and view those configurations with the describe_index and list_indexes methods.

These are created using the same create_index method as other index types but with different configuration options. For sparse indexes, you must omit dimension while passing metric="dotproduct" and vector_type="sparse".

from pinecone import (
    Pinecone,
    ServerlessSpec,
    CloudProvider,
    AwsRegion,
    Metric,
    VectorType
)

pc = Pinecone()
pc.create_index(
    name='sparse-index',
    metric=Metric.DOTPRODUCT,
    spec=ServerlessSpec(
        cloud=CloudProvider.AWS,
        region=AwsRegion.US_WEST_2
    ),
    vector_type=VectorType.SPARSE
)

# Check the description to get the host url
desc = pc.describe_index(name='sparse-index')

# Instantiate the index client
sparse_index = pc.Index(host=desc.host)

Upserting and querying a sparse index works much as it does for dense indexes, except that the values field of a Vector (used when working with dense values) may now be unset.

import random
from pinecone import Vector, SparseValues

def unique_random_integers(n, range_start, range_end):
    if n > (range_end - range_start + 1):
        raise ValueError("Range too small for the requested number of unique integers")
    return random.sample(range(range_start, range_end + 1), n)

# Generate some random sparse vectors
sparse_index.upsert(
    vectors=[
        Vector(
            id=str(i),
            sparse_values=SparseValues(
                indices=unique_random_integers(10, 0, 10000),
                values=[random.random() for j in range(10)]
            )
        ) for i in range(10000)
    ],...

Release v5.4.2

09 Dec 16:22

This release contains a small adjustment to the query_namespaces method added in 5.4.0. The initial implementation had a bug that meant it could not properly merge small result sets across multiple namespaces. This release adds a required keyword argument, metric, to the query_namespaces method, which should enable the SDK to merge results no matter how many results are returned.

from pinecone import Pinecone

pc = Pinecone(api_key='YOUR_API_KEY')
index = pc.Index(host='your-index-host')

query_results = index.query_namespaces(
    vector=[0.1, 0.2, ...],  # The query vector, dimension should match your index
    namespaces=['ns1', 'ns2', 'ns3'],
    metric="cosine", # This is the new required keyword argument
    include_values=False,
    include_metadata=True,
    filter={},
    top_k=100,
)

What's Changed

  • [Bug] query_namespaces can handle single result by @jhamon in #421

Full Changelog: v5.4.1...v5.4.2

Release v5.4.1

26 Nov 18:51

What's Changed

  • [Chore] Allow support for pinecone-plugin-inference >=2.0.0, <4.0.0 by @austin-denoble in #419

Release v5.4.0

13 Nov 20:37

Query namespaces

In this release we have added a utility method to run a query across multiple namespaces, then merge the result sets into a single ranked result set with the top_k most relevant results. The query_namespaces method accepts most of the same arguments as query with the addition of a required namespaces param.

Since query_namespaces executes multiple queries in parallel, it is important for good performance to set values for the pool_threads and connection_pool_maxsize properties on the index client. The pool_threads setting is the number of threads available to execute requests, while connection_pool_maxsize is the number of cached HTTP connections that will be held. Since these tasks are not computationally heavy and are mainly I/O bound, it should be okay to have a high ratio of threads to CPUs.

The combined results include the total read unit usage from the underlying queries for each namespace.

from pinecone import Pinecone

pc = Pinecone(api_key="key")
index = pc.Index(
  name="index-name",
  pool_threads=50,             # <-- make sure to set these
  connection_pool_maxsize=50,  # <-- make sure to set these
)

query_vec = [ 0.1, ...] # an embedding vector with same dimension as the index
combined_results = index.query_namespaces(
    vector=query_vec,
    namespaces=['ns1', 'ns2', 'ns3', 'ns4'],
    top_k=10,
    include_values=False,
    include_metadata=True,
    filter={"genre": { "$eq": "comedy" }},
    show_progress=False,
)

for scored_vec in combined_results.matches:
    print(scored_vec)
print(combined_results.usage)

A version of query_namespaces is also available over grpc. For grpc, there is no need to set the connection_pool_maxsize because grpc makes efficient use of open connections by default.

from pinecone.grpc import PineconeGRPC

pc = PineconeGRPC(api_key="key")
index = pc.Index(
  name="index-name",
  pool_threads=50, # <-- make sure to set this
)

query_vec = [ 0.1, ...] # an embedding vector with same dimension as the index
combined_results = index.query_namespaces(
    vector=query_vec,
    namespaces=['ns1', 'ns2', 'ns3', 'ns4'],
    top_k=10,
    include_values=False,
    include_metadata=True,
    filter={"genre": { "$eq": "comedy" }},
    show_progress=False,
)

for scored_vec in combined_results.matches:
    print(scored_vec)
print(combined_results.usage)

Changelog

Additions

  • [feat] PineconeGrpcFuture implements concurrent.futures.Future by @jhamon in #410
  • Update to pinecone-plugin-inference=2.0.0 by @ssmith-pc in #397
  • Detect plugins for Index and IndexGRPC classes by @jhamon in #402
  • Add query_namespaces by @jhamon in #409
  • Expose connection_pool_maxsize on Index and add docstrings by @jhamon in #415
  • Implement query_namespaces over grpc by @jhamon in #416
  • query_namespaces performance improvements by @jhamon in #417

Chores / Fixes

  • [Refactor] Extract GrpcChannelFactory from GRPCIndexBase by @jhamon in #394
  • [Refactor] Extract GrpcRunner from GRPCIndexBase class by @jhamon in #395
  • [Chore] Replace black with ruff linter / formatter by @jhamon in #392
  • [Fix] Update build-oas script for building exceptions template changes by @ssmith-pc in #396
  • [Chore] Put date into test index and collection names by @jhamon in #399
  • [Chore] Automatically cleanup old resources each night by @jhamon in #400
  • [Chore] Improve test flakes by @jhamon in #404

Full Changelog: v5.3.1...v5.4.0.dev5

Release v5.3.1

19 Sep 20:49

What's Changed

  • [Fix] Add missing python-dateutil dependency by @jhamon in #391