🧬 Embeddings

Embeddings are the AI-native way to represent any kind of data, making them the perfect fit for working with all kinds of AI-powered tools and algorithms. They can represent text, images, and soon audio and video. There are many options for creating embeddings, whether locally using an installed library or by calling an API.

Chroma provides lightweight wrappers around popular embedding providers, making it easy to use them in your apps. You can set an embedding function when you create a Chroma collection, which will be used automatically, or you can call them directly yourself.
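
For example, one of the provider wrappers can be attached to a collection at creation time, or called directly. Below is a minimal sketch using the OpenAI wrapper; the API key, model name, collection name, and document text are placeholders:

```python
import chromadb
from chromadb.utils import embedding_functions

# Wrapper around the OpenAI embeddings API (requires an API key).
openai_ef = embedding_functions.OpenAIEmbeddingFunction(
    api_key="YOUR_API_KEY",
    model_name="text-embedding-ada-002",
)

# Set the embedding function on the collection; Chroma calls it automatically.
client = chromadb.Client()
collection = client.create_collection(
    name="my_collection",
    embedding_function=openai_ef,
)
collection.add(documents=["Hello, Chroma!"], ids=["id1"])

# Or call the embedding function directly.
embeddings = openai_ef(["Hello, Chroma!"])
```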

| Provider | Python | JS |
|---|---|---|
| OpenAI | ✅ | ✅ |
| Google Generative AI | ✅ | ✅ |
| Cohere | ✅ | ✅ |
| Hugging Face | ✅ | ➖ |
| Instructor | ✅ | ➖ |
| Hugging Face Embedding Server | ✅ | ✅ |
| Jina AI | ✅ | ✅ |

We welcome pull requests to add new Embedding Functions to the community.


Default: all-MiniLM-L6-v2

By default, Chroma uses the Sentence Transformers all-MiniLM-L6-v2 model to create embeddings. This embedding model can create sentence and document embeddings that can be used for a wide variety of tasks. This embedding function runs locally on your machine, and may require you to download the model files (this will happen automatically).

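A minimal sketch of creating the default embedding function and calling it directly (the sample text is illustrative):

```python
from chromadb.utils import embedding_functions

# Wraps Sentence Transformers all-MiniLM-L6-v2; the model is downloaded on first use.
default_ef = embedding_functions.DefaultEmbeddingFunction()

# Returns one embedding (a list of floats) per input document.
embeddings = default_ef(["This is a sentence.", "This is another sentence."])
```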

Sentence Transformers

Chroma can also use any Sentence Transformers model to create embeddings.

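A minimal sketch, assuming the Sentence Transformers wrapper shipped with Chroma:

```python
from chromadb.utils import embedding_functions

# Any model from the Sentence Transformers library can be named here.
sentence_transformer_ef = embedding_functions.SentenceTransformerEmbeddingFunction(
    model_name="all-MiniLM-L6-v2"
)
```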

You can pass in an optional model_name argument, which lets you choose which Sentence Transformers model to use. By default, Chroma uses all-MiniLM-L6-v2. You can see a list of all available models in the Sentence Transformers documentation.


Custom Embedding Functions

You can create your own embedding function to use with Chroma; it just needs to implement the EmbeddingFunction protocol.

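A minimal sketch, assuming the current protocol where __call__ takes a list of documents and returns one embedding per document (the placeholder vectors stand in for a real model):

```python
from chromadb import Documents, EmbeddingFunction, Embeddings

class MyEmbeddingFunction(EmbeddingFunction):
    def __call__(self, input: Documents) -> Embeddings:
        # Replace this with a call to your own model or embedding API.
        return [[0.0, 0.0, 0.0] for _ in input]  # placeholder 3-dimensional vectors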

We welcome contributions! If you create an embedding function that you think would be useful to others, please consider submitting a pull request to add it to Chroma's embedding_functions module.