This embedding function relies on several python packages:

- `open-clip-torch`: Install with `pip install open-clip-torch`
- `torch`: Install with `pip install torch`
- `pillow`: Install with `pip install pillow`

You can pass in optional arguments:

- `model_name`: The name of the OpenCLIP model to use (default: "ViT-B-32")
- `checkpoint`: The checkpoint to use for the model (default: "laion2b_s34b_b79k")
- `device`: The device used for computation, "cpu" or "cuda" (default: "cpu")
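As a minimal sketch, assuming this refers to Chroma's `OpenCLIPEmbeddingFunction` in `chromadb.utils.embedding_functions`, constructing the embedding function with these optional arguments might look like:

```python
from chromadb.utils.embedding_functions import OpenCLIPEmbeddingFunction

# Construct the embedding function; all three arguments are optional and
# default to the values listed above.
embedding_function = OpenCLIPEmbeddingFunction(
    model_name="ViT-B-32",
    checkpoint="laion2b_s34b_b79k",
    device="cpu",
)

# Embed a list of text documents.
text_embeddings = embedding_function(["a photo of a cat", "a photo of a dog"])
```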
OpenCLIP is great for multimodal applications where you need to embed both text and images in the same embedding space. Visit the OpenCLIP documentation for more information on available models and checkpoints.
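Continuing the sketch above, images passed as numpy arrays can be embedded with the same function, and the resulting vectors can be compared directly against text embeddings because both live in the same space; the file name `cat.jpg` is a placeholder:

```python
import numpy as np
from PIL import Image

# Embed an image (cat.jpg is a placeholder path) and a caption with the
# same embedding function used for text above.
image = np.array(Image.open("cat.jpg"))
image_vec = np.array(embedding_function([image])[0])
text_vec = np.array(embedding_function(["a photo of a cat"])[0])

# Because text and images share one embedding space, cosine similarity
# between the two vectors is meaningful.
similarity = np.dot(text_vec, image_vec) / (
    np.linalg.norm(text_vec) * np.linalg.norm(image_vec)
)
print(f"text-image cosine similarity: {similarity:.3f}")
```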