OpenPipe
OpenPipe has an llms.txt. Do you?
OpenPipe offers a streamlined interface for managing machine learning models and datasets, making it easier for developers to integrate AI into their projects. With API endpoints for creating, listing, and deleting datasets and models, it's a powerful tool for building robust AI applications.
Not sure yours is this good? Check it →
OpenPipe's llms.txt Insights
Short and sweet
1 section. Minimalist, but hey — at least they showed up.
Goldilocks zone
58 lines — not too long, not too short. AI loves this.
Double trouble
Runs both llms.txt and llms-full.txt. Someone takes this seriously.
What's inside OpenPipe's llms.txt
OpenPipe's llms.txt contains 1 section under its "OpenPipe" title:
- Docs
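The section count above follows the llms.txt convention: the H1 is the file's title, and each H2 heading starts a section. A minimal sketch of how a directory might count them (hypothetical helper, not this site's actual code):

```python
import re

def count_sections(llms_txt: str) -> int:
    """Count llms.txt sections, taken here as level-2 Markdown headings."""
    return len(re.findall(r"^## ", llms_txt, flags=re.MULTILINE))

sample = (
    "# OpenPipe\n"
    "## Docs\n"
    "- [Quick Start](https://docs.openpipe.ai/getting-started/quick-start.md)\n"
)
print(count_sections(sample))  # → 1
```

By this counting rule, OpenPipe's file has exactly one section ("Docs"), matching the stats shown here.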
How does OpenPipe's llms.txt compare?
| | OpenPipe | Directory Avg | Top Performer |
|---|---|---|---|
| Lines | 58 | 1029 | 163,447 |
| Sections | 1 | 17 | 3207 |
Cool table. Now the real question — where do you land? Find out →
OpenPipe's llms.txt preview
All 58 lines
# OpenPipe
## Docs
- [Delete Dataset](https://docs.openpipe.ai/api-reference/delete-dataset.md): Delete a dataset.
- [Delete Model](https://docs.openpipe.ai/api-reference/delete-model.md): Delete an existing model.
- [Get Model](https://docs.openpipe.ai/api-reference/get-getModel.md): Get a model by ID.
- [List Datasets](https://docs.openpipe.ai/api-reference/get-listDatasets.md): List datasets for a project.
- [List Models](https://docs.openpipe.ai/api-reference/get-listModels.md): List all models for a project.
- [Chat Completions](https://docs.openpipe.ai/api-reference/post-chatcompletions.md): OpenAI-compatible route for generating inference and optionally logging the request.
- [Create Dataset](https://docs.openpipe.ai/api-reference/post-createDataset.md): Create a new dataset.
- [Add Entries to Dataset](https://docs.openpipe.ai/api-reference/post-createDatasetEntries.md): Add new dataset entries.
- [Create Model](https://docs.openpipe.ai/api-reference/post-createModel.md): Train a new model.
- [Judge Criteria](https://docs.openpipe.ai/api-reference/post-criteriajudge.md): Get a judgement of a completion against the specified criterion
- [Report](https://docs.openpipe.ai/api-reference/post-report.md): Record request logs from OpenAI models
- [Report Anthropic](https://docs.openpipe.ai/api-reference/post-report-anthropic.md): Record request logs from Anthropic models
- [Update Metadata](https://docs.openpipe.ai/api-reference/post-updatemetadata.md): Update tags metadata for logged calls matching the provided filters.
- [Base Models](https://docs.openpipe.ai/base-models.md): Train and compare across a range of the most powerful base models.
- [Caching](https://docs.openpipe.ai/features/caching.md): Improve performance and reduce costs by caching previously generated responses.
- [Anthropic Proxy](https://docs.openpipe.ai/features/chat-completions/anthropic.md)
- [Proxying to External Models](https://docs.openpipe.ai/features/chat-completions/external-models.md)
- [Gemini Proxy](https://docs.openpipe.ai/features/chat-completions/gemini.md)
- [Chat Completions](https://docs.openpipe.ai/features/chat-completions/overview.md)
- [Criterion Alignment Sets](https://docs.openpipe.ai/features/criteria/alignment-set.md): Use alignment sets to test and improve your criteria.
- [API Endpoints](https://docs.openpipe.ai/features/criteria/api.md): Use the Criteria API for runtime evaluation and offline testing.
- [Criteria](https://docs.openpipe.ai/features/criteria/overview.md): Align LLM judgements with human ratings to evaluate and improve your models.
- [Criteria Quick Start](https://docs.openpipe.ai/features/criteria/quick-start.md): Create and align your first criterion.
- [Exporting Data](https://docs.openpipe.ai/features/datasets/exporting-data.md): Export your past requests as a JSONL file in their raw form.
- [Importing Request Logs](https://docs.openpipe.ai/features/datasets/importing-logs.md): Search and filter your past LLM requests to inspect your responses and build a training dataset.
- [Datasets](https://docs.openpipe.ai/features/datasets/overview.md): Collect, evaluate, and refine your training data.
- [Datasets Quick Start](https://docs.openpipe.ai/features/datasets/quick-start.md): Create your first dataset and import training data.
- [Relabeling Data](https://docs.openpipe.ai/features/datasets/relabeling-data.md): Use powerful models to generate new outputs for your data before training.
- [Uploading Data](https://docs.openpipe.ai/features/datasets/uploading-data.md): Upload external data to kickstart your fine-tuning process. Use the OpenAI chat fine-tuning format.
- [Deployment Types](https://docs.openpipe.ai/features/deployments.md): Learn about serverless, hourly, and dedicated deployments.
- [Direct Preference Optimization (DPO)](https://docs.openpipe.ai/features/dpo/overview.md)
- [DPO Quick Start](https://docs.openpipe.ai/features/dpo/quick-start.md): Train your first DPO fine-tuned model with OpenPipe.
- [Code Evaluations](https://docs.openpipe.ai/features/evaluations/code.md): Write custom code to evaluate your LLM outputs.
- [Criterion Evaluations](https://docs.openpipe.ai/features/evaluations/criterion.md): Evaluate your LLM outputs using criteria.
- [Head-to-Head Evaluations](https://docs.openpipe.ai/features/evaluations/head-to-head.md): Evaluate your LLM outputs against one another using head-to-head evaluations.
- [Evaluations](https://docs.openpipe.ai/features/evaluations/overview.md): Evaluate the quality of your LLMs against one another or independently.
- [Evaluations Quick Start](https://docs.openpipe.ai/features/evaluations/quick-start.md): Create your first head to head evaluation.
- [External Models](https://docs.openpipe.ai/features/external-models.md)
- [Fallback options](https://docs.openpipe.ai/features/fallback.md): Safeguard your application against potential failures, timeouts, or instabilities that may occur when using experimental or newly released models.
- [Fine Tuning via API](https://docs.openpipe.ai/features/fine-tuning/api.md): Fine tune your models programmatically through our API.
- [Fine-Tuning Quick Start](https://docs.openpipe.ai/features/fine-tuning/quick-start.md): Train your first fine-tuned model with OpenPipe.
- [Reward Models (Beta)](https://docs.openpipe.ai/features/fine-tuning/reward-models.md): Train reward models to judge the quality of LLM responses based on preference data.
- [Fine Tuning via Webapp](https://docs.openpipe.ai/features/fine-tuning/webapp.md): Fine tune your models on filtered logs or uploaded datasets. Filter by prompt id and exclude requests with an undesirable output.
- [Pruning Rules](https://docs.openpipe.ai/features/pruning-rules.md): Decrease input token counts by pruning out chunks of static text.
- [Exporting Logs](https://docs.openpipe.ai/features/request-logs/exporting-logs.md): Export your past requests as a JSONL file in their raw form.
- [Logging Requests](https://docs.openpipe.ai/features/request-logs/logging-requests.md): Record production data to train and improve your models' performance.
- [Logging Anthropic Requests](https://docs.openpipe.ai/features/request-logs/reporting-anthropic.md)
- [Updating Metadata Tags](https://docs.openpipe.ai/features/updating-metadata.md)
- [Installing the SDK](https://docs.openpipe.ai/getting-started/openpipe-sdk.md)
- [Quick Start](https://docs.openpipe.ai/getting-started/quick-start.md): Get started with OpenPipe in a few quick steps.
- [OpenPipe Documentation](https://docs.openpipe.ai/introduction.md): Software engineers and data scientists use OpenPipe's intuitive fine-tuning and monitoring services to decrease the cost and latency of their LLM operations. You can use OpenPipe to collect and analyze LLM logs, create fine-tuned models, and compare output from multiple models given the same input.
- [Overview](https://docs.openpipe.ai/overview.md): OpenPipe is a streamlined platform designed to help product-focused teams train specialized LLM models as replacements for slow and expensive prompts.
- [Pricing Overview](https://docs.openpipe.ai/pricing/pricing.md)
What is llms.txt?
llms.txt is an open standard that helps AI language models understand your website. By placing a structured markdown file at /llms.txt, websites provide AI search engines like ChatGPT, Claude, and Perplexity with a clear map of their content, services, and documentation. Companies like OpenPipe use it to ensure AI accurately represents their brand when answering user queries. Read the spec.
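Per the spec's conventions, the file is plain Markdown: an H1 title, an optional blockquote summary, then H2 sections containing link lists. A minimal illustrative example (all names and URLs here are placeholders, not from any real site):

```
# ExampleCo

> ExampleCo builds developer tools. One-line summary of what the site is about.

## Docs

- [Quick Start](https://example.com/docs/quickstart.md): Get started in minutes.
- [API Reference](https://example.com/docs/api.md): Endpoints and authentication.
```

Serve a file like this at `/llms.txt` and crawlers for AI search engines can pick it up without any further configuration.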
OpenPipe showed up. Where's yours?
1000+ companies didn't overthink it. 60 seconds. Go.
Check your site →