# Interpreting the performance visualization for ChatGPT, Claude, Perplexity, etc.

In this section, you will see a list of major AI engines, including ChatGPT, Gemini, Claude, Perplexity, and Bing Chat. For each engine, Flensh provides two key metrics, visualized as bar graphs:

* **Score (0-100):** This measures the quality and compatibility of your website's content specifically for that AI engine. It answers the question: "How well does my content meet the unique preferences of this platform?" A higher score means your content is well-aligned with what that engine values.
* **Rank (0-100):** This is your competitive positioning. It shows how your website ranks against the top 100 competitors in your niche on that specific AI engine. A lower rank number (e.g., a rank of 5) is better, indicating you are in the top tier. This metric answers the question: "How do I stack up against my competition on this platform?"

You will also find a brief **"Reasoning"** snippet for each score, explaining why your site performed the way it did for that particular engine.

---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://flensh.gitbook.io/flensh-docs/core-features-in-detail/2.3.-ai-visibility-breakdown-by-engine/interpreting-the-performance-visualization-for-chatgpt-claude-perplexity-etc..md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when:

* the answer is not explicitly present in the current page,
* you need clarification or additional context, or
* you want to retrieve related documentation sections.
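As a sketch of how an agent might build such a request, the snippet below constructs the `ask` URL with the question properly URL-encoded. The `build_ask_url` helper is a hypothetical name introduced here for illustration; only the page URL and the `ask` parameter come from the documentation above.

```python
from urllib.parse import urlencode

# Page URL taken from the documentation above.
BASE_URL = (
    "https://flensh.gitbook.io/flensh-docs/core-features-in-detail/"
    "2.3.-ai-visibility-breakdown-by-engine/"
    "interpreting-the-performance-visualization-for-chatgpt-claude-perplexity-etc..md"
)

def build_ask_url(question: str) -> str:
    """Return the page URL with the question URL-encoded in the `ask` parameter.

    Hypothetical helper: the docs only specify the GET URL shape, not a client API.
    """
    return f"{BASE_URL}?{urlencode({'ask': question})}"

url = build_ask_url("How is the Rank metric calculated?")
print(url)
```

The resulting URL can then be fetched with any HTTP client (for example, `urllib.request.urlopen(url)`); performing the actual request requires network access and is omitted here.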
