diff --git a/README.md b/README.md index ce716946516..9d8970072b3 100644 --- a/README.md +++ b/README.md @@ -1,12 +1,17 @@ + ![248433934-7886223b-c1d1-4260-82aa-da5741f303bb](https://github.com/xtekky/gpt4free/assets/98614666/ea012c87-76e0-496a-8ac4-e2de090cc6c9) xtekky%2Fgpt4free | Trendshift --- -

Written by @xtekky

+

+ + Written by @xtekky + +

@@ -30,7 +35,6 @@ docker pull hlohaus789/g4f
 ## 🆕 What's New
 - **For comprehensive details on new features and updates, please refer to our** [Releases](https://github.com/xtekky/gpt4free/releases) **page**
-- **Installation Guide for Windows (.exe):** đŸ’ģ [Installation Guide for Windows (.exe)](#installation-guide-for-windows-exe)
 - **Join our Telegram Channel:** 📨 [telegram.me/g4f_channel](https://telegram.me/g4f_channel)
 - **Join our Discord Group:** đŸ’Ŧ🆕ī¸ [https://discord.gg/5E39JUWUFa](https://discord.gg/5E39JUWUFa)
@@ -39,166 +43,128 @@ docker pull hlohaus789/g4f
 Is your site on this repository and you want to take it down? Send an email to takedown@g4f.ai with proof it is yours and it will be removed as fast as possible. To prevent reproduction please secure your API. 😉

 ## 🚀 GPT4Free on HuggingFace
-[![HuggingSpace](https://github.com/user-attachments/assets/1d859e8a-d6fa-416f-a213-ccc26aa11e90)](https://huggingface.co/spaces/roxky/g4f)
+**GPT4Free is a proof-of-concept API package for multi-provider AI requests. It showcases features such as:**
-Explore our GPT4Free project on HuggingFace Spaces by clicking the link below:
-
-- [Visit GPT4Free on HuggingFace](https://huggingface.co/spaces/roxky/g4f)
-
-If you would like to create your own copy of this space, you can duplicate it using the following link:
+- Load balancing and request flow control.
+- Seamless integration with multiple AI providers.
+- Comprehensive text and image generation support.
-- [Duplicate GPT4Free Space](https://huggingface.co/spaces/roxky/g4f?duplicate=true)
+> Explore the [GPT4Free Space on HuggingFace](https://huggingface.co/spaces/roxky/g4f) for a hosted version, or [duplicate the Space](https://huggingface.co/spaces/roxky/g4f?duplicate=true) for personal use.
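+
+A minimal sketch of the multi-provider flow described above (the provider picks are illustrative choices from the providers table; `RetryProvider` is documented in [/docs/client](docs/client.md)):
+
+```python
+from g4f.client import Client
+from g4f.Provider import RetryProvider, DDG, Blackbox
+
+# Try providers in order and fall back on failure (illustrative provider choice)
+client = Client(provider=RetryProvider([DDG, Blackbox], shuffle=False))
+
+response = client.chat.completions.create(
+    model="gpt-4",
+    messages=[{"role": "user", "content": "Hello"}],
+)
+print(response.choices[0].message.content)
+```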
+--- ## 📚 Table of Contents - [🆕 What's New](#-whats-new) - [📚 Table of Contents](#-table-of-contents) - - [🛠ī¸ Getting Started](#-getting-started) - - [Docker Container Guide](#docker-container-guide) - - [Installation Guide for Windows (.exe)](#installation-guide-for-windows-exe) - - [Use python](#use-python) - - [Prerequisites](#prerequisites) - - [Install using PyPI package](#install-using-pypi-package) - - [Install from source](#install-from-source) - - [Install using Docker](#install-using-docker) - - [💡 Usage](#-usage) - - [Text Generation](#text-generation) - - [Image Generation](#image-generation) - - [Web UI](#web-ui) - - [Interference API](#interference-api) - - [Local Inference](docs/local.md) - - [Configuration](#configuration) - - [Full Documentation for Python API](#full-documentation-for-python-api) - - [Requests API from G4F](docs/requests.md) - - [Client API from G4F](docs/client.md) - - [AsyncClient API from G4F](docs/async_client.md) - - [🚀 Providers and Models](docs/providers-and-models.md) - - [🔗 Powered by gpt4free](#-powered-by-gpt4free) - - [🤝 Contribute](#-contribute) - - [How do i create a new Provider?](#guide-how-do-i-create-a-new-provider) - - [How can AI help me with writing code?](#guide-how-can-ai-help-me-with-writing-code) + - [⚡ Getting Started](#-getting-started) + - [🛠 Installation](#-installation) + - [đŸŗ Using Docker](#-using-docker) + - [đŸĒŸ Windows Guide (.exe)](#-windows-guide-exe) + - [🐍 Python Installation](#-python-installation) + - [💡 Usage](#-usage) + - [📝 Text Generation](#-text-generation) + - [🎨 Image Generation](#-image-generation) + - [🌐 Web Interface](#-web-interface) + - [đŸ–Ĩī¸ Local Inference](docs/local.md) + - [🤖 Interference API](#-interference-api) + - [🛠ī¸ Configuration](docs/configuration.md) + - [📱 Run on Smartphone](#-run-on-smartphone) + - [📘 Full Documentation for Python API](#-full-documentation-for-python-api) + - [🚀 Providers and Models](docs/providers-and-models.md) + - [🔗 Powered by gpt4free](#-powered-by-gpt4free) + - [🤝 Contribute](#-contribute) + - [How do i create a new Provider?](#guide-how-do-i-create-a-new-provider) + - [How can AI help me with writing code?](#guide-how-can-ai-help-me-with-writing-code) - [🙌 Contributors](#-contributors) - [Šī¸ Copyright](#-copyright) - - [⭐ Star History](#-star-history) - - [📄 License](#-license) - -## 🛠ī¸ Getting Started - -#### Docker Container Guide - -##### Getting Started Quickly: - -1. **Install Docker:** Begin by [downloading and installing Docker](https://docs.docker.com/get-docker/). - -2. **Check Directories:** - -Before running the container, make sure the necessary data directories exist or can be created. For example, you can create and set ownership on these directories by running: - -```bash -mkdir -p ${PWD}/har_and_cookies ${PWD}/generated_images -chown -R 1000:1000 ${PWD}/har_and_cookies ${PWD}/generated_images -``` + - [⭐ Star History](#-star-history) + - [📄 License](#-license) -3. **Set Up the Container:** - Use the following commands to pull the latest image and start the container: - -```bash -docker pull hlohaus789/g4f -docker run \ - -p 8080:8080 -p 1337:1337 -p 7900:7900 \ - --shm-size="2g" \ - -v ${PWD}/har_and_cookies:/app/har_and_cookies \ - -v ${PWD}/generated_images:/app/generated_images \ - hlohaus789/g4f:latest -``` - -##### Running the Slim Docker Image - -Use the following command to run the Slim Docker image. 
This command also updates the `g4f` package at startup and installs any additional dependencies: - -```bash -docker run \ - -p 1337:1337 \ - -v ${PWD}/har_and_cookies:/app/har_and_cookies \ - -v ${PWD}/generated_images:/app/generated_images \ - hlohaus789/g4f:latest-slim \ - rm -r -f /app/g4f/ \ - && pip install -U g4f[slim] \ - && python -m g4f --debug -``` - -4. **Access the Client:** - - - To use the included client, navigate to: [http://localhost:8080/chat/](http://localhost:8080/chat/) or [http://localhost:1337/chat/](http://localhost:1337/chat/) - - Or set the API base for your client to: [http://localhost:1337/v1](http://localhost:1337/v1) +--- -5. **(Optional) Provider Login:** +## ⚡ī¸ Getting Started + +## 🛠 Installation + +### đŸŗ Using Docker +1. **Install Docker:** [Download and install Docker](https://docs.docker.com/get-docker/). +2. **Set Up Directories:** Before running the container, make sure the necessary data directories exist or can be created. For example, you can create and set ownership on these directories by running: + ```bash + mkdir -p ${PWD}/har_and_cookies ${PWD}/generated_images + chown -R 1000:1000 ${PWD}/har_and_cookies ${PWD}/generated_images + ``` +3. **Run the Docker Container:** Use the following commands to pull the latest image and start the container: + ```bash + docker pull hlohaus789/g4f + docker run -p 8080:8080 -p 1337:1337 -p 7900:7900 \ + --shm-size="2g" \ + -v ${PWD}/har_and_cookies:/app/har_and_cookies \ + -v ${PWD}/generated_images:/app/generated_images \ + hlohaus789/g4f:latest + ``` + +4. **Running the Slim Docker Image:** Use the following command to run the Slim Docker image. This command also updates the `g4f` package at startup and installs any additional dependencies: + ```bash + docker run \ + -p 1337:1337 \ + -v ${PWD}/har_and_cookies:/app/har_and_cookies \ + -v ${PWD}/generated_images:/app/generated_images \ + hlohaus789/g4f:latest-slim \ + rm -r -f /app/g4f/ \ + && pip install -U g4f[slim] \ + && python -m g4f --debug + ``` + +5. **Access the Client Interface:** + - **To use the included client, navigate to:** [http://localhost:8080/chat/](http://localhost:8080/chat/) or [http://localhost:1337/chat/](http://localhost:1337/chat/) + - **Or set the API base for your client to:** [http://localhost:1337/v1](http://localhost:1337/v1) + +6. **(Optional) Provider Login:** If required, you can access the container's desktop here: http://localhost:7900/?autoconnect=1&resize=scale&password=secret for provider login purposes. -#### Installation Guide for Windows (.exe) +--- +### đŸĒŸ Windows Guide (.exe) To ensure the seamless operation of our application, please follow the instructions below. These steps are designed to guide you through the installation process on Windows operating systems. -### Installation Steps - +**Installation Steps:** 1. **Download the Application**: Visit our [releases page](https://github.com/xtekky/gpt4free/releases/tag/0.4.0.6) and download the most recent version of the application, named `g4f.exe.zip`. 2. **File Placement**: After downloading, locate the `.zip` file in your Downloads folder. Unpack it to a directory of your choice on your system, then execute the `g4f.exe` file to run the app. -3. **Open GUI**: The app starts a web server with the GUI. Open your favorite browser and navigate to `http://localhost:8080/chat/` to access the application interface. +3. **Open GUI**: The app starts a web server with the GUI. 
Open your favorite browser and navigate to [http://localhost:8080/chat/](http://localhost:8080/chat/) to access the application interface.

 4. **Firewall Configuration (Hotfix)**: Upon installation, it may be necessary to adjust your Windows Firewall settings to allow the application to operate correctly. To do this, access your Windows Firewall settings and allow the application.

 By following these steps, you should be able to successfully install and run the application on your Windows system. If you encounter any issues during the installation process, please refer to our Issue Tracker or contact us on Discord for assistance.

 ---

-### Learn More About the GUI
-
-For detailed instructions on how to set up, configure, and use the GPT4Free GUI, refer to the **GUI Documentation**:
-
-- [GUI Documentation](docs/gui.md)
-
-This guide includes step-by-step details on provider selection, managing conversations, using advanced features like speech recognition, and more.
-
----
-
-### Use Your Smartphone
-
-Run the Web UI on your smartphone for easy access on the go. Check out the dedicated guide to learn how to set up and use the GUI on your mobile device:
-
-- [Run on Smartphone Guide](docs/guides/phone.md)
-
----
-
-### Use python
-
-##### Prerequisites:
+### 🐍 Python Installation

-1. [Download and install Python](https://www.python.org/downloads/) (Version 3.10+ is recommended).
-2. [Install Google Chrome](https://www.google.com/chrome/) for providers with webdriver
+#### Prerequisites:
+1. Install Python 3.10+ from [python.org](https://www.python.org/downloads/).
+2. Install Google Chrome for certain providers.

-##### Install using PyPI package:
-
-```
+#### Install with PyPI:
+```bash
 pip install -U g4f[all]
 ```

-How do I install only parts or do disable parts?
-Use partial requirements: [/docs/requirements](docs/requirements.md)
-
-##### Install from source:
+> How do I install only parts or disable certain parts? **Use partial requirements:** [/docs/requirements](docs/requirements.md)

+#### Install from Source:
+```bash
+git clone https://github.com/xtekky/gpt4free.git
+cd gpt4free
+pip install -r requirements.txt
+```

-How do I load the project using git and installing the project requirements?
-Read this tutorial and follow it step by step: [/docs/git](docs/git.md)
+> How do I load the project using git and install the project requirements? **Read this tutorial and follow it step by step:** [/docs/git](docs/git.md)

-##### Install using Docker:
-
-How do I build and run composer image from source?
-Use docker-compose: [/docs/docker](docs/docker.md)
+---

 ## 💡 Usage

-#### Text Generation
-
+### 📝 Text Generation
 ```python
 from g4f.client import Client
@@ -206,16 +172,15 @@ client = Client()
 response = client.chat.completions.create(
     model="gpt-4o-mini",
     messages=[{"role": "user", "content": "Hello"}],
-    web_search = False
+    web_search=False
 )
 print(response.choices[0].message.content)
 ```
-
 ```
 Hello! How can I assist you today?
 ```
-#### Image Generation
+### 🎨 Image Generation
 ```python
 from g4f.client import Client
@@ -226,37 +191,27 @@ response = client.images.generate(
     response_format="url"
 )
-image_url = response.data[0].url
-print(f"Generated image URL: {image_url}")
+print(f"Generated image URL: {response.data[0].url}")
 ```
-
 [![Image with cat](/docs/images/cat.jpeg)](docs/client.md)

-#### **Full Documentation for Python API**
-  - **New:**
-    - **Requests API from G4F:** [/docs/requests](docs/requests.md)
-    - **Client API from G4F:** [/docs/client](docs/client.md)
-    - **AsyncClient API from G4F:** [/docs/async_client](docs/async_client.md)
-    - **File API from G4F:** [/docs/file](docs/file.md)
-
-  - **Legacy:**
-    - **Legacy API with python modules:** [/docs/legacy](docs/legacy.md)
-
-#### Web UI
-
-**To start the web interface, type the following codes in python:**
-
+### 🌐 Web Interface
+**Run the GUI using Python:**
 ```python
 from g4f.gui import run_gui
 run_gui()
 ```
-or execute the following command:
+**Or, run via CLI:**
 ```bash
 python -m g4f.cli gui -port 8080 -debug
 ```

-### Interference API
+> **Learn More About the GUI:** For detailed instructions on how to set up, configure, and use the GPT4Free GUI, refer to the [GUI Documentation](docs/gui.md). This guide includes step-by-step details on provider selection, managing conversations, using advanced features like speech recognition, and more.
+
+---
+
+### 🤖 Interference API

 The **Interference API** enables seamless integration with OpenAI's services through G4F, allowing you to deploy efficient AI solutions.

@@ -266,99 +221,21 @@ The **Interference API** enables seamless integration with OpenAI's services thr
 This API is designed for straightforward implementation and enhanced compatibility with other OpenAI integrations.
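+
+As a quick illustration, any OpenAI-compatible client can target the local endpoint from the Docker guide above (a minimal sketch, assuming the official `openai` Python package; the key value is a placeholder unless you have enabled authentication):
+
+```python
+from openai import OpenAI
+
+# Point an OpenAI-compatible client at the local Interference API
+client = OpenAI(
+    base_url="http://localhost:1337/v1",
+    api_key="not-needed"  # placeholder; set your G4F API key if auth is enabled
+)
+
+response = client.chat.completions.create(
+    model="gpt-4o-mini",
+    messages=[{"role": "user", "content": "Hello"}],
+)
+print(response.choices[0].message.content)
+```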
-### Configuration
-
-#### Authentication
-
-Refer to the [G4F Authentication Setup Guide](docs/authentication.md) for detailed instructions on setting up authentication.
-
-#### Cookies
-
-Cookies are essential for using Meta AI and Microsoft Designer to create images.
-Additionally, cookies are required for the Google Gemini and WhiteRabbitNeo Provider.
-From Bing, ensure you have the "\_U" cookie, and from Google, all cookies starting with "\_\_Secure-1PSID" are needed.
-
-You can pass these cookies directly to the create function or set them using the `set_cookies` method before running G4F:
-
-```python
-from g4f.cookies import set_cookies
-
-set_cookies(".bing.com", {
-  "_U": "cookie value"
-})
-
-set_cookies(".google.com", {
-  "__Secure-1PSID": "cookie value"
-})
-```
-
-#### Using .har and Cookie Files
-
-You can place `.har` and cookie files `.json` in the default `./har_and_cookies` directory. To export a cookie file, use the [EditThisCookie Extension](https://chromewebstore.google.com/detail/editthiscookie/fngmhnnpilhplaeedifhccceomclgfbg) available on the Chrome Web Store.
-
-#### Creating .har Files to Capture Cookies
-
-To capture cookies, you can also create `.har` files. For more details, refer to the next section.
-
-#### Changing the Cookies Directory and Loading Cookie Files in Python
-
-You can change the cookies directory and load cookie files in your Python environment. To set the cookies directory relative to your Python file, use the following code:
-
-```python
-import os.path
-from g4f.cookies import set_cookies_dir, read_cookie_files
-
-import g4f.debug
-g4f.debug.logging = True
-
-cookies_dir = os.path.join(os.path.dirname(__file__), "har_and_cookies")
-set_cookies_dir(cookies_dir)
-read_cookie_files(cookies_dir)
-```
-
-### Debug Mode
-
-If you enable debug mode, you will see logs similar to the following:
-
-```
-Read .har file: ./har_and_cookies/you.com.har
-Cookies added: 10 from .you.com
-Read cookie file: ./har_and_cookies/google.json
-Cookies added: 16 from .google.com
-```
-
-#### .HAR File for OpenaiChat Provider
-
-##### Generating a .HAR File
-
-To utilize the OpenaiChat provider, a .har file is required from https://chatgpt.com/. Follow the steps below to create a valid .har file:
-
-1. Navigate to https://chatgpt.com/ using your preferred web browser and log in with your credentials.
-2. Access the Developer Tools in your browser. This can typically be done by right-clicking the page and selecting "Inspect," or by pressing F12 or Ctrl+Shift+I (Cmd+Option+I on a Mac).
-3. With the Developer Tools open, switch to the "Network" tab.
-4. Reload the website to capture the loading process within the Network tab.
-5. Initiate an action in the chat which can be captured in the .har file.
-6. Right-click any of the network activities listed and select "Save all as HAR with content" to export the .har file.
-
-##### Storing the .HAR File
-
-- Place the exported .har file in the `./har_and_cookies` directory if you are using Docker. Alternatively, if you are using Python from a terminal, you can store it in a `./har_and_cookies` directory within your current working directory.
-
-> **Note:** Ensure that your .har file is stored securely, as it may contain sensitive information.
-
-#### Using Proxy
+---

-If you want to hide or change your IP address for the providers, you can set a proxy globally via an environment variable:
+### 📱 Run on Smartphone
+Run the Web UI on your smartphone for easy access on the go. Check out the dedicated guide to learn how to set up and use the GUI on your mobile device: [Run on Smartphone Guide](docs/guides/phone.md)

-**- On macOS and Linux:**
-```bash
-export G4F_PROXY="http://host:port"
-```
+---

-**- On Windows:**
-```bash
-set G4F_PROXY=http://host:port
-```
+#### **📘 Full Documentation for Python API**
+  - **Client API from G4F:** [/docs/client](docs/client.md)
+  - **AsyncClient API from G4F:** [/docs/async_client](docs/async_client.md)
+  - **Requests API from G4F:** [/docs/requests](docs/requests.md)
+  - **File API from G4F:** [/docs/file](docs/file.md)
+  - **Legacy API with python modules:** [/docs/legacy](docs/legacy.md)
+
+---

 ## 🔗 Powered by gpt4free

@@ -818,6 +695,8 @@ set G4F_PROXY=http://host:port
+
+
 ## 🤝 Contribute

 We welcome contributions from the community. Whether you're adding new providers or features, or simply fixing typos and making small improvements, your input is valued. Creating a pull request is all it takes – our co-pilot will handle the code review process. Once all changes have been addressed, we'll merge the pull request into the main branch and release the updates at a later time.

@@ -827,7 +706,9 @@ We welcome contributions from the community. Whether you're adding new providers
 ###### Guide: How can AI help me with writing code?
- **Read:** [AI Assistance Guide](docs/guides/help_me.md) -## 🙌 Contributors + + +## Contributors A list of all contributors is available [here](https://github.com/xtekky/gpt4free/graphs/contributors) @@ -946,6 +827,7 @@ A list of all contributors is available [here](https://github.com/xtekky/gpt4fre _Having input implies that the AI's code generation utilized it as one of many sources._ + ## Šī¸ Copyright This program is licensed under the [GNU GPL v3](https://www.gnu.org/licenses/gpl-3.0.txt) @@ -967,12 +849,14 @@ You should have received a copy of the GNU General Public License along with this program. If not, see . ``` + ## ⭐ Star History Star History Chart + ## 📄 License diff --git a/docs/async_client.md b/docs/async_client.md index 90013ccaa0a..b00df7efaf9 100644 --- a/docs/async_client.md +++ b/docs/async_client.md @@ -1,4 +1,6 @@ + + # G4F - AsyncClient API Guide The G4F AsyncClient API is a powerful asynchronous interface for interacting with various AI models. This guide provides comprehensive information on how to use the API effectively, including setup, usage examples, best practices, and important considerations for optimal performance. @@ -18,6 +20,9 @@ The G4F AsyncClient API is designed to be compatible with the OpenAI API, making - [Streaming Completions](#streaming-completions) - [Using a Vision Model](#using-a-vision-model) - [Image Generation](#image-generation) + - [Advanced Usage](#advanced-usage) + - [Conversation Memory](#conversation-memory) + - [Search Tool Support](#search-tool-support) - [Concurrent Tasks](#concurrent-tasks-with-asynciogather) - [Available Models and Providers](#available-models-and-providers) - [Error Handling and Best Practices](#error-handling-and-best-practices) @@ -144,8 +149,8 @@ from g4f.client import AsyncClient async def main(): client = AsyncClient() - - stream = client.chat.completions.create( + + stream = await client.chat.completions.create( model="gpt-4", messages=[ { @@ -154,6 +159,7 @@ async def main(): } ], stream=True, + web_search = False ) async for chunk in stream: @@ -163,6 +169,8 @@ async def main(): asyncio.run(main()) ``` +--- + ### Using a Vision Model **Analyze an image and generate a description:** ```python @@ -244,6 +252,194 @@ async def main(): asyncio.run(main()) ``` +--- + +### Creating Image Variations +**Create variations of an existing image:** +```python +import asyncio +from g4f.client import AsyncClient +from g4f.Provider import OpenaiChat + +async def main(): + client = AsyncClient(image_provider=OpenaiChat) + + response = await client.images.create_variation( + prompt="a white siamese cat", + image=open("docs/images/cat.jpg", "rb"), + model="dall-e-3", + # Add any other necessary parameters + ) + + image_url = response.data[0].url + print(f"Generated image URL: {image_url}") + +asyncio.run(main()) +``` + +--- + + +## Advanced Usage + +### Conversation Memory +To maintain a coherent conversation, it's important to store the context or history of the dialogue. This can be achieved by appending both the user's inputs and the bot's responses to a messages list. This allows the model to reference past exchanges when generating responses. + +**The following example demonstrates how to implement conversation memory with the G4F:** +```python +import asyncio +from g4f.client import AsyncClient + +class Conversation: + def __init__(self): + self.client = AsyncClient() + self.history = [ + { + "role": "system", + "content": "You are a helpful assistant." 
+            }
+        ]
+
+    def add_message(self, role, content):
+        self.history.append({
+            "role": role,
+            "content": content
+        })
+
+    async def get_response(self, user_message):
+        # Add user message to history
+        self.add_message("user", user_message)
+
+        # Get response from AI
+        response = await self.client.chat.completions.create(
+            model="gpt-4o-mini",
+            messages=self.history,
+            web_search=False
+        )
+
+        # Add AI response to history
+        assistant_response = response.choices[0].message.content
+        self.add_message("assistant", assistant_response)
+
+        return assistant_response
+
+async def main():
+    conversation = Conversation()
+
+    print("=" * 50)
+    print("G4F Chat started (type 'exit' to end)".center(50))
+    print("=" * 50)
+    print("\nAI: Hello! How can I assist you today?")
+
+    while True:
+        user_input = input("\nYou: ")
+
+        if user_input.lower() == 'exit':
+            print("\nGoodbye!")
+            break
+
+        response = await conversation.get_response(user_input)
+        print("\nAI:", response)
+
+if __name__ == "__main__":
+    asyncio.run(main())
+```
+
+---
+
+## Search Tool Support
+
+The **Search Tool Support** feature enables triggering a web search during chat completions. This is useful for retrieving real-time or specific data, offering a more flexible solution than `web_search`.
+
+**Example Usage:**
+```python
+import asyncio
+from g4f.client import AsyncClient
+
+async def main():
+    client = AsyncClient()
+
+    tool_calls = [
+        {
+            "function": {
+                "arguments": {
+                    "query": "Latest advancements in AI",
+                    "max_results": 5,
+                    "max_words": 2500,
+                    "backend": "api",
+                    "add_text": True,
+                    "timeout": 5
+                },
+                "name": "search_tool"
+            },
+            "type": "function"
+        }
+    ]
+
+    response = await client.chat.completions.create(
+        model="gpt-4",
+        messages=[
+            {
+                "role": "user",
+                "content": "Tell me about recent advancements in AI."
+            }
+        ],
+        tool_calls=tool_calls
+    )
+
+    print(response.choices[0].message.content)
+
+if __name__ == "__main__":
+    asyncio.run(main())
+```
+
+**Parameters for `search_tool`:**
+- **`query`**: The search query string.
+- **`max_results`**: Number of search results to retrieve.
+- **`max_words`**: Maximum number of words in the response.
+- **`backend`**: The backend used for search (e.g., `"api"`).
+- **`add_text`**: Whether to include text snippets in the response.
+- **`timeout`**: Maximum time (in seconds) for the search operation.
+
+**Advantages of Search Tool Support:**
+- Works with any provider, irrespective of `web_search` support.
+- Offers more customization and control over the search process.
+- Bypasses provider-specific limitations.
+
+---
+
+### Using a List of Providers with RetryProvider
+```python
+import asyncio
+from g4f.client import AsyncClient
+
+import g4f.debug
+g4f.debug.logging = True
+g4f.debug.version_check = False
+
+from g4f.Provider import RetryProvider, Phind, FreeChatgpt, Liaobots
+
+async def main():
+    client = AsyncClient(provider=RetryProvider([Phind, FreeChatgpt, Liaobots], shuffle=False))
+
+    response = await client.chat.completions.create(
+        model="gpt-4o-mini",
+        messages=[
+            {
+                "role": "user",
+                "content": "Hello"
+            }
+        ],
+        web_search=False
+    )
+
+    print(response.choices[0].message.content)
+
+asyncio.run(main())
+```
+
+---
+
 ### Concurrent Tasks with asyncio.gather
 **Execute multiple tasks concurrently:**
 ```python
@@ -284,9 +480,10 @@ asyncio.run(main())
 ```

 ## Available Models and Providers
-The G4F AsyncClient supports a wide range of AI models and providers, allowing you to choose the best option for your specific use case. **Here's a brief overview of the available models and providers:**
+The G4F AsyncClient supports a wide range of AI models and providers, allowing you to choose the best option for your specific use case.
+
+**Here's a brief overview of the available models and providers:**
+**Models**
 - GPT-3.5-Turbo
 - GPT-4o-Mini
 - GPT-4
@@ -295,7 +492,7 @@ The G4F AsyncClient supports a wide range of AI models and providers, allowing y
 - Claude (Anthropic)
 - And more...

-### Providers
+**Providers**
 - OpenAI
 - Google (for Gemini)
 - Anthropic
@@ -321,7 +518,9 @@ response = await client.chat.completions.create(
 ```

 ## Error Handling and Best Practices
-Implementing proper error handling and following best practices is crucial when working with the G4F AsyncClient API. This ensures your application remains robust and can gracefully handle various scenarios. **Here are some key practices to follow:**
+Implementing proper error handling and following best practices is crucial when working with the G4F AsyncClient API. This ensures your application remains robust and can gracefully handle various scenarios.
+
+**Here are some key practices to follow:**

 1. **Use try-except blocks to catch and handle exceptions:**
 ```python
diff --git a/docs/authentication.md b/docs/authentication.md
index d9847c14923..45bad795c1a 100644
--- a/docs/authentication.md
+++ b/docs/authentication.md
@@ -1,139 +1,266 @@
-# G4F Authentication Setup Guide
+# G4F - Authentication Guide
+This documentation explains how to authenticate with G4F providers and configure GUI security. It covers API key management, cookie-based authentication, rate limiting, and GUI access controls.
-This documentation explains how to set up Basic Authentication for the GUI and API key authentication for the API when running the G4F server.
+---

-## Prerequisites
+## **Table of Contents**
+1. **[Provider Authentication](#provider-authentication)**
+   - [Prerequisites](#prerequisites)
+   - [API Key Setup](#api-key-setup)
+   - [Synchronous Usage](#synchronous-usage)
+   - [Asynchronous Usage](#asynchronous-usage)
+   - [Multiple Providers](#multiple-providers-with-api-keys)
+   - [Cookie-Based Authentication](#cookie-based-authentication)
+   - [Rate Limiting](#rate-limiting)
+   - [Error Handling](#error-handling)
+   - [Supported Providers](#supported-providers)
+2. **[GUI Authentication](#gui-authentication)**
+   - [Server Setup](#server-setup)
+   - [Browser Access](#browser-access)
+   - [Programmatic Access](#programmatic-access)
+3. **[Best Practices](#best-practices)**
+4. **[Troubleshooting](#troubleshooting)**

-Before proceeding, ensure you have the following installed:
-- Python 3.x
-- G4F package installed (ensure it is set up and working)
-- Basic knowledge of using environment variables on your operating system
+---

-## Steps to Set Up Authentication
+## **Provider Authentication**

-### 1. API Key Authentication for Both GUI and API
+### **Prerequisites**
+- Python 3.7+
+- Installed `g4f` package:
+  ```bash
+  pip install g4f
+  ```
+- API keys or cookies from providers (if required).

-To secure both the GUI and the API, you'll authenticate using an API key. The API key should be injected via an environment variable and passed to both the GUI (via Basic Authentication) and the API.
+---

-#### Steps to Inject the API Key Using Environment Variables:
+### **API Key Setup**
+#### **Step 1: Set Environment Variables**
+**For Linux/macOS (Terminal)**:
+```bash
+# Example for Anthropic
+export ANTHROPIC_API_KEY="your_key_here"
-1.
**Set the environment variable** for your API key: +# Example for HuggingFace +export HUGGINGFACE_API_KEY="another_key_here" +``` - On Linux/macOS: - ```bash - export G4F_API_KEY="your-api-key-here" - ``` +**For Windows (Command Prompt)**: +```cmd +:: Example for Anthropic +set ANTHROPIC_API_KEY=your_key_here - On Windows (Command Prompt): - ```bash - set G4F_API_KEY="your-api-key-here" - ``` +:: Example for HuggingFace +set HUGGINGFACE_API_KEY=another_key_here +``` - On Windows (PowerShell): - ```bash - $env:G4F_API_KEY="your-api-key-here" - ``` +**For Windows (PowerShell)**: +```powershell +# Example for Anthropic +$env:ANTHROPIC_API_KEY = "your_key_here" - Replace `your-api-key-here` with your actual API key. +# Example for HuggingFace +$env:HUGGINGFACE_API_KEY = "another_key_here" +``` -2. **Run the G4F server with the API key injected**: +#### **Step 2: Initialize Client** +```python +from g4f.client import Client - Use the following command to start the G4F server. The API key will be passed to both the GUI and the API: +# Example for Anthropic +client = Client( + provider="g4f.Provider.Anthropic", + api_key="your_key_here" # Or use os.getenv("ANTHROPIC_API_KEY") +) +``` - ```bash - python -m g4f --debug --port 8080 --g4f-api-key $G4F_API_KEY - ``` +--- - - `--debug` enables debug mode for more verbose logs. - - `--port 8080` specifies the port on which the server will run (you can change this if needed). - - `--g4f-api-key` specifies the API key for both the GUI and the API. +### **Synchronous Usage** +```python +from g4f.client import Client -#### Example: +# Initialize with Anthropic +client = Client(provider="g4f.Provider.Anthropic", api_key="your_key_here") -```bash -export G4F_API_KEY="my-secret-api-key" -python -m g4f --debug --port 8080 --g4f-api-key $G4F_API_KEY +# Simple request +response = client.chat.completions.create( + model="claude-3.5-sonnet", + messages=[{"role": "user", "content": "Hello!"}] +) +print(response.choices[0].message.content) ``` -Now, both the GUI and API will require the correct API key for access. - --- -### 2. Accessing the GUI with Basic Authentication +### **Asynchronous Usage** +```python +import asyncio +from g4f.client import AsyncClient + +async def main(): + # Initialize with Groq + client = AsyncClient(provider="g4f.Provider.Groq", api_key="your_key_here") + + response = await client.chat.completions.create( + model="mixtral-8x7b", + messages=[{"role": "user", "content": "Hello!"}] + ) + print(response.choices[0].message.content) + +asyncio.run(main()) +``` -The GUI uses **Basic Authentication**, where the **username** can be any value, and the **password** is your API key. +--- -#### Example: +### **Multiple Providers with API Keys** +```python +import os +from g4f.client import Client -To access the GUI, open your web browser and navigate to `http://localhost:8080/chat/`. You will be prompted for a username and password. +# Using environment variables +providers = { + "Anthropic": os.getenv("ANTHROPIC_API_KEY"), + "Groq": os.getenv("GROQ_API_KEY") +} -- **Username**: You can use any username (e.g., `user` or `admin`). -- **Password**: Enter your API key (the same key you set in the `G4F_API_KEY` environment variable). 
+for provider_name, api_key in providers.items(): + client = Client(provider=f"g4f.Provider.{provider_name}", api_key=api_key) + response = client.chat.completions.create( + model="claude-3.5-sonnet", + messages=[{"role": "user", "content": f"Hello from {provider_name}!"}] + ) + print(f"{provider_name}: {response.choices[0].message.content}") +``` --- -### 3. Python Example for Accessing the API +### **Cookie-Based Authentication** +**For Providers Like Gemini/Bing**: +1. Open your browser and log in to the provider's website. +2. Use developer tools (F12) to copy cookies: + - Chrome/Edge: **Application** → **Cookies** + - Firefox: **Storage** → **Cookies** -To interact with the API, you can send requests by including the `g4f-api-key` in the headers. Here's an example of how to do this using the `requests` library in Python. +```python +from g4f.Provider import Gemini + +# Initialize with cookies +client = Client( + provider=Gemini, + cookies={ + "__Secure-1PSID": "your_cookie_value_here", + "__Secure-1PSIDTS": "timestamp_value_here" + } +) +``` -#### Example Code to Send a Request: +--- +### **Rate Limiting** ```python -import requests +from aiolimiter import AsyncLimiter -url = "http://localhost:8080/v1/chat/completions" - -# Body of the request -body = { - "model": "your-model-name", # Replace with your model name - "provider": "your-provider", # Replace with the provider name - "messages": [ - { - "role": "user", - "content": "Hello" - } - ] -} +# Limit to 5 requests per second +rate_limiter = AsyncLimiter(max_rate=5, time_period=1) -# API Key (can be set as an environment variable) -api_key = "your-api-key-here" # Replace with your actual API key +async def make_request(): + async with rate_limiter: + return await client.chat.completions.create(...) +``` -# Send the POST request -response = requests.post(url, json=body, headers={"g4f-api-key": api_key}) +--- -# Check the response -print(response.status_code) -print(response.json()) +### **Error Handling** +```python +from tenacity import retry, stop_after_attempt, wait_exponential + +@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, min=4, max=10)) +def safe_request(): + try: + return client.chat.completions.create(...) + except Exception as e: + print(f"Attempt failed: {str(e)}") + raise ``` -In this example: -- Replace `"your-api-key-here"` with your actual API key. -- `"model"` and `"provider"` should be replaced with the appropriate model and provider you're using. -- The `messages` array contains the conversation you want to send to the API. +--- -#### Response: +### **Supported Providers** +| Provider | Auth Type | Example Models | +|----------------|-----------------|----------------------| +| Anthropic | API Key | `claude-3.5-sonnet` | +| Gemini | Cookies | `gemini-1.5-pro` | +| Groq | API Key | `mixtral-8x7b` | +| HuggingFace | API Key | `llama-3.1-70b` | -The response will contain the output of the API request, such as the model's completion or other relevant data, which you can then process in your application. +*Full list: [Providers and Models](providers-and-models.md)* --- -### 4. Testing the Setup +## **GUI Authentication** + +### **Server Setup** +1. Create a password: + ```bash + # Linux/macOS + export G4F_API_KEY="your_password_here" + + # Windows (Command Prompt) + set G4F_API_KEY=your_password_here -- **Accessing the GUI**: Open a web browser and navigate to `http://localhost:8080/chat/`. The GUI will now prompt you for a username and password. 
You can enter any username (e.g., `admin`), and for the password, enter the API key you set up in the environment variable. - -- **Accessing the API**: Use the Python code example above to send requests to the API. Ensure the correct API key is included in the `g4f-api-key` header. + # Windows (PowerShell) + $env:G4F_API_KEY = "your_password_here" + ``` +2. Start the server: + ```bash + python -m g4f --debug --port 8080 --g4f-api-key $G4F_API_KEY + ``` + +--- + +### **Browser Access** +1. Navigate to `http://localhost:8080/chat/`. +2. Use credentials: + - **Username**: Any value (e.g., `admin`). + - **Password**: Your `G4F_API_KEY`. --- -### 5. Troubleshooting +### **Programmatic Access** +```python +import requests + +response = requests.get( + "http://localhost:8080/chat/", + auth=("admin", "your_password_here") +) +print("Success!" if response.status_code == 200 else f"Failed: {response.status_code}") +``` + +--- -- **GUI Access Issues**: If you're unable to access the GUI, ensure that you are using the correct API key as the password. -- **API Access Issues**: If the API is rejecting requests, verify that the `G4F_API_KEY` environment variable is correctly set and passed to the server. You can also check the server logs for more detailed error messages. +## **Best Practices** +1. 🔒 **Never hardcode keys** + - Use `.env` files or secret managers like AWS Secrets Manager. +2. 🔄 **Rotate keys every 90 days** + - Especially critical for production environments. +3. 📊 **Monitor API usage** + - Use tools like Prometheus/Grafana for tracking. +4. â™ģī¸ **Retry transient errors** + - Use the `tenacity` library for robust retry logic. --- -## Summary +## **Troubleshooting** +| Issue | Solution | +|---------------------------|-------------------------------------------| +| **"Invalid API Key"** | 1. Verify key spelling
2. Regenerate key in provider dashboard |
+| **"Cookie Expired"** | 1. Re-login to provider website<br>2. Update cookie values |
+| **"Rate Limit Exceeded"** | 1. Implement rate limiting<br>2. Upgrade provider plan |
+| **"Provider Not Found"** | 1. Check provider name spelling<br>2. Verify provider compatibility |
-By following the steps above, you will have successfully set up Basic Authentication for the G4F GUI (using any username and the API key as the password) and API key authentication for the API. This ensures that only authorized users can access both the interface and make API requests.
+---

-[Return to Home](/)
\ No newline at end of file
+**[âŦ† Back to Top](#table-of-contents)** | **[Providers and Models →](providers-and-models.md)**
diff --git a/docs/client.md b/docs/client.md
index bfdb27dbdec..274e997cf69 100644
--- a/docs/client.md
+++ b/docs/client.md
@@ -1,3 +1,5 @@
+
+
 # G4F Client API Guide

 ## Table of Contents
@@ -11,10 +13,12 @@
   - [Usage Examples](#usage-examples)
     - [Text Completions](#text-completions)
     - [Streaming Completions](#streaming-completions)
+    - [Using a Vision Model](#using-a-vision-model)
     - [Image Generation](#image-generation)
     - [Creating Image Variations](#creating-image-variations)
-  - [Search Tool Support](#search-tool-support)
   - [Advanced Usage](#advanced-usage)
+    - [Conversation Memory](#conversation-memory)
+    - [Search Tool Support](#search-tool-support)
     - [Using a List of Providers with RetryProvider](#using-a-list-of-providers-with-retryprovider)
     - [Using a Vision Model](#using-a-vision-model)
     - [Command-line Chat Program](#command-line-chat-program)
@@ -170,78 +174,48 @@ stream = client.chat.completions.create(
         }
     ],
     stream=True,
+    web_search=False
 )

 for chunk in stream:
     if chunk.choices[0].delta.content:
         print(chunk.choices[0].delta.content or "", end="")
 ```
-
---

-## Search Tool Support
-
-The **Search Tool Support** feature enables triggering a web search during chat completions. This is useful for retrieving real-time or specific data, offering a more flexible solution than `web_search`.
-
-**Example Usage**:
+### Using a Vision Model
+**Analyze an image and generate a description:**
 ```python
+import g4f
+import requests
+
 from g4f.client import Client
+from g4f.Provider.GeminiPro import GeminiPro

-client = Client()
+# Initialize the GPT client with the desired provider and api key
+client = Client(
+    api_key="your_api_key_here",
+    provider=GeminiPro
+)

-tool_calls = [
-    {
-        "function": {
-            "arguments": {
-                "query": "Latest advancements in AI",
-                "max_results": 5,
-                "max_words": 2500,
-                "backend": "api",
-                "add_text": True,
-                "timeout": 5
-            },
-            "name": "search_tool"
-        },
-        "type": "function"
-    }
-]
+image = requests.get("https://raw.githubusercontent.com/xtekky/gpt4free/refs/heads/main/docs/images/cat.jpeg", stream=True).raw
+# Or: image = open("docs/images/cat.jpeg", "rb")

 response = client.chat.completions.create(
-    model="gpt-4",
+    model=g4f.models.default,
     messages=[
-        {"role": "user", "content": "Tell me about recent advancements in AI."}
+        {
+            "role": "user",
+            "content": "What's in this image?"
+        }
     ],
-    tool_calls=tool_calls
+    image=image
+    # Add any other necessary parameters
 )

 print(response.choices[0].message.content)
 ```

-**Parameters for `search_tool`:**
-- **`query`**: The search query string.
-- **`max_results`**: Number of search results to retrieve.
-- **`max_words`**: Maximum number of words in the response.
-- **`backend`**: The backend used for search (e.g., `"api"`).
-- **`add_text`**: Whether to include text snippets in the response.
-- **`timeout`**: Maximum time (in seconds) for the search operation.
-
-**Advantages of Search Tool Support:**
-- Works with any provider, irrespective of `web_search` support.
-- Offers more customization and control over the search process.
-- Bypasses provider-specific limitations.
-
-### Streaming Completions
-```python
-stream = client.chat.completions.create(
-    model="gpt-4",
-    messages=[{"role": "user", "content": "Say this is a test"}],
-    stream=True,
-)
-
-for chunk in stream:
-    print(chunk.choices[0].delta.content or "", end="")
-```
-
---

 ### Image Generation
@@ -310,65 +284,169 @@ print(f"Generated image URL: {image_url}")

 ## Advanced Usage

-### Using a List of Providers with RetryProvider
+### Conversation Memory
+To maintain a coherent conversation, it's important to store the context or history of the dialogue. This can be achieved by appending both the user's inputs and the bot's responses to a messages list. This allows the model to reference past exchanges when generating responses.
+
+**The conversation history consists of messages with different roles:**
+- `system`: Initial instructions that define the AI's behavior
+- `user`: Messages from the user
+- `assistant`: Responses from the AI
+
+**The following example demonstrates how to implement conversation memory with G4F:**
 ```python
 from g4f.client import Client
-from g4f.Provider import RetryProvider, Phind, FreeChatgpt, Liaobots

-import g4f.debug
-g4f.debug.logging = True
-g4f.debug.version_check = False
+class Conversation:
+    def __init__(self):
+        self.client = Client()
+        self.history = [
+            {
+                "role": "system",
+                "content": "You are a helpful assistant."
+            }
+        ]
+
+    def add_message(self, role, content):
+        self.history.append({
+            "role": role,
+            "content": content
+        })
+
+    def get_response(self, user_message):
+        # Add user message to history
+        self.add_message("user", user_message)
+
+        # Get response from AI
+        response = self.client.chat.completions.create(
+            model="gpt-4o-mini",
+            messages=self.history,
+            web_search=False
+        )
+
+        # Add AI response to history
+        assistant_response = response.choices[0].message.content
+        self.add_message("assistant", assistant_response)
+
+        return assistant_response
+
+def main():
+    conversation = Conversation()
+
+    print("=" * 50)
+    print("G4F Chat started (type 'exit' to end)".center(50))
+    print("=" * 50)
+    print("\nAI: Hello! How can I assist you today?")
+
+    while True:
+        user_input = input("\nYou: ")
+
+        if user_input.lower() == 'exit':
+            print("\nGoodbye!")
+            break
+
+        response = conversation.get_response(user_input)
+        print("\nAI:", response)
+
+if __name__ == "__main__":
+    main()
+```

-client = Client(
-    provider=RetryProvider([Phind, FreeChatgpt, Liaobots], shuffle=False)
-)
+**Key Features:**
+- Maintains conversation context through a message history
+- Includes system instructions for AI behavior
+- Automatically stores both user inputs and AI responses
+- Simple and clean implementation using a class-based approach
+
+**Usage Example:**
+```python
+conversation = Conversation()
+response = conversation.get_response("Hello, how are you?")
+print(response)
+```
+
+**Note:**
+The conversation history grows with each interaction. For long conversations, you might want to implement a method to limit the history size or clear old messages to manage token usage.
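+
+A minimal sketch of such a trimming method for the `Conversation` class above (the 10-message cap is an illustrative assumption, not a G4F requirement):
+
+```python
+    def trim_history(self, max_messages=10):
+        # Keep the system prompt plus only the most recent exchanges
+        system = [m for m in self.history if m["role"] == "system"]
+        rest = [m for m in self.history if m["role"] != "system"]
+        self.history = system + rest[-max_messages:]
+```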
+
+---
+
+## Search Tool Support
+
+The **Search Tool Support** feature enables triggering a web search during chat completions. This is useful for retrieving real-time or specific data, offering a more flexible solution than `web_search`.
+
+**Example Usage**:
+```python
+from g4f.client import Client
+
+client = Client()
+
+tool_calls = [
+    {
+        "function": {
+            "arguments": {
+                "query": "Latest advancements in AI",
+                "max_results": 5,
+                "max_words": 2500,
+                "backend": "api",
+                "add_text": True,
+                "timeout": 5
+            },
+            "name": "search_tool"
+        },
+        "type": "function"
+    }
+]

 response = client.chat.completions.create(
-    model="",
+    model="gpt-4",
     messages=[
-        {
-            "role": "user",
-            "content": "Hello"
-        }
-    ]
+        {"role": "user", "content": "Tell me about recent advancements in AI."}
+    ],
+    tool_calls=tool_calls
 )

 print(response.choices[0].message.content)
 ```
-
-### Using a Vision Model
-**Analyze an image and generate a description:**
-```python
-import g4f
-import requests
+**Parameters for `search_tool`:**
+- **`query`**: The search query string.
+- **`max_results`**: Number of search results to retrieve.
+- **`max_words`**: Maximum number of words in the response.
+- **`backend`**: The backend used for search (e.g., `"api"`).
+- **`add_text`**: Whether to include text snippets in the response.
+- **`timeout`**: Maximum time (in seconds) for the search operation.
+
+**Advantages of Search Tool Support:**
+- Works with any provider, irrespective of `web_search` support.
+- Offers more customization and control over the search process.
+- Bypasses provider-specific limitations.
+
+---
+
+### Using a List of Providers with RetryProvider
+```python
 from g4f.client import Client
-from g4f.Provider.GeminiPro import GeminiPro
+from g4f.Provider import RetryProvider, Phind, FreeChatgpt, Liaobots
+import g4f.debug
+
+g4f.debug.logging = True
+g4f.debug.version_check = False

-# Initialize the GPT client with the desired provider and api key
 client = Client(
-    api_key="your_api_key_here",
-    provider=GeminiPro
+    provider=RetryProvider([Phind, FreeChatgpt, Liaobots], shuffle=False)
 )

-image = requests.get("https://raw.githubusercontent.com/xtekky/gpt4free/refs/heads/main/docs/images/cat.jpeg", stream=True).raw
-# Or: image = open("docs/images/cat.jpeg", "rb")
-
 response = client.chat.completions.create(
-    model=g4f.models.default,
+    model="",
     messages=[
         {
             "role": "user",
-            "content": "What's in this image?"
+            "content": "Hello"
         }
-    ],
-    image=image
-    # Add any other necessary parameters
+    ]
 )

 print(response.choices[0].message.content)
 ```
-
 ## Command-line Chat Program
 **Here's an example of a simple command-line chat program using the G4F Client:**
diff --git a/docs/configuration.md b/docs/configuration.md
new file mode 100644
index 00000000000..72bfb9bd0e2
--- /dev/null
+++ b/docs/configuration.md
@@ -0,0 +1,95 @@
+
+# G4F - Configuration
+
+
+## Table of Contents
+- [Authentication](#authentication)
+- [Cookies Configuration](#cookies-configuration)
+- [HAR and Cookie Files](#har-and-cookie-files)
+- [Debug Mode](#debug-mode)
+- [Proxy Configuration](#proxy-configuration)
+
+
+### Authentication
+
+Refer to the [G4F Authentication Setup Guide](authentication.md) for detailed instructions on setting up authentication.
+
+### Cookies Configuration
+Cookies are essential for using Meta AI and Microsoft Designer to create images.
+Additionally, cookies are required for the Google Gemini and WhiteRabbitNeo Provider.
+From Bing, ensure you have the "\_U" cookie, and from Google, all cookies starting with "\_\_Secure-1PSID" are needed.
+ +**You can pass these cookies directly to the create function or set them using the `set_cookies` method before running G4F:** +```python +from g4f.cookies import set_cookies + +set_cookies(".bing.com", { + "_U": "cookie value" +}) + +set_cookies(".google.com", { + "__Secure-1PSID": "cookie value" +}) +``` +--- +### HAR and Cookie Files +**Using .har and Cookie Files** +You can place `.har` and cookie files `.json` in the default `./har_and_cookies` directory. To export a cookie file, use the [EditThisCookie Extension](https://chromewebstore.google.com/detail/editthiscookie/fngmhnnpilhplaeedifhccceomclgfbg) available on the Chrome Web Store. + +**Creating .har Files to Capture Cookies** +To capture cookies, you can also create `.har` files. For more details, refer to the next section. + +### Changing the Cookies Directory and Loading Cookie Files in Python +**You can change the cookies directory and load cookie files in your Python environment. To set the cookies directory relative to your Python file, use the following code:** +```python +import os.path +from g4f.cookies import set_cookies_dir, read_cookie_files + +import g4f.debug +g4f.debug.logging = True + +cookies_dir = os.path.join(os.path.dirname(__file__), "har_and_cookies") +set_cookies_dir(cookies_dir) +read_cookie_files(cookies_dir) +``` + +### Debug Mode +**If you enable debug mode, you will see logs similar to the following:** + +``` +Read .har file: ./har_and_cookies/you.com.har +Cookies added: 10 from .you.com +Read cookie file: ./har_and_cookies/google.json +Cookies added: 16 from .google.com +``` + +#### .HAR File for OpenaiChat Provider + +##### Generating a .HAR File + +**To utilize the OpenaiChat provider, a .har file is required from https://chatgpt.com/. Follow the steps below to create a valid .har file:** +1. Navigate to https://chatgpt.com/ using your preferred web browser and log in with your credentials. +2. Access the Developer Tools in your browser. This can typically be done by right-clicking the page and selecting "Inspect," or by pressing F12 or Ctrl+Shift+I (Cmd+Option+I on a Mac). +3. With the Developer Tools open, switch to the "Network" tab. +4. Reload the website to capture the loading process within the Network tab. +5. Initiate an action in the chat which can be captured in the .har file. +6. Right-click any of the network activities listed and select "Save all as HAR with content" to export the .har file. + +##### Storing the .HAR File + +- Place the exported .har file in the `./har_and_cookies` directory if you are using Docker. Alternatively, if you are using Python from a terminal, you can store it in a `./har_and_cookies` directory within your current working directory. + +> **Note:** Ensure that your .har file is stored securely, as it may contain sensitive information. + +### Proxy Configuration +**If you want to hide or change your IP address for the providers, you can set a proxy globally via an environment variable:** + +**- On macOS and Linux:** +```bash +export G4F_PROXY="http://host:port" +``` + +**- On Windows:** +```bash +set G4F_PROXY=http://host:port +``` diff --git a/docs/providers-and-models.md b/docs/providers-and-models.md index ac27fe73c35..37938960198 100644 --- a/docs/providers-and-models.md +++ b/docs/providers-and-models.md @@ -5,11 +5,14 @@ This document provides an overview of various AI providers and models, including text generation, image generation, and vision capabilities. 
It aims to help users navigate the diverse landscape of AI services and choose the most suitable option for their needs.

+> **Note**: See our [Authentication Guide](authentication.md) for provider authentication instructions.
+
+
 ## Table of Contents
 - [Providers](#providers)
-  - [Free](#providers-free)
+  - [No auth required](#providers-no-auth-required)
   - [HuggingSpace](#providers-huggingspace)
-  - [Needs Auth](#providers-needs-auth)
+  - [Needs auth](#providers-needs-auth)
 - [Models](#models)
   - [Text Models](#text-models)
   - [Image Models](#image-models)
@@ -17,90 +20,100 @@ This document provides an overview of various AI providers and models, including

 ---
 ## Providers
+**Authentication types:**
+- **Get API key** - Requires an API key for authentication. You need to obtain an API key from the provider's website to use their services.
+- **Manual cookies** - Requires manual browser cookies setup. You need to be logged in to the provider's website to use their services.
+- **Automatic cookies** - Browser cookies authentication that is automatically fetched. No manual setup needed.
+- **Optional API key** - Works without authentication, but you can provide an API key for better rate limits or additional features. The service is usable without an API key.
+- **API key / Cookies** - Supports both authentication methods. You can use either an API key or browser cookies for authentication.
+- **No auth required** - No authentication needed. The service is publicly available without any credentials.
+
+**Symbols:**
+- ✔ - Feature is supported
+- ❌ - Feature is not supported
+- ✔ _**(n+)**_ - Number of additional models supported by the provider but not publicly listed

-### Providers Free
-| Website | Provider | Text Models | Image Models | Vision Models | Stream | Status | Auth |
+---
+### Providers No auth required
+| Website | API Credentials | Provider | Text Models | Image Models | Vision (Image Upload) | Stream | Status |
 |----------|-------------|--------------|---------------|--------|--------|------|------|
-|[aichatfree.info](https://aichatfree.info)|`g4f.Provider.AIChatFree`|`gemini-1.5-pro`|`sdxl, flux-pro, flux, flux-realism, flux-anime, flux-3d, flux-disney, flux-pixel, flux-4o, any-dark, midjourney, dall-e-3`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
-|[api.airforce](https://api.airforce)|`g4f.Provider.Airforce`|`phi-2, openchat-3.5, deepseek-coder, hermes-2-dpo, hermes-2-pro, openhermes-2.5, lfm-40b, german-7b, llama-2-7b, llama-3.1-8b, llama-3.1-70b, neural-7b, zephyr-7b, evil`|`sdxl, flux-pro, flux, flux-realism, flux-anime, flux-3d, flux-disney, flux-pixel, flux-4o, any-dark, midjourney, dall-e-3`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
-|[aiuncensored.info/ai_uncensored](https://www.aiuncensored.info/ai_uncensored)|`g4f.Provider.AIUncensored`|`hermes-3`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
-|[amigochat.io](https://amigochat.io/chat/)|`g4f.Provider.AmigoChat`|✔|✔|❌|✔|![Error](https://img.shields.io/badge/RateLimit-f48d37)|❌|
-|[autonomous.ai](https://www.autonomous.ai/anon/)|`g4f.Provider.AutonomousAI`|`llama-3.3-70b, qwen-2.5-coder-32b, hermes-3, llama-3.2-90b, llama-3.3-70b`|✔|❌|✔|![Error](https://img.shields.io/badge/RateLimit-f48d37)|❌|
-|[blackbox.ai](https://www.blackbox.ai)|`g4f.Provider.Blackbox`|`blackboxai, gpt-4, gpt-4o, gemini-1.5-pro, claude-3.5-sonnet, blackboxai-pro, llama-3.1-8b, llama-3.1-70b, llama_3_1_405b, llama-3.3-70b, mixtral-7b, deepseek-chat, dbrx-instruct, qwq-32b,
hermes-2-dpo`|`flux`|`blackboxai, gpt-4o, gemini-1.5-pro, gemini-1.5-flash, llama-3.1-8b, llama-3.1-70b, llama-3.1-405b`|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌| -|[blackbox.ai](https://www.blackbox.ai)|`g4f.Provider.BlackboxCreateAgent`|`llama-3.1-70b`|`flux`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌| -|[cablyai.com](https://cablyai.com)|`g4f.Provider.CablyAI`|`cably-80b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌| -|[chatglm.cn](https://chatglm.cn)|`g4f.Provider.ChatGLM`|`glm-4`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌| -|[chatgpt.com](https://chatgpt.com)|`g4f.Provider.ChatGpt`|✔|❌|❌|✔|![Error](https://img.shields.io/badge/HTTPError-f48d37)|❌| -|[chatgpt.es](https://chatgpt.es)|`g4f.Provider.ChatGptEs`|`gpt-4, gpt-4o, gpt-4o-mini`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌| -|[chatgptt.me](https://chatgptt.me)|`g4f.Provider.ChatGptt`|`gpt-4, gpt-4o, gpt-4o-mini`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌| -|[claudeson.net](https://claudeson.net)|`g4f.Provider.ClaudeSon`|`claude-3.5-sonnet`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌| -|[playground.ai.cloudflare.com](https://playground.ai.cloudflare.com)|`g4f.Provider.Cloudflare`|`llama-2-7b, llama-3-8b, llama-3.1-8b, llama-3.2-1b, qwen-1.5-7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌| -|[copilot.microsoft.com](https://copilot.microsoft.com)|`g4f.Provider.Copilot`|`gpt-4`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌| -|[darkai.foundation](https://darkai.foundation)|`g4f.Provider.DarkAI`|`gpt-3.5-turbo, gpt-4o, llama-3.1-70b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌| -|[duckduckgo.com/aichat](https://duckduckgo.com/aichat)|`g4f.Provider.DDG`|`gpt-4, gpt-4o-mini, claude-3-haiku, llama-3.1-70b, mixtral-8x7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌| -|[black-forest-labs-flux-1-dev.hf.space](https://black-forest-labs-flux-1-dev.hf.space)|`g4f.Provider.Flux`|❌|`flux-dev`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌| -|[chat10.free2gpt.xyz](https://chat10.free2gpt.xyz)|`g4f.Provider.Free2GPT`|`mistral-7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌| -|[freegptsnav.aifree.site](https://freegptsnav.aifree.site)|`g4f.Provider.FreeGpt`|`gemini-1.5-pro`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌| -|[app.giz.ai/assistant](https://app.giz.ai/assistant)|`g4f.Provider.GizAI`|`gemini-1.5-flash`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌| -|[gprochat.com](https://gprochat.com)|`g4f.Provider.GPROChat`|`gemini-1.5-pro`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌| -|[editor.imagelabs.net](editor.imagelabs.net)|`g4f.Provider.ImageLabs`|❌|✔|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌| -|[huggingface.co/spaces](https://huggingface.co/spaces)|`g4f.Provider.HuggingSpace`|`qwen-2.5-72b, qwen-2.5-72b`|`flux-dev, flux-schnell, sd-3.5`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌| -|[jmuz.me](https://jmuz.me)|`g4f.Provider.Jmuz`|`gpt-4o, gpt-4, gpt-4o-mini, claude-3.5-sonnet, claude-3-opus, claude-3-haiku, gemini-1.5-pro, gemini-1.5-flash, gemini-exp, llama-3.1-405b, llama-3.1-70b, llama-3.1-8b, llama-3.2-90b, llama-3.2-11b, llama-3.3-70b, qwen-2.5-72b, qwen-2.5-coder-32b, qwq-32b-preview, wizardlm-2-8x22b, deepseek-2.5, mixtral-8x7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌| 
-|[liaobots.work](https://liaobots.work)|`g4f.Provider.Liaobots`|`grok-2, gpt-4o-mini, gpt-4o, gpt-4, o1-preview, o1-mini, claude-3-opus, claude-3.5-sonnet, claude-3-sonnet, gemini-1.5-flash, gemini-1.5-pro, gemini-2.0-flash, gemini-2.0-flash-thinking`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
-|[mhystical.cc](https://mhystical.cc)|`g4f.Provider.Mhystical`|`gpt-4`|❌|❌|✔|![Error](https://img.shields.io/badge/Active-brightgreen)|❌|
-|[labs.perplexity.ai](https://labs.perplexity.ai)|`g4f.Provider.PerplexityLabs`|`sonar-online, sonar-chat, llama-3.3-70b, llama-3.1-8b, llama-3.1-70b, lfm-40b`|❌|❌|✔|![Error](https://img.shields.io/badge/Active-brightgreen)|❌|
-|[pi.ai/talk](https://pi.ai/talk)|`g4f.Provider.Pi`|`pi`|❌|❌|✔|![Error](https://img.shields.io/badge/Active-brightgreen)|❌|
-|[pizzagpt.it](https://www.pizzagpt.it)|`g4f.Provider.Pizzagpt`|`gpt-4o-mini`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
-|[pollinations.ai](https://pollinations.ai)|`g4f.Provider.PollinationsAI`|`gpt-4o, mistral-large, mistral-nemo, llama-3.3-70b, gpt-4, qwen-2-72b, qwen-2.5-coder-32b, claude-3.5-sonnet, command-r, deepseek-chat, llama-3.2-3b, evil, p1, turbo, unity, midijourney, rtist`|`flux, flux-realism, flux-cablyai, flux-anime, flux-3d, any-dark, flux-pro, midjourney, dall-e-3`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
-|[app.prodia.com](https://app.prodia.com)|`g4f.Provider.Prodia`|❌|✔|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|❌|
-|[rubiks.ai](https://rubiks.ai)|`g4f.Provider.RubiksAI`|`gpt-4o-mini, llama-3.1-70b`|❌|❌|✔|![Error](https://img.shields.io/badge/Active-brightgreen)|❌|
-|[teach-anything.com](https://www.teach-anything.com)|`g4f.Provider.TeachAnything`|`llama-3.1-70b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
-|[you.com](https://you.com)|`g4f.Provider.You`|✔|✔|✔|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
-|[chat9.yqcloud.top](https://chat9.yqcloud.top)|`g4f.Provider.Yqcloud`|`gpt-4`|✔|✔|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
+|[aichatfree.info](https://aichatfree.info)|No auth required|`g4f.Provider.AIChatFree`|`gemini-1.5-pro` _**(1+)**_|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[api.airforce](https://api.airforce)|No auth required|`g4f.Provider.Airforce`|`phi-2, openchat-3.5, deepseek-coder, hermes-2-dpo, hermes-2-pro, openhermes-2.5, lfm-40b, german-7b, llama-2-7b, llama-3.1-8b, llama-3.1-70b, neural-7b, zephyr-7b, evil` _**(7+)**_|`sdxl, flux-pro, flux, flux-realism, flux-anime, flux-3d, flux-disney, flux-pixel, flux-4o, any-dark, midjourney, dall-e-3`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[aiuncensored.info/ai_uncensored](https://www.aiuncensored.info/ai_uncensored)|Optional API key|`g4f.Provider.AIUncensored`|`hermes-3`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[autonomous.ai](https://www.autonomous.ai/anon/)|No auth required|`g4f.Provider.AutonomousAI`|`llama-3.3-70b, qwen-2.5-coder-32b, hermes-3, llama-3.2-90b, llama-3.2-70b`|✔|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[blackbox.ai](https://www.blackbox.ai)|No auth required|`g4f.Provider.Blackbox`|`blackboxai, gpt-4, gpt-4o, gemini-1.5-flash, gemini-1.5-pro, claude-3.5-sonnet, blackboxai-pro, llama-3.1-8b, llama-3.1-70b, llama-3.1-405b, llama-3.3-70b, mixtral-7b, deepseek-chat, dbrx-instruct, qwq-32b, hermes-2-dpo` _**(31+)**_|`flux`|`blackboxai, gpt-4o, gemini-1.5-pro, gemini-1.5-flash, llama-3.1-8b, llama-3.1-70b, llama-3.1-405b`|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[cablyai.com](https://cablyai.com)|No auth required|`g4f.Provider.CablyAI`|`cably-80b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[chatglm.cn](https://chatglm.cn)|No auth required|`g4f.Provider.ChatGLM`|`glm-4`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[chatgpt.com](https://chatgpt.com)|No auth required|`g4f.Provider.ChatGpt`|✔ _**(7+)**_|❌|❌|✔|![Error](https://img.shields.io/badge/HTTPError-f48d37)|
+|[chatgpt.es](https://chatgpt.es)|No auth required|`g4f.Provider.ChatGptEs`|`gpt-4, gpt-4o, gpt-4o-mini`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[chatgptt.me](https://chatgptt.me)|No auth required|`g4f.Provider.ChatGptt`|`gpt-4, gpt-4o, gpt-4o-mini`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[playground.ai.cloudflare.com](https://playground.ai.cloudflare.com)|[Automatic cookies](https://playground.ai.cloudflare.com)|`g4f.Provider.Cloudflare`|`llama-2-7b, llama-3-8b, llama-3.1-8b, llama-3.2-1b, qwen-1.5-7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[copilot.microsoft.com](https://copilot.microsoft.com)|Optional API key|`g4f.Provider.Copilot`|`gpt-4, gpt-4o`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[darkai.foundation](https://darkai.foundation)|No auth required|`g4f.Provider.DarkAI`|`gpt-3.5-turbo, gpt-4o, llama-3.1-70b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[duckduckgo.com/aichat](https://duckduckgo.com/aichat)|No auth required|`g4f.Provider.DDG`|`gpt-4, gpt-4o-mini, claude-3-haiku, llama-3.1-70b, mixtral-8x7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[deepinfra.com/chat](https://deepinfra.com/chat)|No auth required|`g4f.Provider.DeepInfraChat`|`llama-3.1-8b, llama-3.1-70b, qwq-32b, wizardlm-2-8x22b, wizardlm-2-7b, qwen-2-72b, qwen-2.5-coder-32b, nemotron-70b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[chat10.free2gpt.xyz](https://chat10.free2gpt.xyz)|No auth required|`g4f.Provider.Free2GPT`|`mistral-7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[freegptsnav.aifree.site](https://freegptsnav.aifree.site)|No auth required|`g4f.Provider.FreeGpt`|`gemini-1.5-pro`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[app.giz.ai/assistant](https://app.giz.ai/assistant)|No auth required|`g4f.Provider.GizAI`|`gemini-1.5-flash`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[gprochat.com](https://gprochat.com)|No auth required|`g4f.Provider.GPROChat`|`gemini-1.5-pro`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[editor.imagelabs.net](https://editor.imagelabs.net)|No auth required|`g4f.Provider.ImageLabs`|❌|✔ _**(1+)**_|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[huggingface.co/spaces](https://huggingface.co/spaces)|Optional API key|`g4f.Provider.HuggingSpace`|`qvq-72b, qwen-2-72b, command-r, command-r-plus, command-r7b`|`flux-dev, flux-schnell, sd-3.5`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[jmuz.me](https://jmuz.me)|Optional API key|`g4f.Provider.Jmuz`|`gpt-4o, gpt-4, gpt-4o-mini, claude-3.5-sonnet, gemini-1.5-pro, gemini-1.5-flash, gemini-exp, llama-3.1-405b, llama-3.1-70b, llama-3.1-8b, llama-3.2-11b, llama-3.3-70b, qwq-32b-preview, mixtral-8x7b` _**(7+)**_|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[liaobots.work](https://liaobots.work)|[Automatic cookies](https://liaobots.work)|`g4f.Provider.Liaobots`|`grok-2, gpt-4o-mini, gpt-4o, gpt-4, o1-preview, o1-mini, claude-3-opus, claude-3.5-sonnet, claude-3-sonnet, gemini-1.5-flash, gemini-1.5-pro, gemini-2.0-flash, gemini-2.0-flash-thinking`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[mhystical.cc](https://mhystical.cc)|[Optional API key](https://mhystical.cc/dashboard)|`g4f.Provider.Mhystical`|`gpt-4`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[oi-vscode-server.onrender.com](https://oi-vscode-server.onrender.com)|No auth required|`g4f.Provider.OIVSCode`|`gpt-4o-mini`|❌|`gpt-4o-mini`|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[labs.perplexity.ai](https://labs.perplexity.ai)|No auth required|`g4f.Provider.PerplexityLabs`|`sonar-online, sonar-chat, llama-3.3-70b, llama-3.1-8b, llama-3.1-70b, lfm-40b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[pi.ai/talk](https://pi.ai/talk)|[Manual cookies](https://pi.ai/talk)|`g4f.Provider.Pi`|`pi`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[pizzagpt.it](https://www.pizzagpt.it)|No auth required|`g4f.Provider.Pizzagpt`|`gpt-4o-mini`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[pollinations.ai](https://pollinations.ai)|No auth required|`g4f.Provider.PollinationsAI`|`gpt-4o, mistral-large, mistral-nemo, llama-3.3-70b, gpt-4, qwen-2-72b, qwen-2.5-coder-32b, claude-3.5-sonnet, claude-3.5-haiku, command-r, deepseek-chat, llama-3.1-8b, evil, p1, unity, midijourney, rtist`|`flux, flux-realism, flux-cablyai, flux-anime, flux-3d, any-dark, flux-pro, midjourney, dall-e-3, sd-turbo`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[app.prodia.com](https://app.prodia.com)|No auth required|`g4f.Provider.Prodia`|❌|✔ _**(46)**_|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
+|[teach-anything.com](https://www.teach-anything.com)|No auth required|`g4f.Provider.TeachAnything`|`llama-3.1-70b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[you.com](https://you.com)|[Manual cookies](https://you.com)|`g4f.Provider.You`|✔|✔|✔|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[chat9.yqcloud.top](https://chat9.yqcloud.top)|No auth required|`g4f.Provider.Yqcloud`|`gpt-4`|✔|✔|✔|![](https://img.shields.io/badge/Active-brightgreen)|
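+
+> **Example** - a minimal sketch of calling a "No auth required" provider through the `g4f` client interface (see [client.md](client.md)). The provider and model names come from the table above; treat the exact call signatures as illustrative, since they can vary between g4f versions:
+
+```python
+from g4f.client import Client
+from g4f.Provider import DDG
+
+# "No auth required": any provider in this table works without credentials.
+client = Client(provider=DDG)
+response = client.chat.completions.create(
+    model="gpt-4o-mini",
+    messages=[{"role": "user", "content": "Hello!"}],
+)
+print(response.choices[0].message.content)
+
+# "Optional API key": the same call works without a key; a key can be passed
+# through for better rate limits, e.g. with g4f.Provider.Mhystical.
+```

---
### Providers HuggingSpace
-| Website | Provider | Text Models | Image Models | Vision Models | Stream | Status | Auth |
-|----------|-------------|--------------|---------------|--------|--------|------|------|
-|[black-forest-labs-flux-1-dev.hf.space](https://black-forest-labs-flux-1-dev.hf.space)|`g4f.Provider.BlackForestLabsFlux1Dev`|❌|`flux-dev`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
-|[black-forest-labs-flux-1-schnell.hf.space](https://black-forest-labs-flux-1-schnell.hf.space)|`g4f.Provider.BlackForestLabsFlux1Schnell`|❌|`flux-schnell`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
-|[cohereforai-c4ai-command.hf.space](https://cohereforai-c4ai-command.hf.space)|`g4f.Provider.CohereForAI`|`command-r, command-r-plus, command-r7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
-|[qwen-qvq-72b-preview.hf.space](https://qwen-qvq-72b-preview.hf.space)|`g4f.Provider.Qwen_QVQ_72B`|`qwen-2.5-72b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
-|[qwen-qwen2-72b-instruct.hf.space](https://qwen-qwen2-72b-instruct.hf.space)|`g4f.Provider.Qwen_Qwen_2_72B_Instruct`|`qwen-2.5-72b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|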
-|[stabilityai-stable-diffusion-3-5-large.hf.space](https://stabilityai-stable-diffusion-3-5-large.hf.space)|`g4f.Provider.StableDiffusion35Large`|❌|`sd-3.5`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
-|[voodoohop-flux-1-schnell.hf.space](https://voodoohop-flux-1-schnell.hf.space)|`g4f.Provider.VoodoohopFlux1Schnell`|❌|`flux-schnell`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
+| Website | API Credentials | Provider | Text Models | Image Models | Vision Models | Stream | Status |
+|----------|-------------|--------------|---------------|--------|--------|------|------|
+|[black-forest-labs-flux-1-dev.hf.space](https://black-forest-labs-flux-1-dev.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.BlackForestLabsFlux1Dev`|❌|`flux-dev`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[black-forest-labs-flux-1-schnell.hf.space](https://black-forest-labs-flux-1-schnell.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.BlackForestLabsFlux1Schnell`|❌|`flux-schnell`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[cohereforai-c4ai-command.hf.space](https://cohereforai-c4ai-command.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.CohereForAI`|`command-r, command-r-plus, command-r7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[qwen-qvq-72b-preview.hf.space](https://qwen-qvq-72b-preview.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_QVQ_72B`|`qvq-72b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[qwen-qwen2-72b-instruct.hf.space](https://qwen-qwen2-72b-instruct.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.Qwen_Qwen_2_72B_Instruct`|`qwen-2-72b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[stabilityai-stable-diffusion-3-5-large.hf.space](https://stabilityai-stable-diffusion-3-5-large.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.StableDiffusion35Large`|❌|`sd-3.5`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[voodoohop-flux-1-schnell.hf.space](https://voodoohop-flux-1-schnell.hf.space)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.VoodoohopFlux1Schnell`|❌|`flux-schnell`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|

---
+
### Providers Needs Auth
-| Provider | Text Models | Image Models | Vision Models | Stream | Status | Auth |
-|----------|-------------|--------------|---------------|--------|--------|------|
-|[console.anthropic.com](https://console.anthropic.com)|`g4f.Provider.Anthropic`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
-|[bing.com/images/create](https://www.bing.com/images/create)|`g4f.Provider.BingCreateImages`|❌|`dall-e-3`|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|✔|
-|[inference.cerebras.ai](https://inference.cerebras.ai/)|`g4f.Provider.Cerebras`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
-|[deepinfra.com](https://deepinfra.com)|`g4f.Provider.DeepInfra`|✔|✔|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔|
-|[platform.deepseek.com](https://platform.deepseek.com)|`g4f.Provider.DeepSeek`|✔|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|✔|
-|[gemini.google.com](https://gemini.google.com)|`g4f.Provider.Gemini`|`gemini`|`gemini`|`gemini`|❌|![](https://img.shields.io/badge/Active-brightgreen)|✔|
-|[ai.google.dev](https://ai.google.dev)|`g4f.Provider.GeminiPro`|`gemini-1.5-flash, gemini-1.5-pro, gemini-2.0-flash`|❌|`gemini-1.5-pro`|❌|![](https://img.shields.io/badge/Active-brightgreen)|✔| -|[developers.sber.ru/gigachat](https://developers.sber.ru/gigachat)|`g4f.Provider.GigaChat`|✔|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|✔| -|[github.com/copilot](https://github.com/copilot)|`g4f.Provider.GithubCopilot`|✔|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|✔| -|[glhf.chat](https://glhf.chat)|`g4f.Provider.GlhfChat`|✔|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|✔| -|[console.groq.com/playground](https://console.groq.com/playground)|`g4f.Provider.Groq`|✔|❌|✔|❌|![](https://img.shields.io/badge/Active-brightgreen)|✔| -|[huggingface.co/chat](https://huggingface.co/chat)|`g4f.Provider.HuggingChat`|`qwen-2.5-72b, llama-3.3-70b, command-r-plus, qwq-32b, nemotron-70b, nemotron-70b, llama-3.2-11b, mistral-nemo, phi-3.5-mini`|`flux-dev`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔| -|[huggingface.co/chat](https://huggingface.co/chat)|`g4f.Provider.HuggingFace`|✔|✔|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔| -|[api-inference.huggingface.co](https://api-inference.huggingface.co)|`g4f.Provider.HuggingFaceAPI`|✔|❌|✔|❌|![](https://img.shields.io/badge/Active-brightgreen)|✔| -|[meta.ai](https://www.meta.ai)|`g4f.Provider.MetaAI`|`meta-ai`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔| -|[designer.microsoft.com](https://designer.microsoft.com)|`g4f.Provider.MicrosoftDesigner`|❌|`dall-e-3`|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|✔| -|[platform.openai.com](https://platform.openai.com)|`g4f.Provider.OpenaiAPI`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔| -|[chatgpt.com](https://chatgpt.com)|`g4f.Provider.OpenaiChat`|`gpt-4o, gpt-4o-mini, gpt-4`|❌|✔|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔| -|[perplexity.ai](https://www.perplexity.ai)|`g4f.Provider.PerplexityApi`|`gpt-4o, gpt-4o-mini, gpt-4`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔| -|[poe.com](https://poe.com)|`g4f.Provider.Poe`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔| -|[raycast.com](https://raycast.com)|`g4f.Provider.Raycast`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔| -|[chat.reka.ai](https://chat.reka.ai)|`g4f.Provider.Reka`|`reka-core`|❌|✔|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔| -|[replicate.com](https://replicate.com)|`g4f.Provider.Replicate`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔| -|[beta.theb.ai](https://beta.theb.ai)|`g4f.Provider.Theb`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔| -|[beta.theb.ai](https://beta.theb.ai)|`g4f.Provider.WhiteRabbitNeo`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔| -|[whiterabbitneo.com](https://www.whiterabbitneo.com)|`g4f.Provider.WhiteRabbitNeo`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔| -|[console.x.ai](https://console.x.ai)|`g4f.Provider.xAI`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|✔| +| Website | API Credentials | Provider | Text Models | Image Models | Vision Models | Stream | Status | +|----------|-------------|--------------|---------------|--------|--------|------|------| +|[console.anthropic.com](https://console.anthropic.com)|[Get API key](https://console.anthropic.com/settings/keys)|`g4f.Provider.Anthropic`|✔ _**(8+)**_|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)| 
+|[bing.com/images/create](https://www.bing.com/images/create)|[Manual cookies](https://www.bing.com)|`g4f.Provider.BingCreateImages`|❌|`dall-e-3`|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
+|[inference.cerebras.ai](https://inference.cerebras.ai/)|[Get API key](https://cloud.cerebras.ai)|`g4f.Provider.Cerebras`|✔ _**(3+)**_|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[copilot.microsoft.com](https://copilot.microsoft.com)|[Manual cookies](https://copilot.microsoft.com)|`g4f.Provider.CopilotAccount`|✔ _**(1+)**_|✔ _**(1+)**_|✔ _**(1+)**_|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[deepinfra.com](https://deepinfra.com)|[Get API key](https://deepinfra.com/dash/api_keys)|`g4f.Provider.DeepInfra`|✔ _**(17+)**_|✔ _**(6+)**_|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[platform.deepseek.com](https://platform.deepseek.com)|[Get API key](https://platform.deepseek.com/api_keys)|`g4f.Provider.DeepSeek`|✔ _**(1+)**_|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
+|[gemini.google.com](https://gemini.google.com)|[Manual cookies](https://gemini.google.com)|`g4f.Provider.Gemini`|`gemini, gemini-1.5-flash, gemini-1.5-pro`|`gemini`|`gemini`|❌|![](https://img.shields.io/badge/Active-brightgreen)|
+|[ai.google.dev](https://ai.google.dev)|[Get API key](https://aistudio.google.com/u/0/apikey)|`g4f.Provider.GeminiPro`|`gemini-1.5-flash, gemini-1.5-pro, gemini-2.0-flash`|❌|`gemini-1.5-pro`|❌|![](https://img.shields.io/badge/Active-brightgreen)|
+|[developers.sber.ru/gigachat](https://developers.sber.ru/gigachat)|[Manual cookies](https://developers.sber.ru/gigachat)|`g4f.Provider.GigaChat`|✔ _**(3+)**_|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
+|[github.com/copilot](https://github.com/copilot)|[Manual cookies](https://github.com/copilot)|`g4f.Provider.GithubCopilot`|✔ _**(4+)**_|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
+|[glhf.chat](https://glhf.chat)|[Get API key](https://glhf.chat/user-settings/api)|`g4f.Provider.GlhfChat`|✔ _**(22+)**_|❌|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
+|[console.groq.com/playground](https://console.groq.com/playground)|[Get API key](https://console.groq.com/keys)|`g4f.Provider.Groq`|✔ _**(18+)**_|❌|✔|❌|![](https://img.shields.io/badge/Active-brightgreen)|
+|[huggingface.co/chat](https://huggingface.co/chat)|[Manual cookies](https://huggingface.co/chat)|`g4f.Provider.HuggingChat`|`qwen-2.5-72b, llama-3.3-70b, command-r-plus, qwq-32b, nemotron-70b, llama-3.2-11b, mistral-nemo, phi-3.5-mini`|`flux-dev, flux-schnell`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[huggingface.co/chat](https://huggingface.co/chat)|[API key / Cookies](https://huggingface.co/settings/tokens)|`g4f.Provider.HuggingFace`|✔ _**(47+)**_|✔ _**(9+)**_|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[api-inference.huggingface.co](https://api-inference.huggingface.co)|[Get API key](https://huggingface.co/settings/tokens)|`g4f.Provider.HuggingFaceAPI`|✔ _**(9+)**_|✔ _**(2+)**_|✔ _**(1+)**_|❌|![](https://img.shields.io/badge/Active-brightgreen)|
+|[meta.ai](https://www.meta.ai)|[Manual cookies](https://www.meta.ai)|`g4f.Provider.MetaAI`|`meta-ai`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[meta.ai](https://www.meta.ai)|[Manual cookies](https://www.meta.ai)|`g4f.Provider.MetaAIAccount`|❌|`meta-ai`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[designer.microsoft.com](https://designer.microsoft.com)|[Manual cookies](https://designer.microsoft.com)|`g4f.Provider.MicrosoftDesigner`|❌|`dall-e-3`|❌|❌|![](https://img.shields.io/badge/Active-brightgreen)|
+|[platform.openai.com](https://platform.openai.com)|[Get API key](https://platform.openai.com/settings/organization/api-keys)|`g4f.Provider.OpenaiAPI`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[chatgpt.com](https://chatgpt.com)|[Manual cookies](https://chatgpt.com)|`g4f.Provider.OpenaiChat`|`gpt-4o, gpt-4o-mini, gpt-4` _**(8+)**_|✔ _**(1)**_|✔ _**(8+)**_|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[perplexity.ai](https://www.perplexity.ai)|[Get API key](https://www.perplexity.ai/settings/api)|`g4f.Provider.PerplexityApi`|✔ _**(6+)**_|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[chat.reka.ai](https://chat.reka.ai)|[Manual cookies](https://chat.reka.ai)|`g4f.Provider.Reka`|`reka-core`|❌|✔|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[replicate.com](https://replicate.com)|[Get API key](https://replicate.com/account/api-tokens)|`g4f.Provider.Replicate`|✔ _**(1+)**_|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[beta.theb.ai](https://beta.theb.ai)|[Get API key](https://beta.theb.ai)|`g4f.Provider.ThebApi`|✔ _**(21+)**_|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[whiterabbitneo.com](https://www.whiterabbitneo.com)|[Manual cookies](https://www.whiterabbitneo.com)|`g4f.Provider.WhiteRabbitNeo`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
+|[console.x.ai](https://console.x.ai)|[Get API key](https://console.x.ai)|`g4f.Provider.xAI`|✔|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|
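+
+> **Example** - a minimal sketch of the two credential styles used above. `g4f.cookies.set_cookies` and the `api_key` argument are existing g4f mechanisms, but the cookie name and placeholder values are illustrative; check each provider's linked page for the real ones:
+
+```python
+from g4f.client import Client
+from g4f.cookies import set_cookies
+from g4f.Provider import Gemini, GeminiPro
+
+# "Manual cookies": register cookies copied from a logged-in browser session.
+# The cookie name below is the one Gemini uses; other providers need their own.
+set_cookies(".google.com", {"__Secure-1PSID": "<cookie value from your browser>"})
+response = Client(provider=Gemini).chat.completions.create(
+    model="gemini",
+    messages=[{"role": "user", "content": "Hello!"}],
+)
+print(response.choices[0].message.content)
+
+# "Get API key": pass the key from the provider's dashboard instead.
+response = Client(provider=GeminiPro).chat.completions.create(
+    model="gemini-1.5-pro",
+    messages=[{"role": "user", "content": "Hello!"}],
+    api_key="<your API key>",
+)
+print(response.choices[0].message.content)
+```

---
## Models
@@ -120,12 +133,12 @@ This document provides an overview of various AI providers and models, including
|meta-ai|Meta|1+ Providers|[ai.meta.com](https://ai.meta.com/)|
|llama-2-7b|Meta Llama|2+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-2-7b)|
|llama-3-8b|Meta Llama|1+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3/)|
-|llama-3.1-8b|Meta Llama|5+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)|
+|llama-3.1-8b|Meta Llama|6+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)|
|llama-3.1-70b|Meta Llama|9+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)|
|llama-3.1-405b|Meta Llama|2+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-3.1-405B)|
|llama-3.2-1b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-3.2-1B)|
-|llama-3.2-3b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-3.2-3B)|
|llama-3.2-11b|Meta Llama|3+ Providers|[ai.meta.com](https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/)|
+|llama-3.2-70b|Meta Llama|1+ Providers|[ai.meta.com](https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/)|
|llama-3.2-90b|Meta Llama|2+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-3.2-90B-Vision)|
|llama-3.3-70b|Meta Llama|7+ Providers|[llama.com](https://www.llama.com/)|
|mixtral-7b|Mistral AI|1+ Providers|[mistral.ai](https://mistral.ai/news/mixtral-of-experts/)|
@@ -137,6 +150,8 @@ This document provides an overview of various AI providers and models, including
|hermes-3|NousResearch|2+ Providers|[nousresearch.com](https://nousresearch.com/hermes3/)|
|phi-2|Microsoft|1+ Providers|[huggingface.co](https://huggingface.co/microsoft/phi-2)|
|phi-3.5-mini|Microsoft|2+ Providers|[huggingface.co](https://huggingface.co/microsoft/Phi-3.5-mini-instruct)|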
+|wizardlm-2-7b|Microsoft|1+ Providers|[wizardlm.github.io](https://wizardlm.github.io/WizardLM2/)|
+|wizardlm-2-8x22b|Microsoft|2+ Providers|[wizardlm.github.io](https://wizardlm.github.io/WizardLM2/)|
|gemini|Google DeepMind|2+ Providers|[deepmind.google](http://deepmind.google/technologies/gemini/)|
|gemini-1.5-flash|Google DeepMind|5+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/flash/)|
|gemini-1.5-pro|Google DeepMind|7+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/pro/)|
@@ -145,6 +160,7 @@ This document provides an overview of various AI providers and models, including
|claude-3-haiku|Anthropic|2+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-haiku)|
|claude-3-sonnet|Anthropic|1+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-family)|
|claude-3-opus|Anthropic|2+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-family)|
+|claude-3.5-haiku|Anthropic|1+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-5-sonnet)|
|claude-3.5-sonnet|Anthropic|4+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-5-sonnet)|
|reka-core|Reka AI|1+ Providers|[reka.ai](https://www.reka.ai/ourmodels)|
|blackboxai|Blackbox AI|1+ Providers|[docs.blackbox.chat](https://docs.blackbox.chat/blackbox-ai-1)|
@@ -156,11 +172,11 @@ This document provides an overview of various AI providers and models, including
|qwen-2-72b|Qwen|2+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2-72B)|
|qwen-2.5-72b|Qwen|2+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct)|
|qwen-2.5-coder-32b|Qwen|4+ Providers|[huggingface.co](https://huggingface.co/Qwen/Qwen2.5-Coder-32B)|
-|qwq-32b|Qwen|4+ Providers|[qwen2.org](https://qwen2.org/qwq-32b-preview/)|
+|qwq-32b|Qwen|4+ Providers|[huggingface.co](https://huggingface.co/Qwen/QwQ-32B-Preview)|
+|qvq-72b|Qwen|1+ Providers|[huggingface.co](https://huggingface.co/Qwen/QVQ-72B-Preview)|
|pi|Inflection|1+ Providers|[inflection.ai](https://inflection.ai/blog/inflection-2-5)|
|deepseek-chat|DeepSeek|3+ Providers|[huggingface.co](https://huggingface.co/deepseek-ai/deepseek-llm-67b-chat)|
|deepseek-coder|DeepSeek|1+ Providers|[huggingface.co](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Instruct)|
-|wizardlm-2-8x22b|WizardLM|1+ Providers|[huggingface.co](https://huggingface.co/alpindale/WizardLM-2-8x22B)|
|openchat-3.5|OpenChat|1+ Providers|[huggingface.co](https://huggingface.co/openchat/openchat_3.5)|
|grok-2|x.ai|1+ Providers|[x.ai](https://x.ai/blog/grok-2)|
|sonar-online|Perplexity AI|1+ Providers|[docs.perplexity.ai](https://docs.perplexity.ai/)|
@@ -177,7 +193,6 @@ This document provides an overview of various AI providers and models, including
|glm-4|THUDM|1+ Providers|[github.com/THUDM](https://github.com/THUDM/GLM-4)|
|evil|Evil Mode - Experimental|2+ Providers|[]( )|
|midijourney||1+ Providers|[]( )|
-|turbo||1+ Providers|[]( )|
|unity||1+ Providers|[]( )|
|rtist||1+ Providers|[]( )|
@@ -186,7 +201,7 @@ This document provides an overview of various AI providers and models, including
| Model | Base Provider | Providers | Website |
|-------|---------------|-----------|---------|
|sdxl|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/docs/diffusers/en/using-diffusers/sdxl)|
-|sdxl-lora|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/blog/lcm_lora)|
+|sd-turbo|Stability AI|1+
Providers|[huggingface.co](https://huggingface.co/stabilityai/sd-turbo)| |sd-3.5|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/stabilityai/stable-diffusion-3.5-large)| |flux|Black Forest Labs|4+ Providers|[github.com/black-forest-labs/flux](https://github.com/black-forest-labs/flux)| |flux-pro|Black Forest Labs|2+ Providers|[github.com/black-forest-labs/flux](https://github.com/black-forest-labs/flux)| diff --git a/docs/providers.md b/docs/providers.md deleted file mode 100644 index 502191e9eab..00000000000 --- a/docs/providers.md +++ /dev/null @@ -1,575 +0,0 @@ - -## Free - -### AmigoChat -| Provider | `g4f.Provider.AmigoChat` | -| -------- | ---- | -| **Website** | [amigochat.io](https://amigochat.io/chat/) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Models** | gpt-4o, gpt-4o-mini, llama-3.1-405b, mistral-nemo, gemini-flash, gemma-2b, claude-3.5-sonnet, command-r-plus, qwen-2.5-72b, grok-beta (37)| -| **Image Models (Image Generation)** | flux-realism, flux-pro, dall-e-3, flux-dev | -| **Authentication** | ❌ | -| **Streaming** | ✔ī¸ | -| **System message** | ✔ī¸ | -| **Message history** | ✔ī¸ | -### Blackbox AI -| Provider | `g4f.Provider.Blackbox` | -| -------- | ---- | -| **Website** | [blackbox.ai](https://www.blackbox.ai) | -| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) | -| **Models** | gpt-4, gpt-4o, llama-3.1-8b, llama-3.1-70b, llama-3.1-405b, gemini-pro, gemini-flash, claude-3.5-sonnet, blackboxai, blackboxai-pro, llama-3.3-70b, mixtral-7b, deepseek-chat, dbrx-instruct, llama-3.1-405b, qwq-32b, hermes-2-dpo (46)| -| **Image Models (Image Generation)** | flux (2)| -| **Vision (Image Upload)** | ✔ī¸ | -| **Authentication** | ❌ | -| **Streaming** | ✔ī¸ | -| **System message** | ✔ī¸ | -| **Message history** | ✔ī¸ | -### Blackbox2 -| Provider | `g4f.Provider.Blackbox2` | -| -------- | ---- | -| **Website** | [blackbox.ai](https://www.blackbox.ai) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Models** | llama-3.1-70b (2)| -| **Image Models (Image Generation)** | flux | -| **Authentication** | ❌ | -| **Streaming** | ❌ | -| **System message** | ✔ī¸ | -| **Message history** | ✔ī¸ | -### ChatGpt -| Provider | `g4f.Provider.ChatGpt` | -| -------- | ---- | -| **Website** | [chatgpt.com](https://chatgpt.com) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Models** | gpt-3.5-turbo, gpt-4, gpt-4-turbo, gpt-4o, gpt-4o-mini (7)| -| **Authentication** | ❌ | -| **Streaming** | ✔ī¸ | -| **System message** | ✔ī¸ | -| **Message history** | ✔ī¸ | -### ChatGptEs -| Provider | `g4f.Provider.ChatGptEs` | -| -------- | ---- | -| **Website** | [chatgpt.es](https://chatgpt.es) | -| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) | -| **Models** | gpt-4, gpt-4o, gpt-4o-mini (3)| -| **Authentication** | ❌ | -| **Streaming** | ✔ī¸ | -| **System message** | ✔ī¸ | -| **Message history** | ✔ī¸ | -### Cloudflare AI -| Provider | `g4f.Provider.Cloudflare` | -| -------- | ---- | -| **Website** | [playground.ai.cloudflare.com](https://playground.ai.cloudflare.com) | -| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) | -| **Models** | llama-2-7b, llama-3-8b, llama-3.1-8b, llama-3.2-1b, qwen-1.5-7b (37)| -| **Authentication** | ❌ | -| **Streaming** | ✔ī¸ | -| **System message** | ✔ī¸ | -| **Message history** | ✔ī¸ | -### Microsoft Copilot -| Provider | `g4f.Provider.Copilot` | -| -------- | ---- | -| 
**Website** | [copilot.microsoft.com](https://copilot.microsoft.com) | -| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) | -| **Models** | gpt-4 (1)| -| **Authentication** | ❌ | -| **Streaming** | ✔ī¸ | -| **System message** | ❌ | -| **Message history** | ❌ | -### DuckDuckGo AI Chat -| Provider | `g4f.Provider.DDG` | -| -------- | ---- | -| **Website** | [duckduckgo.com](https://duckduckgo.com/aichat) | -| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) | -| **Models** | gpt-4, gpt-4o, gpt-4o-mini, llama-3.1-70b, mixtral-8x7b, claude-3-haiku (8)| -| **Authentication** | ❌ | -| **Streaming** | ✔ī¸ | -| **System message** | ✔ī¸ | -| **Message history** | ✔ī¸ | -### DarkAI -| Provider | `g4f.Provider.DarkAI` | -| -------- | ---- | -| **Website** | [darkai.foundation](https://darkai.foundation/chat) | -| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) | -| **Models** | gpt-3.5-turbo, gpt-4o, llama-3.1-70b (3)| -| **Authentication** | ❌ | -| **Streaming** | ✔ī¸ | -| **System message** | ✔ī¸ | -| **Message history** | ✔ī¸ | -### Flux (HuggingSpace) -| Provider | `g4f.Provider.Flux` | -| -------- | ---- | -| **Website** | [black-forest-labs-flux-1-dev.hf.space](https://black-forest-labs-flux-1-dev.hf.space) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Image Models (Image Generation)** | flux-dev | -| **Authentication** | ❌ | -| **Streaming** | ✔ī¸ | -| **System message** | ❌ | -| **Message history** | ❌ | -### Free2GPT -| Provider | `g4f.Provider.Free2GPT` | -| -------- | ---- | -| **Website** | [chat10.free2gpt.xyz](https://chat10.free2gpt.xyz) | -| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) | -| **Authentication** | ❌ | -| **Streaming** | ✔ī¸ | -| **System message** | ❌ | -| **Message history** | ✔ī¸ | -### FreeGpt -| Provider | `g4f.Provider.FreeGpt` | -| -------- | ---- | -| **Website** | [freegptsnav.aifree.site](https://freegptsnav.aifree.site) | -| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) | -| **Models** | gemini-pro (1)| -| **Authentication** | ❌ | -| **Streaming** | ✔ī¸ | -| **System message** | ✔ī¸ | -| **Message history** | ✔ī¸ | -### GizAI -| Provider | `g4f.Provider.GizAI` | -| -------- | ---- | -| **Website** | [app.giz.ai](https://app.giz.ai/assistant) | -| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) | -| **Models** | gemini-flash (1)| -| **Authentication** | ❌ | -| **Streaming** | ❌ | -| **System message** | ✔ī¸ | -| **Message history** | ✔ī¸ | -### HuggingFace -| Provider | `g4f.Provider.HuggingFace` | -| -------- | ---- | -| **Website** | [huggingface.co](https://huggingface.co/chat) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Models** | llama-3.2-11b, llama-3.3-70b, mistral-nemo, hermes-3, phi-3.5-mini, command-r-plus, qwen-2.5-72b, qwen-2.5-coder-32b, qwq-32b, nemotron-70b (11)| -| **Image Models (Image Generation)** | flux-dev | -| **Authentication** | ❌ | -| **Streaming** | ✔ī¸ | -| **System message** | ❌ | -| **Message history** | ✔ī¸ | -### Liaobots -| Provider | `g4f.Provider.Liaobots` | -| -------- | ---- | -| **Website** | [liaobots.site](https://liaobots.site) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Models** | gpt-4, gpt-4o, gpt-4o-mini, o1-preview, o1-mini, gemini-pro, gemini-flash, claude-3-opus, claude-3-sonnet, claude-3.5-sonnet, grok-beta (14)| -| **Authentication** | ❌ | -| 
**Streaming** | ✔ī¸ | -| **System message** | ✔ī¸ | -| **Message history** | ✔ī¸ | -### GPT4All -| Provider | `g4f.Provider.Local` | -| -------- | ---- | -| **Website** | ❌ | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Authentication** | ❌ | -| **Streaming** | ✔ī¸ | -| **System message** | ✔ī¸ | -| **Message history** | ✔ī¸ | -### Meta AI -| Provider | `g4f.Provider.MetaAI` | -| -------- | ---- | -| **Website** | [meta.ai](https://www.meta.ai) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Models** | meta-ai (1)| -| **Authentication** | ❌ | -| **Streaming** | ✔ī¸ | -| **System message** | ❌ | -| **Message history** | ❌ | -### Mhystical -| Provider | `g4f.Provider.Mhystical` | -| -------- | ---- | -| **Website** | [api.mhystical.cc](https://api.mhystical.cc) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Models** | gpt-4 (1)| -| **Authentication** | ❌ | -| **Streaming** | ❌ | -| **System message** | ❌ | -| **Message history** | ✔ī¸ | -### Ollama -| Provider | `g4f.Provider.Ollama` | -| -------- | ---- | -| **Website** | [ollama.com](https://ollama.com) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Authentication** | ❌ | -| **Streaming** | ✔ī¸ | -| **System message** | ✔ī¸ | -| **Message history** | ✔ī¸ | -### OpenAI ChatGPT -| Provider | `g4f.Provider.OpenaiChat` | -| -------- | ---- | -| **Website** | [chatgpt.com](https://chatgpt.com) | -| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) | -| **Models** | gpt-4, gpt-4o, gpt-4o-mini, o1-preview, o1-mini (8)| -| **Vision (Image Upload)** | ✔ī¸ | -| **Authentication** | ❌ | -| **Streaming** | ✔ī¸ | -| **System message** | ✔ī¸ | -| **Message history** | ✔ī¸ | -### PerplexityLabs -| Provider | `g4f.Provider.PerplexityLabs` | -| -------- | ---- | -| **Website** | [labs.perplexity.ai](https://labs.perplexity.ai) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Models** | llama-3.1-8b, llama-3.1-70b, llama-3.3-70b, sonar-online, sonar-chat, lfm-40b (8)| -| **Authentication** | ❌ | -| **Streaming** | ✔ī¸ | -| **System message** | ❌ | -| **Message history** | ❌ | -### Pi -| Provider | `g4f.Provider.Pi` | -| -------- | ---- | -| **Website** | [pi.ai](https://pi.ai/talk) | -| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) | -| **Authentication** | ❌ | -| **Streaming** | ✔ī¸ | -| **System message** | ❌ | -| **Message history** | ❌ | -### Pizzagpt -| Provider | `g4f.Provider.Pizzagpt` | -| -------- | ---- | -| **Website** | [pizzagpt.it](https://www.pizzagpt.it) | -| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) | -| **Models** | gpt-4o-mini (1)| -| **Authentication** | ❌ | -| **Streaming** | ✔ī¸ | -| **System message** | ❌ | -| **Message history** | ❌ | -### Pollinations AI -| Provider | `g4f.Provider.PollinationsAI` | -| -------- | ---- | -| **Website** | [pollinations.ai](https://pollinations.ai) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Models** | gpt-4, gpt-4o, llama-3.1-70b, mistral-nemo, mistral-large, claude-3.5-sonnet, command-r, qwen-2.5-coder-32b, p1, evil, midijourney, unity, rtist (25)| -| **Image Models (Image Generation)** | flux, flux-realism, flux-cablyai, flux-anime, flux-3d, any-dark, flux-pro, turbo, midjourney, dall-e-3 | -| **Authentication** | ❌ | -| **Streaming** | ✔ī¸ | -| **System message** | ✔ī¸ | -| **Message history** | ✔ī¸ | -### Prodia 
-| Provider | `g4f.Provider.Prodia` | -| -------- | ---- | -| **Website** | [app.prodia.com](https://app.prodia.com) | -| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) | -| **Authentication** | ❌ | -| **Streaming** | ✔ī¸ | -| **System message** | ❌ | -| **Message history** | ❌ | -### ReplicateHome -| Provider | `g4f.Provider.ReplicateHome` | -| -------- | ---- | -| **Website** | [replicate.com](https://replicate.com) | -| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) | -| **Models** | gemma-2b (4)| -| **Image Models (Image Generation)** | sd-3, sdxl, playground-v2.5 | -| **Authentication** | ❌ | -| **Streaming** | ✔ī¸ | -| **System message** | ✔ī¸ | -| **Message history** | ✔ī¸ | -### Rubiks AI -| Provider | `g4f.Provider.RubiksAI` | -| -------- | ---- | -| **Website** | [rubiks.ai](https://rubiks.ai) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Models** | gpt-4o, gpt-4o-mini, o1-mini, llama-3.1-70b, claude-3.5-sonnet, grok-beta (8)| -| **Authentication** | ❌ | -| **Streaming** | ✔ī¸ | -| **System message** | ✔ī¸ | -| **Message history** | ✔ī¸ | -### TeachAnything -| Provider | `g4f.Provider.TeachAnything` | -| -------- | ---- | -| **Website** | [teach-anything.com](https://www.teach-anything.com) | -| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) | -| **Models** | llama-3.1-70b (1)| -| **Authentication** | ❌ | -| **Streaming** | ✔ī¸ | -| **System message** | ❌ | -| **Message history** | ❌ | -### TheB.AI -| Provider | `g4f.Provider.Theb` | -| -------- | ---- | -| **Website** | [beta.theb.ai](https://beta.theb.ai) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Authentication** | ❌ | -| **Streaming** | ✔ī¸ | -| **System message** | ❌ | -| **Message history** | ❌ | -### You.com -| Provider | `g4f.Provider.You` | -| -------- | ---- | -| **Website** | [you.com](https://you.com) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Models** | gpt-4, gpt-4-turbo, gpt-4o, gpt-4o-mini, llama-3.1-70b, claude-3-opus, claude-3-sonnet, claude-3-haiku, claude-3.5-sonnet, command-r-plus, command-r (20)| -| **Authentication** | ❌ | -| **Streaming** | ✔ī¸ | -| **System message** | ❌ | -| **Message history** | ❌ | - -## Auth - -### Airforce -| Provider | `g4f.Provider.Airforce` | -| -------- | ---- | -| **Website** | [llmplayground.net](https://llmplayground.net) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Models** | gpt-4, gpt-4-turbo, gpt-4o, gpt-4o-mini, o1-mini, llama-2-7b, llama-3.1-8b, llama-3.1-70b, hermes-2-dpo, hermes-2-pro, phi-2, openchat-3.5, deepseek-coder, german-7b, openhermes-2.5, lfm-40b, zephyr-7b, neural-7b, evil (40)| -| **Image Models (Image Generation)** | flux, flux-realism, flux-anime, flux-3d, flux-disney, flux-pixel, flux-4o, any-dark, midjourney, dall-e-3, sdxl, flux-pro | -| **Authentication** | ✔ī¸ | -| **Streaming** | ✔ī¸ | -| **System message** | ✔ī¸ | -| **Message history** | ✔ī¸ | -### Microsoft Designer in Bing -| Provider | `g4f.Provider.BingCreateImages` | -| -------- | ---- | -| **Website** | [bing.com](https://www.bing.com/images/create) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Image Models (Image Generation)** | dall-e-3 | -| **Authentication** | ✔ī¸ | -| **Streaming** | ✔ī¸ | -| **System message** | ❌ | -| **Message history** | ❌ | -### Cerebras Inference -| Provider | `g4f.Provider.Cerebras` | -| -------- | ---- | 
-| **Website** | [inference.cerebras.ai](https://inference.cerebras.ai/) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Models** | llama-3.1-8b, llama-3.1-70b (2)| -| **Authentication** | ✔ī¸ | -| **Streaming** | ✔ī¸ | -| **System message** | ✔ī¸ | -| **Message history** | ✔ī¸ | -### Microsoft Copilot -| Provider | `g4f.Provider.CopilotAccount` | -| -------- | ---- | -| **Website** | [copilot.microsoft.com](https://copilot.microsoft.com) | -| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) | -| **Image Models (Image Generation)** | dall-e-3 | -| **Authentication** | ✔ī¸ | -| **Streaming** | ✔ī¸ | -| **System message** | ❌ | -| **Message history** | ❌ | -### DeepInfra -| Provider | `g4f.Provider.DeepInfra` | -| -------- | ---- | -| **Website** | [deepinfra.com](https://deepinfra.com) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Authentication** | ✔ī¸ | -| **Streaming** | ✔ī¸ | -| **System message** | ✔ī¸ | -| **Message history** | ✔ī¸ | -### DeepInfra Chat -| Provider | `g4f.Provider.DeepInfraChat` | -| -------- | ---- | -| **Website** | [deepinfra.com](https://deepinfra.com/chat) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Models** | llama-3.1-8b, llama-3.1-70b, qwen-2-72b, qwen-2.5-coder-32b, qwq-32b, wizardlm-2-8x22b, nemotron-70b (7)| -| **Authentication** | ✔ī¸ | -| **Streaming** | ✔ī¸ | -| **System message** | ✔ī¸ | -| **Message history** | ✔ī¸ | -### DeepInfraImage -| Provider | `g4f.Provider.DeepInfraImage` | -| -------- | ---- | -| **Website** | [deepinfra.com](https://deepinfra.com) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Authentication** | ✔ī¸ | -| **Streaming** | ✔ī¸ | -| **System message** | ❌ | -| **Message history** | ❌ | -### Google Gemini -| Provider | `g4f.Provider.Gemini` | -| -------- | ---- | -| **Website** | [gemini.google.com](https://gemini.google.com) | -| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) | -| **Models** | gemini-pro, gemini-flash (3)| -| **Image Models (Image Generation)** | gemini | -| **Authentication** | ✔ī¸ | -| **Streaming** | ✔ī¸ | -| **System message** | ❌ | -| **Message history** | ❌ | -### Google Gemini API -| Provider | `g4f.Provider.GeminiPro` | -| -------- | ---- | -| **Website** | [ai.google.dev](https://ai.google.dev) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Models** | gemini-pro, gemini-flash (4)| -| **Authentication** | ✔ī¸ | -| **Streaming** | ✔ī¸ | -| **System message** | ❌ | -| **Message history** | ✔ī¸ | -### GigaChat -| Provider | `g4f.Provider.GigaChat` | -| -------- | ---- | -| **Website** | [developers.sber.ru](https://developers.sber.ru/gigachat) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Models** | GigaChat:latest (3)| -| **Authentication** | ✔ī¸ | -| **Streaming** | ✔ī¸ | -| **System message** | ✔ī¸ | -| **Message history** | ✔ī¸ | -### GithubCopilot -| Provider | `g4f.Provider.GithubCopilot` | -| -------- | ---- | -| **Website** | [github.com](https://github.com/copilot) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Models** | gpt-4o, o1-preview, o1-mini, claude-3.5-sonnet (4)| -| **Authentication** | ✔ī¸ | -| **Streaming** | ✔ī¸ | -| **System message** | ❌ | -| **Message history** | ❌ | -### Groq -| Provider | `g4f.Provider.Groq` | -| -------- | ---- | -| **Website** | 
[console.groq.com](https://console.groq.com/playground) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Models** | mixtral-8x7b (18)| -| **Authentication** | ✔ī¸ | -| **Streaming** | ✔ī¸ | -| **System message** | ✔ī¸ | -| **Message history** | ✔ī¸ | -### HuggingChat -| Provider | `g4f.Provider.HuggingChat` | -| -------- | ---- | -| **Website** | [huggingface.co](https://huggingface.co/chat) | -| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) | -| **Models** | llama-3.2-11b, llama-3.3-70b, mistral-nemo, hermes-3, phi-3.5-mini, command-r-plus, qwen-2.5-72b, qwen-2.5-coder-32b, qwq-32b, nemotron-70b (11)| -| **Image Models (Image Generation)** | flux-dev | -| **Authentication** | ✔ī¸ | -| **Streaming** | ✔ī¸ | -| **System message** | ❌ | -| **Message history** | ❌ | -### HuggingFace (Inference API) -| Provider | `g4f.Provider.HuggingFaceAPI` | -| -------- | ---- | -| **Website** | [api-inference.huggingface.co](https://api-inference.huggingface.co) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Authentication** | ✔ī¸ | -| **Streaming** | ✔ī¸ | -| **System message** | ✔ī¸ | -| **Message history** | ✔ī¸ | -### Meta AI -| Provider | `g4f.Provider.MetaAIAccount` | -| -------- | ---- | -| **Website** | [meta.ai](https://www.meta.ai) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Models** | meta-ai (1)| -| **Authentication** | ✔ī¸ | -| **Streaming** | ✔ī¸ | -| **System message** | ❌ | -| **Message history** | ❌ | -### Microsoft Designer -| Provider | `g4f.Provider.MicrosoftDesigner` | -| -------- | ---- | -| **Website** | [designer.microsoft.com](https://designer.microsoft.com) | -| **Status** | ![Active](https://img.shields.io/badge/Active-brightgreen) | -| **Image Models (Image Generation)** | dall-e-3 | -| **Authentication** | ✔ī¸ | -| **Streaming** | ✔ī¸ | -| **System message** | ❌ | -| **Message history** | ❌ | -### OpenAI API -| Provider | `g4f.Provider.OpenaiAPI` | -| -------- | ---- | -| **Website** | [platform.openai.com](https://platform.openai.com) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Authentication** | ✔ī¸ | -| **Streaming** | ✔ī¸ | -| **System message** | ✔ī¸ | -| **Message history** | ✔ī¸ | -### OpenAI ChatGPT -| Provider | `g4f.Provider.OpenaiAccount` | -| -------- | ---- | -| **Website** | [chatgpt.com](https://chatgpt.com) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Models** | gpt-4o-mini, o1-preview, o1-mini (9)| -| **Image Models (Image Generation)** | dall-e-3 | -| **Vision (Image Upload)** | ✔ī¸ | -| **Authentication** | ✔ī¸ | -| **Streaming** | ✔ī¸ | -| **System message** | ✔ī¸ | -| **Message history** | ✔ī¸ | -### Perplexity API -| Provider | `g4f.Provider.PerplexityApi` | -| -------- | ---- | -| **Website** | [perplexity.ai](https://www.perplexity.ai) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Authentication** | ✔ī¸ | -| **Streaming** | ✔ī¸ | -| **System message** | ✔ī¸ | -| **Message history** | ✔ī¸ | -### Poe -| Provider | `g4f.Provider.Poe` | -| -------- | ---- | -| **Website** | [poe.com](https://poe.com) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Authentication** | ✔ī¸ | -| **Streaming** | ✔ī¸ | -| **System message** | ❌ | -| **Message history** | ❌ | -### Raycast -| Provider | `g4f.Provider.Raycast` | -| -------- | ---- | -| **Website** | [raycast.com](https://raycast.com) | 
-| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Authentication** | ✔ī¸ | -| **Streaming** | ✔ī¸ | -| **System message** | ❌ | -| **Message history** | ❌ | -### Reka -| Provider | `g4f.Provider.Reka` | -| -------- | ---- | -| **Website** | [chat.reka.ai](https://chat.reka.ai/) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Authentication** | ✔ī¸ | -| **Streaming** | ✔ī¸ | -| **System message** | ❌ | -| **Message history** | ❌ | -### Replicate -| Provider | `g4f.Provider.Replicate` | -| -------- | ---- | -| **Website** | [replicate.com](https://replicate.com) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Authentication** | ✔ī¸ | -| **Streaming** | ✔ī¸ | -| **System message** | ❌ | -| **Message history** | ❌ | -### TheB.AI API -| Provider | `g4f.Provider.ThebApi` | -| -------- | ---- | -| **Website** | [theb.ai](https://theb.ai) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Models** | gpt-3.5-turbo, gpt-4, gpt-4-turbo (21)| -| **Authentication** | ✔ī¸ | -| **Streaming** | ✔ī¸ | -| **System message** | ✔ī¸ | -| **Message history** | ✔ī¸ | -### WhiteRabbitNeo -| Provider | `g4f.Provider.WhiteRabbitNeo` | -| -------- | ---- | -| **Website** | [whiterabbitneo.com](https://www.whiterabbitneo.com) | -| **Status** | ![Unknown](https://img.shields.io/badge/Unknown-grey) | -| **Authentication** | ✔ī¸ | -| **Streaming** | ✔ī¸ | -| **System message** | ❌ | -| **Message history** | ✔ī¸ | --------------------------------------------------- -| Label | Provider | Image Model | Vision Model | Website | -| ----- | -------- | ----------- | ------------ | ------- | -| Airforce | `g4f.Provider.Airforce` | flux, flux-realism, flux-anime, flux-3d, flux-disney, flux-pixel, flux-4o, any-dark, midjourney, dall-e-3, sdxl, flux-pro| ❌ | [llmplayground.net](https://llmplayground.net) | -| AmigoChat | `g4f.Provider.AmigoChat` | flux-realism, flux-pro, dall-e-3, flux-dev| ❌ | [amigochat.io](https://amigochat.io/chat/) | -| Microsoft Designer in Bing | `g4f.Provider.BingCreateImages` | dall-e-3| ❌ | [bing.com](https://www.bing.com/images/create) | -| Blackbox AI | `g4f.Provider.Blackbox` | flux| ✔ī¸ | [blackbox.ai](https://www.blackbox.ai) | -| Blackbox2 | `g4f.Provider.Blackbox2` | flux| ❌ | [blackbox.ai](https://www.blackbox.ai) | -| Microsoft Copilot | `g4f.Provider.CopilotAccount` | dall-e-3| ❌ | [copilot.microsoft.com](https://copilot.microsoft.com) | -| DeepInfraImage | `g4f.Provider.DeepInfraImage` | | ❌ | [deepinfra.com](https://deepinfra.com) | -| Flux (HuggingSpace) | `g4f.Provider.Flux` | flux-dev| ❌ | [black-forest-labs-flux-1-dev.hf.space](https://black-forest-labs-flux-1-dev.hf.space) | -| Google Gemini | `g4f.Provider.Gemini` | gemini| ❌ | [gemini.google.com](https://gemini.google.com) | -| HuggingChat | `g4f.Provider.HuggingChat` | flux-dev| ❌ | [huggingface.co](https://huggingface.co/chat) | -| HuggingFace | `g4f.Provider.HuggingFace` | flux-dev| ❌ | [huggingface.co](https://huggingface.co/chat) | -| Meta AI | `g4f.Provider.MetaAIAccount` | | ❌ | [meta.ai](https://www.meta.ai) | -| Microsoft Designer | `g4f.Provider.MicrosoftDesigner` | dall-e-3| ❌ | [designer.microsoft.com](https://designer.microsoft.com) | -| OpenAI ChatGPT | `g4f.Provider.OpenaiAccount` | dall-e-3, gpt-4, gpt-4o, dall-e-3| ✔ī¸ | [chatgpt.com](https://chatgpt.com) | -| OpenAI ChatGPT | `g4f.Provider.OpenaiChat` | ❌| ✔ī¸ | [chatgpt.com](https://chatgpt.com) | -| Pollinations AI | 
`g4f.Provider.PollinationsAI` | flux, flux-realism, flux-cablyai, flux-anime, flux-3d, any-dark, flux-pro, turbo, midjourney, dall-e-3| ❌ | [pollinations.ai](https://pollinations.ai) |
-| Prodia | `g4f.Provider.Prodia` | | ❌ | [app.prodia.com](https://app.prodia.com) |
-| ReplicateHome | `g4f.Provider.ReplicateHome` | sd-3, sdxl, playground-v2.5| ❌ | [replicate.com](https://replicate.com) |
-| You.com | `g4f.Provider.You` | | ❌ | [you.com](https://you.com) |
diff --git a/g4f/Provider/AutonomousAI.py b/g4f/Provider/AutonomousAI.py
index 990fb48e9f6..88e1eeeab95 100644
--- a/g4f/Provider/AutonomousAI.py
+++ b/g4f/Provider/AutonomousAI.py
@@ -32,6 +32,7 @@ class AutonomousAI(AsyncGeneratorProvider, ProviderModelMixin):
         "qwen-2.5-coder-32b": "qwen_coder",
         "hermes-3": "hermes",
         "llama-3.2-90b": "vision",
+        "llama-3.2-70b": "summary",
     }

     @classmethod
diff --git a/g4f/Provider/Blackbox.py b/g4f/Provider/Blackbox.py
index 71d97ee0f97..6d1b8345c5a 100644
--- a/g4f/Provider/Blackbox.py
+++ b/g4f/Provider/Blackbox.py
@@ -1,14 +1,12 @@
 from __future__ import annotations

-from aiohttp import ClientSession, TCPConnector, ClientTimeout
+from aiohttp import ClientSession

-from pathlib import Path
 import re
 import json
 import random
 import string
-
-
+from pathlib import Path
 from ..typing import AsyncResult, Messages, ImagesType

 from ..requests.raise_for_status import raise_for_status
@@ -32,15 +30,14 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
     api_endpoint = "https://www.blackbox.ai/api/chat"

     working = True
-    needs_auth = True
-    supports_stream = False
-    supports_system_message = False
+    supports_stream = True
+    supports_system_message = True
     supports_message_history = True

     default_model = "blackboxai"
     default_vision_model = default_model
     default_image_model = 'ImageGeneration'
-    image_models = [default_image_model]
+    image_models = [default_image_model, "ImageGeneration2"]
     vision_models = [default_vision_model, 'gpt-4o', 'gemini-pro', 'gemini-1.5-flash', 'llama-3.1-8b', 'llama-3.1-70b', 'llama-3.1-405b']

     userSelectedModel = ['gpt-4o', 'gemini-pro', 'claude-sonnet-3.5', 'blackboxai-pro']
@@ -99,7 +96,7 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):
         'builder Agent': {'mode': True, 'id': "builder Agent"},
     }

-    models = list(dict.fromkeys([default_model, *userSelectedModel, *list(agentMode.keys()), *list(trendingAgentMode.keys())]))
+    models = list(dict.fromkeys([default_model, *userSelectedModel, *image_models, *list(agentMode.keys()), *list(trendingAgentMode.keys())]))

     model_aliases = {
         ### chat ###
@@ -116,6 +113,7 @@ class Blackbox(AsyncGeneratorProvider, ProviderModelMixin):

         ### image ###
-        "flux": "ImageGeneration",
+        # "flux" resolves to the newer /api/image-generator endpoint
+        "flux": "ImageGeneration2",
     }

     @classmethod
@@ -215,10 +213,31 @@ async def create_async_generator(
             'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36'
         }

-        connector = TCPConnector(limit=10, ttl_dns_cache=300)
-        timeout = ClientTimeout(total=30)
-
-        async with ClientSession(headers=headers, connector=connector, timeout=timeout) as session:
+        async with ClientSession(headers=headers) as session:
+            if model == "ImageGeneration2":
+                prompt = messages[-1]["content"]
+                data = {
+                    "query": prompt,
+                    "agentMode": True
+                }
+                headers['content-type'] = 'text/plain;charset=UTF-8'
+
+                async with session.post(
+                    "https://www.blackbox.ai/api/image-generator",
+                    json=data,
+                    proxy=proxy,
+                    headers=headers
+                ) as response:
+                    await raise_for_status(response)
+                    response_json = await response.json()
+
+                    if
"markdown" in response_json: + image_url_match = re.search(r'!\[.*?\]\((.*?)\)', response_json["markdown"]) + if image_url_match: + image_url = image_url_match.group(1) + yield ImageResponse(images=[image_url], alt=prompt) + return + if conversation is None: conversation = Conversation(model) conversation.validated_value = await cls.fetch_validated() @@ -312,8 +331,14 @@ async def create_async_generator( yield text_to_yield full_response = text_to_yield - if return_conversation: - conversation.message_history.append({"role": "assistant", "content": full_response}) - yield conversation - - yield FinishReason("stop") + if full_response: + if max_tokens and len(full_response) >= max_tokens: + reason = "length" + else: + reason = "stop" + + if return_conversation: + conversation.message_history.append({"role": "assistant", "content": full_response}) + yield conversation + + yield FinishReason(reason) diff --git a/g4f/Provider/BlackboxCreateAgent.py b/g4f/Provider/BlackboxCreateAgent.py deleted file mode 100644 index f0c9ee91114..00000000000 --- a/g4f/Provider/BlackboxCreateAgent.py +++ /dev/null @@ -1,259 +0,0 @@ -from __future__ import annotations - -import random -import asyncio -import re -import json -from pathlib import Path -from aiohttp import ClientSession -from typing import AsyncIterator, Optional - -from ..typing import AsyncResult, Messages -from ..image import ImageResponse -from .base_provider import AsyncGeneratorProvider, ProviderModelMixin -from ..cookies import get_cookies_dir - -from .. import debug - - -class BlackboxCreateAgent(AsyncGeneratorProvider, ProviderModelMixin): - url = "https://www.blackbox.ai" - api_endpoints = { - "llama-3.1-70b": "https://www.blackbox.ai/api/improve-prompt", - "flux": "https://www.blackbox.ai/api/image-generator" - } - - working = True - supports_system_message = True - supports_message_history = True - - default_model = 'llama-3.1-70b' - chat_models = [default_model] - image_models = ['flux'] - models = [*chat_models, *image_models] - - @classmethod - def _get_cache_file(cls) -> Path: - """Returns the path to the cache file.""" - dir = Path(get_cookies_dir()) - dir.mkdir(exist_ok=True) - return dir / 'blackbox_create_agent.json' - - @classmethod - def _load_cached_value(cls) -> str | None: - cache_file = cls._get_cache_file() - if cache_file.exists(): - try: - with open(cache_file, 'r') as f: - data = json.load(f) - return data.get('validated_value') - except Exception as e: - debug.log(f"Error reading cache file: {e}") - return None - - @classmethod - def _save_cached_value(cls, value: str): - cache_file = cls._get_cache_file() - try: - with open(cache_file, 'w') as f: - json.dump({'validated_value': value}, f) - except Exception as e: - debug.log(f"Error writing to cache file: {e}") - - @classmethod - async def fetch_validated(cls) -> Optional[str]: - """ - Asynchronously retrieves the validated value from cache or website. - - :return: The validated value or None if retrieval fails. 
- """ - cached_value = cls._load_cached_value() - if cached_value: - return cached_value - - js_file_pattern = r'static/chunks/\d{4}-[a-fA-F0-9]+\.js' - v_pattern = r'L\s*=\s*[\'"]([0-9a-fA-F-]{36})[\'"]' - - def is_valid_context(text: str) -> bool: - """Checks if the context is valid.""" - return any(char + '=' in text for char in 'abcdefghijklmnopqrstuvwxyz') - - async with ClientSession() as session: - try: - async with session.get(cls.url) as response: - if response.status != 200: - debug.log("Failed to download the page.") - return cached_value - - page_content = await response.text() - js_files = re.findall(js_file_pattern, page_content) - - for js_file in js_files: - js_url = f"{cls.url}/_next/{js_file}" - async with session.get(js_url) as js_response: - if js_response.status == 200: - js_content = await js_response.text() - for match in re.finditer(v_pattern, js_content): - start = max(0, match.start() - 50) - end = min(len(js_content), match.end() + 50) - context = js_content[start:end] - - if is_valid_context(context): - validated_value = match.group(1) - cls._save_cached_value(validated_value) - return validated_value - except Exception as e: - debug.log(f"Error while retrieving validated_value: {e}") - - return cached_value - - @classmethod - async def create_async_generator( - cls, - model: str, - messages: Messages, - proxy: str = None, - prompt: str = None, - **kwargs - ) -> AsyncIterator[str | ImageResponse]: - """ - Creates an async generator for text or image generation. - """ - if model in cls.chat_models: - async for text in cls._generate_text(model, messages, proxy=proxy, **kwargs): - yield text - elif model in cls.image_models: - prompt = messages[-1]['content'] - async for image in cls._generate_image(model, prompt, proxy=proxy, **kwargs): - yield image - else: - raise ValueError(f"Model {model} not supported") - - @classmethod - async def _generate_text( - cls, - model: str, - messages: Messages, - proxy: str = None, - max_retries: int = 3, - delay: int = 1, - max_tokens: int = None, - **kwargs - ) -> AsyncIterator[str]: - headers = cls._get_headers() - - for outer_attempt in range(2): # Add outer loop for retrying with a new key - validated_value = await cls.fetch_validated() - if not validated_value: - raise RuntimeError("Failed to get validated value") - - async with ClientSession(headers=headers) as session: - api_endpoint = cls.api_endpoints[model] - - data = { - "messages": messages, - "max_tokens": max_tokens, - "validated": validated_value - } - - for attempt in range(max_retries): - try: - async with session.post(api_endpoint, json=data, proxy=proxy) as response: - response.raise_for_status() - response_data = await response.json() - - if response_data.get('status') == 200 and 'prompt' in response_data: - yield response_data['prompt'] - return # Successful execution - else: - raise KeyError("Invalid response format or missing 'prompt' key") - except Exception as e: - if attempt == max_retries - 1: - if outer_attempt == 0: # If this is the first attempt with this key - # Remove the cached key and try to get a new one - cls._save_cached_value("") - debug.log("Invalid key, trying to get a new one...") - break # Exit the inner loop to get a new key - else: - raise RuntimeError(f"Error after all attempts: {str(e)}") - else: - wait_time = delay * (2 ** attempt) + random.uniform(0, 1) - debug.log(f"Attempt {attempt + 1} failed. 
Retrying in {wait_time:.2f} seconds...") - await asyncio.sleep(wait_time) - - @classmethod - async def _generate_image( - cls, - model: str, - prompt: str, - proxy: str = None, - **kwargs - ) -> AsyncIterator[ImageResponse]: - headers = { - **cls._get_headers() - } - - api_endpoint = cls.api_endpoints[model] - - async with ClientSession(headers=headers) as session: - data = { - "query": prompt - } - - async with session.post(api_endpoint, json=data, proxy=proxy) as response: - response.raise_for_status() - response_data = await response.json() - - if 'markdown' in response_data: - # Extract URL from markdown format: ![](url) - image_url = re.search(r'\!\[\]\((.*?)\)', response_data['markdown']) - if image_url: - yield ImageResponse(images=[image_url.group(1)], alt=prompt) - else: - raise ValueError("Could not extract image URL from markdown") - else: - raise KeyError("'markdown' key not found in response") - - @staticmethod - def _get_headers() -> dict: - return { - 'accept': '*/*', - 'accept-language': 'en-US,en;q=0.9', - 'authorization': f'Bearer 56c8eeff9971269d7a7e625ff88e8a83a34a556003a5c87c289ebe9a3d8a3d2c', - 'content-type': 'application/json', - 'origin': 'https://www.blackbox.ai', - 'referer': 'https://www.blackbox.ai', - 'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36' - } - - @classmethod - async def create_async( - cls, - model: str, - messages: Messages, - proxy: str = None, - **kwargs - ) -> AsyncResult: - """ - Creates an async response for the provider. - - Args: - model: The model to use - messages: The messages to process - proxy: Optional proxy to use - **kwargs: Additional arguments - - Returns: - AsyncResult: The response from the provider - """ - if not model: - model = cls.default_model - if model in cls.chat_models: - async for text in cls._generate_text(model, messages, proxy=proxy, **kwargs): - return text - elif model in cls.image_models: - prompt = messages[-1]['content'] - async for image in cls._generate_image(model, prompt, proxy=proxy, **kwargs): - return image - else: - raise ValueError(f"Model {model} not supported") diff --git a/g4f/Provider/Copilot.py b/g4f/Provider/Copilot.py index e958acfdaed..a259bda8107 100644 --- a/g4f/Provider/Copilot.py +++ b/g4f/Provider/Copilot.py @@ -39,12 +39,15 @@ def __init__(self, conversation_id: str): class Copilot(AbstractProvider, ProviderModelMixin): label = "Microsoft Copilot" url = "https://copilot.microsoft.com" + working = True supports_stream = True + default_model = "Copilot" models = [default_model] model_aliases = { - "gpt-4": "Copilot", + "gpt-4": default_model, + "gpt-4o": default_model, } websocket_url = "wss://copilot.microsoft.com/c/api/chat?api-version=2" @@ -254,4 +257,4 @@ def readHAR(url: str): def get_clarity() -> bytes: #{"e":["0.7.58",5,7284,4779,"n59ae4ieqq","aln5en","1upufhz",1,0,0],"a":[[7323,12,65,217,324],[7344,12,65,214,329],[7385,12,65,211,334],[7407,12,65,210,337],[7428,12,65,209,338],[7461,12,65,209,339],[7497,12,65,209,339],[7531,12,65,208,340],[7545,12,65,208,342],[11654,13,65,208,342],[11728,14,65,208,342],[11728,9,65,208,342,17535,19455,0,0,0,"Annehmen",null,"52w7wqv1r.8ovjfyrpu",1],[7284,4,1,393,968,393,968,0,0,231,310,939,0],[12063,0,2,147,3,4,4,18,5,1,10,79,25,15],[12063,36,6,[11938,0]]]} body = 
base64.b64decode("H4sIAAAAAAAAA23RwU7DMAwG4HfJ2aqS2E5ibjxH1cMOnQYqYZvUTQPx7vyJRGGAemj01XWcP+9udg+j80MetDhSyrEISc5GrqrtZnmaTydHbrdUnSsWYT2u+8Obo0Ce/IQvaDBmjkwhUlKKIRNHmQgosqEArWPRDQMx90rxeUMPzB1j+UJvwNIxhTvsPcXyX1T+rizE4juK3mEEhpAUg/JvzW1/+U/tB1LATmhqotoiweMea50PLy2vui4LOY3XfD1dwnkor5fn/e18XBFgm6fHjSzZmCyV7d3aRByAEYextaTHEH3i5pgKGVP/s+DScE5PuLKIpW6FnCi1gY3Rbpqmj0/DI/+L7QEAAA==") - return body \ No newline at end of file + return body diff --git a/g4f/Provider/DDG.py b/g4f/Provider/DDG.py index fbe0ad4bf59..254901f8b4f 100644 --- a/g4f/Provider/DDG.py +++ b/g4f/Provider/DDG.py @@ -1,19 +1,33 @@ from __future__ import annotations +import time from aiohttp import ClientSession, ClientTimeout import json import asyncio import random -from ..typing import AsyncResult, Messages +from ..typing import AsyncResult, Messages, Cookies from ..requests.raise_for_status import raise_for_status from .base_provider import AsyncGeneratorProvider, ProviderModelMixin from .helper import format_prompt from ..providers.response import FinishReason, JsonConversation +class DuckDuckGoSearchException(Exception): + """Base exception class for duckduckgo_search.""" + +class RatelimitException(DuckDuckGoSearchException): + """Raised for rate limit exceeded errors during API requests.""" + +class TimeoutException(DuckDuckGoSearchException): + """Raised for timeout errors during API requests.""" + +class ConversationLimitException(DuckDuckGoSearchException): + """Raised for conversation limit during API requests to AI endpoint.""" + class Conversation(JsonConversation): vqd: str = None message_history: Messages = [] + cookies: dict = {} def __init__(self, model: str): self.model = model @@ -39,20 +53,40 @@ class DDG(AsyncGeneratorProvider, ProviderModelMixin): "mixtral-8x7b": "mistralai/Mixtral-8x7B-Instruct-v0.1", } + last_request_time = 0 + + @classmethod + def validate_model(cls, model: str) -> str: + """Validates and returns the correct model name""" + if model in cls.model_aliases: + model = cls.model_aliases[model] + if model not in cls.models: + raise ValueError(f"Model {model} not supported. Available models: {cls.models}") + return model + + @classmethod + async def sleep(cls): + """Implements rate limiting between requests""" + now = time.time() + if cls.last_request_time > 0: + delay = max(0.0, 0.75 - (now - cls.last_request_time)) + if delay > 0: + await asyncio.sleep(delay) + cls.last_request_time = now + @classmethod async def fetch_vqd(cls, session: ClientSession, max_retries: int = 3) -> str: - """ - Fetches the required VQD token for the chat session with retries. 
- """ + """Fetches the required VQD token for the chat session with retries.""" headers = { "accept": "text/event-stream", - "content-type": "application/json", + "content-type": "application/json", "x-vqd-accept": "1", - "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/121.0.0.0 Safari/537.36" + "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36" } for attempt in range(max_retries): try: + await cls.sleep() async with session.get(cls.status_url, headers=headers) as response: if response.status == 200: vqd = response.headers.get("x-vqd-4", "") @@ -81,50 +115,92 @@ async def create_async_generator( messages: Messages, proxy: str = None, timeout: int = 30, + cookies: Cookies = None, conversation: Conversation = None, return_conversation: bool = False, **kwargs - ) -> AsyncResult: - model = cls.get_model(model) - async with ClientSession(timeout=ClientTimeout(total=timeout)) as session: - # Fetch VQD token - if conversation is None: - conversation = Conversation(model) - conversation.vqd = await cls.fetch_vqd(session) - conversation.message_history = [{"role": "user", "content": format_prompt(messages)}] - else: - conversation.message_history.append(messages[-1]) - headers = { - "accept": "text/event-stream", - "content-type": "application/json", - "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/121.0.0.0 Safari/537.36", - "x-vqd-4": conversation.vqd, - } - data = { - "model": model, - "messages": conversation.message_history, - } - async with session.post(cls.api_endpoint, json=data, headers=headers, proxy=proxy) as response: - await raise_for_status(response) - reason = None - full_message = "" - async for line in response.content: - line = line.decode("utf-8").strip() - if line.startswith("data:"): - try: - message = json.loads(line[5:].strip()) - if "message" in message: - if message["message"]: - yield message["message"] - full_message += message["message"] - reason = "length" - else: - reason = "stop" - except json.JSONDecodeError: - continue - if return_conversation: - conversation.message_history.append({"role": "assistant", "content": full_message}) - conversation.vqd = response.headers.get("x-vqd-4", conversation.vqd) - yield conversation - if reason is not None: - yield FinishReason(reason) + ) -> AsyncResult: + model = cls.validate_model(model) + + if cookies is None and conversation is not None: + cookies = conversation.cookies + + try: + async with ClientSession(timeout=ClientTimeout(total=timeout), cookies=cookies) as session: + if conversation is None: + conversation = Conversation(model) + conversation.vqd = await cls.fetch_vqd(session) + conversation.message_history = [{"role": "user", "content": format_prompt(messages)}] + else: + conversation.message_history.append(messages[-1]) + + headers = { + "accept": "text/event-stream", + "content-type": "application/json", + "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36", + "x-vqd-4": conversation.vqd, + } + + data = { + "model": model, + "messages": conversation.message_history, + } + + await cls.sleep() + try: + async with session.post(cls.api_endpoint, json=data, headers=headers, proxy=proxy) as response: + await raise_for_status(response) + reason = None + full_message = "" + + async for line in response.content: + line = line.decode("utf-8").strip() + if line.startswith("data:"): + try: + message = json.loads(line[5:].strip()) + + if "action" in message and 
message["action"] == "error": + error_type = message.get("type", "") + if message.get("status") == 429: + if error_type == "ERR_CONVERSATION_LIMIT": + raise ConversationLimitException(error_type) + raise RatelimitException(error_type) + raise DuckDuckGoSearchException(error_type) + + if "message" in message: + if message["message"]: + yield message["message"] + full_message += message["message"] + reason = "length" + else: + reason = "stop" + except json.JSONDecodeError: + continue + + if return_conversation: + conversation.message_history.append({"role": "assistant", "content": full_message}) + conversation.vqd = response.headers.get("x-vqd-4", conversation.vqd) + conversation.cookies = { + n: c.value + for n, c in session.cookie_jar.filter_cookies(cls.url).items() + } + + if reason is not None: + yield FinishReason(reason) + + if return_conversation: + yield conversation + + except asyncio.TimeoutError as e: + raise TimeoutException(f"Request timed out: {str(e)}") + except Exception as e: + if "time" in str(e).lower(): + raise TimeoutException(f"Request timed out: {str(e)}") + raise DuckDuckGoSearchException(f"Request failed: {str(e)}") + + except Exception as e: + if isinstance(e, (RatelimitException, TimeoutException, ConversationLimitException)): + raise + if "time" in str(e).lower(): + raise TimeoutException(f"Request timed out: {str(e)}") + raise DuckDuckGoSearchException(f"Request failed: {str(e)}") diff --git a/g4f/Provider/not_working/DeepInfraChat.py b/g4f/Provider/DeepInfraChat.py similarity index 92% rename from g4f/Provider/not_working/DeepInfraChat.py rename to g4f/Provider/DeepInfraChat.py index 17e6a2846aa..c37c9e949e6 100644 --- a/g4f/Provider/not_working/DeepInfraChat.py +++ b/g4f/Provider/DeepInfraChat.py @@ -3,15 +3,15 @@ import json from aiohttp import ClientSession -from ...typing import AsyncResult, Messages -from ...requests.raise_for_status import raise_for_status -from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin +from ..typing import AsyncResult, Messages +from ..requests.raise_for_status import raise_for_status +from .base_provider import AsyncGeneratorProvider, ProviderModelMixin class DeepInfraChat(AsyncGeneratorProvider, ProviderModelMixin): url = "https://deepinfra.com/chat" api_endpoint = "https://api.deepinfra.com/v1/openai/chat/completions" - working = False + working = True supports_stream = True supports_system_message = True supports_message_history = True @@ -24,6 +24,7 @@ class DeepInfraChat(AsyncGeneratorProvider, ProviderModelMixin): 'meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo', 'Qwen/QwQ-32B-Preview', 'microsoft/WizardLM-2-8x22B', + 'microsoft/WizardLM-2-7B', 'Qwen/Qwen2.5-72B-Instruct', 'Qwen/Qwen2.5-Coder-32B-Instruct', 'nvidia/Llama-3.1-Nemotron-70B-Instruct', @@ -35,6 +36,7 @@ class DeepInfraChat(AsyncGeneratorProvider, ProviderModelMixin): "llama-3.1-70b": "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo", "qwq-32b": "Qwen/QwQ-32B-Preview", "wizardlm-2-8x22b": "microsoft/WizardLM-2-8x22B", + "wizardlm-2-7b": "microsoft/WizardLM-2-7B", "qwen-2-72b": "Qwen/Qwen2.5-72B-Instruct", "qwen-2.5-coder-32b": "Qwen/Qwen2.5-Coder-32B-Instruct", "nemotron-70b": "nvidia/Llama-3.1-Nemotron-70B-Instruct", diff --git a/g4f/Provider/Free2GPT.py b/g4f/Provider/Free2GPT.py index 6ba9ac0f8d4..f15f73de775 100644 --- a/g4f/Provider/Free2GPT.py +++ b/g4f/Provider/Free2GPT.py @@ -17,6 +17,7 @@ class Free2GPT(AsyncGeneratorProvider, ProviderModelMixin): working = True supports_message_history = True default_model = 'mistral-7b' + models = 
[default_model] @classmethod async def create_async_generator( diff --git a/g4f/Provider/Jmuz.py b/g4f/Provider/Jmuz.py index a5084fc01fc..6ace15e3d72 100644 --- a/g4f/Provider/Jmuz.py +++ b/g4f/Provider/Jmuz.py @@ -17,25 +17,29 @@ class Jmuz(OpenaiAPI): default_model = "gpt-4o" model_aliases = { - "gemini": "gemini-exp", - "deepseek-chat": "deepseek-2.5", "qwq-32b": "qwq-32b-preview" } - + @classmethod def get_models(cls): if not cls.models: cls.models = super().get_models(api_key=cls.api_key, api_base=cls.api_base) return cls.models + @classmethod + def get_model(cls, model: str, **kwargs) -> str: + if model in cls.model_aliases: + model = cls.model_aliases[model] + if model in cls.get_models(): + return model + return cls.default_model + @classmethod async def create_async_generator( cls, model: str, messages: Messages, stream: bool = False, - api_key: str = None, - api_base: str = None, **kwargs ) -> AsyncResult: model = cls.get_model(model) @@ -43,11 +45,10 @@ async def create_async_generator( "Authorization": f"Bearer {cls.api_key}", "Content-Type": "application/json", "accept": "*/*", - "cache-control": "no-cache", "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36" } - started = False - async for chunk in super().create_async_generator( + + async for response in super().create_async_generator( model=model, messages=messages, api_base=cls.api_base, @@ -56,10 +57,10 @@ async def create_async_generator( headers=headers, **kwargs ): - if isinstance(chunk, str) and cls.url in chunk: - continue - if isinstance(chunk, str) and not started: - chunk = chunk.lstrip() - if chunk: - started = True - yield chunk + if isinstance(response, str) and "discord.gg" not in response: + yield response + elif not isinstance(response, str): + yield response + + if isinstance(response, dict) and response.get('finish_reason') == 'stop': + break diff --git a/g4f/Provider/Mhystical.py b/g4f/Provider/Mhystical.py index 570cc85cd5f..89f4be9bd09 100644 --- a/g4f/Provider/Mhystical.py +++ b/g4f/Provider/Mhystical.py @@ -19,6 +19,8 @@ class Mhystical(OpenaiAPI): url = "https://mhystical.cc" api_endpoint = "https://api.mhystical.cc/v1/completions" login_url = "https://mhystical.cc/dashboard" + api_key = "mhystical" + working = True needs_auth = False supports_stream = False # Set to False, as streaming is not specified in ChatifyAI @@ -38,12 +40,11 @@ def create_async_generator( model: str, messages: Messages, stream: bool = False, - api_key: str = "mhystical", + api_key: str = None, **kwargs ) -> AsyncResult: - model = cls.get_model(model) headers = { - "x-api-key": api_key, + "x-api-key": api_key or cls.api_key, "Content-Type": "application/json", "accept": "*/*", "cache-control": "no-cache", @@ -58,4 +59,4 @@ def create_async_generator( api_endpoint=cls.api_endpoint, headers=headers, **kwargs - ) \ No newline at end of file + ) diff --git a/g4f/Provider/OIVSCode.py b/g4f/Provider/OIVSCode.py new file mode 100644 index 00000000000..56e6ceb8b63 --- /dev/null +++ b/g4f/Provider/OIVSCode.py @@ -0,0 +1,100 @@ +from __future__ import annotations + +import json +from aiohttp import ClientSession + +from ..image import to_data_uri +from ..typing import AsyncResult, Messages, ImagesType +from ..requests.raise_for_status import raise_for_status +from .base_provider import AsyncGeneratorProvider, ProviderModelMixin +from ..providers.response import FinishReason + + +class OIVSCode(AsyncGeneratorProvider, ProviderModelMixin): + label = "OI VSCode Server" + url = "https://oi-vscode-server.onrender.com" 
+ api_endpoint = "https://oi-vscode-server.onrender.com/v1/chat/completions" + + working = True + supports_stream = True + supports_system_message = True + supports_message_history = True + + default_model = "gpt-4o-mini-2024-07-18" + default_vision_model = default_model + vision_models = [default_model, "gpt-4o-mini"] + models = vision_models + + model_aliases = {"gpt-4o-mini": "gpt-4o-mini-2024-07-18"} + + @classmethod + async def create_async_generator( + cls, + model: str, + messages: Messages, + stream: bool = False, + images: ImagesType = None, + proxy: str = None, + **kwargs + ) -> AsyncResult: + headers = { + "accept": "*/*", + "accept-language": "en-US,en;q=0.9", + "content-type": "application/json", + "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36" + } + + async with ClientSession(headers=headers) as session: + + if images is not None: + messages[-1]['content'] = [ + { + "type": "text", + "text": messages[-1]['content'] + }, + *[ + { + "type": "image_url", + "image_url": { + "url": to_data_uri(image) + } + } + for image, _ in images + ] + ] + + data = { + "model": model, + "stream": stream, + "messages": messages + } + + async with session.post(cls.api_endpoint, json=data, proxy=proxy) as response: + await raise_for_status(response) + + full_response = "" + + if stream: + async for line in response.content: + if line: + line = line.decode() + if line.startswith("data: "): + if line.strip() == "data: [DONE]": + break + try: + data = json.loads(line[6:]) + if content := data["choices"][0]["delta"].get("content"): + yield content + full_response += content + except (json.JSONDecodeError, KeyError, IndexError): + continue + + reason = "length" if len(full_response) > 0 else "stop" + yield FinishReason(reason) + else: + response_data = await response.json() + full_response = response_data["choices"][0]["message"]["content"] + yield full_response + + reason = "length" if len(full_response) > 0 else "stop" + yield FinishReason(reason) diff --git a/g4f/Provider/PollinationsAI.py b/g4f/Provider/PollinationsAI.py index 7755c930e0f..fe1334f32b1 100644 --- a/g4f/Provider/PollinationsAI.py +++ b/g4f/Provider/PollinationsAI.py @@ -26,7 +26,7 @@ class PollinationsAI(AsyncGeneratorProvider, ProviderModelMixin): # API endpoints text_api_endpoint = "https://text.pollinations.ai/" - image_api_endpoint = "https://image.pollinations.ai/" + image_api_endpoint = "https://image.pollinations.ai" # Models configuration default_model = "openai" @@ -36,19 +36,26 @@ class PollinationsAI(AsyncGeneratorProvider, ProviderModelMixin): models = [] additional_models_image = ["midjourney", "dall-e-3"] - additional_models_text = ["claude", "karma", "command-r", "llamalight", "mistral-large", "sur", "sur-mistral"] + additional_models_text = ["claude", "karma", "command-r", "llamalight", "mistral-large", "sur-mistral", "claude-email"] model_aliases = { - "gpt-4o": default_model, + ### Text Models ### + "gpt-4o-mini": default_model, + "gpt-4o": "openai-large", "qwen-2-72b": "qwen", "qwen-2.5-coder-32b": "qwen-coder", "llama-3.3-70b": "llama", "mistral-nemo": "mistral", #"": "karma", + #"": "sur-mistral", "gpt-4": "searchgpt", + "claude-3.5-haiku": "claude-hybridspace", + "claude-3.5-sonnet": "claude-email", "gpt-4": "claude", - "claude-3.5-sonnet": "sur", "deepseek-chat": "deepseek", - "llama-3.2-3b": "llamalight", + "llama-3.1-8b": "llamalight", + + ### Image Models ### + "sd-turbo": "turbo", } @classmethod diff --git a/g4f/Provider/__init__.py b/g4f/Provider/__init__.py index 
6910fbc175d..38028ac444a 100644 --- a/g4f/Provider/__init__.py +++ b/g4f/Provider/__init__.py @@ -17,7 +17,6 @@ from .AIUncensored import AIUncensored from .AutonomousAI import AutonomousAI from .Blackbox import Blackbox -from .BlackboxCreateAgent import BlackboxCreateAgent from .CablyAI import CablyAI from .ChatGLM import ChatGLM from .ChatGpt import ChatGpt @@ -27,6 +26,7 @@ from .Copilot import Copilot from .DarkAI import DarkAI from .DDG import DDG +from .DeepInfraChat import DeepInfraChat from .Free2GPT import Free2GPT from .FreeGpt import FreeGpt from .GizAI import GizAI @@ -35,12 +35,12 @@ from .Jmuz import Jmuz from .Liaobots import Liaobots from .Mhystical import Mhystical +from .OIVSCode import OIVSCode from .PerplexityLabs import PerplexityLabs from .Pi import Pi from .Pizzagpt import Pizzagpt from .PollinationsAI import PollinationsAI from .Prodia import Prodia -from .RubiksAI import RubiksAI from .TeachAnything import TeachAnything from .You import You from .Yqcloud import Yqcloud diff --git a/g4f/Provider/hf_space/Qwen_QVQ_72B.py b/g4f/Provider/hf_space/Qwen_QVQ_72B.py index a9d224ea7d0..01a63b78f9d 100644 --- a/g4f/Provider/hf_space/Qwen_QVQ_72B.py +++ b/g4f/Provider/hf_space/Qwen_QVQ_72B.py @@ -18,7 +18,7 @@ class Qwen_QVQ_72B(AsyncGeneratorProvider, ProviderModelMixin): default_model = "qwen-qvq-72b-preview" models = [default_model] - model_aliases = {"qwq-32b": default_model} + model_aliases = {"qvq-72b": default_model} @classmethod async def create_async_generator( diff --git a/g4f/Provider/hf_space/Qwen_Qwen_2_72B_Instruct.py b/g4f/Provider/hf_space/Qwen_Qwen_2_72B_Instruct.py index a49a0debb69..5ca604ace4d 100644 --- a/g4f/Provider/hf_space/Qwen_Qwen_2_72B_Instruct.py +++ b/g4f/Provider/hf_space/Qwen_Qwen_2_72B_Instruct.py @@ -21,7 +21,7 @@ class Qwen_Qwen_2_72B_Instruct(AsyncGeneratorProvider, ProviderModelMixin): default_model = "qwen-qwen2-72b-instruct" models = [default_model] - model_aliases = {"qwen-2.5-72b": default_model} + model_aliases = {"qwen-2-72b": default_model} @classmethod async def create_async_generator( diff --git a/g4f/Provider/hf_space/__init__.py b/g4f/Provider/hf_space/__init__.py index 98856218574..3d43d8d19b2 100644 --- a/g4f/Provider/hf_space/__init__.py +++ b/g4f/Provider/hf_space/__init__.py @@ -7,10 +7,10 @@ from .BlackForestLabsFlux1Dev import BlackForestLabsFlux1Dev from .BlackForestLabsFlux1Schnell import BlackForestLabsFlux1Schnell from .VoodoohopFlux1Schnell import VoodoohopFlux1Schnell -from .StableDiffusion35Large import StableDiffusion35Large from .CohereForAI import CohereForAI from .Qwen_QVQ_72B import Qwen_QVQ_72B from .Qwen_Qwen_2_72B_Instruct import Qwen_Qwen_2_72B_Instruct +from .StableDiffusion35Large import StableDiffusion35Large class HuggingSpace(AsyncGeneratorProvider, ProviderModelMixin): url = "https://huggingface.co/spaces" @@ -18,9 +18,12 @@ class HuggingSpace(AsyncGeneratorProvider, ProviderModelMixin): working = True - default_model = BlackForestLabsFlux1Dev.default_model + default_model = Qwen_Qwen_2_72B_Instruct.default_model + default_image_model = BlackForestLabsFlux1Dev.default_model default_vision_model = Qwen_QVQ_72B.default_model - providers = [BlackForestLabsFlux1Dev, BlackForestLabsFlux1Schnell, VoodoohopFlux1Schnell, StableDiffusion35Large, CohereForAI, Qwen_QVQ_72B, Qwen_Qwen_2_72B_Instruct] + providers = [BlackForestLabsFlux1Dev, BlackForestLabsFlux1Schnell, VoodoohopFlux1Schnell, CohereForAI, Qwen_QVQ_72B, Qwen_Qwen_2_72B_Instruct, StableDiffusion35Large] + + @classmethod def get_parameters(cls, 
**kwargs) -> dict: @@ -35,7 +38,6 @@ def get_models(cls, **kwargs) -> list[str]: models = [] for provider in cls.providers: models.extend(provider.get_models(**kwargs)) - models.extend(provider.model_aliases.keys()) models = list(set(models)) models.sort() cls.models = models diff --git a/g4f/Provider/needs_auth/Cerebras.py b/g4f/Provider/needs_auth/Cerebras.py index 996e8e111ca..e91fa8b25ee 100644 --- a/g4f/Provider/needs_auth/Cerebras.py +++ b/g4f/Provider/needs_auth/Cerebras.py @@ -15,11 +15,11 @@ class Cerebras(OpenaiAPI): working = True default_model = "llama3.1-70b" models = [ - "llama3.1-70b", + default_model, "llama3.1-8b", "llama-3.3-70b" ] - model_aliases = {"llama-3.1-70b": "llama3.1-70b", "llama-3.1-8b": "llama3.1-8b"} + model_aliases = {"llama-3.1-70b": default_model, "llama-3.1-8b": "llama3.1-8b"} @classmethod async def create_async_generator( diff --git a/g4f/Provider/needs_auth/DeepInfra.py b/g4f/Provider/needs_auth/DeepInfra.py index 869933145ad..ea537b3b48b 100644 --- a/g4f/Provider/needs_auth/DeepInfra.py +++ b/g4f/Provider/needs_auth/DeepInfra.py @@ -2,41 +2,59 @@ import requests from ...typing import AsyncResult, Messages -from .OpenaiAPI import OpenaiAPI from ...requests import StreamSession, raise_for_status from ...image import ImageResponse +from .OpenaiAPI import OpenaiAPI +from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin -class DeepInfra(OpenaiAPI): +class DeepInfra(OpenaiAPI, AsyncGeneratorProvider, ProviderModelMixin): label = "DeepInfra" url = "https://deepinfra.com" login_url = "https://deepinfra.com/dash/api_keys" working = True - api_base = "https://api.deepinfra.com/v1/openai", + api_base = "https://api.deepinfra.com/v1/openai" needs_auth = True supports_stream = True supports_message_history = True default_model = "meta-llama/Meta-Llama-3.1-70B-Instruct" - default_image_model = '' - image_models = [default_image_model] + default_image_model = "stabilityai/sd3.5" + models = [] + image_models = [] @classmethod def get_models(cls, **kwargs): if not cls.models: url = 'https://api.deepinfra.com/models/featured' - models = requests.get(url).json() - cls.models = [model['model_name'] for model in models if model["type"] == "text-generation"] - cls.image_models = [model['model_name'] for model in models if model["reported_type"] == "text-to-image"] + response = requests.get(url) + models = response.json() + + cls.models = [] + cls.image_models = [] + + for model in models: + if model["type"] == "text-generation": + cls.models.append(model['model_name']) + elif model["reported_type"] == "text-to-image": + cls.image_models.append(model['model_name']) + + cls.models.extend(cls.image_models) + return cls.models + @classmethod + def get_image_models(cls, **kwargs): + if not cls.image_models: + cls.get_models() + return cls.image_models + @classmethod def create_async_generator( cls, model: str, messages: Messages, - stream: bool = True, + stream: bool, temperature: float = 0.7, max_tokens: int = 1028, - prompt: str = None, **kwargs ) -> AsyncResult: headers = { @@ -47,12 +65,6 @@ def create_async_generator( 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36', 'X-Deepinfra-Source': 'web-embed', } - - # Check if the model is an image model - if model in cls.image_models: - return cls.create_image_generator(messages[-1]["content"] if prompt is None else prompt, model, headers=headers, **kwargs) - - # Text generation return super().create_async_generator( model, 
messages, stream=stream, @@ -63,7 +75,7 @@ ) @classmethod - async def create_image_generator( + async def create_async_image( cls, prompt: str, model: str, @@ -71,13 +83,26 @@ async def create_image_generator( api_base: str = "https://api.deepinfra.com/v1/inference", proxy: str = None, timeout: int = 180, - headers: dict = None, extra_data: dict = {}, **kwargs - ) -> AsyncResult: - if api_key is not None and headers is not None: + ) -> ImageResponse: + headers = { + 'Accept-Encoding': 'gzip, deflate, br', + 'Accept-Language': 'en-US', + 'Connection': 'keep-alive', + 'Origin': 'https://deepinfra.com', + 'Referer': 'https://deepinfra.com/', + 'Sec-Fetch-Dest': 'empty', + 'Sec-Fetch-Mode': 'cors', + 'Sec-Fetch-Site': 'same-site', + 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36', + 'X-Deepinfra-Source': 'web-embed', + 'sec-ch-ua': '"Google Chrome";v="119", "Chromium";v="119", "Not?A_Brand";v="24"', + 'sec-ch-ua-mobile': '?0', + 'sec-ch-ua-platform': '"macOS"', + } + if api_key is not None: headers["Authorization"] = f"Bearer {api_key}" - async with StreamSession( proxies={"all": proxy}, headers=headers, @@ -85,7 +110,7 @@ ) as session: model = cls.get_model(model) data = {"prompt": prompt, **extra_data} data = {"input": data} if model == cls.default_image_model else data async with session.post(f"{api_base.rstrip('/')}/{model}", json=data) as response: await raise_for_status(response) data = await response.json() @@ -93,4 +118,14 @@ if not images: raise RuntimeError(f"Response: {data}") images = images[0] if len(images) == 1 else images - yield ImageResponse(images, prompt) + return ImageResponse(images, prompt) + + @classmethod + async def create_async_image_generator( + cls, + model: str, + messages: Messages, + prompt: str = None, + **kwargs + ) -> AsyncResult: + yield await cls.create_async_image(messages[-1]["content"] if prompt is None else prompt, model, **kwargs) diff --git a/g4f/Provider/needs_auth/Gemini.py b/g4f/Provider/needs_auth/Gemini.py index 498137e596e..0e1a733fe1f 100644 --- a/g4f/Provider/needs_auth/Gemini.py +++ b/g4f/Provider/needs_auth/Gemini.py @@ -60,13 +60,11 @@ class Gemini(AsyncGeneratorProvider, ProviderModelMixin): working = True default_model = 'gemini' - image_models = ["gemini"] - default_vision_model = "gemini" - models = ["gemini", "gemini-1.5-flash", "gemini-1.5-pro"] - model_aliases = { - "gemini-flash": "gemini-1.5-flash", - "gemini-pro": "gemini-1.5-pro", - } + default_image_model = default_model + default_vision_model = default_model + image_models = [default_image_model] + models = [default_model, "gemini-1.5-flash", "gemini-1.5-pro"] + synthesize_content_type = "audio/vnd.wav" _cookies: Cookies = None diff --git a/g4f/Provider/needs_auth/GigaChat.py b/g4f/Provider/needs_auth/GigaChat.py index 59da21a2691..11eb663552d 100644 --- a/g4f/Provider/needs_auth/GigaChat.py +++ b/g4f/Provider/needs_auth/GigaChat.py @@ -61,7 +61,7 @@ class GigaChat(AsyncGeneratorProvider, ProviderModelMixin): supports_stream = True needs_auth = True default_model = "GigaChat:latest" - models = ["GigaChat:latest", "GigaChat-Plus", "GigaChat-Pro"] + models = [default_model, "GigaChat-Plus", "GigaChat-Pro"] @classmethod async def create_async_generator( diff --git a/g4f/Provider/needs_auth/GlhfChat.py 
b/g4f/Provider/needs_auth/GlhfChat.py index f3a578af17f..be56ebb6472 100644 --- a/g4f/Provider/needs_auth/GlhfChat.py +++ b/g4f/Provider/needs_auth/GlhfChat.py @@ -5,26 +5,33 @@ class GlhfChat(OpenaiAPI): label = "GlhfChat" url = "https://glhf.chat" - login_url = "https://glhf.chat/users/settings/api" + login_url = "https://glhf.chat/user-settings/api" api_base = "https://glhf.chat/api/openai/v1" + working = True - model_aliases = { - 'Qwen2.5-Coder-32B-Instruct': 'hf:Qwen/Qwen2.5-Coder-32B-Instruct', - 'Llama-3.1-405B-Instruct': 'hf:meta-llama/Llama-3.1-405B-Instruct', - 'Llama-3.1-70B-Instruct': 'hf:meta-llama/Llama-3.1-70B-Instruct', - 'Llama-3.1-8B-Instruct': 'hf:meta-llama/Llama-3.1-8B-Instruct', - 'Llama-3.2-3B-Instruct': 'hf:meta-llama/Llama-3.2-3B-Instruct', - 'Llama-3.2-11B-Vision-Instruct': 'hf:meta-llama/Llama-3.2-11B-Vision-Instruct', - 'Llama-3.2-90B-Vision-Instruct': 'hf:meta-llama/Llama-3.2-90B-Vision-Instruct', - 'Qwen2.5-72B-Instruct': 'hf:Qwen/Qwen2.5-72B-Instruct', - 'Llama-3.3-70B-Instruct': 'hf:meta-llama/Llama-3.3-70B-Instruct', - 'gemma-2-9b-it': 'hf:google/gemma-2-9b-it', - 'gemma-2-27b-it': 'hf:google/gemma-2-27b-it', - 'Mistral-7B-Instruct-v0.3': 'hf:mistralai/Mistral-7B-Instruct-v0.3', - 'Mixtral-8x7B-Instruct-v0.1': 'hf:mistralai/Mixtral-8x7B-Instruct-v0.1', - 'Mixtral-8x22B-Instruct-v0.1': 'hf:mistralai/Mixtral-8x22B-Instruct-v0.1', - 'Nous-Hermes-2-Mixtral-8x7B-DPO': 'hf:NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO', - 'Qwen2.5-7B-Instruct': 'hf:Qwen/Qwen2.5-7B-Instruct', - 'SOLAR-10.7B-Instruct-v1.0': 'hf:upstage/SOLAR-10.7B-Instruct-v1.0', - 'Llama-3.1-Nemotron-70B-Instruct-HF': 'hf:nvidia/Llama-3.1-Nemotron-70B-Instruct-HF' - } + + default_model = "hf:meta-llama/Llama-3.3-70B-Instruct" + models = [ + "hf:meta-llama/Llama-3.1-405B-Instruct", + default_model, + "hf:deepseek-ai/DeepSeek-V3", + "hf:Qwen/QwQ-32B-Preview", + "hf:huihui-ai/Llama-3.3-70B-Instruct-abliterated", + "hf:anthracite-org/magnum-v4-12b", + "hf:meta-llama/Llama-3.1-70B-Instruct", + "hf:meta-llama/Llama-3.1-8B-Instruct", + "hf:meta-llama/Llama-3.2-3B-Instruct", + "hf:meta-llama/Llama-3.2-11B-Vision-Instruct", + "hf:meta-llama/Llama-3.2-90B-Vision-Instruct", + "hf:Qwen/Qwen2.5-72B-Instruct", + "hf:Qwen/Qwen2.5-Coder-32B-Instruct", + "hf:google/gemma-2-9b-it", + "hf:google/gemma-2-27b-it", + "hf:mistralai/Mistral-7B-Instruct-v0.3", + "hf:mistralai/Mixtral-8x7B-Instruct-v0.1", + "hf:mistralai/Mixtral-8x22B-Instruct-v0.1", + "hf:NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO", + "hf:Qwen/Qwen2.5-7B-Instruct", + "hf:upstage/SOLAR-10.7B-Instruct-v1.0", + "hf:nvidia/Llama-3.1-Nemotron-70B-Instruct-HF" + ] diff --git a/g4f/Provider/needs_auth/HuggingFaceAPI.py b/g4f/Provider/needs_auth/HuggingFaceAPI.py index a3817b15763..b9cc4a58ef4 100644 --- a/g4f/Provider/needs_auth/HuggingFaceAPI.py +++ b/g4f/Provider/needs_auth/HuggingFaceAPI.py @@ -8,9 +8,11 @@ class HuggingFaceAPI(OpenaiAPI): parent = "HuggingFace" url = "https://api-inference.huggingface.com" api_base = "https://api-inference.huggingface.co/v1" + working = True + default_model = "meta-llama/Llama-3.2-11B-Vision-Instruct" + default_image_model = HuggingChat.default_image_model default_vision_model = default_model - models = [ - *HuggingChat.models - ] \ No newline at end of file + image_models = HuggingChat.image_models + models = HuggingChat.models diff --git a/g4f/Provider/needs_auth/OpenaiAccount.py b/g4f/Provider/needs_auth/OpenaiAccount.py index 5e6c944969b..75944d81f2e 100644 --- a/g4f/Provider/needs_auth/OpenaiAccount.py +++ b/g4f/Provider/needs_auth/OpenaiAccount.py @@ -6,7 +6,7 @@ 
class OpenaiAccount(OpenaiChat): needs_auth = True parent = "OpenaiChat" image_models = ["dall-e-3", "gpt-4", "gpt-4o"] - default_vision_model = "gpt-4o" + default_model = "gpt-4o" + default_vision_model = default_model default_image_model = "dall-e-3" - fallback_models = [*OpenaiChat.fallback_models, default_image_model] - model_aliases = {default_image_model: default_vision_model} \ No newline at end of file + fallback_models = [*OpenaiChat.fallback_models, default_image_model] \ No newline at end of file diff --git a/g4f/Provider/needs_auth/PerplexityApi.py b/g4f/Provider/needs_auth/PerplexityApi.py index 77d71c214bd..3d8aa9bc449 100644 --- a/g4f/Provider/needs_auth/PerplexityApi.py +++ b/g4f/Provider/needs_auth/PerplexityApi.py @@ -11,7 +11,7 @@ class PerplexityApi(OpenaiAPI): default_model = "llama-3-sonar-large-32k-online" models = [ "llama-3-sonar-small-32k-chat", "llama-3-sonar-small-32k-online", "llama-3-sonar-large-32k-chat", - "llama-3-sonar-large-32k-online", + default_model, "llama-3-8b-instruct", diff --git a/g4f/Provider/needs_auth/Replicate.py b/g4f/Provider/needs_auth/Replicate.py index 3c9b23cdb76..328f701f3f3 100644 --- a/g4f/Provider/needs_auth/Replicate.py +++ b/g4f/Provider/needs_auth/Replicate.py @@ -13,9 +13,7 @@ class Replicate(AsyncGeneratorProvider, ProviderModelMixin): working = True needs_auth = True default_model = "meta/meta-llama-3-70b-instruct" - model_aliases = { - "meta-llama/Meta-Llama-3-70B-Instruct": default_model - } + models = [default_model] @classmethod async def create_async_generator( diff --git a/g4f/Provider/needs_auth/__init__.py b/g4f/Provider/needs_auth/__init__.py index 0389801325c..426c9874849 100644 --- a/g4f/Provider/needs_auth/__init__.py +++ b/g4f/Provider/needs_auth/__init__.py @@ -1,3 +1,4 @@ +from .Anthropic import Anthropic from .BingCreateImages import BingCreateImages from .Cerebras import Cerebras from .CopilotAccount import CopilotAccount @@ -20,8 +21,6 @@ from .OpenaiAPI import OpenaiAPI from .OpenaiChat import OpenaiChat from .PerplexityApi import PerplexityApi -from .Poe import Poe -from .Raycast import Raycast from .Reka import Reka from .Replicate import Replicate from .ThebApi import ThebApi diff --git a/g4f/Provider/needs_auth/Poe.py b/g4f/Provider/not_working/Poe.py similarity index 100% rename from g4f/Provider/needs_auth/Poe.py rename to g4f/Provider/not_working/Poe.py diff --git a/g4f/Provider/needs_auth/Raycast.py b/g4f/Provider/not_working/Raycast.py similarity index 98% rename from g4f/Provider/needs_auth/Raycast.py rename to g4f/Provider/not_working/Raycast.py index 008fcad8fd1..67c3393920a 100644 --- a/g4f/Provider/needs_auth/Raycast.py +++ b/g4f/Provider/not_working/Raycast.py @@ -12,7 +12,7 @@ class Raycast(AbstractProvider): url = "https://raycast.com" supports_stream = True needs_auth = True - working = True + working = False models = [ "gpt-3.5-turbo", diff --git a/g4f/Provider/RubiksAI.py b/g4f/Provider/not_working/RubiksAI.py similarity index 96% rename from g4f/Provider/RubiksAI.py rename to g4f/Provider/not_working/RubiksAI.py index 43760f9ddd0..0f3610b48dc 100644 --- a/g4f/Provider/RubiksAI.py +++ b/g4f/Provider/not_working/RubiksAI.py @@ -8,9 +8,9 @@ from aiohttp import ClientSession -from ..typing import AsyncResult, Messages -from .base_provider import AsyncGeneratorProvider, ProviderModelMixin, Sources -from ..requests.raise_for_status import raise_for_status +from ...typing import AsyncResult, Messages +from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin, Sources +from 
...requests.raise_for_status import raise_for_status class RubiksAI(AsyncGeneratorProvider, ProviderModelMixin): label = "Rubiks AI" diff --git a/g4f/Provider/not_working/__init__.py b/g4f/Provider/not_working/__init__.py index 7bec0a36c89..04b6aca05d0 100644 --- a/g4f/Provider/not_working/__init__.py +++ b/g4f/Provider/not_working/__init__.py @@ -6,13 +6,15 @@ from .Chatgpt4o import Chatgpt4o from .Chatgpt4Online import Chatgpt4Online from .ChatgptFree import ChatgptFree -from .DeepInfraChat import DeepInfraChat from .FlowGpt import FlowGpt from .FreeNetfly import FreeNetfly from .Koala import Koala from .MagickPen import MagickPen from .MyShell import MyShell +from .Poe import Poe +from .Raycast import Raycast from .ReplicateHome import ReplicateHome from .RobocodersAPI import RobocodersAPI +from .RubiksAI import RubiksAI from .Theb import Theb from .Upstage import Upstage diff --git a/g4f/client/__init__.py b/g4f/client/__init__.py index 69ed37feb65..b4d6a39e629 100644 --- a/g4f/client/__init__.py +++ b/g4f/client/__init__.py @@ -512,7 +512,7 @@ def __init__(self, client: AsyncClient, provider: Optional[ProviderType] = None) self.client: AsyncClient = client self.provider: ProviderType = provider - def create( + async def create( self, messages: Messages, model: str, @@ -529,7 +529,7 @@ def create( ignore_working: Optional[bool] = False, ignore_stream: Optional[bool] = False, **kwargs - ) -> Awaitable[ChatCompletion]: + ) -> Union[ChatCompletion, AsyncIterator[ChatCompletionChunk]]: model, provider = get_model_and_provider( model, self.provider if provider is None else provider, @@ -542,6 +542,7 @@ def create( kwargs["images"] = [(image, image_name)] if ignore_stream: kwargs["ignore_stream"] = True + response = async_iter_run_tools( provider.get_async_create_function(), model, @@ -555,9 +556,14 @@ def create( ), **kwargs ) + response = async_iter_response(response, stream, response_format, max_tokens, stop) response = async_iter_append_model_and_provider(response, model, provider) - return response if stream else anext(response) + + if stream: + return response + else: + return await anext(response) def stream( self, diff --git a/g4f/models.py b/g4f/models.py index f05ea71f6de..51ddc912a3b 100644 --- a/g4f/models.py +++ b/g4f/models.py @@ -4,13 +4,12 @@ from .Provider import IterListProvider, ProviderType from .Provider import ( + ### no auth required ### AIChatFree, Airforce, AIUncensored, AutonomousAI, Blackbox, - BlackboxCreateAgent, - BingCreateImages, CablyAI, ChatGLM, ChatGpt, @@ -18,30 +17,34 @@ ChatGptt, Cloudflare, Copilot, - CopilotAccount, DarkAI, DDG, - GigaChat, - Gemini, - GeminiPro, - HuggingChat, - HuggingFace, + DeepInfraChat, HuggingSpace, GPROChat, Jmuz, Liaobots, Mhystical, - MetaAI, - MicrosoftDesigner, - OpenaiChat, - OpenaiAccount, + OIVSCode, PerplexityLabs, Pi, PollinationsAI, - Reka, - RubiksAI, TeachAnything, Yqcloud, + + ### needs auth ### + BingCreateImages, + CopilotAccount, + Gemini, + GeminiPro, + GigaChat, + HuggingChat, + HuggingFace, + MetaAI, + MicrosoftDesigner, + OpenaiAccount, + OpenaiChat, + Reka, ) @dataclass(unsafe_hash=True) @@ -74,15 +77,16 @@ class ImageModel(Model): DDG, Blackbox, Copilot, + DeepInfraChat, ChatGptEs, ChatGptt, PollinationsAI, Jmuz, CablyAI, - OpenaiChat, + OIVSCode, DarkAI, - Yqcloud, AIUncensored, + OpenaiChat, Airforce, Cloudflare, ]) @@ -104,20 +108,20 @@ class ImageModel(Model): gpt_4 = Model( name = 'gpt-4', base_provider = 'OpenAI', - best_provider = IterListProvider([DDG, Blackbox, Jmuz, ChatGptEs, ChatGptt, 
PollinationsAI, Copilot, Yqcloud, OpenaiChat, Liaobots, Mhystical]) + best_provider = IterListProvider([DDG, Blackbox, Jmuz, ChatGptEs, ChatGptt, PollinationsAI, Yqcloud, Copilot, OpenaiChat, Liaobots, Mhystical]) ) # gpt-4o gpt_4o = Model( name = 'gpt-4o', base_provider = 'OpenAI', - best_provider = IterListProvider([Blackbox, ChatGptt, Jmuz, ChatGptEs, PollinationsAI, DarkAI, ChatGpt, Liaobots, OpenaiChat]) + best_provider = IterListProvider([Blackbox, ChatGptt, Jmuz, ChatGptEs, PollinationsAI, DarkAI, Copilot, ChatGpt, Liaobots, OpenaiChat]) ) gpt_4o_mini = Model( name = 'gpt-4o-mini', base_provider = 'OpenAI', - best_provider = IterListProvider([DDG, ChatGptEs, ChatGptt, Jmuz, ChatGpt, RubiksAI, Liaobots, OpenaiChat]) + best_provider = IterListProvider([DDG, ChatGptEs, ChatGptt, Jmuz, PollinationsAI, OIVSCode, ChatGpt, Liaobots, OpenaiChat]) ) # o1 @@ -170,13 +174,13 @@ class ImageModel(Model): llama_3_1_8b = Model( name = "llama-3.1-8b", base_provider = "Meta Llama", - best_provider = IterListProvider([Blackbox, Jmuz, Cloudflare, Airforce, PerplexityLabs]) + best_provider = IterListProvider([Blackbox, DeepInfraChat, Jmuz, PollinationsAI, Cloudflare, Airforce, PerplexityLabs]) ) llama_3_1_70b = Model( name = "llama-3.1-70b", base_provider = "Meta Llama", - best_provider = IterListProvider([DDG, Jmuz, Blackbox, BlackboxCreateAgent, TeachAnything, DarkAI, Airforce, RubiksAI, PerplexityLabs]) + best_provider = IterListProvider([DDG, Jmuz, Blackbox, TeachAnything, DarkAI, Airforce, PerplexityLabs]) ) llama_3_1_405b = Model( @@ -192,29 +196,29 @@ class ImageModel(Model): best_provider = Cloudflare ) -llama_3_2_3b = Model( - name = "llama-3.2-3b", - base_provider = "Meta Llama", - best_provider = PollinationsAI -) - llama_3_2_11b = Model( name = "llama-3.2-11b", base_provider = "Meta Llama", best_provider = IterListProvider([Jmuz, HuggingChat, HuggingFace]) ) +llama_3_2_70b = Model( + name = "llama-3.2-70b", + base_provider = "Meta Llama", + best_provider = AutonomousAI +) + llama_3_2_90b = Model( name = "llama-3.2-90b", base_provider = "Meta Llama", - best_provider = IterListProvider([AutonomousAI, Jmuz]) + best_provider = AutonomousAI ) # llama 3.3 llama_3_3_70b = Model( name = "llama-3.3-70b", base_provider = "Meta Llama", - best_provider = IterListProvider([Blackbox, PollinationsAI, AutonomousAI, Jmuz, HuggingChat, HuggingFace, PerplexityLabs]) + best_provider = IterListProvider([Blackbox, DeepInfraChat, PollinationsAI, AutonomousAI, Jmuz, HuggingChat, HuggingFace, PerplexityLabs]) ) ### Mistral ### @@ -263,6 +267,7 @@ class ImageModel(Model): ### Microsoft ### +# phi phi_2 = Model( name = "phi-2", base_provider = "Microsoft", @@ -275,6 +280,19 @@ class ImageModel(Model): best_provider = IterListProvider([HuggingChat, HuggingFace]) ) +# wizardlm +wizardlm_2_7b = Model( + name = 'wizardlm-2-7b', + base_provider = 'Microsoft', + best_provider = DeepInfraChat +) + +wizardlm_2_8x22b = Model( + name = 'wizardlm-2-8x22b', + base_provider = 'Microsoft', + best_provider = IterListProvider([DeepInfraChat, Jmuz]) +) + ### Google DeepMind ### # gemini gemini = Model( @@ -283,6 +301,13 @@ class ImageModel(Model): best_provider = IterListProvider([Jmuz, Gemini]) ) +# gemini-exp +gemini_exp = Model( + name = 'gemini-exp', + base_provider = 'Google', + best_provider = Jmuz +) + # gemini-1.5 gemini_1_5_pro = Model( name = 'gemini-1.5-pro', @@ -327,11 +352,17 @@ class ImageModel(Model): claude_3_opus = Model( name = 'claude-3-opus', base_provider = 'Anthropic', - best_provider = IterListProvider([Jmuz, 
Liaobots]) + best_provider = Liaobots ) # claude 3.5 +claude_3_5_haiku = Model( + name = 'claude-3.5-haiku', + base_provider = 'Anthropic', + best_provider = PollinationsAI +) + claude_3_5_sonnet = Model( name = 'claude-3.5-sonnet', base_provider = 'Anthropic', @@ -389,26 +420,33 @@ class ImageModel(Model): qwen_2_72b = Model( name = 'qwen-2-72b', base_provider = 'Qwen', - best_provider = PollinationsAI + best_provider = IterListProvider([DeepInfraChat, PollinationsAI]) ) # qwen 2.5 qwen_2_5_72b = Model( name = 'qwen-2.5-72b', base_provider = 'Qwen', - best_provider = IterListProvider([Jmuz, HuggingSpace]) + best_provider = HuggingSpace ) qwen_2_5_coder_32b = Model( name = 'qwen-2.5-coder-32b', base_provider = 'Qwen', - best_provider = IterListProvider([Jmuz, PollinationsAI, AutonomousAI, HuggingChat]) + best_provider = IterListProvider([DeepInfraChat, PollinationsAI, AutonomousAI, HuggingChat]) ) +# qwq/qvq qwq_32b = Model( name = 'qwq-32b', base_provider = 'Qwen', - best_provider = IterListProvider([Blackbox, Jmuz, HuggingSpace, HuggingChat]) + best_provider = IterListProvider([Blackbox, DeepInfraChat, Jmuz, HuggingChat]) +) + +qvq_72b = Model( + name = 'qvq-72b', + base_provider = 'Qwen', + best_provider = HuggingSpace ) ### Inflection ### @@ -422,7 +460,7 @@ class ImageModel(Model): deepseek_chat = Model( name = 'deepseek-chat', base_provider = 'DeepSeek', - best_provider = IterListProvider([Blackbox, Jmuz, PollinationsAI]) + best_provider = IterListProvider([Blackbox, PollinationsAI]) ) deepseek_coder = Model( @@ -431,13 +469,6 @@ class ImageModel(Model): best_provider = Airforce ) -### WizardLM ### -wizardlm_2_8x22b = Model( - name = 'wizardlm-2-8x22b', - base_provider = 'WizardLM', - best_provider = Jmuz -) - ### OpenChat ### openchat_3_5 = Model( name = 'openchat-3.5', @@ -470,7 +501,7 @@ class ImageModel(Model): nemotron_70b = Model( name = 'nemotron-70b', base_provider = 'Nvidia', - best_provider = IterListProvider([HuggingChat, HuggingFace]) + best_provider = IterListProvider([DeepInfraChat, HuggingChat, HuggingFace]) ) ### Teknium ### @@ -550,11 +581,6 @@ class ImageModel(Model): base_provider = 'Other', best_provider = PollinationsAI ) -turbo = Model( - name = 'turbo', - base_provider = 'Other', - best_provider = PollinationsAI -) unity = Model( name = 'unity', @@ -577,7 +603,12 @@ class ImageModel(Model): name = 'sdxl', base_provider = 'Stability AI', best_provider = Airforce - +) + +sd_turbo = ImageModel( + name = 'sd-turbo', + base_provider = 'Stability AI', + best_provider = PollinationsAI ) sd_3_5 = ImageModel( @@ -586,12 +617,11 @@ class ImageModel(Model): best_provider = HuggingSpace ) - ### Flux AI ### flux = ImageModel( name = 'flux', base_provider = 'Flux AI', - best_provider = IterListProvider([Blackbox, BlackboxCreateAgent, PollinationsAI, Airforce]) + best_provider = IterListProvider([Blackbox, PollinationsAI, Airforce]) ) flux_pro = ImageModel( @@ -609,7 +639,7 @@ class ImageModel(Model): flux_schnell = ImageModel( name = 'flux-schnell', base_provider = 'Flux AI', - best_provider = IterListProvider([HuggingSpace, HuggingFace]) + best_provider = IterListProvider([HuggingSpace, HuggingChat, HuggingFace]) ) flux_realism = ImageModel( @@ -722,8 +752,8 @@ class ModelUtils: # llama-3.2 llama_3_2_1b.name: llama_3_2_1b, - llama_3_2_3b.name: llama_3_2_3b, llama_3_2_11b.name: llama_3_2_11b, + llama_3_2_70b.name: llama_3_2_70b, llama_3_2_90b.name: llama_3_2_90b, # llama-3.3 @@ -741,13 +771,21 @@ class ModelUtils: hermes_3.name: hermes_3, ### Microsoft ### + # phi 
phi_2.name: phi_2, phi_3_5_mini.name: phi_3_5_mini, + + # wizardlm + wizardlm_2_7b.name: wizardlm_2_7b, + wizardlm_2_8x22b.name: wizardlm_2_8x22b, ### Google ### # gemini gemini.name: gemini, + # gemini-exp + gemini_exp.name: gemini_exp, + # gemini-1.5 gemini_1_5_pro.name: gemini_1_5_pro, gemini_1_5_flash.name: gemini_1_5_flash, @@ -764,6 +802,7 @@ class ModelUtils: claude_3_haiku.name: claude_3_haiku, # claude 3.5 + claude_3_5_haiku.name: claude_3_5_haiku, claude_3_5_sonnet.name: claude_3_5_sonnet, ### Reka AI ### @@ -791,14 +830,14 @@ class ModelUtils: # qwen 2.5 qwen_2_5_72b.name: qwen_2_5_72b, qwen_2_5_coder_32b.name: qwen_2_5_coder_32b, + + # qwq/qvq qwq_32b.name: qwq_32b, + qvq_72b.name: qvq_72b, ### Inflection ### pi.name: pi, - ### WizardLM ### - wizardlm_2_8x22b.name: wizardlm_2_8x22b, - ### OpenChat ### openchat_3_5.name: openchat_3_5, @@ -848,7 +887,6 @@ class ModelUtils: ### Other ### midijourney.name: midijourney, - turbo.name: turbo, unity.name: unity, rtist.name: rtist, @@ -858,6 +896,7 @@ class ModelUtils: ### Stability AI ### sdxl.name: sdxl, + sd_turbo.name: sd_turbo, sd_3_5.name: sd_3_5, ### Flux AI ###
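
With the `g4f/client/__init__.py` change above, `ChatCompletions.create` on the async client becomes a true coroutine: awaiting it returns a `ChatCompletion` when `stream=False`, or an async iterator of `ChatCompletionChunk` objects when `stream=True`. A minimal sketch of the expected call pattern after this change (the model name and prompts are placeholders, and the OpenAI-style `client.chat.completions` path and chunk shape are assumed from the existing client API):

```python
import asyncio
from g4f.client import AsyncClient

async def main():
    client = AsyncClient()

    # stream=False: awaiting create() now returns the finished ChatCompletion.
    completion = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Say hello"}],
    )
    print(completion.choices[0].message.content)

    # stream=True: the awaited call instead hands back an async iterator
    # of ChatCompletionChunk objects to consume incrementally.
    stream = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Count to three"}],
        stream=True,
    )
    async for chunk in stream:
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="")

asyncio.run(main())
```

Similarly, the reworked `DDG` provider now persists cookies and the `x-vqd-4` token on its `Conversation` object and raises typed exceptions instead of failing silently. A short sketch of multi-turn use directly against the provider, under the same assumptions (prompts are placeholders):

```python
import asyncio
from g4f.Provider.DDG import DDG, Conversation, RatelimitException

async def two_turns():
    conversation = None
    for prompt in ("Hi!", "What did I just say?"):
        async for chunk in DDG.create_async_generator(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
            conversation=conversation,  # None on the first turn
            return_conversation=True,   # ask the generator to yield the updated Conversation
        ):
            if isinstance(chunk, str):
                print(chunk, end="")
            elif isinstance(chunk, Conversation):
                conversation = chunk     # carries the vqd token and cookies forward

try:
    asyncio.run(two_turns())
except RatelimitException as exc:
    print(f"Rate limited by the endpoint: {exc}")
```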