
Update model configurations, provider implementations, and documentation #2577

Open · wants to merge 29 commits into main
Conversation

@kqlio67 (Contributor) commented Jan 16, 2025

Various updates to model configurations, provider implementations, and documentation:

  • Updated model names and aliases for Qwen QVQ 72B and Qwen 2 72B (@TheFirstNoob)
  • Revised HuggingSpace class configuration, added default_image_model
  • Added "llama-3.2-70b" alias for Llama 3.2 70B model in AutonomousAI
  • Removed BlackboxCreateAgent class
  • Added "gpt-4o" alias for Copilot model
  • Moved api_key to Mhystical class attribute
  • Added models property with default_model value for Free2GPT
  • Simplified Jmuz class implementation
  • Improved image generation and model handling in DeepInfra
  • Standardized default models and removed aliases in Gemini
  • Replaced model aliases with direct model list in GlhfChat (@TheFirstNoob)
  • Removed trailing slash from image generation URL in PollinationsAI (Image generation error #2571)
  • Updated llama and qwen model configurations
  • Enhanced provider documentation and model details
  • Added error handling and rate limiting to the DDG provider
  • Restored the DeepInfraChat provider, which was not working and had been disabled
  • Fixed a bug with streaming completions
  • Added another image generation model, ImageGeneration2, to the Blackbox provider
  • Added a new OIVSCode provider with text models and vision (image upload) support
  • Added a conversation memory class with context handling to the documentation (@TheFirstNoob)

These changes include model name updates, removal of deprecated classes and aliases, addition of new models and providers, improvements in image handling, and documentation updates.

Note: Model verification may fail in the Airforce provider because models cannot be fetched dynamically; the api.airforce service is currently unavailable and unstable.
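As an illustrative sketch (not the actual g4f internals) of how the alias changes above surface to users, a provider-level alias map can resolve a familiar model name to the backend model. The backend identifiers below are assumptions for illustration only:

```python
# Hypothetical sketch of a provider-level model_aliases mapping, in the
# spirit of the "gpt-4o" and "llama-3.2-70b" aliases added in this PR.
# The backend identifiers here are illustrative, not the real g4f values.
model_aliases = {
    "gpt-4o": "Copilot",
    "llama-3.2-70b": "meta-llama/Llama-3.2-70B-Instruct",
}

def resolve_model(name: str) -> str:
    """Return the backend model id for a user-facing alias, or the name unchanged."""
    return model_aliases.get(name, name)
```

With this pattern, `resolve_model("gpt-4o")` maps to the Copilot backend, while unknown names pass through untouched, so existing callers keep working.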

kqlio67 added 9 commits January 17, 2025 00:47
…rror 'ResponseStatusError: Response 429: 文字过长,请删减后重试。' (the Chinese reads: "The text is too long, please shorten it and retry.")
…o DDG provider

- Add custom exception classes for rate limits, timeouts, and conversation limits
- Implement rate limiting with sleep between requests (0.75s minimum delay)
- Add model validation method to check supported models
- Add proper error handling for API responses with custom exceptions
- Improve session cookie handling for conversation persistence
- Clean up User-Agent string and remove redundant code
- Add proper error propagation through async generator

Breaking changes:
- New custom exceptions may require updates to error handling code
- Rate limiting affects request timing and throughput
- Model validation is now stricter

Related:
- Adds error handling similar to standard API clients
- Improves reliability and robustness of chat interactions
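The rate-limiting and custom-exception pattern described above can be sketched as follows. The class names, the `throttle` helper, and the module-level state are illustrative, not necessarily the identifiers used in the DDG provider:

```python
import time

# Hedged sketch of the error-handling pattern from the commit message above.
class RateLimitError(Exception):
    """Raised when the API answers with HTTP 429."""

class ConversationLimitError(Exception):
    """Raised when the conversation exceeds the service's length limit."""

MIN_DELAY = 0.75  # minimum delay between requests, per the commit message
_last_request = 0.0

def throttle() -> float:
    """Sleep so consecutive requests are at least MIN_DELAY apart.

    Returns how long we actually slept (0.0 if no wait was needed).
    """
    global _last_request
    now = time.monotonic()
    wait = max(0.0, MIN_DELAY - (now - _last_request))
    if wait:
        time.sleep(wait)
    _last_request = time.monotonic()
    return wait
```

Raising typed exceptions instead of generic errors is what makes the "breaking changes" note above matter: callers that caught a generic exception may now want to catch `RateLimitError` specifically and retry after a delay.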
@TheFirstNoob

Hey!
Please check the PollinationsAI provider: I checked the models, and we no longer have the qwen and llama models.
Also check the image endpoint, because I get a 404 error when trying to use it.

[screenshot: the 404 error from the image endpoint]

@kqlio67 (Contributor, Author) commented Jan 18, 2025

Hi @TheFirstNoob! 👋

Thank you for bringing this to my attention. I've investigated the issues you mentioned and would like to provide some clarification:

  1. I've fixed the 404 error with the image endpoint, and it should be working correctly now.
  2. Regarding the models (qwen, llama, etc.): were you testing on the official repository or my fork? In my latest fork commit to the PollinationsAI provider, I can confirm that all models listed below are fully functional and have been thoroughly tested:

Here's the complete list of currently available models in my G4F-GUI fork:

- openai
- openai-large
- qwen
- llama
- mistral
- unity
- midijourney
- rtist
- searchgpt
- evil
- p1
- deepseek
- claude-hybridspace
- claude
- karma
- command-r
- llamalight
- mistral-large
- sur-mistral
- flux (Image Generation)
- flux-realism (Image Generation)
- flux-cablyai (Image Generation)
- flux-anime (Image Generation)
- flux-3d (Image Generation)
- any-dark (Image Generation)
- flux-pro (Image Generation)
- turbo (Image Generation)
- midjourney (Image Generation)
- dall-e-3 (Image Generation)

All these models, including qwen and llama, are working properly in my implementation. I've personally tested each one to ensure functionality.

I'm looking forward to having these fixes merged into the main project branch, which will resolve the issues with the PollinationsAI provider.

@TheFirstNoob commented Jan 19, 2025

Hey! I checked the latest update. A few minor points to improve the docs and info:

  1. Thanks for the answer about PollinationsAI. I tested on the website and via a direct call; I think it is a regional issue. I will check again after the merge.
  2. DeepInfraChat should be moved to Need_auth in models.py and the docs, because without a .har or cookies JSON the provider returns "Not authorized" (I tested this provider on version 0.4.1.2 and added a cookies file, but the request still wasn't authorized... maybe I added the cookies to the wrong folder, but HuggingFace works stably on the first try).
  3. I recommend adding docs for providers like Jmuz and DeepSeek marked "Needs token/API key", for more user-friendly information.
  4. Maybe add a docs section like "Conversation memory" with basic code, as in "How to keep the conversation context?" #2580.

Thanks for your work <3

@kqlio67 (Contributor, Author) commented Jan 21, 2025

@TheFirstNoob, thank you for your detailed feedback! Let me address each point:

1. Regarding DeepInfraChat provider:

  • I've thoroughly tested this provider, and it doesn't support cookie/HAR authentication as it's not implemented in the provider.
  • The provider only includes models that work without authentication.
  • The "Not authorized" error that appeared a few days ago has been resolved, and now all listed models work without authentication requirements.

Here's my test code and results:

from g4f.client import Client
from g4f.Provider import DeepInfraChat

def test_deepinfra_chat():
    # Probe every model the provider currently lists with a short test prompt.
    client = Client(provider=DeepInfraChat)
    models = DeepInfraChat.models

    for model in models:
        print(f"Testing model: {model}")

        try:
            response = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": "Say this is a test"}],
                web_search=False
            )

            print("Response:")
            print(response.choices[0].message.content)
            print("\n")

        except Exception as e:
            # A failing model is reported but does not stop the sweep.
            print(f"Error: {str(e)}\n")

test_deepinfra_chat()

Test Results:

Testing model: meta-llama/Llama-3.3-70B-Instruct
Using DeepInfraChat provider and meta-llama/Llama-3.3-70B-Instruct model
Response:
Hello! I'm just a language model, so I don't have feelings or emotions like humans do, but I'm functioning properly and ready to help with any questions or tasks you may have! How about you? How's your day going so far?


Testing model: meta-llama/Meta-Llama-3.1-8B-Instruct
Using DeepInfraChat provider and meta-llama/Meta-Llama-3.1-8B-Instruct model
Response:
I'm just a computer program, so I don't have feelings, but thank you for asking! How can I assist you today?


Testing model: meta-llama/Llama-3.3-70B-Instruct-Turbo
Using DeepInfraChat provider and meta-llama/Llama-3.3-70B-Instruct-Turbo model
Response:
Hello! I'm just a language model, so I don't have emotions or feelings like humans do, but I'm functioning properly and ready to help with any questions or tasks you may have. How about you? How's your day going so far?


Testing model: meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo
Using DeepInfraChat provider and meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo model
Response:
Hello! I'm just a computer program, so I don't have feelings or emotions like humans do, but I'm functioning properly and ready to help with any questions or tasks you have! How about you? How's your day going so far?


Testing model: Qwen/QwQ-32B-Preview
Using DeepInfraChat provider and Qwen/QwQ-32B-Preview model
Response:
As an AI language model, I don't have personal feelings, but I'm here to help you with any questions or topics you'd like to discuss. How can I assist you today?


Testing model: microsoft/WizardLM-2-8x22B
Using DeepInfraChat provider and microsoft/WizardLM-2-8x22B model
Response:
 Hello! I don't have feelings, but I'm here and ready to assist you. How can I help you today?


Testing model: microsoft/WizardLM-2-7B
Using DeepInfraChat provider and microsoft/WizardLM-2-7B model
Response:
Hello! I'm just a bunch of code, so I don't have feelings, but I'm here and ready to help you. How can I assist you today?


Testing model: Qwen/Qwen2.5-72B-Instruct
Using DeepInfraChat provider and Qwen/Qwen2.5-72B-Instruct model
Response:
Hello! I'm just a computer program, so I don't have feelings, but I'm here and ready to help you with any questions or tasks you might have. How are you today? Is there something specific you'd like to talk about or get help with?


Testing model: Qwen/Qwen2.5-Coder-32B-Instruct
Using DeepInfraChat provider and Qwen/Qwen2.5-Coder-32B-Instruct model
Response:
Hello! I'm just a computer program, so I don't have feelings, but I'm here and ready to help you. How can I assist you today?


Testing model: nvidia/Llama-3.1-Nemotron-70B-Instruct
Using DeepInfraChat provider and nvidia/Llama-3.1-Nemotron-70B-Instruct model
Response:
**Hello!**

I'm doing great, thank you for asking! As a computer program, I don't have emotions or physical sensations like humans do, so I don't have good or bad days. I'm always ready to chat, assist, and learn with you, 24/7!

Now, how about you? How's your day going so far? Do you have something on your mind that you'd like to:

1. **Discuss** (hobbies, interests, or topics)?
2. **Ask for help** with (a problem, question, or task)?
3. **Learn something new** (e.g., fun facts, language, or a subject)?
4. **Play a game** or engage in a **fun activity**?

Feel free to pick any of these options or suggest something else. I'm all ears (or rather, all text)!

2. About PollinationsAI:

  • While there may be intermittent 403 errors with certain models, the provider is generally working.
  • We're still investigating whether these are regional issues, but cannot confirm that yet.

3. Documentation improvements:

  • Thank you for the suggestion about adding "Need token/api" documentation for providers like Jmuz and DeepSeek.
  • I'm working on implementing this in a user-friendly way.
  • If you have any specific suggestions for the documentation structure, I'd be happy to hear them to help make it more clear and useful for users.

4. Conversation memory:

  • Yes, I've added this to the documentation as suggested.
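A minimal sketch of such a conversation-memory helper is shown below; the class name and method names are illustrative, not necessarily the ones used in the documentation:

```python
# Hedged sketch of a conversation-memory helper with context handling.
class ConversationMemory:
    def __init__(self, max_messages: int = 20):
        # Cap the context so requests stay within provider length limits.
        self.max_messages = max_messages
        self.messages: list[dict] = []

    def add(self, role: str, content: str) -> None:
        """Append a message and keep only the most recent max_messages."""
        self.messages.append({"role": role, "content": content})
        self.messages = self.messages[-self.max_messages:]

    def context(self) -> list[dict]:
        """Messages suitable for the `messages` argument of a chat completion."""
        return list(self.messages)
```

In use, each user turn is recorded with `add("user", ...)`, each reply with `add("assistant", ...)`, and `context()` is passed as the `messages` argument of `client.chat.completions.create` so the model sees the recent history.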

Thank you for your contributions and suggestions! They're helping make the project better and more user-friendly.

@TheFirstNoob commented Jan 22, 2025

@kqlio67 Hi, HuggingChat has a new model with thinking:
deepseek-ai/DeepSeek-R1-Distill-Qwen-32B

Updated:
Blackbox now has this model too
[screenshot: the model listed in Blackbox]

P.S.
Interesting. Could we try to reverse-engineer the stream processor for thoughts from HuggingChat?
Inside the conversation cookie we have the Reasoning functionality:
[screenshot: the Reasoning field inside the conversation cookie]
