
Docs/Added detailed AI Image Generation Model Documentation #244

Open
wants to merge 3 commits into base: main

Conversation

ParagGhatage

Description:

This PR introduces the following updates:

  • Added a new section for Model Documentation in the main README.md to guide users toward more detailed information about the AI models used in the project.
  • Included a link to the Model Documentation in the main README.md file, directing users to a new documentation file (docs\AI-Models\Image-Generation\stable_deffusion.md) for in-depth setup and configuration details of the AI models.
  • Updated the Additional Resources section with links to relevant external documentation for Tauri, React, and FastAPI.

Changes:

  • README.md:
    • Added the "Model Documentation" section with a link to docs\AI-Models\Image-Generation\stable_deffusion.md.
    • Updated the "Additional Resources" section to include external documentation links.
  • stable_deffusion.md:
    • Added detailed model documentation for AI image generation.

Why this is important:

This update provides clearer guidance to users on where to find comprehensive information about the AI models integrated into the project. It also consolidates helpful resources to make the setup process smoother for contributors and users.

Changes to Documentation:

  • Yes, the main README.md was updated with a new "Model Documentation" section and additional external links.

@ParagGhatage (Author)

This PR adds detailed documentation for #242.

@Jibesh10101011 (Contributor) commented Jan 17, 2025

Hey @ParagGhatage

In the documentation, the model you provided is 3.8 GB:

[screenshot]

[screenshot]

After that, we need to copy the entire folder to image_generation.

This model is consuming a significant amount of storage. Its size may lead to a considerable decline in performance if we use it as it is.

Additionally, I couldn't find any documentation related to this specific model in the ONNX Model Zoo.
If your model doesn't support ONNX Runtime, it won't be compatible with all devices, which limits its integration into the PictoPy backend architecture.

[screenshot]

@ParagGhatage (Author)

Hey @Jibesh10101011,

Model size is around 4 GB.

It doesn't consume that much for the actual image generation task.

We have tested it for various performance issues, and it will not affect performance at all.

@Karn-x7 (Contributor) commented Jan 17, 2025

@Jibesh10101011
@Jibesh10101011 We made sure to do thorough testing; don't worry about it. It works well, and the images it generates are also pretty good.

@Jibesh10101011 (Contributor)

> Hey @Jibesh10101011. Model size is around 4 GB.
>
> It doesn't consume that much for the actual image generation task.
>
> We have tested it for various performance issues, and it will not affect performance at all.

Then the overall size of the project becomes more than 5 to 6 GB.

@Jibesh10101011 (Contributor)

And if we use Docker, it will come to 8 to 9 GB.

@Karn-x7 (Contributor) commented Jan 17, 2025

We used FastAPI for the integration; have a look, @Jibesh10101011.

@ParagGhatage (Author)

@Jibesh10101011,

ONNX doesn't work for all models, and right now we are focusing on getting the features up and running.

Be assured, it will not degrade performance.

As for multi-device compatibility, that will be a future optimization for this feature.

@Jibesh10101011 (Contributor)

> @Jibesh10101011 We made sure to do thorough testing; don't worry about it. It works well, and the images it generates are also pretty good.

It is not about being good; it's about system performance and compatibility.

@ParagGhatage (Author)

> Hey @Jibesh10101011. Model size is around 4 GB.
> It doesn't consume that much for the actual image generation task.
> We have tested it for various performance issues, and it will not affect performance at all.
> We used FastAPI for the integration; have a look.

Our current Docker setup doesn't work with the local file system.

So that will be a point to consider when we find a way to make the model more efficient.
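
For what it's worth, a common way to reconcile Docker with a large local model is to bind-mount the weights instead of copying them into the image. A minimal sketch (the service name, paths, and the `MODEL_DIR` variable are hypothetical, not PictoPy's actual Compose file):

```yaml
# docker-compose.yml (sketch): the image stays small because the ~4 GB
# weights live on the host and are mounted read-only at runtime.
services:
  backend:
    build: ./backend
    volumes:
      - ./models/stable-diffusion:/app/models/stable-diffusion:ro
    environment:
      - MODEL_DIR=/app/models/stable-diffusion
```

With this layout the image size stays at the size of the code and its dependencies, and the weights are shared across container rebuilds.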

@Jibesh10101011 (Contributor)

> @Jibesh10101011,
>
> ONNX doesn't work for all models, and right now we are focusing on getting the features up and running.
>
> Be assured, it will not degrade performance.
>
> As for multi-device compatibility, that will be a future optimization for this feature.

But it violates the architecture of the PictoPy backend.

@Jibesh10101011 (Contributor) commented Jan 17, 2025

> Hey @Jibesh10101011. Model size is around 4 GB.
> It doesn't consume that much for the actual image generation task.
> We have tested it for various performance issues, and it will not affect performance at all.
> We used FastAPI for the integration; have a look.
>
> Our current Docker setup doesn't work with the local file system.
>
> So that will be a point to consider when we find a way to make the model more efficient.

It works; run it via Docker Compose and everything works now. I completed that issue a few days ago.

@ParagGhatage (Author)

> @Jibesh10101011,
> ONNX doesn't work for all models, and right now we are focusing on getting the features up and running.
> Be assured, it will not degrade performance.
> As for multi-device compatibility, that will be a future optimization for this feature.
>
> But it violates the architecture of the PictoPy backend.

What do you mean?

@Jibesh10101011 (Contributor)

> @Jibesh10101011,
> ONNX doesn't work for all models, and right now we are focusing on getting the features up and running.
> Be assured, it will not degrade performance.
> As for multi-device compatibility, that will be a future optimization for this feature.
>
> But it violates the architecture of the PictoPy backend.
>
> What do you mean?

I mean this:

[screenshot]

@Karn-x7 (Contributor) commented Jan 17, 2025

@Jibesh10101011,

> It is not about being good; it's about system performance and compatibility.

Even without ONNX, the feature will work fine because the underlying logic and models are compatible with the current backend setup. ONNX is primarily a performance optimization tool, and its absence won't hinder the core functionality. And we made sure to do our testing.

@ParagGhatage (Author)

> @Jibesh10101011,
> ONNX doesn't work for all models, and right now we are focusing on getting the features up and running.
> Be assured, it will not degrade performance.
> As for multi-device compatibility, that will be a future optimization for this feature.
>
> But it violates the architecture of the PictoPy backend.
>
> What do you mean?
>
> I mean this:
>
> [screenshot]

Like I said, ONNX isn't useful for all models.

PictoPy's current ONNX models are performing below average for image tagging.

What's the point of an ONNX model if it can't reach even 30% accuracy?

@Jibesh10101011 (Contributor)

> @Jibesh10101011,
>
> It is not about being good; it's about system performance and compatibility.
>
> Even without ONNX, the feature will work fine because the underlying logic and models are compatible with the current backend setup. ONNX is primarily a performance optimization tool, and its absence won't hinder the core functionality. The priority is delivering a working feature, and enhancements like ONNX can be integrated later without impacting usability. And we made sure to do our testing.

You can't say that. Everyone uses a different type of machine and OS, and device configurations are not all the same.

@Karn-x7 (Contributor) commented Jan 17, 2025

Bro @Jibesh10101011, I did my testing. My laptop has Optimus support, which lets me limit its performance, so I made sure to test it at the lowest settings I could get:

[screenshot]

And please don't be so casual with words like "violating".

@Jibesh10101011 (Contributor)

> @Jibesh10101011,
> ONNX doesn't work for all models, and right now we are focusing on getting the features up and running.
> Be assured, it will not degrade performance.
> As for multi-device compatibility, that will be a future optimization for this feature.
>
> But it violates the architecture of the PictoPy backend.
>
> What do you mean?
>
> I mean this:
> [screenshot]
>
> Like I said, ONNX isn't useful for all models.
>
> PictoPy's current ONNX models are performing below average for image tagging.
>
> What's the point of an ONNX model if it can't reach even 30% accuracy?

When it comes to ONNX, it's about compatibility, not accuracy. If the model you provide doesn't work on all devices, then for those devices the accuracy is effectively zero. With ONNX, you can ensure that the model runs on any device, and the model size should be minimized as well.
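
As a sketch of the kind of fallback this compatibility argument implies (the function and module names here are hypothetical, not PictoPy's actual backend code), a backend could prefer ONNX Runtime when it is installed and fall back to the framework the model shipped in otherwise:

```python
import importlib.util


def pick_backend(preferred: str = "onnxruntime", fallback: str = "torch") -> str:
    """Return the first importable inference backend.

    Prefer ONNX Runtime for cross-device portability; fall back to the
    model's native framework when ONNX Runtime is not available.
    """
    for name in (preferred, fallback):
        # find_spec returns None (without importing) if the package is absent.
        if importlib.util.find_spec(name) is not None:
            return name
    raise RuntimeError("no supported inference backend is installed")
```

A check like this keeps one code path per device class instead of assuming every machine can load the full 4 GB PyTorch pipeline.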

@Karn-x7 (Contributor) commented Jan 17, 2025

@Jibesh10101011 While ONNX enhances compatibility and minimizes model size, the immediate focus is to deliver functionality rather than universal device coverage. The current approach ensures the feature works on a significant range of devices, prioritizing usability for the majority. Future iterations can incorporate ONNX for broader compatibility and size optimization without delaying initial user access to the feature.


@Jibesh10101011 (Contributor)

> Bro @Jibesh10101011, I did my testing. My laptop has Optimus support, which lets me limit its performance, so I made sure to test it at the lowest settings I could get. [screenshot] And please don't be so casual with words like "violating".

This should never be said like this; whatever you are saying, if all models can be ac

> @Jibesh10101011 While ONNX enhances compatibility and minimizes model size, the immediate focus is to deliver functionality rather than universal device coverage. The current approach ensures the feature works on a significant range of devices, prioritizing usability for the majority. Future iterations can incorporate ONNX for broader compatibility and size optimization without delaying initial user access to the feature.

In the future, there is no certainty whether ONNX will continue to be supported.

And also, the most important thing to note is that your model size is 4.0 GB.

@ParagGhatage (Author) commented Jan 17, 2025

> @Jibesh10101011,
> ONNX doesn't work for all models, and right now we are focusing on getting the features up and running.
> Be assured, it will not degrade performance.
> As for multi-device compatibility, that will be a future optimization for this feature.
>
> But it violates the architecture of the PictoPy backend.
>
> What do you mean?
>
> I mean this:
> [screenshot]
>
> Like I said, ONNX isn't useful for all models.
> PictoPy's current ONNX models are performing below average for image tagging.
> What's the point of an ONNX model if it can't reach even 30% accuracy?
>
> When it comes to ONNX, it's about compatibility, not accuracy. If the model you provide doesn't work on all devices, then for those devices the accuracy is effectively zero. With ONNX, you can ensure that the model runs on any device, and the model size should be minimized as well.

Stable Diffusion is compatible with most major operating systems, including Windows, macOS, and Linux. However, the installation process and system requirements may vary depending on the OS.

3 participants