ScanFile() increased memory by 300MB after scanning a large file and did not release it #156
I've found this to happen whenever the elf module is used regularly.
If this goes away when you don't import the elf module in your ruleset, I suggest that this is an issue in YARA itself. Is there anything specific about the file you are scanning, or does the same memory leakage happen if you scan 300 MB of zeroes? Can you share a file (or point me to a public file) that can be used to demonstrate the issue?
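For reference, a minimal sketch of producing such a content-neutral baseline file; the output path and size here are arbitrary assumptions, not from this thread:

// Sketch: write a 300 MB file of zeroes to use as a baseline for the
// scan test. The path is arbitrary.
package main

import "os"

func main() {
	f, err := os.Create("/tmp/zeroes.bin")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	// Truncate extends the file to the given size with zero bytes. Note
	// that this may produce a sparse file on most filesystems; write
	// explicit zero blocks instead if a fully allocated file matters.
	if err := f.Truncate(300 << 20); err != nil {
		panic(err)
	}
}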
Can you also share the YARA version you compiled with, @xlango?
I use Ubuntu 22.04 with kernel 5.19. I compiled against YARA 4.4.0 and YARA 4.3.2; both have this problem.
I've tested your rule with the dockerd binary separately, and also with all the files under /usr/bin on Ubuntu 22.04 arm64, using our product that builds on YARA 4.5.1 and go-yara@latest. I didn't see any memory issue. Maybe you didn't call the scanner's Destroy method explicitly, or it's specific to YARA 4.4, but I didn't see anything related to the elf module in the release notes. Please try calling Destroy() and/or runtime.GC() after scanning to see whether there really is such a big leak.
s, err := yara.NewScanner(yaraRules)
if err != nil {
	return matchRuleTypes, err
}
I scanned an 80 MB dockerd binary; after scanning, the process memory had increased by 300 MB and was never released. May I ask why?
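For what it's worth, here is a minimal sketch of the cleanup pattern suggested above, assuming go-yara v4's Scanner API (NewScanner, SetCallback, ScanFile, Destroy); rule compilation and the function name are illustrative, not from this thread:

package main

import (
	"runtime"

	"github.com/hillu/go-yara/v4"
)

// scanOnce scans a single file and releases the scanner immediately,
// instead of leaving cleanup to the garbage collector's finalizer.
func scanOnce(rules *yara.Rules, path string) (yara.MatchRules, error) {
	s, err := yara.NewScanner(rules)
	if err != nil {
		return nil, err
	}
	// Free the scanner's C-side resources as soon as the scan is done.
	defer s.Destroy()

	var matches yara.MatchRules
	if err := s.SetCallback(&matches).ScanFile(path); err != nil {
		return nil, err
	}

	// Forcing a collection here only helps diagnose whether the growth
	// sits on the Go heap or in YARA's C allocations.
	runtime.GC()
	return matches, nil
}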