[WIP] ncmec: store checkpoint occasionally when start, end diff is one second #1731
base: main
Conversation
Force-pushed from 928ddce to 4f12e50
Force-pushed from 2965a46 to 5270515
Force-pushed from 5270515 to d7f207e
Overall looking good, thanks for making this change, and I think it will help a lot!
I am slightly suspicious that the paging URLs can go sour (e.g., I have noticed that the NCMEC API tends to throw exceptions near the very end of the paging list, which makes me think they are being invalidated), so I think adding the time-based invalidation logic is a requirement.
As part of your test plan, can you also attempt fetching past an extremely dense time segment in the NCMEC API and confirm the behavior works as expected?
python-threatexchange/threatexchange/exchanges/tests/test_state_compatibility.py (outdated, resolved)
python-threatexchange/threatexchange/exchanges/impl/ncmec_api.py (outdated, resolved)
updates.extend(entry.updates)
if i % 100 == 0:
blocking: by changing this from elif to if, I think it will now print the large update warning on every update, which is incorrect, no?
It would print for the 0th, which we would not want. I updated this to be (i + 1) % 100 == 0, so it's every 100th iteration. We need to extend updates every time, regardless of i, so this was cleaner than the other options I thought of, but please suggest alternatives.
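For context, here is a minimal sketch of the loop shape being discussed (hypothetical names and data, not the actual ncmec_api.py code): updates is extended on every iteration, but progress is only surfaced on every 100th page via (i + 1) % 100 == 0.

```python
# Hypothetical sketch: accumulate updates every iteration, but only log and
# yield progress on every 100th page so the caller can checkpoint occasionally.
from typing import Iterable, Iterator, List, Tuple


def paged_fetch(pages: Iterable[List[str]]) -> Iterator[Tuple[List[str], int]]:
    updates: List[str] = []
    for i, page in enumerate(pages):
        # Always extend, regardless of i.
        updates.extend(page)
        # (i + 1) % 100 avoids firing on the 0th iteration.
        if (i + 1) % 100 == 0:
            print(f"large fetch ({i}), up to {len(updates)}. storing checkpoint")
            yield updates, i
    yield updates, -1  # final yield so the tail of the data is not lost


if __name__ == "__main__":
    fake_pages = ([f"update-{n}"] for n in range(250))
    for batch, page_idx in paged_fetch(fake_pages):
        pass  # a real caller could persist a checkpoint here
```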
python-threatexchange/threatexchange/exchanges/impl/ncmec_api.py (outdated, resolved)
python-threatexchange/threatexchange/exchanges/tests/test_state_compatibility.py (outdated, resolved)
python-threatexchange/threatexchange/exchanges/clients/ncmec/hash_api.py (outdated, resolved)
log(f"large fetch ({i}), up to {len(updates)}") | ||
updates.extend(entry.updates) | ||
# so store the checkpoint occasionally | ||
log(f"large fetch ({i}), up to {len(updates)}. storing checkpoint") |
nit: You don't actually store the checkpoint by yielding; technically the caller decides whether to keep calling or to store.
Ah, so the original elif block doesn't need to change? The only real change needed is to use the next_url in the for loop on L283? Edit: I think the yield is still needed, just the comment might be incorrect. Let me know if not.
updated the comment 👍
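To illustrate the point about yielding vs. storing, here is a hedged sketch (hypothetical names, not the real fetch API) in which the generator only yields a checkpoint and the caller decides whether to persist it or just keep iterating.

```python
# Hypothetical sketch: the fetch generator yields (updates, checkpoint) pairs;
# nothing is written to disk unless the caller chooses to do so.
import json
from dataclasses import asdict, dataclass
from typing import Iterable, Iterator, List, Tuple


@dataclass
class CheckpointSketch:
    next_url: str


def fetch_iter(
    pages: Iterable[Tuple[List[str], str]]
) -> Iterator[Tuple[List[str], CheckpointSketch]]:
    for updates, next_url in pages:
        yield updates, CheckpointSketch(next_url)


def run(pages: Iterable[Tuple[List[str], str]], checkpoint_path: str) -> None:
    for updates, checkpoint in fetch_iter(pages):
        # Caller-side decision: persist the checkpoint after each batch
        # (or skip this entirely and just keep calling).
        with open(checkpoint_path, "w") as f:
            json.dump(asdict(checkpoint), f)
```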
start_timestamp=current_start, end_timestamp=current_end
start_timestamp=current_start,
end_timestamp=current_end,
next_=current_next_fetch,
blocking: Danger! It's actually very easy to mess up this argument and accidentally trigger an endless loop. It may be that you have done so in the current code, but it's hard to tell.
The only time current_next_fetch should be populated is when you are resuming from a checkpoint, and you then need to explicitly disable the overfetch check (L290).
There might be a refactoring of this code that makes this easier, or, now that we are switching over to the next-pointer version, we can get rid of the probing behavior, which simplifies the implementation quite a bit.
Yeah, as I mentioned in Slack, it looks like we still need the probing behavior, so I wasn't able to simplify. I added a check to disable the overfetch when resuming from a checkpoint.
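A hedged sketch of that check (hypothetical names; the real ncmec_api.py code differs): the next pointer is only passed when resuming from a checkpoint, and in exactly that case the first-batch overfetch probe is skipped.

```python
# Hypothetical sketch: only pass a next pointer when resuming from a
# checkpoint, and disable the overfetch probe for that resumed fetch so the
# same range is not endlessly re-probed.
from typing import Optional


def plan_fetch(checkpoint_next: Optional[str]) -> dict:
    resuming = checkpoint_next is not None
    return {
        "next_": checkpoint_next,
        # Overfetch probing only makes sense on a fresh start of the range.
        "check_overfetch": not resuming,
    }


assert plan_fetch(None)["check_overfetch"] is True
assert plan_fetch("some-next-page-token")["check_overfetch"] is False
```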
start_timestamp=current_start, end_timestamp=current_end
start_timestamp=current_start,
end_timestamp=current_end,
next_=current_next_fetch,
)
):
if i == 0:  # First batch, check for overfetch
As a comment, it turns out my implementation for estimating the entries in range was completely off, so this is basically always overly cautious. Not sure what to do about it, since the alternatives I can think of are complicated.
python-threatexchange/threatexchange/exchanges/impl/ncmec_api.py (outdated, resolved)
python-threatexchange/threatexchange/exchanges/impl/ncmec_api.py (outdated, resolved)
Force-pushed from 82bc20b to c4a004e
Force-pushed from 3488550 to 83ebd79
Force-pushed from 83ebd79 to b0f7997
# note: the default_factory value was not being set correctly when
# reading from pickle
if not "last_fetch_time" in d:
    d["last_fetch_time"] = int(time.time())
I was getting AttributeError: 'NCMECCheckpoint' object has no attribute 'last_fetch_time' without this in the test_state_compatibility test.
It seems sort of related to pydantic/pydantic#7821, since default was working (but wouldn't work if we want to set it to the current time).
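For background on why the patch above is needed, here is a minimal sketch of the problem (hypothetical class; the real NCMECCheckpoint differs): unpickling restores __dict__ directly without running __init__, so a field added after the pickle was written never gets its default_factory value unless the state dict is patched on load.

```python
# Hypothetical sketch of backfilling a newly added field when loading old
# pickled state. pickle calls __setstate__ (if defined) with the stored dict,
# so missing keys can be filled in there.
import time
from dataclasses import dataclass, field


@dataclass
class CheckpointSketch:
    get_entries_max_ts: int = 0
    last_fetch_time: int = field(default_factory=lambda: int(time.time()))

    def __setstate__(self, d: dict) -> None:
        # Old pickles predate last_fetch_time; default it to "now" on load.
        if "last_fetch_time" not in d:
            d["last_fetch_time"] = int(time.time())
        self.__dict__.update(d)
```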
Summary
Sometimes NCMEC fails to make progress after hitting a second with a large number of results: #1679. When that happens (the diff of end and start is one second and we have lots of data), store checkpoints occasionally via a next pointer.
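A hedged sketch of the checkpoint idea (hypothetical field names; the real NCMECCheckpoint differs): when the window has collapsed to a single second with many results, the checkpoint carries the API's next-page pointer so a later fetch can resume mid-window instead of restarting that second from scratch.

```python
# Hypothetical sketch: a checkpoint that either marks "finished up to this
# timestamp" or "mid-way through a dense one-second window, resume via next".
from dataclasses import dataclass
from typing import Optional


@dataclass
class CheckpointSketch:
    get_entries_max_ts: int
    next_fetch: Optional[str] = None  # paging pointer from the API, if any

    def resumes_mid_window(self) -> bool:
        return self.next_fetch is not None
```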
Test Plan
Confirmed that resuming from a checkpoint works around the cursed second.