# Privacy of transformers models? #50

**Unanswered.** burdonmark asked this question in Q&A.
Replies: 1 comment
Hi, apologies - this package is not currently maintained. Is the concern here about using the Hugging Face Hub (i.e. whether Hugging Face knows which models you search for), or about downloading a model such as an LLM and then using it for inference? As far as I know, once you download a model and run it on a local GPU you don't need to worry about Hugging Face collecting your data. Take that with a pinch of salt, though. The doc you linked, which is part of the HF Hub, has this section: "Sends telemetry that helps tracking the examples use." I don't know of anything similar for actually running inference locally/privately.
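If privacy is the concern, one option is to set the telemetry/offline environment variables before importing transformers, since the flags are read at import time. This is a sketch, not verified against every transformers version; `DISABLE_TELEMETRY` is the flag from the hub.py linked below, and `HF_HUB_OFFLINE` / `TRANSFORMERS_OFFLINE` are the offline-mode flags documented by huggingface_hub and transformers (they assume the model is already in your local cache):

```python
import os

# Set these BEFORE importing transformers, since the flags are read at
# import time. DISABLE_TELEMETRY is the flag from the linked hub.py;
# the offline flags stop the library from touching the network at all.
os.environ["DISABLE_TELEMETRY"] = "1"
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

# With the flags set, loading serves entirely from the local cache, e.g.:
# from transformers import pipeline
# nlp = pipeline("sentiment-analysis")
```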
Hi,
It's not clear to me whether queries fed into transformers via huggingfaceR are ultimately reported back to Hugging Face. There does seem to be a telemetry setting in the Python package that can be turned off:

https://github.com/huggingface/transformers/blob/d211a84aca8a8513a599171173c29bc8d3061fba/src/transformers/utils/hub.py#L106

```python
DISABLE_TELEMETRY = os.getenv("DISABLE_TELEMETRY", False) in ENV_VARS_TRUE_VALUES
```

It's entirely possible I'm misunderstanding this, but there doesn't seem to be a clear statement online about whether queries to the open-source models are reported.
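For what it's worth, here is a small stdlib-only sketch of how that line behaves (the exact contents of `ENV_VARS_TRUE_VALUES` are an assumption on my part). Note the membership test is case-sensitive, and when the variable is unset `os.getenv` returns the default `False`, which is not in the set:

```python
import os

# Assumed set of accepted "true" spellings (mirrors transformers' style;
# the exact contents are an assumption, check the linked hub.py).
ENV_VARS_TRUE_VALUES = {"1", "ON", "YES", "TRUE"}

def telemetry_disabled() -> bool:
    # Mirrors the quoted line: raw string (or the default False) is
    # checked against the accepted spellings, case-sensitively.
    return os.getenv("DISABLE_TELEMETRY", False) in ENV_VARS_TRUE_VALUES

os.environ["DISABLE_TELEMETRY"] = "1"
assert telemetry_disabled()

os.environ["DISABLE_TELEMETRY"] = "true"  # lowercase: not in the set
assert not telemetry_disabled()
```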