TIL that Chrome ships an internal embedding model, interesting!
It's a shame that it's not open source; it's unlikely that there's anything super proprietary in an embedding model that's optimized to run on CPU.
(I'd use it if it were released; in the meantime, MiniLM-L6-v2 works reasonably well. https://brokk.ai/blog/brokk-under-the-hood)
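In case it's useful, here's a minimal sketch of the same MiniLM-L6-v2 approach running in the browser via transformers.js (this assumes the @xenova/transformers package and its Xenova/all-MiniLM-L6-v2 ONNX port; it is not the model Chrome ships internally):

```ts
// Minimal sketch: sentence embeddings with MiniLM-L6-v2 via transformers.js.
// Assumption: the @xenova/transformers package and its ONNX port of the model.
import { pipeline } from "@xenova/transformers";

const embed = await pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2");

// Mean-pool and L2-normalize so a dot product doubles as cosine similarity.
const output = await embed("Chrome ships an internal embedding model", {
  pooling: "mean",
  normalize: true,
});

console.log(output.dims); // [1, 384] - MiniLM-L6-v2 vectors are 384-dimensional
```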
Apparently, it’s for fighting scams: https://blog.google/technology/safety-security/how-were-usin...
Agreed! On open source though - can't you just pull the model and use the weights? I confess I have no idea what the licensing would be for an open-source browser deploying the weights, but it seems like unless you made a huge amount of money off it, it would be unproblematic, and even then it could be just fine.
Sure, you could, but then you have two identical models running in the browser, one of which needs to be loaded in separately.
Ideally they would expose the model via a browser API, like they do for the Prompt API.
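For illustration only, a native embedding API mirroring the availability/create shape of the Prompt API could look roughly like this - every name below is hypothetical, and nothing like it ships today:

```ts
// Purely hypothetical sketch of a native embedding API, loosely mirroring the
// shape of Chrome's Prompt API. None of these names actually exist.
declare const Embedder: {
  availability(): Promise<"available" | "downloadable" | "unavailable">;
  create(): Promise<{ embed(text: string): Promise<Float32Array> }>;
};

if ((await Embedder.availability()) === "available") {
  const embedder = await Embedder.create();
  const vector = await embedder.embed("semantic history search");
  console.log(vector.length); // dimensionality is exactly the unstandardized part
}
```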
FWIW I asked someone on the Chrome team about this, and they don't plan to expose native embedding APIs, citing the lack of dimensionality standards as the reason.
> Yes – Chromium now ships a tiny on‑device sentence‑embedding model, but it’s strictly an internal feature.
> What it’s for: “History Embeddings.” Since ~M‑128 the browser can turn every page‑visit title/snippet and your search queries into dense vectors so it can do semantic history search and surface “answer” chips. The whole thing is gated behind two experiments:
^ response from chatgpt
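To make the quoted description a bit more concrete, here's a rough sketch of what semantic history search boils down to: embed the titles/snippets and the query, then rank by cosine similarity. The embed() helper is assumed (any sentence-embedding model returning L2-normalized vectors would do, e.g. the MiniLM snippet above); presumably Chrome computes and stores the history vectors at visit time rather than re-embedding on every query:

```ts
// Sketch of semantic history search: rank stored titles by similarity to a query.
// embed() is an assumed helper returning L2-normalized vectors, so the dot
// product below equals cosine similarity.
declare function embed(text: string): Promise<Float32Array>;

const dot = (a: Float32Array, b: Float32Array) =>
  a.reduce((sum, x, i) => sum + x * b[i], 0);

async function searchHistory(query: string, titles: string[], topK = 5) {
  const queryVec = await embed(query);
  const scored = await Promise.all(
    titles.map(async (title) => ({
      title,
      score: dot(await embed(title), queryVec),
    }))
  );
  return scored.sort((a, b) => b.score - a.score).slice(0, topK);
}
```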
How does this affect Chrome's load on the system? Will this make older devices' fans start spinning as soon as I load up Chrome? Can anyone who's more into embeddings weigh in?
What does Chrome use embeddings for?
It’s mentioned in the article: semantic search over history and other similar tasks.
Very good question, I would like to know this too.