Make your model smarter through self-surveillance
Those who cannot remember Microsoft Recall are condemned to repeat it.
Today, that applies to OpenAI, which has quietly introduced an opt-in research preview called Chronicle.
It's designed to capture the user's screen and feed those images to OpenAI's Codex agent so it has access to more contextual information.
"Chronicle augments Codex memories with context from your screen," the company explains in its documentation.
"When you prompt Codex, those memories can help it understand what you've been working on with less need for you to restate context."
When Microsoft unveiled Recall, the cybersecurity community promptly piled on, describing it as a keylogger, a privacy nightmare, and litigation bait.
After a few months of public bludgeoning, Microsoft made some revisions to appease critics.
Nonetheless, browser maker Brave went on to offer Recall screenshot blocking, which looks like a worthwhile endeavor given our own tests that found Recall saving images of credit card numbers and passwords despite its supposed sensitive-information filters.
OpenAI perhaps forgot about Microsoft's reputational flogging, or maybe it believes the needs of the model outweigh the needs of the few who bother with security and privacy.
Another possibility is that the AI biz has embraced masochism as a public relations strategy.
No sooner had OpenAI's Chronicle documentation appeared this week than security researcher Michael Taggart took note of the resemblance, writing, "Oh my god, OpenAI reinvented Recall, but for macOS."
On the plus side, Chronicle is self-inflicted – it's opt-in – and available only in the Codex app for macOS.
The strikes against it are more extensive.
OpenAI's documentation explains some of these problems: "Before enabling, be aware that Chronicle uses rate limits quickly, increases risk of prompt injection, and stores memories unencrypted on your device."
So it burns through Codex rate limits faster, increases the user's exposure to prompt injection through screen captures that may contain malicious instructions, and sends selected screenshot data to OpenAI's servers to generate local memories from OCR and other extracted context.
That's not the most compelling sales pitch.
At least the local image storage is brief – OpenAI says screenshots are stored for only six hours.
But the data derived from those images via OCR text extraction may persist beyond that time in "memories" – text-based Markdown files that make information available in later sessions.
OpenAI's description of the memory generation process omits some details.
The company says screen captures are temporarily stored on-device, then processed on its servers to generate "memories," which in turn get stored on-device.
The screen captures transmitted to OpenAI are not used for training or stored – unless required by law – the documentation claims.
However, it's not clear whether the memories – the OCR-derived text – are stored on company servers, or could be stored given a lawful demand to do so.
The Register asked OpenAI to clarify, and will update this story if we hear back.
You've been warned: The footgun shoots you in the foot.
®
Source: This article was originally published by The Register