That's not the worst idea ever. Say a screenshot is 10 MB and you take one per minute: 10 × 60 × 8 hours = 4800 MB per work day, and 30 days is roughly 150 GB worst case. I suppose you could check the previous screenshot and, if it's the same, not write a new file. Combine that with OCR and a utility to scroll forward and backward through time, and it might be a useful tool.
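Just to make the "skip identical frames" part concrete, here's a minimal sketch. It assumes Pillow's ImageGrab for the capture (on Linux you'd probably reach for mss or a compositor-specific tool instead); the once-per-minute cadence matches the estimate above.

```python
# Sketch: grab a screenshot once per minute, skip the write when nothing changed.
import hashlib
import time
from datetime import datetime
from pathlib import Path

from PIL import ImageGrab

OUT_DIR = Path("recall_shots")
OUT_DIR.mkdir(exist_ok=True)

last_digest = None
while True:
    shot = ImageGrab.grab()
    # Hash the raw pixels; identical frames produce identical digests.
    digest = hashlib.sha256(shot.tobytes()).hexdigest()
    if digest != last_digest:
        name = datetime.now().strftime("%Y%m%d-%H%M%S") + ".png"
        shot.save(OUT_DIR / name)
        last_digest = digest
    time.sleep(60)  # one capture per minute
```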
Also, a screenshot is more like 1 MB at full resolution. You could also downscale the images dramatically after you OCR them. So let's say we shoot in full res, OCR, and then downscale to 50%. That's still enough for everything to be human-readable, and combined with searchable OCR you're down to 7.5 GB for a whole month.
Absolutely feasible. Let's say we're up to 8 GB once you include the OCR text and additional metadata, and just reserve 10 GB on your system to make doubly sure.
Now you have 10 GB to track your whole 3440x1440 display.
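A rough sketch of the "OCR at full res, then shrink" step. pytesseract and Pillow are just my assumptions here, not anything the idea depends on; any OCR engine and resizer would do.

```python
# Sketch: OCR the full-resolution frame, store the text as a searchable sidecar
# file, then keep only a 50%-scale copy of the image.
from pathlib import Path

import pytesseract
from PIL import Image

def archive_frame(png_path: Path, out_dir: Path) -> None:
    img = Image.open(png_path)

    # OCR while every pixel is still there.
    text = pytesseract.image_to_string(img)
    (out_dir / (png_path.stem + ".txt")).write_text(text, encoding="utf-8")

    # Downscale to 50% per axis; still human-readable, far smaller on disk.
    small = img.resize((img.width // 2, img.height // 2), Image.LANCZOS)
    small.save(out_dir / png_path.name, optimize=True)
```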
In order to be certified to run Recall, machines currently must have an NPU (Neural Processing Unit, basically an AI coprocessor). I assume that's what makes it practical, by offloading the required computation from the CPU.
Apparently it IS possible to circumvent that requirement using a hack, which is what some of the researchers reporting on it have done, but I haven't read any reports on how that affects CPU usage in practice.
As a comment above said, check whether the frame is the same as before and don't take a screenshot if nothing changed. Make some rules to filter out video content, so if you have a YouTube video open it doesn't take a screenshot every second just because the video is playing.
Or you could actually integrate this with your window manager. Only take a screenshot if you move / resize / open / close a window. Make a small extension for browsers that tells it to take a screenshot when you scroll / close / open a page. Then you don't have to take a screenshot and compare it with the previous one.
This wouldn't be as thorough as forcing screenshots all the time, and you'd probably miss things like typing text in LibreOffice, since that doesn't change anything about the window itself. But it could be a resource-friendly way to do it.
And if, for example, no screenshot was taken for a minute because nothing called for one, you could just take one regardless. That way you get a minimum of one screenshot per minute, or as many as the window manager / browser calls for.
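A hedged sketch of that event-driven capture plus the one-minute fallback. The window-manager / browser hooks below are stubs (the real wiring depends entirely on your WM or extension API); the shared capture() and the timer are the idea.

```python
import time
from datetime import datetime

from PIL import ImageGrab

_last_capture = 0.0

def capture(reason: str) -> None:
    """Take a screenshot right now and remember when we did."""
    global _last_capture
    shot = ImageGrab.grab()
    shot.save(datetime.now().strftime("%Y%m%d-%H%M%S") + f"-{reason}.png")
    _last_capture = time.monotonic()

def on_window_event() -> None:
    # Hypothetical: called by your WM hook on move / resize / open / close.
    capture("window")

def fallback_loop(interval: float = 60.0) -> None:
    # Guarantees at least one screenshot per minute even when no events fire;
    # run this in a background thread next to whatever drives on_window_event().
    while True:
        time.sleep(5)
        if time.monotonic() - _last_capture >= interval:
            capture("timer")
```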
But obviously I don't want a malware company like Microsoft doing that "for me" (the actual purpose being hyperspecific ads, if not a long-term plan to exfiltrate the data).
Not sure if I even trust myself with the security that data would require.
I mean, taking the screenshot is the easy part; getting reliable OCR, on the other hand ...
In my experience (Tesseract), current OCR works well for continuous text blocks, but it has a hard time with tables, illustrations, graphs, GUI widgets, etc.
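One way I'd try to work around those weak spots (purely an assumption on my part, including the 60% cutoff): ask Tesseract for per-word confidences via pytesseract and only index words it's reasonably sure about, so widgets and charts don't fill the search index with garbage.

```python
import pytesseract
from PIL import Image
from pytesseract import Output

def confident_words(path: str, min_conf: float = 60.0) -> list[str]:
    data = pytesseract.image_to_data(Image.open(path), output_type=Output.DICT)
    words = []
    for word, conf in zip(data["text"], data["conf"]):
        # conf is -1 for structural entries (lines/blocks); skip those too.
        if word.strip() and float(conf) >= min_conf:
            words.append(word)
    return words
```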
I suppose you could check the previous screenshot and if it’s the same
Hmmm... this gives me an idea... maybe we could even write a special algorithm that checks whether only certain parts of the picture have changed, and store only those, while re-using the parts that haven't changed. It would be a specialized compression algorithm for Moving Pictures. But that sounds difficult; it would probably need a whole Group of Experts to implement. Maybe we can call it something like Moving Picture Experts Group, or MPEG for short : )
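Joke aside, encoding the screenshot stream as video really does give you "store only the changed parts" for free via inter-frame compression. A hedged sketch, assuming ffmpeg is on PATH and the glob pattern is supported by your build: pack a day's PNGs into one low-framerate H.264 file.

```python
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-framerate", "1",            # one screenshot per second of video
        "-pattern_type", "glob",
        "-i", "recall_shots/*.png",   # the per-minute screenshots from above
        "-c:v", "libx264",
        "-preset", "veryslow",        # spend CPU to squeeze the archive
        "-crf", "28",                 # still readable for on-screen text
        "day.mp4",
    ],
    check=True,
)
```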