A keylogger can't capture anything from before it was installed, though. Recall can. Also, with Recall you don't need to write keylogging software, get it past antimalware scans, and keep it from being detected; you just have to get an infostealer past them one single time to grab the Recall database.
With tab completion in PowerShell, someone who doesn't know all the grep flags by heart might find it easier to stumble through the options to find the ones they want without looking them up.
Someone going into the subject may not have any pre-existing knowledge of it (like what a tree is) and may be intending to learn it from their classes. Unless we require everyone to take a class that covers it first, you can't really guarantee that people have that knowledge. People may have known it out of necessity before, but computers, for better or worse, have gotten easier for the average person to use, and it's no longer essential knowledge. They may not even be using a traditional desktop/laptop OS that has those concepts.
As for how it's confusing, have you seen the default UI for Google Docs/Sheets/Drive or Microsoft Office recently? Google's products default to a file view listed in most-recently-used order with a search bar at the top and no folders. The Microsoft Office suite defaults to saving to OneDrive without any folders. If this is all people have needed to use growing up, is it any wonder they never learned about hierarchical folders in a filesystem?
Chrome already does have DoH enabled by default from what I can tell.
https://support.google.com/chrome/answer/10468685
> By default, Secure DNS in Chrome is turned on in automatic mode. If Chrome has issues looking up a site in this mode, it'll look up the site in the unencrypted mode.
My worry is what the EU changes might mean for the mobile web and beyond. Because of iOS's market share, and because the only rendering engine available on iOS was the one Apple uses in Safari, sites/apps had to support more than just Chrome. If pushing iOS users to Chrome becomes an option (either by pointing them to the browser itself or by shipping an app built on its rendering engine), then there's even less incentive to test with anything else. It's great that users get more choice, but if providers use it as an opportunity to drop support for other browsers, it might not be such a great benefit after all.
That's true. I know they did increase the number of filters from the initial amount but they really should just make it effectively infinite.
As long as that extension developer can be trusted to have access to read and modify the data of any site you load and to not sell the extension (and its userbase) for a quick buck (see Hover Zoom+ for an example of how much they're willing to offer, as recently as today).
There are definitely trade-offs between the permissions allowed in V2 versus V3. It really depends on where you think the main threat is (websites and online tracking versus extension developers).
https://blog.mozilla.org/addons/2024/05/14/manifest-v3-updates/
> We also wanted to take this opportunity to address a couple common questions we’ve been seeing in the community, specifically around the webRequest API and MV2:
>
> - The webRequest API is not on a deprecation path in Firefox
> - Mozilla has no current plans to deprecate MV2 as mentioned in our previous MV3 update
That said, I believe Firefox users have gotten a lot of benefit from extensions being made to work in both Firefox and Chromium-based browsers. I don't believe there will be as much effort put into Firefox-only extensions, but I believe there will be enough motivated users and developers to keep building blockers and other extensions that take advantage of Firefox continuing to support MV2 and webRequest.
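For anyone who hasn't followed the technical side, the trade-off roughly looks like this (a simplified, hypothetical sketch, not the real `chrome.*` API signatures, and `ads.example` is a made-up domain):

```javascript
// MV2-style: the extension's own callback sees every request and decides.
// Powerful for blockers, but it also means full access to your traffic.
function mv2OnBeforeRequest(details) {
  return { cancel: details.url.includes("ads.example") };
}

// MV3-style: a static declarativeNetRequest-like rule. The browser evaluates
// it itself, so the extension never sees the request contents.
const mv3Rule = {
  id: 1,
  priority: 1,
  action: { type: "block" },
  condition: { urlFilter: "ads.example", resourceTypes: ["script"] },
};

console.log(mv2OnBeforeRequest({ url: "https://ads.example/t.js" }).cancel); // true
console.log(mv3Rule.action.type); // "block"
```

That's the "where is the threat" question in a nutshell: MV2's model trusts the extension developer with your traffic, MV3's model limits what the developer can see but also what a blocker can do.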
It's basically similar to this example from the health field:
Like givesomefucks said, it's probably not that they were actually after that information specifically; it just got caught up in the regular website analytics that services put on their sites. You can still infer a lot about a person's health just from the URLs they visit, so I'd say it is a concern, but I'm not sure it should go beyond making companies/agencies/organizations aware of the risks, plus a "stop doing this" warning. If analytics services were doing this intentionally, evaluating and using that data explicitly at the direction of some human in their company, then I think it would be a much bigger issue and a much bigger story.
That's a sentiment that quite a few others online feel too:
https://www.techdirt.com/2019/03/13/do-people-want-better-facebook-dead-facebook/
I do get the argument though that if no improvement will ever be good enough for some people, then what incentive do they have to change for the better if it won't make a difference to those people either way?
Trump v. United States
https://www.supremecourt.gov/oral_arguments/argument_transcripts/2023/23-939_f2qg.pdf
> CHIEF JUSTICE ROBERTS: Well, that's what I -- I mean, shortly after that statement in the court, that -- court's opinion, that's what they said, but there's no reason to worry because the prosecutor will act in good faith and there's no reason to worry because a grand jury will have returned the indictment. Now you know how easy it is in many cases for a prosecutor to get a grand jury to bring an indictment, and reliance on the faith -- good faith of the prosecutor may not be enough in the -- some cases. I'm not suggesting here.
It also doesn't help housing prices that the landlords are colluding to raise prices:
https://www.ftc.gov/business-guidance/blog/2024/03/price-fixing-algorithm-still-price-fixing
It isn't just Airbnb's fault, it's landlords wanting to maximize their return, no matter the method (short-term rentals or price fixing collusion).
The trust in the unknown systems of the VPN provider may still be better than the known practices of your local ISP/government though. You shouldn't necessarily rely on it too heavily but it's good to have the option.
I think it was more targeting the client ISP side than the VPN provider side. So something like having your ISP monitor your connection (voluntarily, or forced to by a warrant/law) and report whether your connection activity matches that of someone accessing a certain site your local government might not like, for example. In that scenario they would be able to isolate it to at least individual customer accounts of an ISP, and ISPs usually know who you are, or where to find you, in order to provide service. I may be misunderstanding it though.
Edit: On second reading, it looks like they might just be able to buy that info directly from monitoring companies and get much of what they need to do correlation at various points along a VPN-protected connection's route. The Mullvad post has links to Vice articles describing the data that is being purchased by governments.
One example:
By observing that when someone visits site X it loads resources A, B, C, etc. in a specific order and with specific sizes, an observer with enough distinguishable resources like that can determine that you're loading that site, even through a VPN connection. Think about loading Lemmy.world: it loads the main page, then specific images and style sheets that may have recognizable sizes and are generally fetched in a particular order as they're encountered in the page, its scripts, and things those scripts include. With enough data, instead of writing static rules ("x of size n was loaded, y of size m was loaded"), that traffic can be fed to an AI model trained on what connections to specific sites typically look like. They could even generate their own training data by accessing sites both directly and through a VPN and correlating the two, to better learn what a given site looks like over a VPN. Overall, AI lets them simplify and automate the identification process when given enough samples.
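The static-rules version of that idea can be sketched in a few lines (all sizes and fingerprints here are made up for illustration; real attacks use far richer features like timing and direction, not just sizes):

```javascript
// Hypothetical "fingerprints": ordered resource sizes observed for known sites.
const fingerprints = {
  "lemmy.world": [18200, 4100, 950, 2300],
  "example.org": [5200, 700, 120000],
};

// Score: fraction of resources whose sizes match in order, within 5%.
function similarity(observed, known) {
  const n = Math.min(observed.length, known.length);
  let matches = 0;
  for (let i = 0; i < n; i++) {
    if (Math.abs(observed[i] - known[i]) / known[i] < 0.05) matches++;
  }
  return matches / Math.max(observed.length, known.length);
}

// Guess which known site an observed (still encrypted) transfer pattern matches.
function guessSite(observed) {
  let best = null, bestScore = 0;
  for (const [site, sizes] of Object.entries(fingerprints)) {
    const score = similarity(observed, sizes);
    if (score > bestScore) { bestScore = score; best = site; }
  }
  return best;
}

console.log(guessSite([18250, 4090, 955, 2310])); // lemmy.world
```

An ML model effectively replaces the hand-written `similarity` function with something learned from many samples, which tolerates noise and ordering changes much better.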
Mullvad is working on enabling their VPN apps to: 1. pad the data to a single size so that the different resources are less identifiable and 2. send random data in the background so that there is more noise that has to be filtered out when matching patterns. I'm not sure about 3 to be honest.
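The padding idea (1) is simple to sketch; the cell size and resource sizes below are made-up numbers, not Mullvad's actual parameters. The noise idea (2) would just be extra dummy cells of the same fixed size mixed in at random:

```javascript
// Hypothetical fixed cell size in bytes. Every transfer is rounded up to a
// multiple of this, so individual resource sizes stop being distinctive.
const CELL = 16384;

function paddedSize(realSize) {
  return Math.ceil(realSize / CELL) * CELL;
}

// Three once-distinctive resource sizes all collapse into just two buckets.
const realSizes = [950, 4100, 18200];
console.log(realSizes.map(paddedSize)); // [ 16384, 16384, 32768 ]
```

After padding, the fingerprinting approach above has far fewer distinguishable sizes to work with, and background noise cells make the ordering signal harder to isolate too.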
I don't propose we break the laws, I propose we change them.
For me it's not boot licking but recognizing that IA made a huge unforced error that may cost us all not just that digital lending program but stuff like the Wayback Machine and all the other good projects the IA runs.
> The Internet Archive refused to follow industry standards for ebook licensing, because they aren’t a library.
It's worse than that. They did use "Controlled Digital Lending" to limit the number of people who can access a book at one time to something resembling the number of physical books that they had. And then they turned that restriction off because of the pandemic. There is no pandemic exception to copyright laws, even if that would make sense from a public health perspective to prevent people from having unnecessary contact at libraries. They screwed themselves and I can only hope that the Wayback Machine archives get a home somewhere else if they do go under.
Laws can very well be wrong, in a moral sense, and quite a few of them still in existence today are, but trying to argue that in court is usually a bad idea.
TL;DR: Automated certificate issuance and management strengthens the underlying security assurances provided by Transport Layer Security (TL...
> Upcoming Policy Changes
>
> One of the major focal points of Version 1.5 requires that applicants seeking inclusion in the Chrome Root Store must support automated certificate issuance and management. [...] It’s important to note that these new requirements do not prohibit Chrome Root Store applicants from supporting “non-automated” methods of certificate issuance and renewal, nor require website operators to only rely on the automated solution(s) for certificate issuance and renewal. The intent behind this policy update is to make automated certificate issuance an option for a CA owner’s customers.
Google is looking to change the policy of the Chrome Root Store (used by Chrome to verify TLS certificates that protect websites and other services) to require "that applicants seeking inclusion in the Chrome Root Store must support automated certificate issuance and management". They can still provide a manual method for sites that want to get certificates the old way but they will need to have some kind of automated method available.
> [...]
>
> To provide better security, Google introduced an Enhanced Safe Browsing feature in 2020 that offers real-time protection from malicious sites you are visiting. It does this by checking in real-time against Google's cloud database to see if a site is malicious and should be blocked.
>
> [...]
>
> Google announced today that it is rolling out the Enhanced Safe Browsing feature to all Chrome users over the coming weeks without any way to go back to the legacy version.
>
> The browser developer says it's doing this as the locally hosted Safe Browsing list is only updated every 30 to 60 minutes, but 60% of all phishing domains last only 10 minutes. This creates a significant time gap that leaves people unprotected from new malicious URLs.
>
> [...]
For the past several years, more than 90% of Chrome users' navigations have been to HTTPS sites, across all major platforms. Thankfully, th...
cross-posted from: https://lemmy.world/post/3301227
> Chrome will be experimenting with defaulting to https:// if the site supports it, even when an http:// link is used and will warn about downloads from insecure sources for "high-risk files" (example given is an exe). They're also planning on enabling it by default for Incognito Mode and "sites that Chrome knows you typically access over HTTPS".
Teams across Google are working hard to prepare the web for the migration to quantum-resistant cryptography. Continuing with our strategy f...
A hybrid quantum-resistant Key Encapsulation Mechanism combined with a regular elliptic curve backup will be available in Chrome 116 for securing connections.
Google Chrome will soon be supporting a hybrid elliptic curve + quantum-resistant Kyber-768 system for key exchange in Chrome 116. This should provide some protection in case the quantum-resistant part has flaws, like some other proposed solutions have had. They're looking into this now to give time for it to get implemented by browsers, servers, and middleboxes, and hopefully prevent Harvest Now, Decrypt Later attacks.