Health Canada must reconsider man's bid to use magic mushrooms for cluster headaches, Federal Court rules
  • The difficulty in patenting and mass-producing bacteriophages is the reason we don't have them available in Western countries, though they are available in other places like Russia.

    Bacteriophages would be a huge boon for dealing with antibiotic-resistant strains.

  • UK Woman Mistaken As Shoplifter By Facewatch, Now She's Banned From All Stores With Facial Recognition Tech
  • GrapheneOS isn't a complete solution, especially if you still use things like Facebook and WhatsApp, although it is a massive plus for privacy.

    Quick question. I've been hesitant about jumping to Graphene for a little while now. The two things that have held me back are losing access to the Google Camera and Android Pay (or Google Pay, or Google Wallet, or Android Wallet; whatever Google's calling it these days).

    The Google Wallet feature, I think, has taken care of itself. They pushed an update that requires you to re-authenticate after the initial tap for "security", which means half the time the transaction fails and the cashier has to redo the payment process. So I just gave up and have gone back to tapping with my cards directly for the past month.

    So that just leaves the Google Camera. How's the quality with Graphene?

  • UK Woman Mistaken As Shoplifter By Facewatch, Now She's Banned From All Stores With Facial Recognition Tech
  • I've been hearing about them being wrong fairly frequently, especially on darker skinned people, for a long time now.

    I can guarantee you haven't. I've worked in the FR industry for a decade and I'm up to speed on all the news. There's a story about a false arrest from FR at most once every 5 or 6 months.

    You don't see any reports from the millions upon millions of correct detections that happen every single day. You just see the one-off failure cases that the cops completely mishandled.

    I'm assuming that of Apple because it's been around for a few years longer than the current AI craze has been going on.

    No it hasn't. FR systems have been around a lot longer than Apple devices doing FR. The current AI craze is mostly centered around LLMs; object detection and FR systems have been evolving for more than two decades.

    We've been doing facial recognition for decades now, with purpose-built algorithms. It's not much of a leap to assume that's what they're using.

    Then why would you assume companies doing FR longer than the recent "AI craze" would be doing it with "black boxes"?

    I'm not doing a bunch of research to prove the point.

    At least you proved my point.

  • UK Woman Mistaken As Shoplifter By Facewatch, Now She's Banned From All Stores With Facial Recognition Tech
  • people with totally different facial structures get identified as the same person all the time with the "AI" facial recognition

    All the time, eh? Gonna need a citation on that. And I'm not talking about just one news article that pops up every six months. And nothing that links back to the ACLU's 2018 misleading "report".

    I'm assuming Apple's software is a purpose built algorithm that detects facial features and compares them, rather than the black box AI where you feed in data and it returns a result.

    You assume a lot here. People have this conception that all FR systems are trained black-box models. This is true for some systems, but not all.

    The system I worked with, which ranked near the top of the NIST FRVT reports, did not use a trained AI algorithm for matching.
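
    To make that concrete: a non-trained matcher can be as simple as comparing hand-crafted geometric features. This is only an illustrative sketch (purely hypothetical, not any specific vendor's method), assuming a separate detector already gives you facial landmark coordinates:

    ```python
    import numpy as np

    def landmark_features(landmarks: np.ndarray) -> np.ndarray:
        """Build a scale-invariant feature vector from (x, y) landmark points
        by taking all pairwise distances and normalizing by the largest one."""
        diffs = landmarks[:, None, :] - landmarks[None, :, :]
        dists = np.sqrt((diffs ** 2).sum(axis=-1))
        upper = dists[np.triu_indices(len(landmarks), k=1)]
        return upper / upper.max()

    def similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Cosine similarity between two feature vectors; higher means more alike."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Hypothetical landmark sets (same point order) for a probe and a reference face.
    probe = landmark_features(np.random.rand(68, 2))
    reference = landmark_features(np.random.rand(68, 2))
    print(similarity(probe, reference))
    ```

    No training data is involved anywhere in that matching step; only the landmark/face detector up front would typically be a learned model.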

  • UK Woman Mistaken As Shoplifter By Facewatch, Now She's Banned From All Stores With Facial Recognition Tech
  • I think from a purely technical point of view, you’re not going to get FaceID kind of accuracy on theft prevention systems. Primarily because FaceID uses IR array scanning within arm’s reach from the user, whereas theft prevention is usually scanned from much further away. The distance makes it much harder to get the fidelity of data required for an accurate reading.

    This is true. The distance definitely makes a difference, but there are systems out there that get incredibly high accuracy even with surveillance footage.

  • UK Woman Mistaken As Shoplifter By Facewatch, Now She's Banned From All Stores With Facial Recognition Tech
  • But for everyone else who is just trying to live their life, this can be extremely invasive technology.

    Now here's where I drop what seems like a whataboutism.

    You already have an incredibly invasive system tracking you. It's the phone in your pocket.

    There's almost nothing a widespread FR system could do to a person's privacy that isn't already covered by phone tracking.

    Edit: and that's on top of the CCTV systems that have already existed for decades. /edit

    Even assuming an FR system is on every street corner, at best it knows where you are, when you're there, and maybe who you interact with. That's basically it. And that will be limited to just the company/org that operates that system.

    Your cellphone tracks where you are, when you're there, who's with you, the sites you visit, your chats, when you're at home or a friend's place, how long you're there, and it can potentially listen to your conversations or activate your camera. On and on.

    On the flip side, an FR system can notify stores and other places when a previous shoplifter or violent person arrives (not trying to fear-monger, just an example), which a cellphone wouldn't be able to do.

    The boogeyman that everyone sees with FR and privacy doesn't exist.

    Edit 2: as an example, there was an ad SDK from a number of years ago where, if you had an app open that used that SDK, it would listen to your microphone and pick up high-pitched tones from TV ads to identify which ads were playing nearby or on your TV.

    https://arstechnica.com/tech-policy/2015/11/beware-of-ads-that-use-inaudible-sound-to-link-your-phone-tv-tablet-and-pc/

    https://www.forbes.com/sites/thomasbrewster/2015/11/16/silverpush-ultrasonic-tracking/
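
    To illustrate the mechanism (not the actual SDK's code, which was never public), a rough sketch of how an app could spot a near-ultrasonic beacon in a microphone frame with a plain FFT; the 18-20 kHz band and the synthetic test signal are assumptions for the example:

    ```python
    import numpy as np

    SAMPLE_RATE = 44_100            # typical phone microphone sample rate
    BEACON_BAND = (18_000, 20_000)  # near-ultrasonic range reportedly used by audio beacons

    def beacon_energy_ratio(frame: np.ndarray) -> float:
        """Fraction of the frame's spectral energy that sits in the beacon band."""
        spectrum = np.abs(np.fft.rfft(frame)) ** 2
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
        in_band = (freqs >= BEACON_BAND[0]) & (freqs <= BEACON_BAND[1])
        return float(spectrum[in_band].sum() / spectrum.sum())

    # Synthetic one-second frame: background noise plus an 18.5 kHz tone most adults can't hear.
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    frame = 0.1 * np.random.randn(SAMPLE_RATE) + 0.5 * np.sin(2 * np.pi * 18_500 * t)
    print(beacon_energy_ratio(frame))  # well above what noise alone produces
    ```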

  • UK Woman Mistaken As Shoplifter By Facewatch, Now She's Banned From All Stores With Facial Recognition Tech
  • We were cheaper on both hardware and software costs than just about anyone else, and we placed easily in the top 5 for performance and accuracy.

    The main issue was that COVID came around, and since we're not a US company and the vast majority of interest was in the US, we were dead in the water.

    What I've learned through the years is that the best rarely costs the most. Most corporate/vendor software out there is chosen based on just about every consideration aside from quality.

  • UK Woman Mistaken As Shoplifter By Facewatch, Now She's Banned From All Stores With Facial Recognition Tech
  • Based on your comments I feel that you're projecting the confidence in that system onto the broader topic of facial recognition in general; you're looking at a good example and people here are (perhaps cynically) pointing at the worst ones. Can you offer any perspective from your career experience that might bridge the gap? Why shouldn't we treat all facial recognition implementations as unacceptable if only the best -- and presumably most expensive -- ones are?

    It's a good question, and I don't have the answer to it. But a good example I like to point at is the ACLU's announcement of their test on Amazon's Rekognition system.

    They tested the system using the default value of 80% confidence, and their test resulted in a 20% false identification rate. They then boldly claimed that FR systems are all flawed and no one should ever use them.

    Amazon even responded saying that the ACLU's test with the default values was irresponsible, and Amazon's right. This was before the public backlash against FR, and the reasoning for a default of 80% confidence was the expectation that most people using it would do silly stuff like celebrity lookalikes. That being said, it was stupid to set the default to 80%, but that's just hindsight speaking.
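
    For context on what that 80% knob actually is: it's the similarity threshold the caller passes to the API, and it's expected to be raised for anything consequential (AWS has said law-enforcement-style matching should use 99%). A rough sketch with boto3; the file names are placeholders, and this isn't the ACLU's exact setup (they searched probe photos against a large mugshot collection):

    ```python
    import boto3

    rekognition = boto3.client("rekognition")

    # Compare a probe photo against a reference photo. The mistake is leaving
    # SimilarityThreshold at the demo-friendly default of 80 instead of raising it.
    with open("probe.jpg", "rb") as probe, open("reference.jpg", "rb") as ref:
        response = rekognition.compare_faces(
            SourceImage={"Bytes": probe.read()},
            TargetImage={"Bytes": ref.read()},
            SimilarityThreshold=99,  # only return matches at or above this similarity
        )

    for match in response["FaceMatches"]:
        print(f"match with similarity {match['Similarity']:.2f}")
    ```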

    My point here is that, while FR tech isn't perfect, the public perception is highly skewed. If there were a daily news report detailing the number of correct matches across all systems, these few false matches would seem ridiculous by comparison. The overwhelming majority of news reports on FR are about failure cases. No wonder most people think the tech is fundamentally broken.

    A rhetorical question aside from that: is determining one's identity an application where anything below the unachievable success rate of 100% is acceptable?

    I think most systems in use today are fine in terms of accuracy. The consideration becomes "how is it being used?" That isn't to say that improvements aren't welcome, but in some cases it's like trying to use the hook on the back of a hammer as a screwdriver. I'm sure it can be made to work, but fundamentally it's the wrong tool for the job.

    FR in a payment system is just all wrong. It's literally forcing the use of a tech where it shouldn't be used. FR can be used for validation if increased security is needed, like accessing a bank account, but never as the sole means of authentication. You should still require a bank card + PIN, and then the system can do FR as a kind of 2FA. The trick here would be, first, to use a good system, and second, to use a threshold that borders on "fairly lenient". That way you eliminate any false rejections while still maintaining an incredibly high level of security. In that case, the chances of your bank card AND PIN being stolen by someone who looks so much like you that it tricks the FR are effectively nil (though they can never be truly zero). And if a person is being targeted by a threat actor who can coordinate such things, then that attacker would have the resources to just get around the bank's cyber security from the comfort of anywhere in the world.
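
    As a sketch of that flow (hypothetical names and threshold, not any real bank's system): the card and PIN remain the primary factor, and the face match only has to clear a deliberately lenient bar as the second check.

    ```python
    from dataclasses import dataclass

    # Hypothetical account store: card number -> PIN on file.
    ACCOUNTS = {"card-123": "4821"}

    # Deliberately lenient second-factor threshold: false rejections of the real
    # customer are effectively eliminated, while an impostor still needs the stolen
    # card AND the PIN AND a face close enough to clear even this low bar.
    FACE_SECOND_FACTOR_THRESHOLD = 0.6

    @dataclass
    class AuthRequest:
        card_id: str
        pin: str
        face_match_score: float  # similarity score from the FR engine, 0.0 - 1.0

    def authenticate(req: AuthRequest) -> bool:
        """Card + PIN are the primary factor; the face match is only a 2FA check."""
        if ACCOUNTS.get(req.card_id) != req.pin:
            return False
        return req.face_match_score >= FACE_SECOND_FACTOR_THRESHOLD

    print(authenticate(AuthRequest("card-123", "4821", 0.83)))  # True
    print(authenticate(AuthRequest("card-123", "0000", 0.99)))  # False: wrong PIN
    ```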

    Security in every single circumstance is a trade-off with convenience. Always, and in every scenario.

    FR works well with existing access control systems. Swipe your badge card, then it scans you to verify you're the person identified by the badge.

    FR also works well in surveillance, with the incredibly important addition of a human in the loop. For example, the system I worked on simply reported detections to a SOC (with all the general info about the detection, including the live photo and the reference photo). Then the operator would have to look at the details and manually confirm or reject the detection. The system made no decisions; it simply presented the info to an authorized person.
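
    As a generic sketch of that shape (hypothetical structure, not the actual product): the system's only output is a detection record with its evidence, and nothing changes state until an operator confirms or rejects it.

    ```python
    from dataclasses import dataclass
    from queue import Queue

    @dataclass
    class Detection:
        subject_id: str          # who the system thinks was matched
        score: float             # match score against the reference photo
        live_photo: bytes        # frame the detection came from
        reference_photo: bytes   # enrolled photo it was matched against
        status: str = "pending"  # pending -> confirmed / rejected, set by a human only

    review_queue = Queue()       # detections waiting for an operator in the SOC

    def report_detection(det: Detection) -> None:
        """The system's only action: put the evidence in front of an operator."""
        review_queue.put(det)

    def operator_review(det: Detection, confirmed: bool) -> None:
        """The decision is made by an authorized person, never by the system."""
        det.status = "confirmed" if confirmed else "rejected"
    ```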

    This is the key portion that seems to be missing in all the news reports about false arrests and whatnot. I've looked into all the FR-related false arrests, and from what I could determine, none of those cases were handled properly. The detection results were simply taken as gospel truth and no critical thinking was applied. In some of those cases the detection photo and the reference (database) photo looked nothing alike. It's just that the people operating those systems are either idiots or just don't care. Both of those are policy issues entirely unrelated to the accuracy of the tech.

  • UK Woman Mistaken As Shoplifter By Facewatch, Now She's Banned From All Stores With Facial Recognition Tech
  • Np.

    As someone else pointed out in another comment, I've been saying the x% accuracy number incorrectly. It's just a colloquial way of conveying the accuracy. The truth is that no one in the industry uses "percent accuracy"; instead we use FMR (false match rate) and FNMR (false non-match rate), as well as some other metrics.
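
    For anyone curious how those metrics fall out of raw comparison scores, a small sketch with toy numbers: FMR is the fraction of impostor (different-person) comparisons scoring at or above the threshold, and FNMR is the fraction of genuine (same-person) comparisons scoring below it.

    ```python
    import numpy as np

    def fmr_fnmr(genuine_scores, impostor_scores, threshold):
        """False match rate and false non-match rate at a given decision threshold."""
        genuine = np.asarray(genuine_scores)
        impostor = np.asarray(impostor_scores)
        fmr = np.mean(impostor >= threshold)  # impostors wrongly accepted
        fnmr = np.mean(genuine < threshold)   # genuine users wrongly rejected
        return float(fmr), float(fnmr)

    # Toy score distributions: same-person comparisons vs different-person comparisons.
    genuine = [0.91, 0.87, 0.95, 0.78, 0.88]
    impostor = [0.12, 0.33, 0.41, 0.05, 0.62]
    print(fmr_fnmr(genuine, impostor, threshold=0.8))  # (0.0, 0.2)
    ```

    Raising the threshold pushes FMR down and FNMR up, which is why quoting a single "accuracy percentage" loses most of the picture.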

  • UK Woman Mistaken As Shoplifter By Facewatch, Now She's Banned From All Stores With Facial Recognition Tech
  • Ok, sure

    Edit: the truth is that saying x% accuracy isn't entirely correct, because the numbers just don't work that way. It's just a way we convey the data to the average person. I can't count the number of times I've been asked "ok, but what does that mean in terms of accuracy? What's the accuracy percentage?"

    And I understand what you're saying now. Yes, I did have the number written down incorrectly as a percentage. I've been on mobile this whole time doing a hundred other things, and I added two extra digits.

  • UK Woman Mistaken As Shoplifter By Facewatch, Now She's Banned From All Stores With Facial Recognition Tech
  • You know what overfitting is, right?

    As you reply to someone who spent a decade in the AI industry.

    This has nothing to do with overfitting, particularly because our matching algorithm isn't trained on data.

    The face detection portion is, but that's simply finding the face in an image.

    The system I worked with used a threshold value that equates to an FMR of 1e-07, and it wasn't used in places like subways or city streets. The point I'm making is that in the few years of real-world operation (before I left for another job) we didn't see a single false detection. In fact, one of the facility owners asked us to lower the threshold temporarily to verify the system was actually working properly.

  • UK Woman Mistaken As Shoplifter By Facewatch, Now She's Banned From All Stores With Facial Recognition Tech
  • My references are the NIST tests.

    https://pages.nist.gov/frvt/reports/1N/frvt_1N_report.pdf

    That might be the one you're looking at.

    Another thing to remember about the NIST tests is that they try to use a standardized threshold across all vendors. The point is to compare the results in a fair manner across systems.

    The system I worked on was tested by NIST with an FMR of 1e-5. But we never used that threshold and always used a threshold that equated to 1e-7, which allows two orders of magnitude fewer false matches.
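
    For illustration, the relationship between a threshold and an FMR target like 1e-5 or 1e-7 is essentially a quantile of the impostor (different-person) score distribution. A minimal sketch with synthetic scores; a real calibration needs far more impostor pairs than this to resolve 1e-7:

    ```python
    import numpy as np

    def threshold_for_fmr(impostor_scores: np.ndarray, target_fmr: float) -> float:
        """Score threshold such that roughly `target_fmr` of impostor pairs exceed it."""
        return float(np.quantile(impostor_scores, 1.0 - target_fmr))

    # Synthetic impostor comparison scores (placeholder distribution); resolving an
    # FMR of 1e-7 properly would need well over ten million impostor comparisons.
    rng = np.random.default_rng(0)
    impostor = rng.beta(2, 8, size=10_000_000)

    for fmr in (1e-5, 1e-7):
        print(fmr, threshold_for_fmr(impostor, fmr))  # stricter FMR -> higher threshold
    ```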

    And my opinion: Entities using facial recognition are going to choose the lowest bidder for their system unless there's a higher security need than, say, a grocery store. So, we have to look at the weakest performing algorithms.

    This definitely is a massive problem and likely does contribute to poor public perception.

  • PSA: Eclipse Glasses from Canadian Tire

    Just a warning to anyone interested in the coming eclipse: avoid the glasses sold at Canadian Tire. I just bought two pairs there and decided to test them right after I got outside, and I immediately noticed an issue. Both pairs have a hazy appearance around the sun.

    With proper eclipse glasses the sun should appear crisp and not too bright. You should be able to distinctly see the edge of the sun and even see detail on the surface. There should be no haze of any kind. If you see any kind of haze, immediately take off the glasses and throw them out.

    I knew the glasses from Canadian Tire were suspect because they had "NASA APPROVED" on the sides, which is why I bought them. I wanted to test them.

    There is no such thing as "NASA APPROVED". NASA doesn't approve or certify anything. If you see that, it's a massive red flag.

    One very good source for where to buy eclipse glasses is the American Astronomical Society, which maintains a list of trusted vendors. I myself bought a bulk quantity from solareyewear.ca and have tested the glasses to confirm they work properly.

    https://eclipse.aas.org/eye-safety/viewers-filters

    Unfortunately, there will be a massive number of scams going around with eclipse glasses and shady people trying to make a quick buck. Don't risk it; make sure your glasses will protect your eyes.

    If someone tries to convince you that their glasses are genuine because they have ISO-12312-2 printed on them, it doesn't matter. I could print that on a pair of toilet paper rolls taped together, but that doesn't mean they'll protect your eyes.

    Here are some resources to check out:

    https://aas.org/press/american-astronomical-society-warns-counterfeit-fake-eclipse-glasses

    https://opto.ca/eye-health-library/solar-eclipse-safety

    PopSci doesn't understand the difference between "far side" and "dark side" of the moon
    www.popsci.com: Why astronomers want to put a telescope on the dark side of the moon

    LuSEE-Night is designed to provide never-before-seen glimpses of one of the universe's least understood eras.


    I find it hard to believe that Popular Science would make such a mistake, but the author has reinforced the mistake by starting the article with "The dark side of the moon, despite its name, is a perfect vantage point for observing the universe."

    But all the quotes from scientists correctly say "far side of the moon".

    I know this isn't really "science news" but I couldn't find any other community to share this.

    OpenSubtitles Hostility

    So this isn't meant to be a post bashing the devs/owner of OpenSubtitles. It's meant simply to raise awareness.

    A few months ago I signed up for the VIP tier at OST ($5/mo for 1000 downloads a day) for a bit to populate my catalogue of videos with subtitles, as my father uses my Jellyfin server and he's lost a lot of his hearing. I also wanted to support the development a bit. At first the service seemed to be downloading a bit, but then it stopped. I waited a few days and it would download at most one or two subtitles a day (despite a few thousand videos not having any). I looked around online and found that OST had changed their API and the Jellyfin plugin still needed to catch up with a newer release. No big deal, so I just waited.

    Then the plugin update was released, specifically stating that the changes for the new API calls were made. I waited a few days: nothing. I uninstalled the OST plugin and reinstalled it: still nothing.

    So I figured something was wrong either on my end or the server-side, but I didn't want to bother getting into it. I've been planning to rebuild my Jellyfin server with newer hardware with HW acceleration for decoding and encoding. I sent an email to OST support explaining what I've been seeing and asked if I could get a refund.

    The person who responded asked for logs so that they could help troubleshoot. So I obliged.

    [Image: Email response from OpenSubtitles support confirming there was an issue]

    They said it wasn't much help and asked for even more logs, which I provided again.

    [Image: Screenshot of user CeeBee providing logs via email to OpenSubtitles support]

    I even removed over 14 thousand "[query]" lines to make the logs more readable. They said there wasn't anything useful there and asked me to try again. I indicated that Jellyfin has a scheduled job that checks for missing subtitles and pulls them as needed once a day. But I said that at this point I'm just looking for the refund.

    A while passed by, but then I got a notification that the subscription was going to renew again, so I cancelled before that happened and reached out again about the refund. At this point it was more about the principle of the matter, as I originally just asked for a refund and that got side-stepped into a support request.

    Then I got this as a response:

    [Image: Email response from OpenSubtitles support being aggressive and accusatory]

    Which resulted in this:

    [Image: Email response from OpenSubtitles support saying "I'm tired of you" and deleting my account]

    I waited over two weeks to write this post. I wanted to see if somebody would reply to me with even just an apology or something. If they had originally told me that doing refunds is a hassle for them, I would have let it go. But telling me off and then deleting my account is just... special. I was astonished at the response and cannot fathom that being the response from any company taking payments for a service.

    And I'm not holding a grudge of any kind, and I get it: I used to do IT support, and some days dealing with annoying emails can be tough. But in my defence, all I asked for was a refund because something wasn't working. In any case, I just wanted to bring this to the attention of the self-hosting community so that others can make more informed decisions. To be clear, I'm not advocating that anyone pull support. In fact, I think they should have more support, as it's an invaluable service. Despite the treatment, I still plan on getting the VIP subscription again at some point after I rebuild my Jellyfin server. But I also don't think that customers should be treated like this.

    "Best" remote desktop solution?

    I have two systems right next to each other. One of them doesn't have a monitor attached and it's not simple to do so. They are both on the same 1Gbps LAN.

    I'm currently using FreeRDP/xRDP to remote into the other system, but the latency is just terrible. I'm able to deal with it, but in some circumstances (like if dynamic ads or videos autoplay on websites) the latency just skyrockets to a frame every few seconds. The remote system is Kubuntu 20.04 using X.org and not Wayland. I have just about every window decoration turned off or down, and I've adjusted just about every setting I can find to increase performance. The connection settings are the best they can be (I've looked through so many guides and posts about increasing performance), and they have helped a bit, but it's still far from ideal.

    I've even tried VNC and AnyDesk and they're both just as bad, which is odd because I've connected to my brother's system in another part of the country on many occasions, and even when he loads up a YouTube video the connection and latency are buttery smooth.

    Does anyone here have any recommendations or suggestions on what I can use or do to improve the connection quality?

    CeeBee @lemmy.world
    Posts 4
    Comments 687