.......We won!
  • WWII sent a very clear message. You can annex Austria. You can invade Czechoslovakia. You can take over Lithuania. But you don't fuck with Poland

    Well, I mean, you can fuck with Poland a little bit. You just can't take over, like, too much of Poland.

  • [Retro Platinum] King's Quest: Quest for the Crown (1984-05-10)
  • Some extra info about Sierra's game engines....

    AGI was indeed first used in KQ1, though earlier Sierra adventure games (even going back to Mystery House in 1980) used something extremely similar. AGI was just formalizing what they'd done before and setting it as a common platform for all future games.

    In those days, it was, of course, not possible to write an entire adventure game in machine code because there wasn't even memory to hold more than a handful of screens. The use of bytecode was as much a compression scheme as anything else. So AGI was just a bytecode interpreter. Vector graphics primitives (e.g., draw line, flood fill) could be written in just a few bytes, much better than machine code.

    Ken Williams made a splash with early Sierra games because he had an extremely simple insight that most others at the time didn't seem to have: for graphics operations, allow points to be on even-numbered x coordinates only. Most platforms had a horizontal resolution of 320, too much to fit in 1 byte. Ken Williams had his early game engines divide every x coordinate by 2 so that it could fit into a single byte (essentially getting only 160 horizontal pixels). A silly trick, but it got big memory savings and allowed him to pack more graphics into RAM than many other people could at the time.
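    The trick can be sketched in a few lines (hypothetical Python with made-up names; AGI's actual picture-resource encoding differs in detail): store x/2 so each coordinate of a point fits in a single byte.

    ```python
    def encode_point(x: int, y: int) -> bytes:
        """Pack a point into 2 bytes by halving x: 0..319 becomes 0..159,
        which fits in one byte (at the cost of even-only x positions)."""
        if not (0 <= x < 320 and 0 <= y < 200):
            raise ValueError("point off the 320x200 screen")
        return bytes([x // 2, y])

    def decode_point(packed: bytes) -> tuple[int, int]:
        """Recover the (snapped-to-even) screen coordinates."""
        half_x, y = packed
        return half_x * 2, y
    ```

    Any odd x snaps down to the even pixel beside it (317 draws at 316), which is invisible in practice, but every stored point is a byte shorter.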

    After AGI (KQ3 was the last King's Quest to use AGI), Sierra switched over to their new game engine/bytecode interpreter: SCI. SCI was rolled out in two stages, though.

    SCI0 (e.g., KQ4) was 16 colours and still revolved around the text parser. SCI1 (e.g., KQ5) was 256 colours and was point-and-click. (SCI2 and later were full multimedia)

    For the game player, the major differences you'll notice between AGI and SCI0 (both 16 colours, both text-based) are that SCI0 renders using dithering, gets full horizontal precision (x coordinates stored in 2 bytes), supports multiple fonts, and supports real sound devices (MT-32, AdLib). For the programmer, though, AGI and SCI0 were pretty radically different: SCI0's scripting language was an object-oriented, vaguely Scheme-inspired sort of language.
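    The dithering idea is easy to illustrate (hypothetical Python, not SCI's actual renderer): fill an area with a checkerboard of two of the 16 EGA palette indices, which blends at viewing distance into an in-between shade the palette doesn't actually contain.

    ```python
    def dither_fill(width: int, height: int, c1: int, c2: int) -> list[list[int]]:
        """Fill a region with a checkerboard of two 16-colour palette
        indices, visually faking a colour between the two."""
        return [[c1 if (x + y) % 2 == 0 else c2 for x in range(width)]
                for y in range(height)]
    ```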

  • Stack Overflow bans users en masse for rebelling against OpenAI partnership — users banned for deleting answers to prevent them being used to train ChatGPT
  • Yeah, during the reddit exodus, people were recommending overwriting your comment with garbage before deleting it. This (probably) forces them to restore your comment from backup. But realistically they were always going to harvest the comments stored in backup anyway, so I don't think it caused them any more work.

    If anything, this probably just makes reddit's/SO's partnership more valuable because your comments are now exclusive to reddit's/SO's backend, and other companies can't scrape it.

  • South Korean military set to ban iPhones over ‘security’ concerns
  • Why the quotes?

    If you ever see quotation marks in a headline, it simply means they're attributing the word/phrase to a particular source. In this case, they're saying that the word "security" was used verbatim in the intranet document. Scare quotes are never used in journalism, so they're not implying anything by putting the word in quotation marks. They're simply saying that they're not paraphrasing.

  • Which communication protocol or open standard in software do you wish was more common or used more?
  • Heads up for anyone (like me) who isn't already familiar with SimpleX: unfortunately, its name makes it impossible to search for unless you already know what it is. I was only able to track it down after a couple of frustrating minutes, once I added "linux" to the search on a lark.

    Anyway, it's a chat protocol.

  • Deleted
    File Mutability and Why Unix-based systems are insecure
  • Reminds me a little of the old Jonathan Shapiro research OSes (Coyotos, EROS, CapROS), though toned down a little bit. The EROS family was about eliminating the filesystem entirely at the OS level since you can simulate files with capabilities anyway. Serenum seems to be toning that down a little and effectively having file- or directory-level capabilities, which I think is sensible if you're going to have a capability-based OS, since they end up being a bit more user-visible as an OS.

    He's got the same problem every research OS has: zero software. He's probably smart to ditch the idea of hardware portability entirely and just fix on one hardware platform.

    I wish him luck selling his computer systems, but I doubt he's going to do very well. What would a customer do with one of these? Edit files? And then...edit them again? I guess you can show off how inconvenient it is to edit things due to its security.

    I just mean it's a bit optimistic to try and fund this by selling it. I understand he doesn't have a research grant, but it's clearly just a research OS.

  • Oblig
    The World’s E-Waste Has Reached a Crisis Point
  • I feel like the answer is recycling deposits somehow. I've seen attempts at them here and there, but I guess we haven't quite figured out the details yet. Electronics are a bit trickier to set up a deposit system for than pop cans. Even in places that do have electronics deposits, you often have to drive to a special recycling centre out past the airport that's open 3 hours in the middle of the day, only for them to tell you that everything's glued together, so they can't really separate out the parts they need, and most of it will probably just end up going to the landfill anyway.

    But theoretically, if we could get a serious deposit system that allowed recycling to be profitable and gave manufacturers an incentive to make their stuff easier to take apart and recycle (and hence easier to repair), that would be pretty sweet.

  • The World’s E-Waste Has Reached a Crisis Point
  • I'm guessing childless adults come in significantly below that. Just thinking about my kids and all of their book readers, barking animal toys, and light-up fairy wands, I have a bad feeling they may be bringing up that average.

    Though the nice thing about kids' electronics is they never get obsoleted. A light-up fairy wand is just as fun in 2074 as it is in 2024. So they just get cycled through the 2nd hand mommy communities until they break. It was $40 new, you buy it "mostly undamaged" for $20, hope your kid doesn't scratch it too badly so you can sell it a couple years down the line for $10 or so.

    The bad thing about kids' electronics is that for new stuff, it's really impossible to tell how long it's going to last. Could be 20 years, could be 20 minutes.

  • Your Car Is Spying on You for Insurance Companies
  • Sure! We can insure that for you! Oh, we just noticed that our InsureLink service isn't connecting to your car. So I'll just need you to sign this waiver saying that you're declining the InsureLink Safety discount. Just sign right here. It's just saying that we cannot offer you all of our insurance services, just like if you get in an accident or something and we can't remotely verify what you were doing at the time, we can't help you. Great! And without the Safety discount your premiums will go up by only $372.50 a month.

  • Unpatchable vulnerability in Apple chip leaks secret encryption keys
    > The threat resides in the chips’ data memory-dependent prefetcher

    Well that sounds extremely familiar. Nice to see the spirit of Spectre is still living on. The holy grail of speculation without any timing attack leaks is still eluding us, I guess.

  • Keeping on top of emails

    I'm a university professor and I often found myself getting stressed/anxious/overwhelmed by email at certain times (especially end-of-semester/final grades). The more emails that started to pile in, the more I would start to avoid them, which then started to snowball when people would send extra emails like "I sent you an email last week and haven't got a response yet...", which turned into a nasty feedback loop.

    My solution was to create 10 new email folders, called "1 day", "2 days", "3 days", "4 days", "5 days", "6 days", "7 days", "done", "never" and "TIL", which I use during stressful times of the year. Within minutes of an email coming into my inbox, I move it into one of those folders. "never" is for things that don't require any attention or action by me (mostly emails from the department about upcoming events that don't interest me). "TIL" is for things that don't require an action or have a deadline, but I know I'll be referring to a lot. Those are things like contact information, room assignments, plans for things, policy updates.

    The "x days" folders are for self-imposed deadlines. If I want to ensure I respond to an email within 2 days, I put it in the "2 days" folder, for example.

    And the "done" folder is for when I have completed dealing with an email. This even includes emails where the matter isn't resolved, but I've replied to it, so it's in the other person's court, so to speak. When they reply, it'll pop out of "done" back into the main inbox for further categorizing, so it's no problem.

    So during stressful, email-heavy times of year, I wake up to a small number of emails in my inbox. To avoid getting stressed, I don't even read them fully. I read just enough of them that I can decide if I'll respond to them (later) or not, categorize everything, and my inbox is then perfectly clean.

    Then I turn my attention to the "1 day" box, which probably only has about 3 or 4 emails in it. Not so overwhelming to only look at those, and once I get started, I find I can get through them pretty quickly.

    The thing I've noticed is that once I get over the initial dread of looking at my emails (which used to be caused by looking at a giant dozens-long list of them), going through them is pretty quick and smooth. The feeling of cleaning out my "1 day" inbox is a bit intoxicating/addictive, so then I'll want to peek into my "2 days" box to get a little ahead of schedule and so on. (And if I don't want to peek ahead that day, hey, no big deal)

    Once I'm done with my emails, I readjust them (e.g., move all the "2 days" into "1 day", then all the "3 days" into "2 days", and so on) and completely forget about them guilt-free for the rest of the day.
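    The nightly readjustment is mechanical enough that it could be scripted. A hypothetical sketch (folder names as above; a real mail client would do the moves through its own API):

    ```python
    # The seven deadline folders, nearest deadline first.
    FOLDERS = ["1 day"] + [f"{n} days" for n in range(2, 8)]

    def rotate(mail: dict[str, list[str]]) -> None:
        """Shift every deadline folder down one day:
        '2 days' drains into '1 day', '3 days' into '2 days', and so on."""
        for closer, further in zip(FOLDERS, FOLDERS[1:]):
            mail[closer].extend(mail[further])
            mail[further].clear()
    ```

    Because the pairs are processed nearest-deadline first, each email moves exactly one folder per rotation.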

    Since implementing this system a year ago, I have never had an email languish for more than a couple weeks, and I don't get anxiety attacks from checking email any more.

    Thomas Gleixner aims for "decrapification" of Linux APIC code, longs for removing 32-bit code

    Thomas Gleixner of Linutronix (now owned by Intel) has posted 58 patches for review into the Linux kernel, but they're only the beginning! Most of the patches are just first steps toward the more major renovations he calls "decrapification". He says:

    > While working on a sane topology evaluation mechanism, which addresses the short-comings of the existing tragedy held together with duct-tape and hay-wire, I ran into the issue that quite some of this tragedy is deeply embedded in the APIC code and uses an impenetrable maze of callbacks which might or might not be correct at the point where the CPUs are registered via MPPARSE or ACPI/MADT.

    > So I stopped working on the topology stuff and decided to do an overhaul of the APIC code first. Cleaning up old gunk which dates back to the early SMP days, making the CPU registration halfways understandable and then going through all APIC callbacks to figure out what they actually do and whether they are required at all. There is also quite some overhead through the indirect calls and some of them are actually even pointlessly indirected twice. At some point Peter yelled static_call() at me and that's what I finally ended up implementing.

    He also at one point (half-heartedly) argues for the removal of 32-bit x86 code entirely, saying it would simplify the APIC code and reduce the chance of introducing bugs in the future:

    > Talking about those museums pieces and the related historic maze, I really have to bring up the question again, whether we should finally kill support for the museum CPUs and move on.

    > Ideally we remove 32bit support alltogether. I know the answer... :(

    > But what I really want to do is to make x86 SMP only. The amount of #ifdeffery and hacks to keep the UP support alive is amazing. And we do this just for the sake that it runs on some 25+ years old hardware for absolutely zero value. It'd be not the first architecture to go SMP=y.

    > Yes, we "support" Alpha, PARISC, Itanic and other oddballs too, but that's completely different. They are not getting new hardware every other day and the main impact on the kernel as a whole is mostly static. They are sometimes in the way of generalizing things in the core code. Other than that their architecture code is self contained and they can tinker on it as they see fit or let it slowly bitrot like Itanic.

    > But x86 is (still) alive and being extended and expanded. That means that any refactoring of common infrastructure has to take the broken hardware museum into account. It's doable, but it's not pretty and of really questionable value. I wouldn't mind if there were a bunch of museum attendants actively working on it with taste, but that's obviously wishful thinking. We are even short of people with taste who work on contemporary hardware support...

    > While I cursed myself at some point during this work for having merged i386/x86_64 back then, I still think that it was the correct decision at that point in time and saved us a lot of trouble. It admittedly added some trouble which we would not have now, but it avoided the insanity of having to maintain two trees with different bugs and "fixes" for the very same problems. TBH quite some of the horrors which I just removed came out of the x86/64 side. The oddballs of i386 early SMP support are a horror on their own of course.

    > As we made that decision more than 15 years [!] ago, it's about time to make new decisions.

    Linus responded to one of the patches, saying "I'm cheering your patch series", but has, diplomatically, not acknowledged the plea to remove 32-bit support.

    How much does it bother you that OpenAI is trained on your data? What can we do about it?

    It feels like we have a new privacy threat that's emerged in the past few years, and this year especially. I kind of think of the privacy threats over the past few decades as happening in waves:

    1. First we were concerned about governments spying on us. The way we fought back (and continue to fight back) was through encrypted and secure protocols.
    2. Then we were concerned about corporations (Big Tech) taking our data and selling it to advertisers to target us with ads, or otherwise manipulate us. This is still a hard battle being fought, but we're fighting it mostly by avoiding Big Tech ("De-Googling", switching from social media to communities, etc.).
    3. Now we're in a new wave. Big Tech is now building massive GPTs (ChatGPT, Google Bard, etc.) and it's all trained on our data. Our reddit posts and Stack Overflow posts and maybe even our Mastodon or Lemmy posts! Unlike with #2, avoiding Big Tech doesn't help, since they can access our posts no matter where we post them.

    So for that third one...what do we do? Anything that's online is fair game to be used to train the new crop of GPTs. Is this a battle that you personally care a lot about, or are you okay with GPTs being trained on stuff you've provided? If you do care, do you think there's any reasonable way we can fight back? Can we poison their training data somehow?

    duncesplayed @lemmy.one
    Posts 4
    Comments 254