
Posts: 2 · Comments: 689 · Joined: 2 yr. ago

  • Windows is the first thing I can think of that used the word "application" in that way, I think even back before Windows could be considered an OS (when it still depended on MS-DOS). Even then, the Windows API stood for Application Programming Interface.

    Here's a Windows 3.1 programming guide from 1992 that freely refers to programs as applications:

    Common dialog boxes make it easier for you to develop applications for the Microsoft Windows operating system. A common dialog box is a dialog box that an application displays by calling a single function rather than by creating a dialog box procedure and a resource file containing a dialog box template.

  • Some people actively desire this kind of algorithm because they find it easier to find content they like this way.

    Raw chronological order tends to overweight the frequent posters. If you follow someone who posts 10 times a day and 99 people who post once a week, that one account contributes about 70 of the roughly 170 posts in your weekly feed, so 1% of the users you follow ends up being about 40% of the posts you see.

    One simple algorithm that is almost always a better user experience is to retrieve the most recent X posts from each of the followed accounts and then sort those chronologically (sketched in code at the end of this comment). Once you're doing that, though, you're probably thinking about other ways to optimize the experience. What should the value of X be? Do you want to hide posts the user has already seen, unless there's been a lot of comment/followup activity? Do you want to prioritize posts in which the user was specifically tagged in a comment? Or the post itself? If so, how much?

    It's a non-trivial problem that would require thoughtful design, even for a zero advertising, zero profit motive service.
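
    A minimal sketch of that "most recent X per account" idea in Python; the Post fields and the per_author_cap parameter (playing the role of X) are illustrative assumptions, not anyone's actual implementation:

        from dataclasses import dataclass
        from datetime import datetime

        @dataclass
        class Post:
            author: str
            created_at: datetime
            body: str

        def build_feed(posts_by_author: dict[str, list[Post]],
                       per_author_cap: int = 5) -> list[Post]:
            """Cap each followed account at `per_author_cap` recent posts,
            then merge everything into one reverse-chronological feed."""
            picked: list[Post] = []
            for author_posts in posts_by_author.values():
                # Take only the newest few posts from this account so one
                # prolific poster can't crowd out everyone else.
                newest_first = sorted(author_posts, key=lambda p: p.created_at, reverse=True)
                picked.extend(newest_first[:per_author_cap])
            # Final feed is still chronological, just with per-account caps applied.
            return sorted(picked, key=lambda p: p.created_at, reverse=True)

    The refinements mentioned above (hiding already-seen posts, boosting posts that tag the user) would slot in as extra filters or weights before that final sort, which is where the design decisions start piling up.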

  • Instead, I actively avoided conversations with my peers, particularly because I had nothing in common with them.

    Looking at your own social interactions with others, do you now consider yourself to be socially well adjusted? Was the "debating child in a coffee shop" method actually useful for developing the social skills that matter in adulthood?

    I have some doubts.

  • It's worth pointing out that browser support is a tiny, but important, part of overall ecosystem support.

    TIFF is the dominant standard for the hardware and processes that digitize physical documents, or that publish/print digital files as physical prints. But most browsers don't bother to display TIFF, because it's not a good format for web use.

    Note also that non-backwards-compatible TIFF extensions are usually what cameras capture as "raw" image data and what image development software stores as "digital negatives."

    JPEG XL is trying to replace TIFF at the interface between the physical analog world and the digital files we use to represent that image data. I'm watching this space in particular, because the original web-era formats of JPEG, PNG, and GIF (and newer web-oriented formats like WebP and AVIF) aren't trying to do anything with physical sensors, scans, prints, etc.

    Meanwhile, JPEG XL is trying to replace JPEG on the web, offering much more efficient, much higher quality compression for photographic images. And it's trying to replace PNG for lossless compression.

    It's trying to do it all, so watching to see where things get adopted and supported will be interesting. Apple appears to be going all in on JXL, from browser support to file manager previews to actual hardware sensors storing raw image data in JXL. Adobe supports it, too, so we might start to see full JXL workflows from image capture to postprocessing to digital/web publishing to full blown paper/print publishing.

  • Some people struggle with the difference between arguing about descriptive statements, about what things are, and arguing about normative statements, about what things should be. And these topics are nuanced.

    Decompiling to learn functionality is fair use (because like I said in my previous comment, functionality can't be copyrighted), but actually using and redistributing code (whether the original source code, the compiled binary derived from the source code, or decompiled code derived from the binary) is pretty risky from a legal standpoint. I'd advise against trying to build a business around the practice.

  • Yeah, that's why all the IBM clones had to write their BIOS firmware as clean-room implementations: new code that reproduced the functionality described in IBM's own documentation.

    Functionality can't be copyrighted, but code can be. So the easiest way to prove that you made something without the copyrighted code is to mimic the functionality through your own implementation, not by transforming the existing copyrighted code through decompilation or anything like that.

  • Was that in 2000? My own vague memory is that Linux started picking up some steam in the early 2000s, then branched out to a new audience shortly after Firefox and Ubuntu hit the scene around 2004, and actually saw some adoption when Windows XP's poor security and Windows Vista's poor hardware support started breaking things.

    So depending on the year, you could both be right.

  • My anecdotal observation is the same. Most of my friends in Silicon Valley are using MacBooks, including some at fairly mature companies like Google and Facebook.

    I had a 5-year sysadmin career. I dealt with some Microsoft stuff, especially identity/accounts/mailboxes through Active Directory and Exchange, but mainly did Linux-specific work on headless servers, with desktop Linux at home.

    When I switched to a non-technical career field I went with a MacBook for my laptop daily driver on the go, and kept desktop Linux at home for about 6 or 7 more years.

    Now, basically a decade after that, I'm pretty much only driving MacOS on a laptop as my normal OS, with no desktop computer (just a docking station for my Apple laptop). It's got a good command line, I can still script things, I can still rely on a pretty robust FOSS software repository in Homebrew, and the filesystem in MacOS makes a lot more sense to me than the Windows lettered drives and reserved/specialized folders I can never remember anymore. And nothing beats the hardware (battery life, screen resolution, touchpad feel, lid hinge quality), in my experience.

    It's a balance. You want the computer to facilitate your actual work, but you also don't want to spend too much time and effort administering your own machine. So the tradeoff is between the flexibility of doing things your way and outsourcing a lot of it to the maintainers' defaults (whether you're on Windows, MacOS, or a specific desktop environment in Linux), mindful of whether your own tweaks will break on some update.

    So it's not surprising to me when programmers/developers happen to be issued a MacBook at their jobs.

  • Year of birth matters a lot for this experiment.

    Macintosh versus some IBM (or clone) running MS-DOS is a completely different era than Windows Vista versus PowerPC Macs, which was a completely different era from Windows Store versus Mac App Store versus something like a Chromebook or iPad as a primary computing device.

  • Installing MacOS on Intel Macs is really easy if you still have your recovery partition. It's not hard even if you've overwritten the recovery partition, so long as you have the ability to image a USB drive with a MacOS installer (which is trivial if you have another Mac running MacOS).

    I haven't messed around with the Apple silicon versions, though. Maybe I'll give it a try sometime, used M1 MacBooks are selling for pretty cheap.

  • I don't really mind when a cloud-connected device gracefully falls back to an offline-only device. It seems like it retains all of the non-cloud functionality: reading and setting temps, including on a schedule.

    It'd be nicer if they gave the option to let people run their own servers for the networked functionality, but it doesn't seem like they've reduced or otherwise held back on the offline functionality one would expect from a thermostat.

  • Longer queries give better opportunities for error correction, like correcting misspellings, matching synonyms, or applying the right context clues.

    In this specific example, "is Angelina Jolie in Heat" gives better results than "Angelina Jolie heat," because the words that make it a complete-sentence question are also the words that confirm the searcher is asking about the movie (see the toy sketch at the end of this comment).

    Especially with negative results, like when you ask a question where the answer is no, the semantic links in the index can sometimes get the search engine to point out a specific mistaken assumption you've made.
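
    A toy illustration of that idea in Python (purely hypothetical, nothing like how a real search engine parses queries): the extra question words are exactly what lets a naive pattern spot the "is <person> in <film>" intent, while the bare keywords stay ambiguous.

        import re

        # Toy intent detector: the words that make the query a complete question
        # ("is ... in ...") are the same words that reveal it's about a film's cast.
        CAST_QUESTION = re.compile(r"^is (?P<person>.+?) in (?P<title>.+)$", re.IGNORECASE)

        def classify(query: str) -> str:
            match = CAST_QUESTION.match(query.strip())
            if match:
                return f"cast question: does the film '{match['title']}' feature {match['person']}?"
            return "keyword query: ambiguous, fall back to plain term matching"

        print(classify("is Angelina Jolie in Heat"))  # recognized as a question about the movie
        print(classify("Angelina Jolie heat"))        # just keywords, no clear intent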

  • Why do people Google questions anyway?

    Because it gives better responses.

    Google and all the other major search engines have built-in functionality to apply natural language processing to the user's query and to the text in their index, so they can run a search more precisely aligned with the user's desired results, or recommend related searches.

    If the functionality is there, why wouldn't we use it?