People still working in IT, thoughts on IPv6?
  • Time isn't the only factor for adoption. Between the adoption of IPv4 and IPv6, the networking stack shifted away from network companies like Novell to the OSes like Windows, which delayed IPv6 support until Vista.

    When IPv4 was adopted, the networking industry was a competitive space. When IPv6 came around, it was becoming stagnant, much like Internet Explorer. It wasn't until Windows Vista that IPv6 became an option, Windows 7 for professionals to consider it, and another few years later for it to actually be deployable in a secure manner (and that's still questionable).

    Most IT support and developers couldn't even play with IPv6 during the early 2000s because our operating systems and network stacks didn't support it. Meanwhile, there was a boom of Internet-connected devices that only supported IPv4. There are a few other things that affected adoption, but it really was a pretty bad time for IPv6 migration. It's a little better now, but "better" still isn't very good.

  • Mass shooting kills 4, wounds 17 in nightlife district in Birmingham, Alabama.
  • You should probably read/know the actual law, rather than just getting it close. You're probably referring to 18 USC 922 (d) (10), which includes any felony-- not just shooting. That's one of 11 listed requirements in that section, which assumes that the first requirement (a) (1) is met: not an interstate nor foreign transaction. There's a lot more to it than just "as long as you don't have good evidence they're going to go shoot someone"

    Even after the sale, ownership is still illegal under section (g)-- it just isn't the seller's fault anymore.

    This is basic information that should be known to any gun safety advocate. "Responsible" gun owners must know those laws, plus others backward and forward. One small slip-up is a felony, jail, and permanent loss of gun ownership/use. Are they really supposed to listen to those who can't even talk about current law correctly?

    The law can be better, but you won't do yourself any favors by misrepresenting it.

  • Trying to build viable third parties by voting for them in presidential elections is like trying to build a third door in your house by repeatedly walking into the wall where you want the door to be.
  • It seems you are mixing the concepts of voting systems and candidate selection. Neither FPP nor FPTP should sound scary. As a voting system, FPP works well enough more often than many want to admit. The name just describes it in more detail: First Preference Plurality.

    Every voting system is as bottom-up or top-down as its candidate selection process. The voting system itself doesn't really affect whether it is top-down or bottom-up. Requiring approval/voting from the current rulers would be top-down. Only requiring ten signatures on a community petition is more bottom-up.

    The voting systems don't care about the candidate selection process. Some require precordination for a "party", but that could also be a party of 1. A party of 1 might not be able to get as much representation as one with more people: but that's also the case for every voting system that selects the same number of candidates.

    Voting systems don't even need to be used for representation. If a group of friends is voting on where to eat, one problem might be selecting the places to vote on, but that happens before the vote. With the vote itself, FPP might have 70% prefer pizza over Indian food, but the Indian food option might still win because the pizza voters split their first choices. Having more candidates often leads to minority rule/choice, and that's not very good for food choices or community representation.
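
    The food example can be tallied in a few lines. A minimal sketch (all restaurant names and ballot counts are invented for illustration): 70% of voters prefer pizza, but their first choices split across three pizzerias, so the 30% Indian-food bloc wins the FPP plurality.

    ```python
    from collections import Counter

    # Hypothetical first-choice ballots: 14 of 20 voters (70%) want pizza,
    # but they split across three pizzerias; 6 voters (30%) want Indian food.
    ballots = (
        ["Pizza Place A"] * 5 +
        ["Pizza Place B"] * 5 +
        ["Pizza Place C"] * 4 +
        ["Indian Food"] * 6
    )

    # First Preference Plurality: the single most common first choice wins.
    tally = Counter(ballots)
    winner, votes = tally.most_common(1)[0]
    print(winner, votes)  # Indian Food 6, despite a 70% pizza majority
    ```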

  • Microsoft finally officially confirms it's killing Windows Control Panel sometime soon
  • That many steps? WindowsKey+Break > Change computer name.

    If you're okay with three steps, on Windows 10 and newer, you can right-click the start menu and open System. Just about any version supports right-clicking "My Computer" or "This PC" and selecting Properties, as well.

  • US Considers a Rare Antitrust Move: Breaking Up Google
  • Do you use Android? AI was the last thing on their minds for AOSP until OpenAI got popular. They've been refining the UIs, improving security/permissions, catching up on features, bringing WearOS and Android TV up to par, and making Google Assistant incompetent. Don't take my word for it; you'll rarely see any AI features before OpenAI's popularity: v15, v14, v13, and v12. As an example of the benefits: Google and Samsung collaborating on WearOS allowed more custom apps and integrations for nearly all users. Still, there was a major drop in battery life and compatibility with non-Android devices compared to Tizen.

    There are plenty of other things to complain about with their Android development. Will they continue to change or kill things like they do all their other products? Did WearOS need to require Android OSes and exclude iOS? Do Advertising APIs belong in the base OS? Should vendors be allowed to lock down their devices as much as they do? Should so many features be limited to Pixel devices? Can we get Google Assistant to say "Sorry, something went wrong. When you're ready: give it another try" less often instead of encouraging stupidity? (It's probably not going to work if you try again).

    Google does a lot of wrong, even in Android. AI on Android isn't one of them yet. Most other commercially developed operating systems are proprietary, rather than open to users and OEMs. The collaboration leaves much to be desired, but Android is unfortunately one of the best examples of large-scale development of more open and libre/free systems. A better solution than trying to break Android up is taking/forking Android and making it better than Google seems capable of.

  • Don't bother promoting IPv6 as "the future". It's never going to be the default.
    > I’m fully aware how rirs allocate ipv6. The smallest allocation is a /64, that’s 65535 /64’s. There are 2^32 /32’s available, and a /20 is the minimum allocatable now. These aren’t /8’s from IPv4, let’s look at it from a /56, there are 10^16 /56 networks, roughly 17 million times more network ranges than IPv4 addresses.
    >
    > /48s are basically pop level allocations, few end users will be getting them. In fact comcast which used to give me /48s is down to /60 now.
    >
    > I’ll repeat, we aren’t running out any time soon, even with default allocations in the /3 currently existing for ipv6.

    Sorry, but your reply suggests otherwise.

    The RIRs (currently) never allocate a /64 nor a /56. /48 is their (currently) smallest allocation. For example, of the ~800,000 /32's ARIN has, only ~47k are "fragmented" (smaller than /32) and <4,000 are /48s. If /32s were the average, we'd be fine, but in our infinite wisdom, we assign larger subnets (like Comcast's 2601::/20 and 2603:2000::/20).

    > These aren’t /8’s from IPv4. let’s look at it from a /56, there are 10^16 /56 networks, roughly 17 million times more network ranges than IPv4 addresses.

    Taking into account the RIR allocations noted above, the closer equivalent to /8 is the 1.048M /20s available. Yes, it's more than the 8-bit class-A blocks, but does 1 million really sound like the scale you were talking about? "enough addresses in ipv6 to address every known atom on earth"

    The situation for /48s is better, but still not as significant as one would think. With Cloudflare as an extreme example: they have 6639 IPv4 /24 blocks, but 670,550 IPv6 /48 blocks. Same number of networks in theory, but growing from needing 13 bits of networks in IPv4 to 20 bits of networks in IPv6: 7 extra bits of usage from just availability.

    That sort of increase of networks is likely-- especially in high-density data centers where one server is likely to have multiple IPv6 networks assigned to it. What do you think the assignments will look like as we expand to extra-terrestrial objects like satellites, moons, planets, and other spacecraft?

    > I’ll repeat, we aren’t running out any time soon

    Soon vs never. OP I replied to said "never". Your post implied similarly, too-- that these numbers are far too big for humans to imagine or ever reach. The IPv6 address space is large enough for that: yes. But our allocations still aren't. The number of bits we're actually allocating (which is the metric used for running out) is significantly smaller than most think. In the post above, you're suggesting 56-64 bits, but the reality is currently 20-32 bits-- 1M-4B allocations.

    If everyone keeps treating IPv6 as infinite, the current allocation sizes would take longer than IPv4 to run out, but it isn't really an unfathomable number like the number of atoms on Earth. 281T /48s works more sanely: likely enough for our planet-- but the RIRs seem to avoid allocating subnets that small.

    IPv4-style policy shifts could happen: requirements for address blocks rise, allocation sizes shrink, older holders keep their /20 blocks (instead of 8-bit class A blocks), and newer organizations get limited to /48 blocks or smaller with proper justification. The longer we keep giving away /20s and /32s like candy, the more likely the allocations run out sooner (especially compared to never). My initial message tried to imply that it depends on how fast we grow and achieve network growth goals:

    30 years? Optimistically, including interstellar networks and continued exponential growth in IP-connected devices? Yes.

    . . .

    Realistically, it’s probably more than 100 years away, maybe outside our lifetimes

  • Don't bother promoting IPv6 as "the future". It's never going to be the default.
  • That wasn't what I said. 2^56 was NOT a reference to bits, but to how many IPs we could assign every visible star, if it weren't for subnet limitations. IPv6 isn't classless like IPv4. There will be a lot of wasted/unused/unroutable addresses due to the reserved 64 bits.

    The problem isn't the number of addresses, but the number of allocations. Our smallest allocation today, for a 128-bit address, is only 48 bits. Allocation-wise, we effectively only have 48 bits of allocations, not 128. To run out like with IPv4, we only need to assign 48 bits of networks, rather than the 24 bits for IPv4. Go read up on how ARIN/RIPE/APNIC allocate IPs. It's pretty wasteful.

  • Don't bother promoting IPv6 as "the future". It's never going to be the default.
  • There's a large possibility we'll run out of IPv6 addresses sooner than we think.

    Theoretically, 128-bits should be enough for anything. IPv6 can theoretically give 2^52 IPs to every star in the universe: that would be a 76-bit subnet for each star rather than the required 64 minimum. Today, we (like ARIN) do 32-48-bit allocations for IPv6.

    With IPv4, we did 8-24-bit allocations. IPv6 gives only 24 extra allocation bits, which may last longer than IPv4. We basically filled up IPv4's 24 bits of allocations in 30 years. 281 trillion (2^48) allocations is fairly reachable. There doesn't seem to be any slowdown of Internet or IP growth. Docker and similar are creating more reasons to allocate IPs (per container). We're also still in the early years of interstellar communications. With IPv4, we were able to adopt classless subnetting early to delay the problem. IPv6's slow adoption probably makes a similar shift in subnetting unlikely.

    If we continue the current allocation trend, can we run out of the 281 trillion allocations in 30 years? Optimistically, including interstellar networks and continued exponential growth in IP-connected devices? Yes. Realistically, it's probably more than 100 years away, maybe outside our lifetimes, but that still sounds low when IPv6 has enough IPs to assign one to every blade of grass, given every visible star has an Earth. We're basically allocating a 32-48 bit subnet to every blade of grass, and there aren't really enough allocations for that.
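
    The allocation arithmetic above is easy to check. A quick sketch of the counts involved (pure arithmetic, no external data):

    ```python
    # Counts of distinct allocations at the prefix sizes discussed above.
    ipv4_addresses = 2 ** 32    # all IPv4 addresses
    ipv6_slash48s = 2 ** 48     # /48 allocations: ~281 trillion
    ipv6_slash32s = 2 ** 32     # /32 allocations: same count as IPv4 addresses
    ipv6_slash20s = 2 ** 20     # /20 allocations: ~1.05 million

    print(f"{ipv6_slash48s:,}")  # 281,474,976,710,656
    print(f"{ipv6_slash20s:,}")  # 1,048,576
    # A /32-sized allocation policy runs out on exactly IPv4's scale:
    print(ipv6_slash32s == ipv4_addresses)  # True
    ```

    In other words, the "running out" question is really about which prefix length allocation policy settles on, not about the raw 128-bit space.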

  • Samsung is sunsetting Tizen and fully ending support for the smartwatch OS
  • I'm still rocking a Galaxy Watch 4: one of the first Samsung watches with WearOS. It has a true always-on screen, as most smartwatches should. The always-on was essential to me. I generally notice within 60 minutes if an update or some "feature" tries to turn it off. Unfortunately, that's the only thing off about your comment.

    It's a pretty rough experience. The battery is hit or miss. At good times, I could get 3 days. Keeping it locked (like after charging) used to kill it within 60 minutes (thankfully, fixed after a year). Bad updates can kill the battery life, even when new: from 3 days of life to 10 hours, then back to 3 days. Now, after almost 3 years, it's probably about 30 hours, rather than 3 days.

    In general, the battery life with always-on display should last more than 24 hours. That'd be pretty acceptable for a smartwatch, but is it a smartwatch?

    It can't play music on its own without overheating. It can't hold a phone call on its own without overheating. App support is limited, but the processor seems to struggle most of the time. Actually smart features seem rare, especially for something that needs consistent charging.

    Most would be better off with a Pebble or less "smart" watch: better water resistance, better battery, longer support, 90% of the usable features, and other features to help make up for any differences.

  • More Gen Z Americans identify as LGBTQ than as Republican
  • Vote. Seriously. (If practical: get involved, too). The U.S. is currently in the middle of a large shift of generational power.

    Many of these changes are fairly recent:

    • 2020 was the first federal election where the Baby Boomers didn't make up the largest voting generation.
    • It was only in 2016 that the Gen X and younger voting numbers grew larger than the boomer and older numbers.
    • Those numbers had been possible since 2010. Despite having more eligible voters (135M vs 93M), the "GenXers and younger" only had ~36M actual voters, compared to ~57M older ones.

    Looking forward, the numbers only get better for younger voters. There hasn't been a demographic shift like this in the U.S. in a long time (ever?). The current power structures cannot be maintained for much longer. It is still possible for that shift to be peaceful. Please encourage the peaceful transfer: vote. Vote in the primaries. Maybe even vote for better voting systems. This time is unique, but change takes time. Don't let them fool you otherwise: that's just them trying to hold on to their power.
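
    The 2010-era numbers above imply a large turnout gap. A quick check of the implied rates, using only the figures quoted in the bullet list:

    ```python
    # Figures from the comment above: eligible voters vs. actual voters (~2010).
    eligible_younger, voters_younger = 135e6, 36e6  # Gen X and younger
    eligible_older, voters_older = 93e6, 57e6       # boomer and older

    print(f"{voters_younger / eligible_younger:.0%}")  # 27% turnout
    print(f"{voters_older / eligible_older:.0%}")      # 61% turnout
    ```

    Despite a 42-million-voter advantage in eligibility, the younger cohort's turnout rate was less than half the older cohort's.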

  • Why a kilobyte is 1000 and not 1024 bytes
  • To me, your attempt at defending it or calling it a retcon is an awkward characterization. Even in your last reply: now you're calling it an approximation. Dividing by 1024 is an approximation? Did computers have trouble dividing by 1000? Did it lead to a benefit of the 640KB/320KB memory split in the conventional memory model? Does it lead to a benefit today?

    Somehow, every other computer measurement avoids this binary prefix problem. Some, like you, seem to try to defend it as the more practical choice compared to the "standard" choice every other unit uses (e.g., 1.536 Mbps T1 or "54" Mbps 802.11g).

    The confusion this continues to cause does waste quite a bit of time and money today. Vendors continue to show both units on the same specs sheets (open up a page to buy a computer/server). News still reports differences as bloat. Customers still complain to customer support, which goes up to management, and down to project management and development. It'd be one thing if this didn't waste time or cause confusion, but we're still doing it today. It's long past time to move on.

    The standard for "kilo" was 1000, centuries before computer science existed. Things that need binary units have an option to use, but it's probably not needed: even in computer science. Trying to call kilo/kibi a retcon just seems to be defending the 1024 usage today: despite the fact that nearly nothing else (even in computers) uses the binary prefixes.

  • Why a kilobyte is 1000 and not 1024 bytes
  • 209GB? That probably doesn't include all of the RAM: like in the SSD, GPU, NIC, and similar. Ironically, I'd probably approximate it to 200GB if that was the standard, but it isn't. It wouldn't be that much of a downgrade to go to 200GB from 192GiB. Are 192 and 209 that different? It's not much different from remembering the numbers for a 1.44MB floppy, 1.544 Mbps T1 lines, or the ~3.14159 pi approximation. Numbers generally end up getting weird: trying to keep them in binary prefixes doesn't really change that.

    The definition of kilo being "1000" was standard before computer science existed. If they used it in a non-standard way: it may have been common or a decent approximation at the time, but it was not standard. Does that justify the situation today, where many vendors show both definitions on the same page, like when buying a computer or a server? Does that justify the development time/confusion from people still not understanding the difference? Was it worth the PR reaction from Samsung to, yet again, point out the difference?

    It'd be one thing if this confusion had stopped years ago, and everyone understood the difference today, but we're not there: and we're probably not going to get there. We have binary prefixes; it's long past time to use them when appropriate-- but even appropriate uses are far fewer than they appear: it's not like you have a practical 640KiB/2GiB limit per program anymore. Even in the cases you do: is it worth confusing millions/billions on consumer spec sheets?
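
    For anyone unsure how far the two prefix systems diverge, here's a minimal converter sketch (the 192 GiB figure is the example value from this thread; outputs rounded for display):

    ```python
    # SI (decimal) vs IEC (binary) prefixes for the same byte count.
    GIB = 2 ** 30   # gibibyte: binary prefix (IEC)
    GB = 10 ** 9    # gigabyte: standard SI prefix

    def gib_to_gb(gib: float) -> float:
        """Convert binary gibibytes to decimal gigabytes."""
        return gib * GIB / GB

    print(round(gib_to_gb(1), 3))    # 1.074 -> the ~7% gap per "GB"
    print(round(gib_to_gb(192), 1))  # 206.2 -> 192 GiB in decimal units
    ```

    The gap compounds with each prefix step, which is why "1 TB" drives show up as roughly 931 "GB" in binary-prefix tools.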

  • Why a kilobyte is 1000 and not 1024 bytes
  • This is all explained in the post we're commenting on. The standard "kilo" prefix, from the metric system, predates modern computing and even the definition of a byte: 1700s vs 1900s. It seems very odd to argue that the older definition is the one trying to retcon.

    The binary usage in software was/is common, but it's definitely more recent, and it causes a lot of confusion because it doesn't match the older and bigger standard. Computers are very good at numbers; they never should have tried to hijack the existing prefix, especially when it was already defined by existing international standards. One might be able to argue that the US hadn't really adopted the metric system at the point of development, but the usage of 1000 to define kilo is clearly older than the usage of 1024 to define the kilobyte. The main new (last 100 years) thing here is that 1024 bytes is a kibibyte.

    Kibi is the retcon. Not kilo.

  • Why a kilobyte is 1000 and not 1024 bytes
  • How do you define a retcon? Were kilograms 1024 grams, too? When did that change? It seems kilo has meant 1000 since the metric system was created in the 1700s, long before any binary prefix.

    From the looks of it, software vendors were trying to retcon the definition of "kilo" to be 1024.

  • 8GB RAM on M3 MacBook Pro 'Analogous to 16GB' on PCs, Claims Apple
  • tl;dr

    The memory bandwidth isn't magic, nor special, but generally meaningless. MT/s matter more, but Apple's non-magic is generally higher than the industry standard in compact form factors.

    Long version:

    How are such wrong numbers so widely upvoted? The 6400Mbps is per pin.

    Generally, DDR5 has a 64-bit data bus. The standard names also indicate the speeds per module: PC5-32000 transfers 32GB/s with 64-bits at 4000MT/s, and PC5-64000 transfers 64GB/s with 64-bits at 8000MT/s. With those speeds, it isn't hard for a DDR5 desktop or server to reach similar bandwidth.

    Apple doubles the data bus from 64-bits to 128-bits (which is still nothing compared to something like an RTX 4090, with a 384-bit data bus). With that, Apple can get 102.4GB/s with just one module instead of the standard 51.2GB/s. The cited 800GB/s is with 8: most comparable hardware does not allow 8 memory modules.
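
    The per-module numbers above all follow from one formula: peak bandwidth = transfer rate × bus width in bytes. A quick sketch using the figures from this comment:

    ```python
    def peak_bandwidth_gbs(mt_per_s: float, bus_bits: int) -> float:
        """Peak memory bandwidth in GB/s: transfers/sec times bus width in bytes."""
        return mt_per_s * 1e6 * (bus_bits / 8) / 1e9

    print(peak_bandwidth_gbs(4000, 64))   # 32.0  -> PC5-32000 (one 64-bit module)
    print(peak_bandwidth_gbs(8000, 64))   # 64.0  -> PC5-64000
    print(peak_bandwidth_gbs(6400, 128))  # 102.4 -> Apple's doubled 128-bit bus
    print(peak_bandwidth_gbs(6400, 128) * 8)  # 819.2 -> roughly the cited 800GB/s with 8
    ```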

    Ironically, the memory bandwidth is pretty much irrelevant compared to the MT/s. To quote Dell defending their CAMM modules:

    > In a 12th-gen Intel laptop using two SO-DIMMs, for example, you can reach DDR5/4800 transfer speeds. But push it to a four-DIMM design, such as in a laptop with 128GB of RAM, and you have to ratchet it back to DDR5/4000 transfer speeds.

    That tradeoff makes it hard to balance speed, capacity, and upgradability. Even the upcoming Core Ultra 9 185H seems rated for 5600 MT/s-- after 2 years, we're almost getting PC laptops that have the memory speed of MacBooks. This wasn't Apple being magical, but just taking advantage of OEMs dropping the ball on how important memory can be to performance. The memory bandwidth is just the cherry on top.

    The standard supports these speeds and faster. To be clear, these speeds and capacity don't do ANYTHING to support "8GB is analogous to..." statements. It won't take magic to beat, but the PC industry doesn't yet have much competition in the performance and form factors Apple is targeting. In the meantime, Apple is milking its customers: the M3s have the same MT/s and memory technology as two years ago. It's almost as if they looked at the next 6-12 months and went: "They still haven't caught up, so we don't need to go much faster yet-- but we can make a lot of money while we wait."

  • Powerful Malware Disguised as Crypto Miner Infects 1M+ Windows, Linux PCs
  • They describe an SSH infector, as well as a credentials scanner. To me, that sounds like it started from exploited/infected Windows computers with SSH access, and then continued from there.

    With how many unencrypted SSH keys there are, how most hosts keep a list of the servers they SSH into, and how they can probably bypass some firewall protections once they're inside the network: not a bad idea.

    Eyron @lemmy.world