  • In lower-level languages like C/C++ the reason becomes much more apparent once you learn about memory allocation and management (as a bonus, it also really helps you understand how OSes handle memory). Dynamically declaring variables in a loop would mean allocating a separate chunk of memory for each variable generated on the fly; most, if not all, of those dynamically declared variables would never use most of their allocated memory, resulting in a ton of extra overhead and wasted space in memory. An array is usually the answer when someone asks how to dynamically define variables: you allocate the space you need once and can iterate across it block by block, giving you more control and efficiency within your reserved block of memory (a quick sketch of the contrast is below). Linked lists are also a fun thing to look into when you aren't sure how big your array needs to be. It's a hard question to answer in a 100-level class because the answer actually goes pretty deep into low-level programming, operating system, and hardware principles.
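
    A minimal sketch of that contrast in C, with hypothetical sizes and names (nothing here comes from the original question): one tiny allocation every time through the loop versus reserving a single array up front.

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    #define N 1000

    int main(void)
    {
        /* "Dynamically declaring a variable" each iteration: every pass asks the
           allocator for a separate block, and each block carries its own heap
           bookkeeping overhead. */
        for (int i = 0; i < N; i++) {
            int *value = malloc(sizeof *value);   /* one tiny allocation per loop */
            if (!value)
                return 1;
            *value = i * i;
            /* ...use *value... */
            free(value);                          /* and one free per loop */
        }

        /* The array answer: one allocation sized up front, then iterate block by
           block over the contiguous memory you reserved. */
        int *values = malloc(N * sizeof *values);
        if (!values)
            return 1;
        for (int i = 0; i < N; i++)
            values[i] = i * i;                    /* values[i] plays the role of "value_i" */

        printf("last element: %d\n", values[N - 1]);
        free(values);
        return 0;
    }
    ```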

  • I just developed and deployed the first real-time protection for lemmy against CSAM!
  • Well... not really, no. Hashing is a one-way function, so it cannot be reversed. Hashing an image makes it impossible to determine anything about the image: the hash would be complete nonsense if you tried to view it as an image, so the image contents are not leaked. After hashing, the image just becomes a string of mixed numbers and characters.

    The most common and straightforward way of blocking CSAM is to take a database of hashes of known offending content and compare it against the hash of the uploaded image. I'm not entirely sure how AI plays into this though... you don't need AI to do a database query, and if a single pixel is changed in an image its hash becomes completely different and unrecognizable. If you were comparing images with images, sure, modified offending content could still be caught and reported/filtered by AI fairly effectively... but then you're right back to the original issue of hosting illegal and unethical content.

    So yeah, I definitely agree with you: training an AI on hashes does not seem particularly useful, but as long as all the data is hashed there should not be any leaked image data (the plain lookup flow is sketched below). This just seems like yet another inappropriate use of AI for clickbait.
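
    To make that lookup flow concrete, here is a minimal sketch in C. It assumes OpenSSL is available for SHA-256 (`cc demo.c -lcrypto`), and the "known hash" entries are placeholders, not values from any real database; a production system would likely use a perceptual or vendor-specific hash rather than plain SHA-256.

    ```c
    #include <stdio.h>
    #include <string.h>
    #include <openssl/sha.h>

    /* Hypothetical stand-in for the known-bad hash database. */
    static const char *known_hashes[] = {
        "0000000000000000000000000000000000000000000000000000000000000000",
    };

    static void sha256_hex(const unsigned char *data, size_t len, char out[65])
    {
        unsigned char digest[SHA256_DIGEST_LENGTH];
        SHA256(data, len, digest);
        for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
            sprintf(out + 2 * i, "%02x", digest[i]);
        out[64] = '\0';
    }

    int main(void)
    {
        /* In practice these would be the raw bytes of the uploaded image. */
        const unsigned char upload[] = "example image bytes";
        char hex[65];

        sha256_hex(upload, sizeof upload - 1, hex);
        printf("upload hash: %s\n", hex);

        /* The check is just a lookup: no image content is needed, only digests.
           Changing a single pixel changes the digest completely, which is the
           weakness mentioned above. */
        for (size_t i = 0; i < sizeof known_hashes / sizeof known_hashes[0]; i++) {
            if (strcmp(hex, known_hashes[i]) == 0) {
                puts("match: reject upload");
                return 0;
            }
        }
        puts("no match: allow upload");
        return 0;
    }
    ```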

  • Learn the art of seedin' torrents and boostin' the pirate community's strength, aye?
  • Port forwarding lets you get around your NAT firewall, which will naturally block all unsolicited traffic on a closed port. What that means for a torrent download is that peers cannot introduce themselves to you and open a new connection; you can only connect to active peers who have their ports open.

    Just to add more background: before your torrent can begin downloading pieces from various peers, you need to know the addresses of the peers sharing the pieces you need. Typically that is handled by the tracker and/or DHT. A tracker acts as a sort of logistics middle-man. It helps facilitate efficient transmission between peers by tracking what each peer has and needs. If peer B needs piece X, the tracker supplies peer B with the address of peer A, who has piece X. Assuming peer A has their incoming port open, they will accept the request for piece X and send it to peer B. If their port is closed, the request will simply be denied and no traffic will be exchanged between the peers. The tracker's address, along with the data hash and some other miscellaneous data, is encoded into the torrent file.

    DHT is a little more unique and complicated. It is a fully distributed hash table on a P2P network and does not rely on a tracker at all; it's strictly peer-to-peer. The only catch is that to initially introduce yourself into the network you need to bootstrap your connection using some hardcoded addresses, often from a very centralized source. Port forwarding becomes much more important for DHT because after the initial bootstrap there is no middle-man, and with your ports closed your client can't effectively communicate across the network. Without two-way communication between peers, your client will generally be stuck with a very limited pool of peers it can talk to. Magnet links, as well as most torrent clients, use DHT.

    One reason closed ports are not so noticeable these days is that many torrent peers live in big data centers with virtually unlimited bandwidth. When torrents were still young, most if not all peers were hosted on consumer-grade hardware at a residence, so you needed every connection you could get.

    If your torrent download happens to be a well-known Linux ISO, it's very likely that at least two or three of the peers you connect to will be in a data center, and they will most likely account for 80%+ of your download speed.

    Blocking ports ultimately hurts seeding the most, which can affect the overall "health" of a torrent. Say peer A can't connect to those giant data-center peers for whatever reason; they now have to seek out other peers that may have the data they're looking for. If all the other peers have their ports closed, then the torrent is essentially dead for peer A, and they'll have to either wait for someone with open ports to come online and start seeding or search for an entirely new torrent.

    Sorry, this was a bit of an on-the-go mind dump, so please correct me if I'm wrong anywhere, but that's pretty much the gist of port forwarding in the context of torrenting (a toy sketch of the tracker lookup described above follows).
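
    A toy sketch in C of the tracker's "logistics middle-man" role and why a closed port hurts. Everything here is made up for illustration (hardcoded peers, single-character piece names); a real tracker speaks HTTP/UDP announce and deals in info hashes and peer lists, not this struct.

    ```c
    #include <stdio.h>
    #include <stdbool.h>
    #include <string.h>

    struct peer {
        const char *addr;      /* ip:port the tracker hands out                  */
        bool port_open;        /* can this peer accept incoming connections?     */
        const char *pieces;    /* which pieces this peer can serve, e.g. "XYZ"   */
    };

    /* Hypothetical swarm state the tracker keeps for one torrent. */
    static const struct peer swarm[] = {
        { "203.0.113.10:51413", false, "XYZ" }, /* residential peer behind NAT, port closed */
        { "198.51.100.7:6881",  true,  "XY"  }, /* data-center seedbox, port forwarded      */
    };

    /* Peer B asks: "who can send me this piece?" In this toy model the tracker
       skips peers whose port is closed, standing in for the connection attempt
       simply being refused. */
    static const struct peer *find_source(char piece)
    {
        for (size_t i = 0; i < sizeof swarm / sizeof swarm[0]; i++) {
            if (strchr(swarm[i].pieces, piece) && swarm[i].port_open)
                return &swarm[i];
        }
        return NULL; /* nobody reachable has it */
    }

    int main(void)
    {
        const char wanted[] = { 'X', 'Z' };

        for (size_t i = 0; i < sizeof wanted; i++) {
            const struct peer *src = find_source(wanted[i]);
            if (src)
                printf("piece %c: connect to %s\n", wanted[i], src->addr);
            else
                printf("piece %c: no reachable peer has it (torrent is dead for us)\n",
                       wanted[i]);
        }
        return 0;
    }
    ```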

    RyeMan @lemmy.ml
    Posts 0
    Comments 2