Just piling on with some concrete examples: awesome-gemini is definitely the best place to start looking. It lists both converters for the gemtext format and gateways for the protocols.
For format conversion tools, awesome-gemini already lists a handful of tools.
From the gemini side there are some gateways for specific websites operated by various people:
- BBC news gemini://freeshell.de/news/bbc.gmi
- The Guardian gemini://guardian.shit.cx/world/
- Lots of others gemini://gemi.dev/cgi-bin/waffle.cgi
These work pretty well for me. I think there were public gateways to open http pages from gemini, but I can't recall one off the top of my head.
Some gemini browsers support gemini proxies to access http(s) content, and you can run one on your own machine. Duckling is the only one I'm familiar with (but see the awesome list for more).
Conversely, to access gemini pages from a web browser portal.mozz.us hosts a gateway (just place whatever gemini link you want in the box).
One big privacy caveat of using gemini proxies for this: while it may improve your privacy with regard to javascript/cookies, it also makes your behaviour more identifiable from the point of view of the websites you visit (your proxy is clearly not a regular browser, which makes you unusual).
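If you want to poke at gemini without any gateway, the protocol itself is tiny: open TLS to port 1965, send the URL followed by CRLF, and read back a `<status> <meta>` header line plus an optional body. A minimal sketch in Python (certificate checks are simply skipped here, since gemini servers commonly use self-signed certs; a real client should do trust-on-first-use):

```python
import ssl
import socket
from urllib.parse import urlparse

def build_request(url: str) -> bytes:
    # A gemini request is just the absolute URL followed by CRLF.
    return (url + "\r\n").encode("utf-8")

def parse_header(line: str):
    # Response header: two-digit status, a space, then a meta string
    # (MIME type on success, an explanation on errors/redirects).
    status, _, meta = line.strip().partition(" ")
    return int(status), meta

def fetch(url: str) -> str:
    # Network part: TLS to port 1965. Verification is disabled for the
    # sketch; do NOT do this when you care about the connection.
    host = urlparse(url).hostname
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, 1965)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall(build_request(url))
            data = tls.makefile("rb").read().decode("utf-8", "replace")
    header, _, body = data.partition("\r\n")
    status, meta = parse_header(header)
    return body if status // 10 == 2 else f"[{status}] {meta}"
```

`fetch()` obviously needs network access; `build_request`/`parse_header` are the whole wire format.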
I can't offer a comparison with Session, since I'm not familiar with it. At a glance, messages seem to be routed through nodes that belong to a pool of service nodes that run some cryptocurrency stake (but I don't know what this means in practice). It does seem to do multi-hop routing, which makes it more resilient to privacy attacks (but this says nothing about resilience to being blocked).
On the SimpleX side, anyone can operate a SimpleX SMP server - that is, the server that holds messages while in transit from source to destination (each server has a number of queues, each one-way from a sender to a receiver).
Each user defines the servers/queues they use to receive messages, but not to send (those are defined by the user you are sending messages to). So resilience to blocking means both users need to diversify the servers they use.
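To make the one-way queue idea concrete, here is a toy model (my own simplification, not SimpleX's actual wire protocol or API): the receiver creates a queue on a server of their choosing and hands the sender id to their contact, so each direction of a conversation lives on a server the receiver picked.

```python
import secrets
from collections import deque

class ToySMPServer:
    """Toy model of one-way message queues, loosely inspired by SimpleX SMP.

    Each queue has a separate sender id and receiver id, so the server
    cannot trivially link senders to receivers by id alone. (Real SMP
    adds per-queue keys, encryption and padding; all omitted here.)
    """
    def __init__(self):
        self._queues = {}        # receiver_id -> deque of messages
        self._send_to_recv = {}  # sender_id -> receiver_id

    def create_queue(self):
        # The *receiver* creates the queue, keeps recv_id, and hands
        # send_id to the contact who will write to it.
        recv_id = secrets.token_hex(8)
        send_id = secrets.token_hex(8)
        self._queues[recv_id] = deque()
        self._send_to_recv[send_id] = recv_id
        return send_id, recv_id

    def send(self, send_id, msg: bytes):
        self._queues[self._send_to_recv[send_id]].append(msg)

    def receive(self, recv_id):
        q = self._queues[recv_id]
        return q.popleft() if q else None
```

With two such servers (one per direction), blocking Alice's chosen server only stops messages *to* Alice, which is why both sides need to diversify.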
The folks running SimpleX host a handful of servers - and I expect those are the ones most people use. In that sense they are a point of failure for someone to block communication. If you check the source you will see an incomplete list of servers there, and in the app settings there are more (and you can add your own).
As for blocking the protocol, the following approaches seem standard for a state operator:
- block TCP port 5223
- if a different port is used, block based on TLS negotiation - this seems easy to spot
- seize the public servers
(This is as far as my knowledge of SimpleX goes - the rest is slightly hand wavy assumptions I never checked)
I don't recall how the SimpleX app manages those server queue(s?). Taking a peek at the app right now, I only see one receive/send queue when I select a contact. But in theory it should be possible to have multiple queues per contact. The documentation does mention this in some comments (newQueueMsg: maybe it is not implemented?)
Finally, the android app seems to support integration with Tor and will use .onion addresses if this is enabled; that is probably the most practical way to bypass some blocker (assuming Tor is not blocked :D). But this requires that the SMP servers used by your contacts have Tor addresses.
It would definitely be nice to see support for tunneling over other protocols, and of course more servers reachable over them (Tor, I2P, GNUnet?, etc, etc).
Some links to stuff:
Thanks for the list, there were a few I did not know about.
I would add Tor and GNUnet (https://www.gnunet.org/) too.
Depends on what you mean by "secure". Being very loose with the definitions, we have:
- end to end confidentiality (i.e. only you and the intended destination can see the message contents)
- privacy (only the destination knows I'm sending messages to them)
- anonymity (no one can find out who you are, where you live, i.e. metadata/identity/etc)
My personal preference is SimpleX.
Reasoning for a few:
- Email: even if you use PGP to encrypt messages the server(s) in the delivery path have access to all metadata (sender, receiver, etc, etc). If no encryption is in use, they see everything. Encryption protocols in e-mail only protect the communication between client and server (or hop by hop for server to server)
- XMPP: similar reasoning to email, i.e. the server knows what you send to whom. I should note that XMPP has more options for confidentiality of message content (PGP, OMEMO, others), so I find it preferable to email - but architecturally not too different.
- IRC: Again similar reasoning to email - even if your IRC server supports TLS, there is no end to end encryption to protect message contents. There were some solutions for message encryption/signing, but I've never seen them in the wild.
- Signal: Good protocol (privacy, confidentiality, etc). Dependency on phone number is a privacy concern for me. I think there are 3rd party servers/apps without the use of phone numbers.
- SimpleX: Probably the strongest privacy protection you can find, but definitely not easy in terms of usability. The assumption is that we do not trust the intermediate server at all (and expose nothing to it); we just leave our encrypted messages there for the receiver to pick up later. It also does some funny stuff like padding messages with garbage.
- Matrix: In theory it supports end to end encryption in various scenarios, but my experience with it has been so bad (UX, broken encrypted sessions) I only use it for public groups.
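The padding trick mentioned for SimpleX above is worth illustrating: if every envelope on the wire has the same size, an observer can't use message length as a signal. A toy scheme (my own sketch, not SimpleX's actual format):

```python
import os

BLOCK = 256  # toy fixed envelope size in bytes

def pad(msg: bytes, block: int = BLOCK) -> bytes:
    # 2-byte length prefix, then the message, then random filler,
    # so every envelope on the wire is exactly `block` bytes.
    if len(msg) > block - 2:
        raise ValueError("message too long for one envelope")
    filler = os.urandom(block - 2 - len(msg))
    return len(msg).to_bytes(2, "big") + msg + filler

def unpad(env: bytes) -> bytes:
    n = int.from_bytes(env[:2], "big")
    return env[2:2 + n]
```

An eavesdropper (or the relay) sees only a stream of identical 256-byte blobs, whether you sent "ok" or a paragraph.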
Some more food for thought, though: these protocols support both group communication and 1-1 messaging, and the privacy expectations for the two are very different. For example, I don't care too much about confidentiality in a group chat with 3000 people in it. I might be more concerned with concealing my phone/name/metadata.
In general I consider large group chats "public"; I can try to be anonymous, but I have no other expectations. E.g. some people use some protocols over Tor because they do not trust the service (or even the destination), but they try to protect their anonymity.
On a technical note: I don't think there is any protocol that supports multi-device without some kind of vulnerability in the past. So I would temper my expectations if using these protocols across devices.
I'm not familiar with the other ones that were mentioned in comments or in the spreadsheet.
There are gemini to http gateways so the content is probably already crawled anyway.
So let's be clear - there is no way to prevent others from crawling your website if they really want to (AI or not).
Sure, you can put up a robots.txt or reject certain user agents (if you self-host) to try to screen the most common crawlers. But as far as your hosting is concerned, an AI crawler is not too different from, e.g., Google's crawler that takes a piece of content to show in search results. You can put up a captcha or equivalent to screen non-humans, but this does not work that well and might also prevent search engines from finding your site (which you may or may not want).
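For reference, screening the better-known AI crawlers via robots.txt looks like the following (GPTBot, CCBot and Google-Extended are publicly documented crawler user agents; compliance is entirely voluntary on the crawler's side, which is the whole point above):

```
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
```

The final wildcard group keeps regular search crawlers welcome, if that is what you want.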
I don't have a solution for the AI problem, as for the "greed" problem, I think most of us poor folks do one of the following:
- github pages (if you don't like github then codeberg or one of the other software forges that host pages)
- self host your own http server if it's not too much of a hassle
- (make backups, yes always backups)
Now for the AI problem, there are no good solutions, but there are funny ones:
- write stories that seem plausible but hide hijinks in them - if there ever was a good reason for being creative it is "I hope AI crawls my story and the night time news reports that the army is now using trained squirrels as paratroopers"
- double speak - if it works for fictional fascist states it works for AI too - replace all uses of a word/expression with another; your readers might be slightly confused, but such is life
- turn off your website at certain times of the day, and just show a message saying it only works outside of US work hours or something
I should point out that none of this will make you famous or raise your SEO rank in search results.
PS: can you share your site? Now I'm curious about the stories.
Here is my take as someone who absolutely loves the work SimpleX did on the SMP protocol, but still does not use SimpleX Chat.
First the trivial stuff:
- no one else seems to use it
- UX is not great because of the initial exchange
These two are not that unexpected. Any other chat app with E2E security has tricky UX, and SimpleX takes the hard road by not trading off security/privacy for UX. I think this is a plus, but yes it annoys people.
Now for the reasons that really keep me away:
- the desktop app is way behind the mobile app - and I would really prefer to use a desktop CLI app
- haskell puts me off a bit - the language is fine, I just don't know how to read it. On the more practical side, it did not support older (arm6/7) devices, which kept lots of people with older hardware away
- AFAIK no alternative implementations of either the client or the SMP server exist - which is a pity; I think the protocol would shine in other contexts (like push notifications)
- I was going to say that there are not many 3rd party user groups - but I just found out about the directory service (shame on me, maybe? I can't seem to find groups though)
- protocol features/stabilization are a moving target, and most of the fancy new features don't really interest me (I don't care much about audio/video)
- stabilization of code/dependencies would help package the server/client in more linux distros, which I think would help adoption among the tech folk
Finally a couple of points on some of the other comments:
- multi device support - no protocol out there can do multi device properly (not Signal, none really) so I'm ok with biting the bullet on this
- VC funding is a drag - but I am still thankful that they clearly specified the chat protocol separate from the message relay, which means that even if the chat app dies, SMP could still be used for other stuff.
Depends a bit on what you mean by p2p.
If you mean it as anyone can run their own server - this is already possible. But since message inboxes are one-way, you really only control the server for the messages you are receiving. The messages you send go to wherever the destination says they should go.
If by P2P you mean fully decentralized with no distinction between clients and servers then the discussion is a bit longer, but right now no, and for practical reasons probably not. Let me elaborate a bit.
First, the protocol assumes a client (that is, your messaging app) delivers the message to a server identified by a hostname/ip (your ability to deliver depends on this server being up and working).
For practical reasons the server is a separate entity, just like in email delivery protocols, because:
- it needs a stable hostname or ip address
- it needs to be reachable most of the time to maximize availability, otherwise messages are not delivered
Now, in practice nothing prevents your client and server from being the same machine, but if the previous two points are not true you will have a bad experience receiving your messages. Especially since most clients are on mobile phones, these two points will likely not hold.
Another way to get it fully distributed would be to run SimpleX on top of another p2p protocol - anything with a DHT that can expose ports to the Internet (Tor, GNUnet, etc). But this has one downside - the client app would need to recognize new types of inbox addresses and connect accordingly.
You could probably achieve this with Tor and some .onion host setup. But then anyone that wanted to deliver to you would need an equivalent setup.
Apologies, I tend to nerd on about distributed systems.
First of all, you can assume the server can infer this in a number of ways - there is actually no way to fully block it, but we can try.
The main issue for privacy is that it makes your browser behave in ways that are a bit too specific (i.e. less private by comparison with the rest of the browsers in the known universe).
As for techniques the site can use
- javascript can test the geometry of something that was rendered to draw conclusions - was this font actually used? test several options and check for variations
- measure font work between network events - i.e. generate a site that makes the browser use unique links to 1) fetch a font, 2) render text, and 3) only then do another fetch - then measure the time between 1) and 3) and draw conclusions. Repeat for several test cases - e.g. is the browser suspiciously fast with a custom huge font vs monospace? Not a great method, but not completely worthless
- some techniques can actually do some of this without Javascript, provided you can generate some weird CSS/HTML that conditionally triggers a fetch
By the way, not downloading the fonts also makes you "less private". Some of this is a stretch, but not impossible.
Now for a more practical problem: lots of sites use custom fonts for icons. This means some sites will be very hard to use, because they display buttons with only an icon (actually a letter in a custom font).
FWIW these two lines are in my Firefox profile to disable downloads and skip document provided fonts:
user_pref("gfx.downloadable_fonts.enabled", false);
user_pref("browser.display.use_document_fonts", 0);
If someone has better/different settings please share.
Finally the Tor browser folks did good work on privacy protections over FF. Maybe their issue tracker is a good source of inspiration https://gitlab.torproject.org/tpo/applications/tor-browser/-/issues/18097
I don't quite agree with some of the rationale:
- I do think users have benefited from Open Source, but I also think that there has been a decline in Open Source software in general
- I don't think contracts are a good analogy here (in the sense that every corporate consumer of the software would have to sign one)
Having said this I do understand where he is coming from. And I agree that:
- a lot of big companies consume this software and don't give back
- corporate interests are well entrenched in some Open Source projects, and some bad decisions have been made
- he does raise an interesting point about the commons clause (but then, I'm no lawyer)
I would like to remind everyone that the GPL pretty much exists because of (1). If anything we should have more GPL code; in that regard I don't think it failed us. But we rarely see it enforced (in court). Frankly, most of our code is not that special, so please GPL it.
Finally I think users do know about Open Source software indirectly. In the same way they find out their "public" infrastructure has been running without permit or inspection the day things start breaking and the original builder/supplier is long gone and left no trace of how it works.
Since these days everything is software (or black box hardware with firmware) this is increasingly important in public policy. And I do wish we would see public contracts asking for hardware/firmware what some already ask for software.
I won't get into the Red Hat/IBM+CentOS/Fedora or AI points because there is a lot more going on there. Not that he isn't right. But I'm kind of fed up with it :D
I've tried a few times in the past 2 weeks, using a good email account and also with github - no luck though. Maybe it's doing some "smart" heuristics to trigger it.
I just retried now, using that temp mail (but no vpn) and got the exact same phone verification. Maybe my IP address is evil :D
Looks like gitlab now requires account verification for new accounts in addition to email. Either phone number or credit card.
This applies both to accounts created with a working email or by logging in using your github account. You can't even verify your email until you go through step 1.
I don't know when this started, but at least for the last month or two judging from these posts in the forums.
- https://forum.gitlab.com/t/how-to-create-an-account-without-telephone-number-if-an-non-activated-account-has-already-been-created-with-the-same-e-mail-address-that-demands-a-phone-number/93675/2
- https://forum.gitlab.com/t/phone-verification-sms-not-received-unable-to-login-and-register/92202
Fun fact: I don't even want to host on gitlab, I just wanted to report bugs in some projects. So I'm locked out.
I'm a bit of a terminal nerd, so probably not the best person to talk about desktops. I don't have many thoughts with regard to app development or layout for accessibility. What I would really like is for distros to be accessible from the ground up, even before the desktop is up.
The best example of accessibility from the ground up I saw for linux was Talking Arch, an Arch Linux spin with speech. Sadly the website is gone, but you can find it in the web archive.
In particular there was an audio tutorial to help you install the live cd (you can still hear it in the archive):
Here are a few resources, which are pretty dated, but I wish they were the norm in any install:
- your system can boot up talking, using speakup http://www.linux-speakup.org/speakup.html
- even if the desktop is not fully accessible or breaks, your console or the terminal in your desktop can speak using yasr https://yasr.sourceforge.net/
Now going into your points:
How should a blind Desktop be structured?
To be honest I don't expect much here. As long as context/window switching is signalled to you properly you are probably fine. I have not used gnome with orca in a long time, but this used to be ok. The problems begin with the apps, tabs and app-internal structure.
Are there any big dealbreakers like Wayland, TTS engines, specific applications e.g.?
Lots.
Sometimes your screen reader breaks, and it's nice to have a magic key that restarts the screen reader, or the entire desktop. Or you just swap into a virtual console running speakup/yasr and do it yourself :D
TTS engines are probably ok. Sometimes people complain about the voices, but I think it is fine as long as it reliably works, does not hang, and responds quickly.
Specific applications are tricky. The default settings of a lot of apps won't work well, but that is not surprising.
I do think that a lot of newer apps have two problems:
- they are not configurable or scriptable at all, there is only one way to do things and no way to customize it. Opening tickets to patch each and every feature is not feasible.
- They frequently go through breaking release cycles that nuke old features, so you need to relearn all your tricks on the next major release and find new hacks
I can give you two good-ish examples, both Vim and Mutt can work very well with a terminal screen reader, but it is a lot of work to configure:
- with vim you need to disable all features that make the cursor jump around and draw stuff (like line numbers and the ruler)
- with mutt every single string on the screen can be customized, so you can even insert SSML to control speech while reading email
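For the vim bullet, a sketch of that kind of setup (these are all standard vim options; the exact set depends on your screen reader):

```vim
" Keep the terminal output stable for a screen reader:
" fewer decorations means fewer spurious redraws to announce.
set nonumber        " no line numbers in the gutter
set noruler         " no cursor-position ruler
set noshowcmd       " don't echo pending command keys
set laststatus=0    " hide the status line entirely
set nocursorline    " no current-line highlight redraws
```

The theme is the same everywhere: strip anything that repaints the screen without carrying information.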
I think you can find similar examples in desktop apps too.
What do you think would be the best base Desktop to build such a setup on?
No idea, to be honest. Gnome used to have support. I suppose other desktops that can be remote controlled could be changed to integrate speech (like i3 or sway).
Would you think an immutable, out of the box Distro like “Fedora Silversound”, with everything included, the best tools, presets, easy setup e.g. is a good idea?
I have never used Silversound. But the key thing for me is to be able to roll back (or forward) to a working state.
How privacy-friendly can a usable blind Desktop be?
I think it should be fine. People with screens have things like laptop screen privacy filters; people using audio have headphones. Depending on your machine you can set up the mixer so that audio never uses the external speaker.
I don't recall the details but you can also have some applications send audio to the external speaker while others use your headphones (provided they are a separate sound card, like usb/bluetooth headphones).
Also, how would you like to call it? “A Talking Desktop”?
Urgh, Shouting Linux.
This is a really nice summary of the practical issues surrounding this.
There is one more that I would like to call out: how does this client scanning code end up running in your phone? i.e. who pushes it there and keeps it up to date (and by consequence the database).
I can think of a few options:
- The messaging app owner includes this as part of their code, and every msg/image/etc is checked before sending (/receiving?)
- The phone OS vendor puts it there, bakes it as part of the image store/retrieval API - in a sense it works more on your gallery than your messaging app
- The phone vendor puts it there, just like they already do for their branded apps.
- Your mobile operator puts it there, just like they already do for their stuff
Each of these has its own problems/challenges: how to compel them to insert this (ahem, "backdoor"), and the different risks each of them carries.
Fair point (IP, email, browser session data). Those should not be exposed via the federation in any way. And the existence of the federated network means we can switch instances if we are concerned our instance is a bad actor in this regard.
I did not mean to suggest the ecosystem is not valuable for privacy. I just really don't want people to associate federation with privacy protections about data that is basically public (posts, profile data, etc). Wrong expectations about privacy are harmful.
To be fair I do not expect any privacy protections from lemmy/mastodon in general, or from blocking/defederation in particular.
Lemmy/Mastodon protocols are not really private: as soon as you place your data in one instance, it is accessible by others on the same instance. If that instance is federated, this extends to other instances too. In other words, the system can be seen as mostly public data, since most instances are public.
The purpose of blocking or defederation (which is blocking at instance level) is to fight spam content, not to provide privacy.
Ultimately you are trusting the relay server to hold your messages. If the relay is not trustworthy, it could reveal those messages.
The only exception I know of are encrypted direct messages which are still held by the relay but are encrypted with the recipient's key. These messages still have a cleartext recipient id (so the server can deliver them).
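To make that concrete, a nostr encrypted direct message is roughly an event shaped like this (field layout per NIP-01, kind 4 per NIP-04; all values are placeholders):

```
{
  "id": "<sha256 of the serialized event>",
  "pubkey": "<sender public key>",
  "created_at": 1700000000,
  "kind": 4,
  "tags": [["p", "<recipient public key>"]],
  "content": "<encrypted payload>",
  "sig": "<signature by the sender key>"
}
```

The content is opaque to the relay, but the "p" tag naming the recipient is in the clear - that is exactly the metadata a misbehaving relay still sees.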
So, if the relay is well behaved
- messages are confidential between you and the relay
- direct messages are only delivered to the recipient and are encrypted
- most other messages are visible to anyone that can connect to the same relay
- btw the relay can enforce a list of people that can connect (i.e. a private server) or just make it harder via proof of work (to discourage bots)
If the relay server is operated by the forces of evil, then the only thing you can assume is that direct message content is not visible, but they can see the message src/destination/timestamp.
I think the main motivation for nostr is censorship resistance - if you are being blocked on one relay, you move to another. In terms of privacy/security it does not seem weaker than most other public message forums.
They could serve similar purposes. In terms of maturity nostr is younger. Here are the main differences from the point of view of nostr:
- In nostr there is no registration, your identity is your public key that you generate by yourself (lose that and you cannot recover it). You can connect to a bunch of different nostr relays with the same key, or use different ones.
- AFAIK nostr does NOT do end to end encryption for group chat. But it does support end to end encryption for direct messages.
- nostr does not do video/audio calls
- nostr does not host your images/files, you just put some URL in your messages
At its core nostr is a basic protocol where you send messages to a relay server and the relay passes them along to other people when they request them. On top of those messages, people implement extensions for features: full-length posts, payments, etc. There are notions of followers and subscriptions (like twitter), but those are just tiny messages where you ask the relay for messages from person A or B. The list of specifications is here: https://github.com/nostr-protocol/nips
Finally, there are a few different nostr implementations for relays, clients and web interfaces. Some of them do not implement all the features, so you may need to shop around a bit if you are looking for some fancy features (check https://github.com/vishalxl/Nostr-Clients-Features-List).
Also some nostr highlights which I think have no equivalent in matrix (but deserve nerd points):
- message expiration dates - the relay removes them after the deadline
- nostr has builtin proof of work to dissuade spam by forcing the client to do some computation before posting
- you can do reposts across relays or share relay addresses to people in another relay
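The proof-of-work bit works roughly like hashcash (nostr specifies it in NIP-13): the client bumps a nonce until the event id - a sha256 hash - has enough leading zero bits, and the relay can verify the work with a single hash. A toy sketch (hashing a stand-in string rather than a real serialized event):

```python
import hashlib
from itertools import count

def leading_zero_bits(digest: bytes) -> int:
    # Count leading zero bits of a hash digest.
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def mine(content: str, difficulty: int):
    # Bump a nonce until the hash meets the difficulty target.
    # A real nostr client hashes the full serialized event JSON.
    for nonce in count():
        digest = hashlib.sha256(f"{nonce}:{content}".encode()).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce, digest.hex()
```

Verification is cheap (one hash), but producing a valid nonce costs the poster an expected 2^difficulty attempts - annoying for spammers, negligible for a human posting occasionally.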
As any engineer who does ops can tell you - you did the right thing - the solution is always to roll back, never force a roll forward, ever.
We should totally do pre and post update parties though. Even if the update fails we can have an excuse for drinks and a fun thread.