Block the posting bot - a very simple way to solve it.
If *The Wire* and *The Shield* have taught me anything - it's that all the stats are cooked.
I've been running mine for just over 5 years now - initial setup was ass, but it's very much hands off now - email simply doesn't change anymore.
If you have a domain to test - I can host it for you. If you then decide that it works well enough for you - I'll show you how to set it up on your own server.
Wireguard works best for private traffic, but you can't host a public site with that.
Of course you can! Nginx and wireguard on a VPS and actual services wherever you want.
If you can dedicate some time to constantly keeping up - pick a rolling distro. Doing major version upgrades has never not had problems for me, and every major distro has them.
My choice is Gentoo, but I'm weird like that. Having said that - my email server has been running happily on Arch for just over 5 years now.
The lemmy instance I host is on Debian testing - Gentoo was not available on DO - no issues so far.
Even when it's mostly containers - why waste time every n years doing the big upgrade? Small changes are always safer.
Don't really have anything to add, but wanted to boost the activity on this post.
A subscription for surge protection is fucking stupid - that's insurance renamed; but you, OP, clearly do not understand how electricity works.
@Weslee@lemmy.world has already answered, but in general - you can see [de]federated instances at an <instance url>/instances. In my case that would be lemmy.cafe/instances
Where's the weird tri-pin 240V one for the US?
If not for an occasional comment like yours - I would never remember I've defederated them. Thanks!
That's not what I meant.
Never had a chance to give syncthing a shot, but nextcloud works very well. On top of that, if you ever want to ditch apple/google - it will also happily sync your contacts, calendar, etc, as well as more niche stuff like bike rides. It can become chonky, but that really depends on how much stuff you're asking it to do.
Gentoo OpenRC 🥰
So... it's a hack, but it's not been hacked?
Come the fuck on, BBC, you can do better.
It's kinda funny how we think the 100 watts of a desktop P4 was insane when now the TDP of a high end laptop CPU is more than that.
It really isn't. Modern mobile cpus barely sip power.
Precision guesswork here, but I've had nginx (not on opnsense) redirecting me to the default host quite a few times recently - every time it was me cocking up its config. It could be that nginx waits for the actual target until it times out and then just serves your opnsense GUI as the most reasonable response.
I'd start by checking its config. Or pasting it here, after removing secrets, if any.
The link does not load for some reason, but tar itself does not compress anything. Compression can (and usually is) applied afterwards, but that's an additional integration that is not part of Tape ARchive, as such.
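A quick illustration, assuming GNU tar and gzip are available - `tar` only bundles files; compression is a separate step that the familiar `-z` flag merely chains in:

```shell
# tar on its own only bundles - no compression happens here
mkdir -p demo && echo "hello" > demo/file.txt
tar -cf demo.tar demo

# compression is a separate tool applied to the archive afterwards
gzip -k demo.tar        # produces demo.tar.gz, -k keeps demo.tar

# the familiar -z flag just chains the two steps together
tar -czf demo2.tar.gz demo
```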
It's been a while since I've last done one of those. Life happened, but things are settling down slowly. I'll combine all five months since September into a single post, but will keep them separated visually.
Income is applied on the date of arrival. Stripe (the processor LiberaPay uses) pays out in advance, and as such income shows up as £0 even though the LiberaPay banner shows a non-zero value.
Any financial help is greatly appreciated! To donate click the banner here or in the sidebar:
[!](https://liberapay.com/Illecors/donate)
February 2024
Contributions
- LiberaPay: £0
Expenses
- Servers: £38.94
Month balance
- -£38.94
Previous balance
- -£216.05
Balance to date
- -£254.99
___
January 2024
Contributions
- LiberaPay: £0
Expenses
- Servers: £38.66
Month balance
- -£38.66
Previous balance
- -£177.39
Balance to date
- -£216.05
___
December 2023
Contributions
- LiberaPay: £0
Expenses
- Servers: £38.59
- Backblaze B2: £0.64
Month balance
- -£39.23
Previous balance
- -£138.16
Balance to date
- -£177.39
___
November 2023
Contributions
- LiberaPay: £0
Expenses
- Servers: £38.74
Month balance
- -£38.74
Previous balance
- -£99.42
Balance to date
- -£138.16
___
October 2023
Contributions
- LiberaPay: £0
Expenses
- Servers: £37.32
Month balance
- -£37.32
Previous balance
- -£62.1
Balance to date
- -£99.42
___
Finance History
| | September 2023 | August 2023 | July 2023 | June 2023 |
|---------------|---------|---------|---------|---------|
| Contributions | £11.41 | £15.26 | £0 | £0 |
| Expenses | £28.85 | £24.75 | £19.18 | £15.99 |
| Difference | -£17.44 | -£9.49 | -£19.18 | -£15.99 |
| Balance | -£62.1 | -£44.66 | -£35.17 | -£15.99 |
___
Previous reports
- September 2023
- August 2023
- July 2023
- June 2023
EDIT: you guys have dug up some truly horrible pisstakes :D Thank you for those.
To the serious folk - relax a little. This is *Mildly Infuriating*, not *I'm dying if this doesn't stop*. As a non-native speaker I was taught a certain way to use the language. The rules were not written down by me, nor the teachers - it was done by the native folk. Peace!
Andrej Karpathy, an artificial intelligence researcher and one of the founding members of OpenAI, said in a post on social media platform X that he departed the Microsoft-backed company on Monday.
Maybe those board and CEO shenanigans did leave a bad taste internally, after all.
cross-posted from: https://lemmy.world/post/11608344
> Credit: https://mas.to/@markarayner
The upgrade has gone through smoothly and everything seems to be running well.
The performance looks to be better on the backend, time will tell if the memory leak issue is actually solved. So far, though - so good!
Find the exact time difference with the Time Zone Converter – Time Difference Calculator which converts the time difference between places and time zones all over the world.
Lemmy Cafe will be having its database upgraded.
Reasons
- Pict-rs is expecting PostgreSQL 16. It's running fine now, but it might not be at some future point.
- PostgreSQL 15 has a bug that requires `jit` to be turned off - otherwise the DB keeps consuming all the memory available on the system and then some, until it gets culled by the kernel. This has performance as well as reliability implications. While turning `jit` off has remedied the constant failures, it has also made the database a bit slower. I prefer squeezing out as much performance as possible :)
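For reference, `jit` can be toggled without a restart - this is standard PostgreSQL, not Lemmy-specific:

```
-- in postgresql.conf: jit = off
-- or at runtime, persisted across restarts:
ALTER SYSTEM SET jit = off;
SELECT pg_reload_conf();
```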
Plan
- Point `nginx` to the maintenance page
- Shut down PostgreSQL 15
- Run the upgrade tool
- Start up PostgreSQL 16
- Point `nginx` to lemmy
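The "upgrade tool" step might look roughly like this with `pg_upgrade` - the paths and service names here are assumptions, adjust them to your distro's layout:

```
# stop the old cluster first
systemctl stop postgresql-15
# run as the postgres user; --link hardlinks data files instead of copying
sudo -u postgres pg_upgrade \
    --old-datadir /var/lib/postgres/15/data \
    --new-datadir /var/lib/postgres/16/data \
    --old-bindir  /usr/lib/postgresql/15/bin \
    --new-bindir  /usr/lib/postgresql/16/bin \
    --link
systemctl start postgresql-16
```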
Expected downtime
About an hour if things go well; more if not.
Will try to keep the maintenance page updated.
Here's the timezone converter
There are a few reasons why pict-rs might not be running, upgrades being one of them. At the moment the whole of lemmy UI will crash and burn if it cannot load a site icon. Yes, that little thing. Here's the github issue.
To work around this I have set the icon and banner (might as well since we're working on this) to be loaded from a local file rather than nginx.
Here's a snippet of `nginx` config from the `server` block:

```
location /static-img/ {
    alias /srv/lemmy/lemmy.cafe/static-img/;

    # Rate limit
    limit_req zone=lemmy.cafe_ratelimit burst=30 nodelay;

    # Asset cache defined in /etc/nginx/conf.d/static-asset-cache.conf
    proxy_cache lemmy_cache;
}
```

I have also included the rate limiting and cache config, but it is not, strictly speaking, necessary.
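For completeness, the zone and cache referenced above would be defined in the `http` context along these lines - only the zone names come from the snippet; the sizes and rate are assumptions:

```
# e.g. in /etc/nginx/conf.d/static-asset-cache.conf
limit_req_zone $binary_remote_addr zone=lemmy.cafe_ratelimit:10m rate=10r/s;
proxy_cache_path /var/cache/nginx/lemmy_cache levels=1:2
                 keys_zone=lemmy_cache:10m max_size=1g inactive=7d;
```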
The somewhat important bit here is the `location` - I've tried using `static`, but that is already used by lemmy itself, and as such breaks the UI. Hence the `static-img`.
I have downloaded the icon and banner from the URLs saved in the database (assuming your instance id in `site` is, in fact, 1):

```
SELECT id, icon, banner FROM site WHERE id = 1;
 id |                                   icon                                    |                                  banner
----+---------------------------------------------------------------------------+---------------------------------------------------------------------------
  1 | https://lemmy.cafe/pictrs/image/43256175-2cc1-4598-a4b8-2575430ab253.webp | https://lemmy.cafe/pictrs/image/c982358f-6a51-4eb6-bf0e-7a07a756e600.webp
(1 row)
```
I have then saved those files in `/srv/lemmy/lemmy.cafe/static-img/` as `site-icon.webp` and `site-banner.webp`.
Changed the ownership to that of nginx (`www-data` in the Debian universe, `http` or `httpd` in others).
I have then updated the `site` table to point to the new location for `icon` and `banner`:

```
UPDATE site SET icon = 'https://lemmy.cafe/static-img/site-icon.webp' WHERE id = 1;
UPDATE site SET banner = 'https://lemmy.cafe/static-img/site-banner.webp' WHERE id = 1;
```

Confirm it got applied:

```
SELECT id, icon, banner FROM site WHERE id = 1;
 id |                     icon                     |                     banner
----+----------------------------------------------+------------------------------------------------
  1 | https://lemmy.cafe/static-img/site-icon.webp | https://lemmy.cafe/static-img/site-banner.webp
(1 row)
```
That's it! You can now reload your nginx server (`nginx -s reload`) to apply the new path!
docker compose
___
I'm using v2 - notice the lack of a dash between `docker` and `compose`.
I've recently learnt of the default filenames `docker compose` tries to source upon invocation and decided to give it a try. The files are:
- compose.yml
- compose.override.yml

I have split the default `docker-compose.yml` that `lemmy` comes with into 2 parts - `compose.yml` holds `pict-rs`, `postfix` and, in my case, `gatus`. `compose.override.yml` is responsible for lemmy services only. This is what the files contain:
compose.yml
```
x-logging: &default-logging
  driver: "json-file"
  options:
    max-size: "20m"
    max-file: "4"

services:
  pictrs:
    image: asonix/pictrs:0.5.0
    user: 991:991
    ports:
      - "127.0.0.1:28394:8080"
    volumes:
      - ./volumes/pictrs:/mnt
    restart: always
    logging: *default-logging
    entrypoint: /sbin/tini -- /usr/local/bin/pict-rs run
    environment:
      - PICTRS__OLD_REPO__PATH=/mnt/sled-repo
      - PICTRS__REPO__TYPE=postgres
      - PICTRS__REPO__URL=postgres://pictrs:<redacted>@psql:5432/pictrs
      - RUST_LOG=warn
      - PICTRS__MEDIA__MAX_FILE_SIZE=1
      - PICTRS__MEDIA__IMAGE__FORMAT=webp
    deploy:
      resources:
        limits:
          memory: 512m

  postfix:
    image: mwader/postfix-relay
    environment:
      - POSTFIX_myhostname=lemmy.cafe
    volumes:
      - ./volumes/postfix:/etc/postfix
    restart: "always"
    logging: *default-logging

  gatus:
    image: twinproduction/gatus
    ports:
      - "8080:8080"
    volumes:
      - ./volumes/gatus:/config
    restart: always
    logging: *default-logging
    deploy:
      resources:
        limits:
          memory: 128M
```
___
`compose.override.yml` is actually a hardlink to the currently active deployment. I have two separate files - `compose-green.yml` and `compose-blue.yml`. This allows me to prepare and deploy an upgrade to `lemmy` while the old version is still running.
compose-green.yml
```
services:
  lemmy-green:
    image: dessalines/lemmy:0.19.2
    hostname: lemmy-green
    ports:
      - "127.0.1.1:14422:8536"
    restart: always
    logging: *default-logging
    environment:
      - RUST_LOG="warn"
    volumes:
      - ./lemmy.hjson:/config/config.hjson
#    depends_on:
#      - pictrs
    deploy:
      resources:
        limits:
#          cpus: "0.1"
          memory: 128m
    entrypoint: lemmy_server --disable-activity-sending --disable-scheduled-tasks

  lemmy-federation-green:
    image: dessalines/lemmy:0.19.2
    hostname: lemmy-federation-green
    ports:
      - "127.0.1.1:14423:8536"
    restart: always
    logging: *default-logging
    environment:
      - RUST_LOG="warn,activitypub_federation=info"
    volumes:
      - ./lemmy-federation.hjson:/config/config.hjson
#    depends_on:
#      - pictrs
    deploy:
      resources:
        limits:
          cpus: "0.2"
          memory: 512m
    entrypoint: lemmy_server --disable-http-server --disable-scheduled-tasks

  lemmy-tasks-green:
    image: dessalines/lemmy:0.19.2
    hostname: lemmy-tasks
    ports:
      - "127.0.1.1:14424:8536"
    restart: always
    logging: *default-logging
    environment:
      - RUST_LOG="info"
    volumes:
      - ./lemmy-tasks.hjson:/config/config.hjson
#    depends_on:
#      - pictrs
    deploy:
      resources:
        limits:
          cpus: "0.1"
          memory: 128m
    entrypoint: lemmy_server --disable-http-server --disable-activity-sending

#############################################################################

  lemmy-ui-green:
    image: dessalines/lemmy-ui:0.19.2
    ports:
      - "127.0.1.1:17862:1234"
    restart: always
    logging: *default-logging
    environment:
      - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy-green:8536
      - LEMMY_UI_LEMMY_EXTERNAL_HOST=lemmy.cafe
      - LEMMY_UI_HTTPS=true
    volumes:
      - ./volumes/lemmy-ui/extra_themes:/app/extra_themes
    depends_on:
      - lemmy-green
    deploy:
      resources:
        limits:
          memory: 256m
```
compose-blue.yml
```
services:
  lemmy-blue:
    image: dessalines/lemmy:0.19.2-rc.5
    hostname: lemmy-blue
    ports:
      - "127.0.2.1:14422:8536"
    restart: always
    logging: *default-logging
    environment:
      - RUST_LOG="warn"
    volumes:
      - ./lemmy.hjson:/config/config.hjson
#    depends_on:
#      - pictrs
    deploy:
      resources:
        limits:
#          cpus: "0.1"
          memory: 128m
    entrypoint: lemmy_server --disable-activity-sending --disable-scheduled-tasks

  lemmy-federation-blue:
    image: dessalines/lemmy:0.19.2-rc.5
    hostname: lemmy-federation-blue
    ports:
      - "127.0.2.1:14423:8536"
    restart: always
    logging: *default-logging
    environment:
      - RUST_LOG="warn,activitypub_federation=info"
    volumes:
      - ./lemmy-federation.hjson:/config/config.hjson
#    depends_on:
#      - pictrs
    deploy:
      resources:
        limits:
          cpus: "0.2"
          memory: 512m
    entrypoint: lemmy_server --disable-http-server --disable-scheduled-tasks

  lemmy-tasks-blue:
    image: dessalines/lemmy:0.19.2-rc.5
    hostname: lemmy-tasks-blue
    ports:
      - "127.0.2.1:14424:8536"
    restart: always
    logging: *default-logging
    environment:
      - RUST_LOG="info"
    volumes:
      - ./lemmy-tasks.hjson:/config/config.hjson
#    depends_on:
#      - pictrs
    deploy:
      resources:
        limits:
          cpus: "0.1"
          memory: 128m
    entrypoint: lemmy_server --disable-http-server --disable-activity-sending

#############################################################################

  lemmy-ui-blue:
    image: dessalines/lemmy-ui:0.19.2-rc.5
    ports:
      - "127.0.2.1:17862:1234"
    restart: always
    logging: *default-logging
    environment:
      - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy-blue:8536
      - LEMMY_UI_LEMMY_EXTERNAL_HOST=lemmy.cafe
      - LEMMY_UI_HTTPS=true
    volumes:
      - ./volumes/lemmy-ui/extra_themes:/app/extra_themes
    depends_on:
      - lemmy-blue
    deploy:
      resources:
        limits:
          memory: 256m
```
___
The only constant difference between the two is the IP address I use to expose them to the host. I've tried varying the ports instead, but found that it's much easier to follow in my mind by sticking to the same ports and changing the bound IP.
I also have two `nginx` configs to reflect the different IP for the `green`/`blue` deployments, but pasting the whole config here would be a tad too much.
No-downtime upgrade
___
Let's say `green` is the currently active deployment. In that case - edit the `compose-blue.yml` file to change the version of lemmy on all 4 components - lemmy, federation, tasks and ui. Then bring down the `tasks` container from the active deployment, bring up the whole of the `blue` deployment and link it as `compose.override.yml`. Once the `tasks` container is done with whatever tasks it's supposed to do - switch over the `nginx` config. Et voilà - the no-downtime upgrade is live!
Now all that's left to do is tear down the `green` containers.
```
docker compose down lemmy-tasks-green
docker compose -f compose-blue.yml up -d
ln -f compose-blue.yml compose.override.yml

# Wait for tasks to finish

ln -sf /etc/nginx/sites-available/lemmy.cafe-blue.conf /etc/nginx/sites-enabled/lemmy.cafe.conf
nginx -t && nginx -s reload
docker compose -f compose-green.yml down lemmy-green lemmy-federation-green lemmy-tasks-green lemmy-ui-green
```
lemmy.hjson
___
I have also multiplied `lemmy.hjson` to provide a bit more control.
lemmy.hjson
```
{
  database: {
    host: "psql"
    port: 5432
    user: "lemmy"
    password: "<redacted>"
    pool_size: 3
  }
  hostname: "lemmy.cafe"
  pictrs: {
    url: "http://pictrs:8080/"
    api_key: "<redacted>"
  }
  email: {
    smtp_server: "postfix:25"
    smtp_from_address: "lemmy@lemmy.cafe"
    tls_type: "none"
  }
}
```
lemmy-federation.hjson
```
{
  database: {
    host: "psql"
    port: 5432
    user: "lemmy_federation"
    password: "<redacted>"
    pool_size: 10
  }
  hostname: "lemmy.cafe"
  pictrs: {
    url: "http://pictrs:8080/"
    api_key: "<redacted>"
  }
  email: {
    smtp_server: "postfix:25"
    smtp_from_address: "lemmy@lemmy.cafe"
    tls_type: "none"
  }
  worker_count: 10
  retry_count: 2
}
```
lemmy-tasks.hjson
```
{
  database: {
    host: "10.20.0.2"
    port: 5432
    user: "lemmy_tasks"
    password: "<redacted>"
    pool_size: 3
  }
  hostname: "lemmy.cafe"
  pictrs: {
    url: "http://pictrs:8080/"
    api_key: "<redacted>"
  }
  email: {
    smtp_server: "postfix:25"
    smtp_from_address: "lemmy@lemmy.cafe"
    tls_type: "none"
  }
}
```
___
I suspect it might be possible to remove the `pict-rs` and/or `email` config from some of them, but honestly it's not a big deal and I haven't had enough time yet to look at it.
Future steps
I'd like to script the actual switch-over - it's really trivial, especially since most of the parts are there already. All I'd really like to do is apply strict failure mode to the script, see how it behaves, and do a few actual upgrades.
Once that happens - I'll post it here.
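A minimal sketch of what that script might look like, assuming the green/blue layout above - `set -eu` is the strict failure mode, and the actual deployment commands are left as echoes since this hasn't been battle-tested:

```shell
#!/bin/sh
# Strict failure mode: abort on any error or unset variable.
set -eu

# Given the currently active colour, return the one to deploy next.
other_colour() {
    case "$1" in
        green) echo blue ;;
        blue)  echo green ;;
        *)     echo "unknown colour: $1" >&2; return 1 ;;
    esac
}

active="${1:-green}"
target="$(other_colour "$active")"

# The manual procedure from above, parameterised by colour.
echo "docker compose down lemmy-tasks-${active}"
echo "docker compose -f compose-${target}.yml up -d"
echo "ln -f compose-${target}.yml compose.override.yml"
```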
So long and thanks for all the fish!
After a new law to quash hundreds of convictions was announced on Wednesday, an inquiry into the scandal resumes in London.
Proper cocked-up cover-up. Now live!
The process went through smoothly. I have also used the opportunity to split up the singular lemmy container into individual tasks - this has enabled a seamless upgrade process with no downtime, bar a few process quirks I need to work out.
There have been some federation fixes merged into this release, so the situation should definitely be improving overall!
I will make a more detailed write up of the whole setup later on, other admins might find it useful. Or not.
A little more speed and less latency
> Using optimization techniques, the wireless spec can support a theoretical top speed of more than 40Gbps, though vendors like Qualcomm suggest 5.8Gbps is a more realistic expectation
That is insane! Not that I would, but this could utilise the full pipe of my home connection on wifi only!
This feels sensible. There will always be naysayers, there will always be exports to poorer countries, but arguing against such a policy makes it sound like protecting the status quo for the sake of it.
It's a theoretical pathway, but it's a nice read overall for someone who's not into warring all that much. I've found the explanations understandable.