It's that, plus "notifications can disrupt your sleep."
“A much greater issue [than the blue light] is likely to be the content viewed,” says Peirson. “Reading work emails relating to impending deadlines is clearly going to cause anxiety, and anxiety is strongly related to insomnia.”
Gimp believes in you and loves you in a non clingy way.
Gimp might be able to perform that little logo-transformation favour for you libre of charge, but at least give it a call after for heaven's sake.
That actually makes a lot of sense. I never even second-guessed how tedious all the parsing is. But then, as others have said here, as soon as the task at hand reaches a level of complexity beyond grepping, piping and so on, I just very naturally move to Python.
On a different note, there are ways to teach bash JSON. I recall seeing a hacker conference talk on it some time ago, but didn't pay close attention.
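For what it's worth, the "move to Python" threshold is usually exactly where structured data shows up; a minimal sketch of what replaces the grep/pipe chain (the payload and field names here are made up for illustration):

```python
import json

# Hypothetical JSON output of some CLI tool
raw = '{"containers": [{"name": "web", "state": "running"}, {"name": "db", "state": "exited"}]}'

data = json.loads(raw)
# The equivalent of a grep/awk pipeline, with no quoting or escaping headaches:
running = [c["name"] for c in data["containers"] if c["state"] == "running"]
print(running)  # prints ['web']
```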
Mh, it probably depends a lot on where you're coming from. I don't need PowerShell or have a reason to learn it in my daily work, and I mostly use WSL to access Linux shells everywhere else. On top of that, I don't understand why PowerShell needs a completely different command set from basically every other shell. It's a biased take, but I have not had an interaction with PowerShell that I liked, nor have I seen a feature that made me want to look into it more.
What's the killer feature, would you say? Care giving me the fanboy-pitch?
Edit: Oh, and I forgot: the tab completion in PowerShell is so incredibly dumb. I never ever in my life want to cycle through all items in a path, much less have it be case insensitive. Come to think of it, this might be the origin of most of my disdain. ;)
WSL has changed the game pretty significantly, don't you agree? It's not perfect, but it allows me to stay firm in my resolve never to learn PowerShell.
I'd spend half the money on snail amnesia research. The rest I'd just squander.
Storage box is self-serviced storage on a single server, as far as I'm aware. If you need replication, you need to rent storage at a second location and do it yourself.
And I'm sure the fish he caught that one time really was yea big. And boy, the fight it gave him.
By god, lemmy is civilised. 😂 I love it.
I can see what you mean, too, but am still on the liking him side I guess. And anyway, l'art pour l'art and all that, right? 😅
Hm, interesting. I didn't read it like that, but as an economist trying to make sense of what's going on and explain it to others. I didn't question whether the thoughts are original, neither do I know if there are holes in his concepts that I as a non-economist am blind to. My personal opinion, anyway, is that the message is important today (or better yet 15 years ago but nobody would have listened 😉), no matter whether he is primarily motivated by his ego or what.
Maybe this makes me part of the people he caters to, but that line of thinking doesn't lead anywhere meaningful anyway, I think.
I liked the end of the book: A call to action for us to come up with tools and technological solutions for "users" to stand together so we can create resistance against overly powerful cooperations and demand our rights. I don't think it's hypocritical for him to ask for this either. We need people to point problems out and problem solvers, both.
Have you read more of what he wrote or how did you come by that opinion on him? Technofeudalism and a number of interviews leading up to the book release was the first I was exposed to him.
I have a Raspberry Pi 3 with a Hifiberry DAC running OSMC (nicely packaged Kodi on top of Debian) acting as my media center and recently installed Jellycon with the hopes of being able to use server side transcoding for a few formats my old TV doesn't support.
My verdict: menu navigation is slow, but it's a native Kodi integration (supports widgets) and playback works great once you've made your way through the menus. You can selectively set transcoding options per file type, which is exactly what I needed.
Best solution I've seen so far, as it also does IR remote passthrough over HDMI if your TV supports it. The addon works in any Kodi setup, of course. I think there might be a way to start playback from the Jellyfin web UI, but I haven't bothered with it. That would fully remedy the menu slowness, I think.
Is that a way of saying you think he's wrong?
I thought the book had an interesting core idea, even if his grasp on technology seems rather loose and I really disliked the literary device he used to explain said idea.
What's your take on it?
Maybe there's a natural border like a river there that influences propagation. (I don't know how earthquakes work.)
I strongly disagree with this statement. Just because it's hard to do doesn't mean it isn't what you rationally decide you want to do. The reason for staying and the reason for leaving are orthogonal to one another, or else there wouldn't be a conflict. Compare it to substance addiction: you decide you want to stop, but you still need it.
Read your reply now, and not sure about the requirements you have: must not leave the local devices or must not use the WiFi?
If it's the latter, a 4G USB modem with a cheap IoT data plan easily frees you of that.
I second Zigbee.
- There's plenty of devices available.
- Battery life is amazing.
- zigbee2mqtt is an easy way to bring those messages into your regular IP network; they have a huge list of supported devices.
- Once translated to MQTT, you can hook any automation you want onto it: a Python script, Home Assistant, or my recommendation in this case, NodeRED. NodeRED has a module for zigbee2mqtt that is integrated well enough to just know all devices registered on your Zigbee network, and stringing flows together is actually fun once you get the hang of it. Plus, there is no upper bound to flow complexity.
- The gateway device can be a Sonoff Zigbee USB coordinator, and the whole thing can comfortably run on a Raspberry Pi 3.
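To make the "Python script" option above concrete: zigbee2mqtt publishes one JSON payload per device under `zigbee2mqtt/<friendly_name>`, so the automation side mostly boils down to a message handler. A minimal sketch (the device names and payload fields are hypothetical examples, and the actual MQTT subscription, e.g. via paho-mqtt, is left out):

```python
import json
from typing import Optional

def handle_message(topic: str, payload: bytes) -> Optional[str]:
    """Map one zigbee2mqtt message to an automation action.

    zigbee2mqtt publishes JSON per device under zigbee2mqtt/<friendly_name>;
    the device names and fields below are made-up examples.
    """
    device = topic.rsplit("/", 1)[-1]
    state = json.loads(payload)
    if device == "front_door" and state.get("contact") is False:
        return "send_notification"  # contact sensor reports the door opened
    if device == "hallway_motion" and state.get("occupancy"):
        return "lights_on"
    return None  # nothing to do for this message

print(handle_message("zigbee2mqtt/front_door", b'{"contact": false, "battery": 97}'))
# prints send_notification
```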
It's only fair.
In my home network, I'm currently hosting a public facing service and a number of private services (on their own subdomain resolved on my local DNS), all behind a reverse proxy acting as a "bouncer" that serves the public service on a subdomain on a port forward.
I am in the process of moving the network behind a hardware firewall and separating the network out and would like to move the reverse proxy into its own VLAN (DMZ). My initial plan was to host reverse proxy + authentication service in a VM in the DMZ, with firewall allow rules only port 80 to the services on my LAN and everything else blocked.
On closer look, this now seems like a single point of failure that could expose private services if something goes wrong with the reverse proxy. Alternatively, I could have a reverse proxy in the DMZ only for the public service and another reverse proxy on the LAN for internal services.
What is everyone doing in this situation? What are best practices? Thanks a bunch, as always!
Hi there, hoping to find some help with a naive networking question.
I recently bought my first firewall appliance and installed OPNsense on it. I'm going to use it with my ISP modem in bridge mode, but while I'm learning I added it to my existing LAN, with a 192.168.0.0/24 address assigned to the WAN port by my current DHCP server. On the firewall's LAN port I set up a 10.0.0.0/24 network and am starting to build up my services. So far so good, but there's one thing I can't get to work: I can't port forward from the firewall's WAN IP to a service on the firewall's LAN network, and I can't figure out why.
To illustrate: I would like the laptop at 192.168.0.161 to reach the service at 10.0.0.22:8888 by requesting the firewall's WAN IP, 192.168.0.136:8888.
Private IPs and bogons are permitted on the WAN interface and I have followed every guide I can find for the port forwarding, but the closest I have come to this working is a "connection reset" browser error.
Hope my question is clear and isn't too dumb. Thanks for any help, or any explanation of why I might be struggling to get this to work. Am I missing something obvious?
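For debugging, it helps to separate plain TCP reachability from HTTP-level errors like the "connection reset" I'm seeing. A small stdlib check I can run from the laptop (IPs as in my setup above, adjust as needed):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, ...
        return False

# From the 192.168.0.0/24 side, compare the forwarded path with the direct one:
# port_open("192.168.0.136", 8888)  # via the OPNsense port forward
# port_open("10.0.0.22", 8888)      # direct, only if a route to the LAN exists
```

If the forwarded address fails while the direct one works, the problem is in the NAT/filter rules rather than in the service itself.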
---
UPDATE The thread is all over the place, but I have made some progress:
- RDR rule gets triggered when requesting 192.168.0.136:8888 from 192.168.0.123
- Apache logs show:
```
2024-02-09T17:39:17.056208857Z 192.168.0.123 - - [09/Feb/2024:17:39:17 +0000] "GET / HTTP/1.1" 200 161
```
- a tcpdump (below) on the Apache container looks inconspicuous to my untrained eye, with the exception of checksum errors in some packets from the docker container (172.20.0.2). The last five lines, after the second GET request (why is there a second GET request?), appear in tcpdump after a delay of about five seconds.
```
17:45:14.918182 IP (tos 0x0, ttl 62, id 63127, offset 0, flags [DF], proto TCP (6), length 60)
    192.168.0.123.54120 > 172.20.0.2.80: Flags [S], cksum 0xfdc5 (correct), seq 4106772895, win 64240, options [mss 1460,sackOK,TS val 1485594466 ecr 0,nop,wscale 7], length 0
17:45:14.918207 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
    172.20.0.2.80 > 192.168.0.123.54120: Flags [S.], cksum 0x6d68 (incorrect -> 0x2fd7), seq 3999845366, ack 4106772896, win 65160, options [mss 1460,sackOK,TS val 1469298770 ecr 1485594466,nop,wscale 7], length 0
17:45:14.924098 IP (tos 0x0, ttl 62, id 63128, offset 0, flags [DF], proto TCP (6), length 52)
    192.168.0.123.54120 > 172.20.0.2.80: Flags [.], cksum 0x5b30 (correct), ack 3999845367, win 502, options [nop,nop,TS val 1485594472 ecr 1469298770], length 0
17:45:14.924102 IP (tos 0x0, ttl 62, id 63129, offset 0, flags [DF], proto TCP (6), length 134)
    192.168.0.123.54120 > 172.20.0.2.80: Flags [P.], cksum 0x70f5 (correct), seq 4106772896:4106772978, ack 3999845367, win 502, options [nop,nop,TS val 1485594472 ecr 1469298770], length 82: HTTP, length: 82
    GET / HTTP/1.1
    Host: 192.168.0.136:8888
    User-Agent: curl/7.74.0
    Accept: */*
17:45:14.924119 IP (tos 0x0, ttl 64, id 34500, offset 0, flags [DF], proto TCP (6), length 52)
    172.20.0.2.80 > 192.168.0.123.54120: Flags [.], cksum 0x6d60 (incorrect -> 0x5ad1), ack 4106772978, win 509, options [nop,nop,TS val 1469298776 ecr 1485594472], length 0
17:45:14.924407 IP (tos 0x0, ttl 64, id 34501, offset 0, flags [DF], proto TCP (6), length 364)
    172.20.0.2.80 > 192.168.0.123.54120: Flags [P.], cksum 0x6e98 (incorrect -> 0x0a74), seq 3999845367:3999845679, ack 4106772978, win 509, options [nop,nop,TS val 1469298776 ecr 1485594472], length 312: HTTP, length: 312
    HTTP/1.1 200 OK
    Date: Fri, 09 Feb 2024 17:45:14 GMT
    Server: Apache/2.4.58 (Unix)
    Content-Length: 161
    Content-Type: text/html;charset=ISO-8859-1
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"> <html> <head> <title>Index of /</title> </head> <body> <h1>Index of /</h1> <ul></ul> </body></html>
17:45:14.929077 IP (tos 0x0, ttl 61, id 0, offset 0, flags [DF], proto TCP (6), length 40)
    192.168.0.123.54120 > 172.20.0.2.80: Flags [R], cksum 0x1833 (correct), seq 4106772978, win 0, length 0
17:45:15.138862 IP (tos 0x0, ttl 62, id 63130, offset 0, flags [DF], proto TCP (6), length 134)
    192.168.0.123.54120 > 172.20.0.2.80: Flags [P.], cksum 0x701e (correct), seq 4106772896:4106772978, ack 3999845367, win 502, options [nop,nop,TS val 1485594687 ecr 1469298770], length 82: HTTP, length: 82
    GET / HTTP/1.1
    Host: 192.168.0.136:8888
    User-Agent: curl/7.74.0
    Accept: */*
17:45:15.138872 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 40)
    172.20.0.2.80 > 192.168.0.123.54120: Flags [R], cksum 0xb48d (correct), seq 3999845367, win 0, length 0
17:45:19.995097 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 172.20.0.1 tell 172.20.0.2, length 28
17:45:19.995161 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 172.20.0.2 tell 172.20.0.1, length 28
17:45:19.995164 ARP, Ethernet (len 6), IPv4 (len 4), Reply 172.20.0.2 is-at 02:42:ac:14:00:02, length 28
17:45:19.995164 ARP, Ethernet (len 6), IPv4 (len 4), Reply 172.20.0.1 is-at 02:42:b8:07:c2:99, length 28
```
---
UPDATE 2 I see the exact same behaviour with a second VM and apache directly installed on it instead of in a docker container.
---
UPDATE 3
Thank you everybody for coming up with ideas. And thank you most of all to @maxwellfire@lemmy.world: the culprit was the Filter rule association in my Port Forward settings, which I had set to `Add associated filter rule` but which needs to be `Pass`. As soon as that is set, everything works.
The full solution is a NAT Port forwarding rule with filter rule "pass", an outbound NAT rule for hairpinning, and everything related to reflection turned off in Settings > Advanced. It's that easy! 😵💫
Nextcloud seems to have a bad reputation around here regarding performance. It never really bothered me, but when a comment on a post here yesterday talked about huge speed gains to be had with Postgres, I got curious and spent a few hours researching and tweaking my setup.
I thought I'd write up what I learned and maybe others can jump in with their insights to make this a good general overview.
To note, my installation initially started out with this docker compose stack from the official nextcloud docker images (as opposed to the AIO image or a source installation.) I run this behind an NGINX reverse proxy.
Sources of information
- Server tuning on Nextcloud Docs: Most of these are very basic things that are already taken care of in the docker image or in the proxy companion image I'm using. The one thing I haven't tried, and that comes up in other places too, is using Imaginary for image preview generation.
- How to migrate Nextcloud 17 Database Backend from MySQL to postgreSQL
- Eking out some Nextcloud Performance mainly talks about using a socket connection for Redis, but also mentions logging to syslog (I have not found a good source of information for this), using Postgres, and using Imaginary for image previews.
Improvements
Migrate DB to Postgres
What I did first was migrate from MariaDB to Postgres, roughly following the blog post I linked above. I didn't do any benchmarking, but page loads felt a little faster after that (though a far cry from the "way way faster" claims I'd read.)
Here's my process:
- add a postgres container to the compose file. I named mine "postgres", added a "postgres" volume, and added the container to `depends_on` for the app and cron containers
- run the migration command from the Nextcloud app container like any other occ command: `./occ db:convert-type --password $POSTGRES_PASSWORD --all-apps pgsql $POSTGRES_USER postgres $POSTGRES_DB`. The migration process stopped with an error for a deactivated app, so I completely removed that app, dropped the postgres tables and started the migration again, and it went through. After migration, check `admin settings/system` to make sure Nextcloud is now using postgres.
- remove the old "db" container and volume and all references to them from the compose file, then run `docker compose up -d --remove-orphans`
Redis over Sockets
I followed the above guide for connecting to Redis over sockets, with details as stated below. This improved performance quite significantly: very fast loads for files, calendar, etc. I haven't yet changed the Postgres connection over to sockets since the article spoke of minor improvements, but I might try that next.
Hints
- the redis configuration (host, port, password, ...) needs to be set in `config/config.php` as well as `config/redis.config.php`
- the cron container needs to receive the same `/etc/localtime` and `/etc/timezone` volumes the app container did, as well as the `volumes_from: tmp`
EDIT Postgres over Sockets
I'm now connecting to Postgres over sockets as well, which gave another pretty significant speed bump. When looking at developer tools in Firefox, the dashboard now finishes loading in half the time it did before the change; just over 6s. I followed the same blog article I did for Redis.
Steps
- in the compose file, for the db container: add the `/etc/localtime` and `/etc/timezone` volumes; add `user: "70:33"`; add `command: postgres -c unix_socket_directories='/var/run/postgresql/,/tmp/docker/'`; add the tmp container to `volumes_from` and `depends_on`
- in Nextcloud's config.php, replace `'dbhost' => 'postgres',` with `'dbhost' => '/tmp/docker/',`
Outlook
What have you done to improve your instance's performance? Do you know good articles to share? I'm happy to edit this post to include any insights and make this a good source of information regarding Nextcloud performance.
Hi fellow self-hosting lemmings,
In an SME setting, I'm looking for a service to regularly fetch mails from an IMAP server and print incoming mails and attachments on a local network printer based on rules (e.g., only print mails where the subject contains a specific word.)
Does a solution like that exist, ideally with a browser frontend to set it up?
Thank you!
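In case no ready-made tool turns up, here's roughly the scope of the fetch-and-filter half with Python's stdlib (imaplib/email). The host, credentials, and keyword rule are placeholders, and the actual printing step, e.g. handing attachments to lp/CUPS, is left out:

```python
import email
import email.policy
import imaplib

def should_print(subject: str, keyword: str = "INVOICE") -> bool:
    """Example rule: print only mails whose subject contains a keyword."""
    return keyword.lower() in subject.lower()

def fetch_matching(host: str, user: str, password: str, keyword: str = "INVOICE"):
    """Yield (subject, message) for unseen inbox mails matching the rule."""
    with imaplib.IMAP4_SSL(host) as conn:
        conn.login(user, password)
        conn.select("INBOX")
        _, data = conn.search(None, "UNSEEN")
        for num in data[0].split():
            _, msg_data = conn.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(msg_data[0][1], policy=email.policy.default)
            if should_print(msg["Subject"] or "", keyword):
                yield msg["Subject"], msg  # hand off to the printer here
```

A browser frontend would then mostly be a thin UI over the rule configuration.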
Hi everyone, looking for help with an SSD/Win problem: My Thinkpad with Win11 has been acting up lately, and I am fairly sure the problem is with the SSD (very high disk load on startup and shortly before each of the many many crashes.) I would like to avoid having to set up my system from scratch.
I have a new SSD and have tried the following:
- leave bitlocker intact, boot into Ubuntu live, dd the old disk to an external USB drive, install new SSD, dd disk to new SSD
- same as above but with bitlocker disabled
- boot into Clonezilla live, clone old SSD to external storage, clone external storage to new SSD
- clean Windows install on the new SSD, then clone the old C: partition onto it with Clonezilla
All of these attempts invariably lead to an "INACCESSIBLE_BOOT_DEVICE" blue screen, and "bootrec /fixboot" and the like, executed from the recovery CMD, show "0 Windows installations found." Booting into Ubuntu live with the cloned SSD installed, I can see all my user data intact with no apparent problems.
Is my old SSD/Windows installation broken beyond repair and do I have to accept it and move on or am I missing something?
Thanks for any help or pointers!