Featured
Jellyfin Server/Web 10.9 Released
  • LOL, same. I did a docker-compose pull and restarted, came here to look at the release notes, and almost started panicking whether I omitted some important upgrade steps. Turns out everything upgraded smoothly automatically though.

  • How to move task manager to the side?
  • The task manager is just another widget on the panel. Right click anywhere on the panel (except on the tray icons, those are special), and click Enter edit mode. Then you can drag the task manager along the panel and configure it how you like.

  • Are we Wayland yet or what's missing?
  • I truly believe the answer to this question is going to be yes around the May - June timeframe when Nvidia releases their explicit sync enabled drivers. All aboard the Wayland hype train babyyyy!

  • Independent auditors confirm top VPN doesn't log your data
  • I think there were some bad vibes when they got bought by a less-than-reputable company a while back. I know a lot of people, myself included, switched to Mullvad. I am on Proton now though, for the port forwarding.

  • How do you guys handle reverse proxies in rootless containers?
  • I see! So I am assuming you had to configure Nginx specifically to support this? Problem is I love using Nginx Proxy Manager and I am not sure how to change that to use socket activation. Thanks for the info though!

    Man, I often wonder whether I should ditch docker-compose. Problem is, there are just so many compose files out there, and it's super convenient to use those instead of converting them into systemd unit files every time.
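    In case it helps anyone reading along, here's a rough sketch of what the socket-activation approach looks like with plain systemd user units. All names and the port are hypothetical, and the server inside the container has to understand systemd's LISTEN_FDS protocol (stock Nginx doesn't, which is presumably the special configuration mentioned above), so treat this as an outline rather than a working config:

    ```ini
    # ~/.config/systemd/user/proxy.socket (hypothetical)
    # systemd owns the listening socket...
    [Socket]
    ListenStream=8443

    [Install]
    WantedBy=sockets.target
    ```

    ```ini
    # ~/.config/systemd/user/proxy.service (hypothetical)
    # ...and hands it to podman, which passes it into the container.
    # Because the socket is inherited rather than proxied through a
    # userspace NAT, the server sees the real client IP even rootless.
    [Service]
    ExecStart=/usr/bin/podman run --rm --name proxy my-proxy-image
    ```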

  • How do you guys handle reverse proxies in rootless containers?
  • Yeah, I thought about exposing ports on localhost for all my services just to get around this issue as well, but I lose the network separation, which I find incredibly useful. Thanks for chiming in though!

  • How do you guys handle reverse proxies in rootless containers?

    I've been trying to migrate my services over to rootless Podman containers for a while now and I keep running into weird issues that always make me go back to rootful. This past weekend I almost had it all working until I realized that my reverse proxy (Nginx Proxy Manager) wasn't passing the real source IP of client requests down to my other containers. This meant that all my containers were seeing requests coming solely from the IP address of the reverse proxy container, which breaks things like Nextcloud brute force protection, etc. It's apparently due to this Podman bug: https://github.com/containers/podman/issues/8193

    This is the last step before I can finally switch to rootless, so it makes me wonder what all you self-hosters out there are doing with your rootless setups. I can't be the only one running into this issue right?

    If anyone's curious, my setup consists of several docker-compose files, each handling a different service. Each service has its own dedicated Podman network, but only the proxy container connects to all of them to serve outside requests. This way each service is separated from each other and the only ingress from the outside is via the proxy container. I can also easily have duplicate instances of the same service without having to worry about port collisions, etc. Not being able to see real client IP really sucks in this situation.
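    To make the layout concrete, here's a trimmed-down sketch of what the proxy's compose file looks like in this kind of topology (service and network names are made up for illustration; each service's own compose file declares its network, and the proxy joins them all as external networks):

    ```yaml
    services:
      proxy:
        image: docker.io/jc21/nginx-proxy-manager:latest
        ports:
          - "80:80"     # the only ports exposed to the outside
          - "443:443"
        networks:       # one entry per self-hosted service
          - nextcloud_net
          - jellyfin_net

    networks:
      nextcloud_net:
        external: true  # created by the Nextcloud compose file
      jellyfin_net:
        external: true  # created by the Jellyfin compose file
    ```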

    18
    Former Microsoft developer says Windows 11's performance is "comically bad," even with monster PC
  • All of this is still irrelevant. If, given the same hardware, one OS performs better than another, then that OS is obviously more optimized...

    You're saying a lot of words but it all just boils down to "throw more hardware at the problem".

  • Riot Games talk Vanguard anti-cheat for League of Legends and why it's a no for Linux
  • Yeah, but Apple isn't allowing it (at least according to the comment you replied to), so if Riot continues to allow their games to run on Mac without kernel anti-cheat, then their whole argument against Linux support is moot.

  • Anyone else unable to log out of Plasma 6?

    On one of my machines, I am completely unable to log out. The behavior is slightly different depending on whether I am in Wayland or X11.

    Wayland

    1. Clicking log out and then OK in the log-out window brings me back to the desktop.
    2. Doing this again does the same thing.
    3. Clicking log out a third time does nothing.

    X11

    1. Clicking log out will lead me to a black screen with just my mouse cursor.

    In my journalctl logs, I see:

    Apr 03 21:52:46 arch-nas systemd[1]: Stopping User Runtime Directory /run/user/972...
    Apr 03 21:52:46 arch-nas systemd[1]: run-user-972.mount: Deactivated successfully.
    Apr 03 21:52:46 arch-nas systemd[1]: user-runtime-dir@972.service: Deactivated successfully.
    Apr 03 21:52:46 arch-nas systemd[1]: Stopped User Runtime Directory /run/user/972.
    Apr 03 21:52:46 arch-nas systemd[1]: Removed slice User Slice of UID 972.
    Apr 03 21:52:46 arch-nas systemd[1]: user-972.slice: Consumed 1.564s CPU time.
    Apr 03 21:52:47 arch-nas systemd[1]: dbus-:1.2-org.kde.kded.smart@0.service: Deactivated successfully.
    Apr 03 21:52:47 arch-nas systemd[1]: dbus-:1.2-org.kde.powerdevil.discretegpuhelper@0.service: Deactivated successfully.
    Apr 03 21:52:47 arch-nas systemd[1]: dbus-:1.2-org.kde.powerdevil.backlighthelper@0.service: Deactivated successfully.
    Apr 03 21:52:48 arch-nas systemd[1]: dbus-:1.2-org.kde.powerdevil.chargethresholdhelper@0.service: Deactivated successfully.
    Apr 03 21:52:54 arch-nas systemd[4500]: Created slice Slice /app/dbus-:1.2-org.kde.LogoutPrompt.
    Apr 03 21:52:54 arch-nas systemd[4500]: Started dbus-:1.2-org.kde.LogoutPrompt@0.service.
    Apr 03 21:52:54 arch-nas ksmserver-logout-greeter[5553]: qt.gui.imageio: libpng warning: iCCP: known incorrect sRGB profile
    Apr 03 21:52:54 arch-nas ksmserver-logout-greeter[5553]: kf.windowsystem: static bool KX11Extras::compositingActive() may only be used on X11
    Apr 03 21:52:54 arch-nas plasmashell[5079]: qt.qpa.wayland: eglSwapBuffers failed with 0x300d, surface: 0x0
    Apr 03 21:52:55 arch-nas systemd[4500]: Created slice Slice /app/dbus-:1.2-org.kde.Shutdown.
    Apr 03 21:52:55 arch-nas systemd[4500]: Started dbus-:1.2-org.kde.Shutdown@0.service.
    Apr 03 21:52:55 arch-nas systemd[4500]: Stopped target plasma-workspace-wayland.target.
    Apr 03 21:52:55 arch-nas systemd[4500]: Stopped target KDE Plasma Workspace.
    Apr 03 21:52:55 arch-nas systemd[4500]: Requested transaction contradicts existing jobs: Transaction for graphical-session.target/stop is destructive (drkonqi-coredump-pickup.service has 'start' job queued, but 'stop' is included in transaction).
    Apr 03 21:52:55 arch-nas systemd[4500]: graphical-session.target: Failed to enqueue stop job, ignoring: Transaction for graphical-session.target/stop is destructive (drkonqi-coredump-pickup.service has 'start' job queued, but 'stop' is included in transaction).
    Apr 03 21:52:55 arch-nas systemd[4500]: Stopped target KDE Plasma Workspace Core.
    Apr 03 21:52:55 arch-nas systemd[4500]: Stopped target Startup of XDG autostart applications.
    Apr 03 21:52:55 arch-nas systemd[4500]: Stopped target Session services which should run early before the graphical session is brought up.
    Apr 03 21:52:55 arch-nas systemd[4500]: dbus-:1.2-org.kde.LogoutPrompt@0.service: Main process exited, code=exited, status=1/FAILURE
    Apr 03 21:52:55 arch-nas systemd[4500]: dbus-:1.2-org.kde.LogoutPrompt@0.service: Failed with result 'exit-code'.

    I've filed an upstream bug for this but I was wondering if anyone else here was also experiencing the same issue.

    9
    Nextcloud Hub 7 is here!
    nextcloud.com Nextcloud Hub 7: advanced search and global out-of-office features - Nextcloud

    Nextcloud functions as a single platform, bridging together all apps under one roof in synchronization. Today, we’re introducing our most integrated platform yet - Nextcloud Hub 7.

    1
    What are some KVM-over-IP or equivalent solutions you guys would recommend for guaranteed remote access and remote power cycle?

    Currently, I have SSH, VNC, and Cockpit set up on my home NAS, but I have run into situations where I lost remote access because I did something stupid to the network connection, or an update broke the boot process and left it stuck in the BIOS or bootloader.

    I am looking for a separate device that not only lets me access the NAS as if I had another keyboard, mouse, and monitor present, but also lets me power cycle it in extreme situations (hard freeze, etc.). Some googling has turned up the term KVM-over-IP, but I was wondering if any of you have trustworthy recommendations.

    24
    Bcachefs Merged Into Linux-Next

    cross-posted from: https://lemmy.world/post/4930979

    > Bcachefs making progress towards getting included in the kernel. My dream of having a Linux native RAID5 capable filesystem is getting closer to reality.

    18
    Bcachefs Merged Into Linux-Next

    Bcachefs making progress towards getting included in the kernel. My dream of having a Linux native RAID5 capable filesystem is getting closer to reality.

    3
    Anyone else seeing reduced performance in BG3 Patch 2 with the Vulkan renderer?

    Patch 2 seems to have drastically slowed down the Vulkan renderer. Before, I was able to get 80-110 FPS in the Druid Grove, but now I am only getting 50 FPS. DX11 seems fine though, but I prefer using Vulkan since I am on Linux.

    Arch Linux, Kernel 6.4.12

    Ryzen 3900x

    Nvidia 3090 w/ 535.104.05 drivers

    Latest Proton Experimental

    5
    [SOLVED] If I am using the SWAG proxy in front of a Nextcloud instance, is it safe to ignore some of the warnings in the admin page?

    I am using one of the official Nextcloud docker-compose files to set up an instance behind a SWAG reverse proxy. SWAG is handling SSL and forwarding requests to Nextcloud on port 80 over a Docker network. Whenever I go to the Overview tab in the Admin settings, I see this security warning: The "X-Robots-Tag" HTTP header is not set to "noindex, nofollow". This is a potential security or privacy risk, as it is recommended to adjust this setting accordingly.

    I have X-Robots-Tag set in SWAG. Is it safe to ignore this warning? I am assuming that Nextcloud is complaining because it still thinks it's communicating over an unsecured port 80 and isn't aware that it's only reachable via SWAG. Maybe I am wrong though. I wanted to double-check and see if there was anything else I needed to do to secure my instance.

    SOLVED: Turns out Nextcloud is just picky about what's in X-Robots-Tag. I had it set to SWAG's recommended value of noindex, nofollow, nosnippet, noarchive, but Nextcloud expects exactly noindex, nofollow.
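    For anyone hitting the same warning, here is the value Nextcloud's check expects, written as a plain Nginx directive (in SWAG the header lives in its bundled nginx confs):

    ```nginx
    # Must be exactly this value; stricter variants such as
    # "noindex, nofollow, nosnippet, noarchive" still trip the check.
    add_header X-Robots-Tag "noindex, nofollow" always;
    ```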

    2
    Migrating from docker to podman, encountering some issues

    cross-posted from: https://lemmy.world/post/3989163

    > I've been messing around with podman in Arch and porting my self-hosted services over to it. However, it's been finicky and I am wondering if anybody here could help me out with a few things.
    >
    > 1. Some of my containers aren't getting properly started up by podman-restart.service on system reboot. I realized they were the ones that depended on my slow external BTRFS drive. Currently it's mounted with x-systemd.automount,x-systemd.device-timeout=5 so that it doesn't hang up the boot if I disconnect it, but it seems like Podman doesn't like this. If I remove the systemd options the containers properly boot up automatically, but I risk boot hangs if the drive ever gets disconnected from my system. I have already tried x-systemd.before=podman-restart.service and x-systemd.required-by=podman-restart.service, and even tried increasing the device-timeout to no avail.
    >
    > When it attempts to start the container, I see this in journalctl:
    >
    > Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Got automount request for /external, triggered by 3130 (3)
    > Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Automount point already active?
    > (the previous two lines repeat several more times)
    > Aug 27 21:15:46 arch-nas systemd[1]: libpod-742b4595dbb1ce604440d8c867e72864d5d4ce1f2517ed111fa849e59a608869.scope: Deactivated successfully.
    > Aug 27 21:15:46 arch-nas conmon[3124]: conmon 742b4595dbb1ce604440 : runtime stderr: error stat'ing file `/external/share`: Too many levels of symbolic links
    > Aug 27 21:15:46 arch-nas conmon[3124]: conmon 742b4595dbb1ce604440 : Failed to create container: exit status 1
    >
    > 2. When I shut down my system, it has to wait 90 seconds for libcrun and libpod-conmon-.scope to time out. Any idea what's causing this? This delay gets pretty annoying, especially on an Arch system, since I am constantly restarting due to updates.
    >
    > All the containers are started using docker-compose with podman-docker if that's relevant.
    >
    > Any help appreciated!
    >
    > EDIT: So it seems like podman really doesn't like systemd automount. Switching to nofail,x-systemd.before=podman-restart.service seems like a decent workaround if anyone's interested.

    0
    Migrating from docker to podman, encountering some issues

    I've been messing around with podman in Arch and porting my self-hosted services over to it. However, it's been finicky and I am wondering if anybody here could help me out with a few things.

    1. Some of my containers aren't getting properly started up by podman-restart.service on system reboot. I realized they were the ones that depended on my slow external BTRFS drive. Currently it's mounted with x-systemd.automount,x-systemd.device-timeout=5 so that it doesn't hang up the boot if I disconnect it, but it seems like Podman doesn't like this. If I remove the systemd options the containers properly boot up automatically, but I risk boot hangs if the drive ever gets disconnected from my system. I have already tried x-systemd.before=podman-restart.service and x-systemd.required-by=podman-restart.service, and even tried increasing the device-timeout to no avail.

    When it attempts to start the container, I see this in journalctl:

    Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Got automount request for /external, triggered by 3130 (3)
    Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Automount point already active?
    (the previous two lines repeat several more times)
    Aug 27 21:15:46 arch-nas systemd[1]: libpod-742b4595dbb1ce604440d8c867e72864d5d4ce1f2517ed111fa849e59a608869.scope: Deactivated successfully.
    Aug 27 21:15:46 arch-nas conmon[3124]: conmon 742b4595dbb1ce604440 : runtime stderr: error stat'ing file `/external/share`: Too many levels of symbolic links
    Aug 27 21:15:46 arch-nas conmon[3124]: conmon 742b4595dbb1ce604440 : Failed to create container: exit status 1

    2. When I shut down my system, it has to wait 90 seconds for libcrun and libpod-conmon-.scope to time out. Any idea what's causing this? This delay gets pretty annoying, especially on an Arch system, since I am constantly restarting due to updates.

    All the containers are started using docker-compose with podman-docker if that's relevant.

    Any help appreciated!

    EDIT: So it seems like podman really doesn't like systemd automount. Switching to nofail,x-systemd.before=podman-restart.service seems like a decent workaround if anyone's interested.
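    For reference, here is that workaround written out as an /etc/fstab line (the device UUID and mount point are placeholders; only the options column comes from the post above):

    ```
    # external BTRFS drive: no automount, ordered before podman-restart.service
    UUID=xxxxxxxx-xxxx  /external  btrfs  nofail,x-systemd.before=podman-restart.service  0 0
    ```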

    18
    What's the best way to prevent IPv6 from leaking when using Wireguard?

    cross-posted from: https://lemmy.world/post/3754933

    > While experimenting with ProtonVPN's Wireguard configs, I realized that my real IPv6 address was leaking while IPv4 was correctly going through the tunnel. How do I prevent this from happening?
    >
    > I've already tried adding ::/0 to the AllowedIPs option and IPv6 is listed as disabled in the NetworkManager profile.

    45
    What's the best way to prevent IPv6 from leaking when using Wireguard?

    While experimenting with ProtonVPN's Wireguard configs, I realized that my real IPv6 address was leaking while IPv4 was correctly going through the tunnel. How do I prevent this from happening?

    I've already tried adding ::/0 to the AllowedIPs option and IPv6 is listed as disabled in the NetworkManager profile.
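    For context, a full-tunnel WireGuard peer section has to route both address families (the endpoint and key below are placeholders). Note that ::/0 in AllowedIPs only helps if the tunnel interface actually handles IPv6; if the client sets up no IPv6 route at all, v6 traffic can still escape over the physical interface, which is exactly the kind of leak described above:

    ```ini
    [Peer]
    PublicKey = <server public key>
    Endpoint = vpn.example.com:51820
    # route all IPv4 *and* IPv6 traffic into the tunnel
    AllowedIPs = 0.0.0.0/0, ::/0
    ```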

    3
    [SOLVED] Anyone else seeing duplicate audio devices when Bluetooth disconnects and reconnects?

    So I have been running into a weird issue lately where if I disconnect a Bluetooth audio device, it will remain visible in the KDE audio mixer. Reconnecting the audio device then adds a duplicate entry and the keyboard volume control for it is completely broken. It stays at the same volume. This was working just fine about a week ago and I've already downgraded pipewire, kpipewire, bluedevil, and plasma-pa to no avail. Nothing shows up in the logs, so I don't know exactly what's causing this bug.

    Anyone else experiencing the same thing?

    Arch Linux
    Kernel 6.4.8-arch1-1
    Pipewire 0.3.77-1
    KDE Plasma: 5.27.7
    KDE Frameworks Version: 5.108.0
    Qt 5.15.10

    EDIT: Seems like simply changing the Bluetooth A2DP audio profile causes this issue as well. I have Bluetooth earbuds that have both AAC and SBC modes and toggling between them just creates more and more duplicate devices with the same name.

    THE FIX: Seems like pipewire-pulse 0.3.77 was the culprit after all. Downgrade it to pipewire-pulse 0.3.76 and then do a systemctl --user restart pipewire-pulse to work around this issue.

    EDIT 2: pipewire-pulse 0.3.77-2 has the patch backported. Feel free to update to the latest version in the Arch repos.

    Relevant bug report: https://gitlab.freedesktop.org/pipewire/pipewire/-/issues/3414

    6
    The Firefox icon is broken for this magazine / community.

    I am subscribed to this community from Lemmy and the icon is broken. I don't even see it on kbin.social or fedia.io. Let's get our favorite mascot back!

    Here's what it looks like from lemmy.world: [screenshot]

    EDIT: If the icon is working for you, either your browser or your instance may be caching the icon and thereby hiding the issue.

    1
    [SOLVED] Anyone else getting DNS leaks when running VPN on Archlinux?

    I tried both Mullvad and Mozilla VPN, and when I do a DNS leak test, both are still using my ISP's DNS instead of the VPN's. This only happens on my Arch systems; it works fine on my phone.

    EDIT: Turns out these VPN clients depend on systemd-resolved in order to change your DNS. Enabling the service makes it work properly. A bit scary that they don't give you a warning that you're leaking DNS if you don't have systemd-resolved enabled.
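    For anyone else hitting this, the fix boils down to the following on a standard Arch setup (commands shown for illustration; both need root, and the symlink target is the usual systemd-resolved stub-resolver convention):

    ```shell
    # enable the resolver that the VPN clients push their DNS servers into
    sudo systemctl enable --now systemd-resolved.service

    # point /etc/resolv.conf at resolved's stub resolver
    sudo ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
    ```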

    6
    What are your experiences with ZFS on Arch?

    I am in the process of making a DIY NAS box and I am debating between mdadm + BTRFS and ZFS. What are your experiences with using ZFS on Arch? Have you run into any major issues? Does ZFS being an external kernel module cause any annoyances with the frequent Arch kernel updates?

    Give me the dirty details! Any recommendations or pitfalls I should avoid?

    Thanks!

    3
    HyperX Quadcast S mic is preventing my system from sleeping. What's the best way to debug and submit a bug report?

    cross-posted from: https://lemmy.world/post/1313651

    > Every time I plug my Quadcast S USB mic into my Arch Linux box, I can't properly go into deep sleep. Unplugging it before attempting to sleep makes it work again, but it's annoying to have to do that every time. How do I debug this and where do I even submit a bug report for this?
    >
    > Here's the relevant journalctl:
    >
    > Jul 10 11:59:39 systemd[1]: Starting System Suspend...
    > Jul 10 11:59:39 systemd-sleep[70254]: Entering sleep state 'suspend'...
    > Jul 10 11:59:39 kernel: PM: suspend entry (deep)
    > Jul 10 11:59:39 kernel: Filesystems sync: 0.066 seconds
    > Jul 10 11:59:42 kernel: Freezing user space processes
    > Jul 10 11:59:42 kernel: Freezing user space processes completed (elapsed 0.001 seconds)
    > Jul 10 11:59:42 kernel: OOM killer disabled.
    > Jul 10 11:59:42 kernel: Freezing remaining freezable tasks
    > Jul 10 11:59:42 kernel: Freezing remaining freezable tasks completed (elapsed 0.001 seconds)
    > Jul 10 11:59:42 kernel: printk: Suspending console(s) (use no_console_suspend to debug)
    > Jul 10 11:59:42 kernel: serial 00:04: disabled
    > Jul 10 11:59:42 kernel: sd 2:0:0:0: [sdb] Synchronizing SCSI cache
    > Jul 10 11:59:42 kernel: sd 1:0:0:0: [sda] Synchronizing SCSI cache
    > Jul 10 11:59:42 kernel: sd 5:0:0:0: [sdc] Synchronizing SCSI cache
    > Jul 10 11:59:42 kernel: sd 1:0:0:0: [sda] Stopping disk
    > Jul 10 11:59:42 kernel: sd 2:0:0:0: [sdb] Stopping disk
    > Jul 10 11:59:42 kernel: sd 5:0:0:0: [sdc] Stopping disk
    > Jul 10 11:59:42 kernel: ACPI: PM: Preparing to enter system sleep state S3
    > Jul 10 11:59:42 kernel: ACPI: PM: Saving platform NVS memory
    > Jul 10 11:59:42 kernel: Disabling non-boot CPUs ...
    > Jul 10 11:59:42 kernel: Wakeup pending. Abort CPU freeze
    > Jul 10 11:59:42 kernel: Non-boot CPUs are not disabled
    > Jul 10 11:59:42 kernel: ACPI: PM: Waking up from system sleep state S3
    > Jul 10 11:59:42 kernel: sd 5:0:0:0: [sdc] Starting disk
    > Jul 10 11:59:42 kernel: sd 2:0:0:0: [sdb] Starting disk
    > Jul 10 11:59:42 kernel: sd 1:0:0:0: [sda] Starting disk
    > Jul 10 11:59:42 kernel: serial 00:04: activated
    >
    > Thanks in advance!

    3
    HyperX Quadcast S mic is preventing my system from sleeping. What's the best way to debug and submit a bug report?

    Every time I plug my Quadcast S USB mic into my Arch Linux box, I can't properly go into deep sleep. Unplugging it before attempting to sleep makes it work again, but it's annoying to have to do that every time. How do I debug this and where do I even submit a bug report for this?

    Here's the relevant journalctl:

    Jul 10 11:59:39 systemd[1]: Starting System Suspend...
    Jul 10 11:59:39 systemd-sleep[70254]: Entering sleep state 'suspend'...
    Jul 10 11:59:39 kernel: PM: suspend entry (deep)
    Jul 10 11:59:39 kernel: Filesystems sync: 0.066 seconds
    Jul 10 11:59:42 kernel: Freezing user space processes
    Jul 10 11:59:42 kernel: Freezing user space processes completed (elapsed 0.001 seconds)
    Jul 10 11:59:42 kernel: OOM killer disabled.
    Jul 10 11:59:42 kernel: Freezing remaining freezable tasks
    Jul 10 11:59:42 kernel: Freezing remaining freezable tasks completed (elapsed 0.001 seconds)
    Jul 10 11:59:42 kernel: printk: Suspending console(s) (use no_console_suspend to debug)
    Jul 10 11:59:42 kernel: serial 00:04: disabled
    Jul 10 11:59:42 kernel: sd 2:0:0:0: [sdb] Synchronizing SCSI cache
    Jul 10 11:59:42 kernel: sd 1:0:0:0: [sda] Synchronizing SCSI cache
    Jul 10 11:59:42 kernel: sd 5:0:0:0: [sdc] Synchronizing SCSI cache
    Jul 10 11:59:42 kernel: sd 1:0:0:0: [sda] Stopping disk
    Jul 10 11:59:42 kernel: sd 2:0:0:0: [sdb] Stopping disk
    Jul 10 11:59:42 kernel: sd 5:0:0:0: [sdc] Stopping disk
    Jul 10 11:59:42 kernel: ACPI: PM: Preparing to enter system sleep state S3
    Jul 10 11:59:42 kernel: ACPI: PM: Saving platform NVS memory
    Jul 10 11:59:42 kernel: Disabling non-boot CPUs ...
    Jul 10 11:59:42 kernel: Wakeup pending. Abort CPU freeze
    Jul 10 11:59:42 kernel: Non-boot CPUs are not disabled
    Jul 10 11:59:42 kernel: ACPI: PM: Waking up from system sleep state S3
    Jul 10 11:59:42 kernel: sd 5:0:0:0: [sdc] Starting disk
    Jul 10 11:59:42 kernel: sd 2:0:0:0: [sdb] Starting disk
    Jul 10 11:59:42 kernel: sd 1:0:0:0: [sda] Starting disk
    Jul 10 11:59:42 kernel: serial 00:04: activated

    Thanks in advance!

    0
    Molecular0079 @lemmy.world
    Posts 24
    Comments 657