"Link in the post" is already in the post itself, and it's a different one
it may be related to the older lemmy version.
the latest lemmy version has changed how metadata is fetched for posts coming in via federation: it is no longer fetched while the federated activity is being received, but in a background task instead. sometimes metadata for urls cannot be fetched within the time limit, and when that happens during activity processing rather than in the background, the post will not federate properly.
I think most (especially mobile) clients simply don't have this option and will always copy/share the "fedi link" - the url where the content is canonically hosted. all other URLs are simply cached representations of the original content.
https://lemmy.blahaj.zone/instances shows lemmygrad as blocked, maybe there are some older communities that were never removed on blahaj after defederation?
I'm not familiar with n8n but it's fairly straightforward on the API side.
You'll need a session token, also known as a JWT, which you can get by logging in.
You typically don't want to do a login for every post, so you'll want to store that as a persistent value.
For authentication, you can pass the header authorization: Bearer {jwt}, with {jwt} being the session token.
https://join-lemmy.org/api/classes/LemmyHttp.html contains the API documentation.
You'll need to figure out the id of the community that you want to post to.
If you need to look it up, you can use getCommunity to fetch its details. Afterwards you can use createPost to submit it.
The form links for the methods explain the request body JSON values that should be provided.
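For illustration, here's a rough Python sketch of that flow against the HTTP API (instance URL, credentials and community name are placeholders, and the endpoint paths assume the current v3 API):

```python
import requests

INSTANCE = "https://lemmy.example.org"  # placeholder instance URL

# log in once and reuse the returned JWT instead of logging in for every post
login = requests.post(f"{INSTANCE}/api/v3/user/login", json={
    "username_or_email": "mybot",  # placeholder credentials
    "password": "hunter2",
})
jwt = login.json()["jwt"]
headers = {"Authorization": f"Bearer {jwt}"}

# look up the community id by name (getCommunity)
community = requests.get(
    f"{INSTANCE}/api/v3/community",
    params={"name": "test"},  # placeholder community name
    headers=headers,
).json()["community_view"]["community"]

# submit the post (createPost)
post = requests.post(f"{INSTANCE}/api/v3/post", headers=headers, json={
    "name": "Hello from the API",   # post title
    "community_id": community["id"],
    "url": "https://example.com",   # optional link
    "body": "posted via a script",  # optional text body
})
print(post.json()["post_view"]["post"]["ap_id"])
```

In n8n the same three calls should map onto HTTP Request nodes, with the jwt stored somewhere persistent (credentials or static data) so the login only happens when the token expires.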
photos will never be pngs unless someone intentionally converts them to that format, as png is a much worse format than jpg for storing this kind of image. png is much better suited for computer-generated images such as screenshots, drawings, etc. you can also losslessly compress pngs with tools like pngcrush without converting them to jpg.
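if you want to see the difference concretely, here's a small Python sketch using Pillow that re-saves the same (hypothetical) photo in both formats and compares the file sizes:

```python
import os
from PIL import Image  # pip install Pillow

# hypothetical input file; any photographic image will show the same pattern
img = Image.open("photo.jpg").convert("RGB")

img.save("copy.jpg", quality=85)     # lossy jpg: small for photos
img.save("copy.png", optimize=True)  # lossless png: typically several times larger

for path in ("copy.jpg", "copy.png"):
    print(path, os.path.getsize(path) // 1024, "KiB")
```

for a screenshot with flat colors and sharp edges the result usually flips, which is why the better format depends on the type of image.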
that's a bug
so just dropping duplicates then?
are there no references to the duplicates from posts? if there are, and the duplicate rows just get deleted, the deletion will cascade and purge all those posts from the db as well
how are you planning to fix this?
delegating authentication to another service.
one of the more commonly known options would be sign in with google, but this is also quite useful for providers hosting multiple services. a provider could run a single service that handles authentication, and then you only have to log in once and will automatically be logged in to their lemmy, xmpp, wiki and whatever other services they might be providing.
do you happen to have experience with setting up influxdb and telegraf? or maybe something else that might be better suited?
the metrics are currently in prometheus metrics format and scraped every 5 minutes.
my idea was to keep the current retention for most metrics and have longer retention (possibly with lower granularity for data older than a month).
the current prometheus setup is super simple, you can see (an older copy of) the config here.
if you want to build a configuration for influxdb/telegraf that i can more or less just drop in there without too many adjustments that would certainly be welcomed.
the metric that would need longer retention is lemmy_federation_state_last_successful_id_local.
I'll probably have to look at another storage than prometheus, aiui it's not really well suited for this task.
maybe something with influxdb+telegraf, although i haven't looked at that yet.
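in case it helps as a starting point, here's a rough Python sketch of the data flow: scrape the existing Prometheus-format endpoint, keep only that one metric, and write it into an InfluxDB 2.x bucket with a longer retention policy. the URLs, token, org and bucket names are placeholders, and a telegraf pipeline with the inputs.prometheus and outputs.influxdb_v2 plugins would do the same without custom code:

```python
import requests
from prometheus_client.parser import text_string_to_metric_families  # pip install prometheus-client
from influxdb_client import InfluxDBClient, Point                    # pip install influxdb-client
from influxdb_client.client.write_api import SYNCHRONOUS

METRICS_URL = "http://localhost:8080/metrics"  # placeholder: the existing metrics endpoint
METRIC = "lemmy_federation_state_last_successful_id_local"

# scrape the Prometheus text format and pick out the one metric that needs longer retention
resp = requests.get(METRICS_URL, timeout=30)
points = []
for family in text_string_to_metric_families(resp.text):
    if family.name != METRIC:
        continue
    for sample in family.samples:
        point = Point(METRIC).field("value", float(sample.value))
        for key, value in sample.labels.items():  # keep the labels, e.g. which instance the value belongs to
            point = point.tag(key, value)
        points.append(point)

# write into a dedicated bucket whose retention can be set independently of the rest
client = InfluxDBClient(url="http://localhost:8086", token="changeme", org="lemmy")
client.write_api(write_options=SYNCHRONOUS).write(bucket="federation-longterm", record=points)
```

run from a 5 minute timer this would mirror the current scrape interval, and downsampling data older than a month could then be handled with an InfluxDB task on that bucket.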
this feels like more of the db index corruption that other users have already run into previously, unlikely to be an issue in lemmy itself
you may want to redact the names, as this spam is framing another person by pretending to originate from them
so all you're looking for is the number of activities generated per instance?
that is only a small subset of the data currently collected; most of the storage use comes from collecting information in relation to other instances.
Hi, I run this.
What benefit do you expect from longer retention periods and how much time did you have in mind?
The way data is currently collected and stored keeps the same granularity for the entire time period, which currently uses around 60 GiB for a month of retention across all monitored instances.
see https://programming.dev/post/20167648 and https://programming.dev/post/20201619, this was an issue with p.d infrastructure
verification emails are usually sent immediately. if there are delays you should check your junk folder, and if it's not there it probably won't arrive anymore. depending on the instance you signed up on there may be alternative methods to reach out to the instance admins about this. note that private messages from mastodon to lemmy do not work unfortunately.
you may have broken your language settings? check in your account settings. the posts are all tagged as English, so you'll want to have at least the English and undefined languages selected
fwiw, for Sync users the update to 0.19.5 is not that great, as @ljdawson@lemmy.world still hasn't updated Sync to use the updated APIs for marking posts as read :(