I'm an accidental DBA, but I still never quite got the hate for ORMs. I thought this article did a good job of explaining the issue and why they aren't so bad.
For me the article touches on the problem but doesn't actually reveal it.
What I see day in and day out is projects using a relational database to store data that is not suited to a relational database. And you can often get away with that fundamental mistake when you're writing raw SQL queries... but as soon as an ORM is involved you're in for a world of pain (or at least, problems with performance).
The article you linked disagrees - they said it pretty well:
Of course, some issues come from the fact that people are trying to use the Relational model where it doesn’t suit their use case. That’s why I prefer a document model instead of a tabular one as the default choice. Most of our applications are more suitable for it, as we’re still moving the regular physical world (so documents) into computers. (Read also more in General strategy for migrating relational data to document-based).
I never joined the NoSQL hype train, so I can't comment on that. However, I will point out that storing documents on a disk is a very well-established and proven approach... and it's even how relational databases work under the hood. They generally persist data on the filesystem as documents.
Where I find relational data really falls over is at the conversion point between the relational and document representations. That typically happens multiple times in a single operation - for example, when I hit the reply button on this comment (I assume, I haven't read the source code), this is what will happen:
my reply will be sent to the server as a document, in the body of an HTTP request
beehaw's server will convert that document into relational data (with a considerable performance penalty and a large surface area for bugs)
PostgreSQL is going to convert that relational data back into a document format and write it to the filesystem (more performance issues, more opportunities for bugs)
And every time the comment is loaded (or sent to other servers in the fediverse) that silly "document to relational to document" translation process is repeated over and over and over.
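To make that round trip concrete, here's a rough sketch in Elixir using plain maps; the field names and URLs are made up for illustration, not taken from Lemmy's actual schema:

# 1. The reply arrives as a document: the decoded JSON body of the HTTP request.
reply_document = %{
  "type" => "Note",
  "content" => "I never joined the NoSQL hype train...",
  "inReplyTo" => "https://beehaw.org/comment/123"
}

# 2. The server flattens that document into relational rows.
comment_row = %{content: reply_document["content"], parent_id: 123, author_id: 42}

# 3. To render or federate the comment, the rows get stitched back into a document.
outgoing_document = %{
  "type" => "Note",
  "content" => comment_row.content,
  "inReplyTo" => "https://beehaw.org/comment/#{comment_row.parent_id}"
}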
I'd argue it's better, and more efficient, to just store this comment as a document, because it's going to be needed in that format over and over and over, and you ultimately need to write it to disk as a document anyway.
Yes - you should also have a relational index containing the critical metadata in the document: the relationship linking that document to the comment I replied to, the number of upvotes it has received, etc. But that should be a secondary database, not the primary one. Things like an individual upvote should also be a document, stored as a file on disk (in the format specified by ActivityStreams 2.0).
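As a rough sketch of that split (the paths, module names, and fields here are hypothetical, and Jason is just one common JSON library):

# Persist the comment itself as an ActivityStreams-style document on disk.
comment = %{
  "type" => "Note",
  "id" => "https://beehaw.org/comment/123",
  "content" => "I'd argue it's better..."
}
File.write!("objects/comment-123.json", Jason.encode!(comment))

# Keep only the hot metadata in a secondary relational index for querying.
MyApp.Repo.insert!(%MyApp.CommentIndex{
  object_path: "objects/comment-123.json",
  parent_id: 99,
  upvotes: 0
})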
I much prefer the repository pattern, as used by Sequel and Ecto.
Your models are essentially just enhanced structs. They hold information about the shape of the data and generally don't hold any validations or logic related to storing it. You perform changes on the data using changesets, which handle validation and generate the set of changes to persist.
It works extremely well, and I've yet to encounter the funky problems ActiveRecord could give you
Data comes out as a map or keyword list, which is then turned into the schema struct in question. If you want raw db data you can get that too. And you can have multiple structs that are backed by the same persistent dataset. It's quite elegant.
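Here's a minimal sketch of what that looks like in Ecto; the module and field names (MyApp.Post, MyApp.Repo, title, body) are placeholders, not from any real codebase:

defmodule MyApp.Post do
  use Ecto.Schema
  import Ecto.Changeset

  # The schema is just a description of the data's shape; it compiles to a plain struct.
  schema "posts" do
    field :title, :string
    field :body, :string
    timestamps()
  end

  # Validation lives in changeset functions, not on the struct itself.
  def changeset(post, attrs) do
    post
    |> cast(attrs, [:title, :body])
    |> validate_required([:title])
  end
end

# Persisting goes through the Repo, with the changeset describing the change:
%MyApp.Post{}
|> MyApp.Post.changeset(%{title: "Hello", body: "First post"})
|> MyApp.Repo.insert()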
Queries themselves are constructed using a language that is close to SQL, but far more composable:
Repo.one(from p in Post, join: c in assoc(p, :comments), where: p.id == ^post_id)
Queries are also composable:
query = from u in User, where: u.age > 18
query = from u in query, select: u.name
And they can be written in a keyword style, like the above examples, or in a functional style, like the rest of Elixir:
User
|> where([u], u.age > 18)
|> select([u], u.name)
None of these "queries" will execute until you tell the Repo to do something. For that, you have commands like Repo.all and Repo.one, the latter of which expects the provided query to return at most one result (it returns nil if nothing matches, and raises if more than one row comes back).
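A quick illustration, reusing the query variable built above (user_id is assumed to be bound somewhere in scope):

# Nothing has hit the database yet; these calls are what actually run the query:
names = Repo.all(query)                                    # returns a list (possibly empty)
name = Repo.one(from u in User, where: u.id == ^user_id)   # returns nil, one result, or raises if several rows match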
Oh interesting, never knew that. In some instances American English is better, in others British is better, and in still others the Aussies perfected it. I suggest we combine them for a master dialect.