Running raw user SQL isn't a prerequisite for something to count as an "ORM"; it's a useful feature that most ORMs include because their authors recognize the abstraction's many shortcomings. Also, by writing raw engine-specific SQL, you automatically invalidate one of an ORM's biggest selling points, which is being SQL-database agnostic.
And by "drop into", this typically means writing custom stitching code that stitches the SQL cursor results back into the models again. It's rarely straightforward.
> you automatically invalidate one of an ORM's biggest selling points, which is being SQL-database agnostic.
The biggest selling point is a massive reduction in boilerplate code. Database agnosticism is a feature that almost nobody ever uses, so who cares! Your proposed alternative is engine-specific SQL, so you lose either way. At least with an ORM you'd lose significantly less: you'd only have to deal with the places where you used raw SQL, which, in my experience, are few and specific.
> this typically means writing custom stitching code that stitches the SQL cursor results back into the models again.
I often feel like people who complain about ORMs have either never actually used one or used a poor one. As long as my query matches the structure of my object(s), I don't need any stitching code. And if they didn't match, I wouldn't write stitching code anyway, because that would be a waste of time.
If I'm writing a custom query, I'm probably not looking to integrate it into the model anyway -- if it's used for reporting, I'd just take the results as-is. If I'm writing a query specifically to get a matching model object (to manipulate), I'm going to fetch the whole model, and no stitching would be required.
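A minimal Python sketch of that point, using sqlite3 and a hypothetical ForumUser model (the table and names here are illustrative assumptions, not anything from the thread): when the query's columns line up with the model's fields, each row maps straight into the object and no stitching layer exists at all.

```python
import sqlite3
from dataclasses import dataclass

# Hypothetical model for illustration only.
@dataclass
class ForumUser:
    id: int
    username: str

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE forum_user (id INTEGER PRIMARY KEY, username TEXT)")
conn.executemany("INSERT INTO forum_user VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

# The raw query's columns match the dataclass fields in order,
# so the "mapping" is a one-liner -- no custom stitching code.
rows = conn.execute("SELECT id, username FROM forum_user WHERE id = ?", (1,))
users = [ForumUser(*row) for row in rows]
print(users[0].username)  # -> alice
```

The "stitching" problem only appears if the query's shape diverges from the model's; keep them aligned and the mapping stays trivial.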
> you automatically invalidate one of an ORM's biggest selling points, which is being SQL-database agnostic.
I haven't heard anyone talk seriously about database-agnosticism since the very early 2000s. Maybe some commercial products still try (choose MS or Oracle!), but it's rare nowadays.
The primary selling point of an ORM is that it abstracts marshaling and unmarshaling rows to and from entities: instantiating entities from relational storage and persisting them back to it.
> And by "drop into", this typically means writing custom stitching code that stitches the SQL cursor results back into the models again. It's rarely straightforward.
That's not typical in most uses I've seen. Far more typical are things like:
- Go straight to SQL for reporting, since that's what SQL does. Useful in reporting contexts, and also for list/filter UI screens.
- Use raw SQL to query a list of entity IDs for updating based on some complex criteria. Iterate over the identifiers and perform whatever logic you need to before letting the ORM handle all the persistence concerns.
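A rough sketch of that second pattern in Python, with sqlite3 and placeholder orm_load/orm_save functions standing in for a real ORM session (every name here is an illustrative assumption): raw SQL selects the IDs under some complex criteria, and the ORM-ish layer handles per-entity persistence.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account "
             "(id INTEGER PRIMARY KEY, balance REAL, flagged INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO account (id, balance) VALUES (?, ?)",
                 [(1, -50.0), (2, 120.0), (3, -10.0)])

# Step 1: raw SQL does what it's good at -- selecting IDs by some
# criteria (a trivial predicate stands in for the "complex" one).
ids = [row[0] for row in conn.execute("SELECT id FROM account WHERE balance < 0")]

# Step 2: hypothetical stand-ins for an ORM session's load/save.
def orm_load(conn, account_id):
    row = conn.execute("SELECT id, balance, flagged FROM account WHERE id = ?",
                       (account_id,)).fetchone()
    return {"id": row[0], "balance": row[1], "flagged": row[2]}

def orm_save(conn, account):
    conn.execute("UPDATE account SET balance = ?, flagged = ? WHERE id = ?",
                 (account["balance"], account["flagged"], account["id"]))

# Iterate over the identifiers, run domain logic, let the
# persistence layer handle the write-back.
for account_id in ids:
    account = orm_load(conn, account_id)
    account["flagged"] = 1  # domain logic runs here
    orm_save(conn, account)
```

The point of the split: the query stays in SQL where it's expressive, while updates go through one code path so validation, hooks, and change tracking aren't bypassed.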
> I haven't heard anyone talk seriously about database-agnosticism since the very early 2000s.
Do you use the same database engine for your unit and integration testing as you do in production? I don't. I use sqlite for unit and local integration testing, and aurora-mysql for production.
As a side note, I quite literally can't use aurora-mysql for local unit and integration testing. It doesn't exist outside AWS.
Unit testing code that touches the database is not useful, and in fact it usually indicates a design flaw: code that reads data from or changes data in the DB should be self-contained.
An integration test that doesn't use the same DB as production is unsatisfactory.
Using a DB inside your unit tests is an antipattern and arguably a violation of the concept of a unit test in the first place.
Integration tests should run against a test environment, otherwise, what integration are you testing? I don't see the value in writing integration tests that test the integration between my code and a one-off integration test DB that exists solely for the purpose of integration testing.
There is no syntactic difference between Aurora MySQL and regular MySQL until you start interacting with S3.
Also, if you're using only the subset of MySQL that SQLite supports, you're missing out on some real optimizations for bulk data loads like "INSERT IGNORE ..." and "INSERT ... ON DUPLICATE KEY UPDATE ...". Beyond that, the behavior of certain queries, data types, and constraints is inconsistent between MySQL and every other database in existence.
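To make the dialect divergence concrete, here's a small runnable sketch against SQLite: MySQL's spelling of "insert, ignoring duplicates" is a parse error for SQLite, which spells the same intent differently. So a test suite running on SQLite can't exercise the MySQL statement at all.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
conn.execute("INSERT INTO kv VALUES ('a', 'old')")

# MySQL's spelling is rejected outright by SQLite's parser:
try:
    conn.execute("INSERT IGNORE INTO kv VALUES ('a', 'new')")
except sqlite3.OperationalError as e:
    print("SQLite rejects MySQL syntax:", e)

# SQLite's own spelling of the same intent:
conn.execute("INSERT OR IGNORE INTO kv VALUES ('a', 'new')")

# The existing row wins in both dialects, but the SQL that gets
# you there is not interchangeable.
print(conn.execute("SELECT v FROM kv WHERE k = 'a'").fetchone()[0])  # -> old
```

The same goes for "ON DUPLICATE KEY UPDATE" (MySQL) versus "ON CONFLICT ... DO UPDATE" (SQLite/Postgres): equivalent intent, incompatible syntax.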
Finally, you can’t really do performance testing across databases.
> Do you use the same database engine for your unit and integration testing as you do in production? I don't. I use sqlite for unit and local integration testing, and aurora-mysql for production.
The PHP ORM Doctrine supports a custom SQL-like language called DQL that integrates your defined models and relationships, and converts queries to the correct SQL dialect for your backing store.
Queries look like:
SELECT u FROM ForumUser u WHERE (u.username = :name OR u.username = :name2) AND u.id = :id
It's certainly limiting, though you can write (or find implementations[0] of) custom types[1] that can be pretty powerful (void where prohibited, limitations apply).
I've always thought the "being SQL database agnostic" theory of ORMs was more about a development team being able to choose from some common choices than about apps being portable in practice.
Yeah, I've never seen any complex application using an ORM that was easy to port to another database. Hell, one company I was at switched from MySQL to MariaDB, and even that took some work despite the two being almost the same thing. Switching to a substantially different database, e.g. from MySQL to Postgres, is even harder. I'm also a believer in using a database's features when it makes sense, rather than limiting myself to the standardized subset of SQL just in case I might want to change databases later.