Hacker News

I haven't had any problem testing with RDBMS, so I'm wondering if there's something I'm not understanding.

For me, a testing script will load a clean schema+procedures with no data (empty tables), and the schema never changes. Each test inserts a handful of rows as needed in relevant tables, runs the query, and then checks the query result and/or changed table state. Then DELETE all table contents, and move on to the next test. Finally delete the entire test database once you're done.

If you have queries that alter tables, then each test starts with a brand-new schema and deletes the whole database once done.

The only kinds of tests this doesn't scale to are performance tests, e.g. is this query performant on a table with 500 million rows? But to me that's not the domain of unit tests, but rather full-scale testing environments that periodically copy the entire prod database.

So I'm curious why you consider an RDBMS to be hyper-interconnected state with interconnected tests?



My group is working on getting testing set up on the DB of a highly configurable SaaS product that's heavily normalized, so configuration for every feature is smeared across dozens of tables. It dates back to the "use procs for everything" era of development.

Thus, the code to set up preconditions would be as complicated as the entire service tier of the admin screens. So instead of per-test inserts, it makes sense to write the test scripts as:

1. "set up known good mostly-blank DB"

2. "Test the CREATE methods"

3. "Test the UPDATE methods"

4. "Test the DELETE methods"

5. "teardown"

* Obviously it's not simple CRUD; this is just a simplification of how it goes.

It's not that this is an ideal workflow, it's just that "worse is better" here. This lets us get testing ASAP and move forward with confidence.


Of course, schemas change, and it's the migration of live data alongside those changes that is singularly most likely to break things. Will all your db access code still work? If you're normalizing everything through stored procedures, it gets somewhat harder still.

It's not just testing a single build-up... but build-ups and migrations from deployed, working versions. Some solutions have different versions of an application/database at different clients that will update to "approved" versions in progress. (Especially in govt work)
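One way to sketch that: rebuild each previously-shipped schema version, seed it with "live" data, apply the pending migrations, and check the access code still works. The version numbers, table, and DDL below are all invented for illustration; sqlite3 again stands in for the real RDBMS.

```python
import sqlite3

# Hypothetical migrations, keyed by the schema version they upgrade FROM.
MIGRATIONS = {
    1: "ALTER TABLE accounts ADD COLUMN email TEXT;",
    2: "ALTER TABLE accounts ADD COLUMN archived INTEGER DEFAULT 0;",
}

def build_version(version):
    """Recreate the schema as it shipped at a given version."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT)")
    for v in range(1, version):
        conn.executescript(MIGRATIONS[v])
    return conn

def migrate_to_latest(conn, from_version):
    """Apply every migration between the deployed version and current."""
    for v in range(from_version, max(MIGRATIONS) + 1):
        conn.executescript(MIGRATIONS[v])

# Clients may be on any approved version; test the upgrade path from each.
for deployed in (1, 2, 3):
    conn = build_version(deployed)
    conn.execute("INSERT INTO accounts (name) VALUES ('live row')")  # live data
    migrate_to_latest(conn, deployed)
    # Post-migration, existing rows and new columns must both be usable.
    row = conn.execute("SELECT name, archived FROM accounts").fetchone()
    assert row == ("live row", 0)
    conn.close()
```

The key point is that the matrix of tests is (deployed versions) x (migration paths), not just the latest build.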


Ah, got it. That workflow makes a lot of sense in that case -- not perfect but the most pragmatic.

I think the concepts of "application-owned RDBMS" vs "3rd party product that uses its own RDBMS" are the source of a lot of confusion sometimes.

If you manage your own database, testing shouldn't usually be particularly difficult. But when you're integrating with a third-party product, you generally can't effectively do unit tests. Just end-to-end integration tests of the type you're describing.



