Hacker News

Maybe I'm not seeing it, but why do none of these "Postgres durable workflow" packages ever integrate with existing transactions? That's the biggest flaw of Temporal, and it seems so obvious: if they hooked into transactions, starting a workflow would be atomic with your other operations, and you wouldn't have to manage idempotency yourself (or worry about handling "already started" errors and tying up DB connections because you launched inside a transaction).


(DBOS co-founder here) DBOS does exactly this! From the post:

DBOS has a special @DBOS.Transaction decorator. This runs the entire step inside a Postgres transaction, which guarantees exactly-once execution for the step's database operations.
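A rough sketch of why running the step and its completion record in one transaction yields exactly-once semantics. This is not DBOS's actual implementation; sqlite3 stands in for Postgres here, and the table and column names are invented for illustration:

```python
import sqlite3

# The key idea: the step's writes and its checkpoint commit in ONE
# transaction, so a retried step either finds its checkpoint and skips,
# or finds nothing and re-runs from scratch. Never a partial execution.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("CREATE TABLE step_outputs (workflow_id TEXT, step_id INTEGER,"
             " PRIMARY KEY (workflow_id, step_id))")
conn.execute("INSERT INTO accounts VALUES ('alice', 100)")
conn.commit()

def transactional_step(workflow_id: str, step_id: int) -> None:
    """Credit alice by 10, exactly once per (workflow_id, step_id)."""
    try:
        with conn:  # one transaction for the write AND the checkpoint
            conn.execute("INSERT INTO step_outputs VALUES (?, ?)",
                         (workflow_id, step_id))
            conn.execute("UPDATE accounts SET balance = balance + 10"
                         " WHERE id = 'alice'")
    except sqlite3.IntegrityError:
        pass  # checkpoint already committed: step already ran, skip it

transactional_step("wf-1", 0)
transactional_step("wf-1", 0)  # simulated retry after a crash: no-op
balance = conn.execute("SELECT balance FROM accounts"
                       " WHERE id = 'alice'").fetchone()[0]
print(balance)  # credited once, not twice
```

The duplicate-key violation on the checkpoint row is what turns a retry into a no-op: the retry's transaction rolls back as a unit, so the balance update never applies twice.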


Sorry, I meant with transactions external to the workflow steps. E.g., I can select, insert, and launch a workflow in an HTTP handler.


Yeah, you can launch workflows directly from an HTTP handler. So here's some code that idempotently launches a background task from a FastAPI endpoint:

    from dbos import DBOS, SetWorkflowID

    @app.get("/background/{task_id}/{n}")
    def launch_background_task(task_id: str, n: int) -> None:
        with SetWorkflowID(task_id):  # task_id doubles as an idempotency key
            DBOS.start_workflow(background_task, n)  # start the workflow in the background
Does that answer your question?


Not OP, but I don't think that's it.

Suppose you had an existing postgres-backed CRUD app with existing postgres transactions, and you want to add a feature to launch a workflow _atomically_ within an existing transaction, can you do that? (I.e. can the DBOS transaction be a nested transaction within a transaction defined outside the DBOS library?)


Got it! I'm not sure if that was what OP was asking, but it's a really interesting question.

We don't currently support launching a workflow atomically from within an existing database transaction. I'd love to learn about the use case for that!

We do support calling a database transaction as a workflow step, which executes entirely atomically and exactly-once: https://docs.dbos.dev/python/tutorials/transaction-tutorial


In the apps I've written, generally the user interaction with the API is synchronous and has some immediate effect (e.g. uploading a file - the file is committed and guaranteed accessible by the time the HTTP success response is sent, giving the system strong causal consistency) and within that same transaction I enqueue the related background task (e.g. processing the file) so that we never get an uploaded file with no associated background task (or vice-versa).

(The background task may involve its own transactions when dequeued later, and spawn further background tasks, etc)
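The pattern described above (commit the uploaded file and enqueue its processing task in one transaction) is the classic transactional outbox. A minimal sketch, with sqlite3 standing in for Postgres and made-up table names:

```python
import sqlite3

# Transactional outbox sketch: the file row and its background-task row
# commit in a single transaction, so you can never observe an uploaded
# file with no task, or a task with no file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (id TEXT PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE task_queue (id INTEGER PRIMARY KEY,"
             " kind TEXT, file_id TEXT)")

def upload_file(file_id: str, name: str, fail_before_enqueue: bool = False) -> None:
    try:
        with conn:  # single transaction: file row + task row
            conn.execute("INSERT INTO files VALUES (?, ?)", (file_id, name))
            if fail_before_enqueue:
                raise RuntimeError("crash between insert and enqueue")
            conn.execute("INSERT INTO task_queue (kind, file_id)"
                         " VALUES ('process', ?)", (file_id,))
    except RuntimeError:
        pass  # transaction rolled back: neither row exists

upload_file("f1", "report.pdf")
upload_file("f2", "photo.png", fail_before_enqueue=True)

files = {r[0] for r in conn.execute("SELECT id FROM files")}
tasks = {r[0] for r in conn.execute("SELECT file_id FROM task_queue")}
print(files, tasks)  # f1 has both rows; f2 has neither
```

A separate dequeue worker would later claim rows from `task_queue` in its own transaction, which is where the "background task may involve its own transactions" part comes in.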


Got it! So you can make the entire HTTP endpoint a DBOS workflow that performs the synchronous work and then launches a background task. Something like this:

    @app.get("/endpoint")
    @DBOS.workflow()
    def http_workflow():
        synchronous_task() # Run the synchronous task
        DBOS.start_workflow(background_task) # Start the background task asynchronously

This is atomic in the sense that if the synchronous task runs, the background task will always also run.
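One way to picture that guarantee: the workflow checkpoints each completed step, so a crash after the synchronous task but before the background launch is recovered by re-running the workflow and skipping already-completed steps. A simplified model of the mechanism, not DBOS's internals:

```python
# Simplified durable-execution model: each completed step is recorded in a
# checkpoint store (in DBOS this state lives in Postgres), so recovery
# re-runs the workflow function but never re-executes a completed step.
checkpoints = {}
calls = {"sync": 0, "background": 0}

def run_step(workflow_id, step_name, fn):
    key = (workflow_id, step_name)
    if key not in checkpoints:  # a checkpointed step is never re-run
        checkpoints[key] = fn()
    return checkpoints[key]

def synchronous_task():
    calls["sync"] += 1
    return "done"

def start_background_task():
    calls["background"] += 1
    return "launched"

def http_workflow(workflow_id, crash_after_sync=False):
    run_step(workflow_id, "synchronous_task", synchronous_task)
    if crash_after_sync:
        raise RuntimeError("simulated crash before the background launch")
    run_step(workflow_id, "start_background_task", start_background_task)

try:
    http_workflow("wf-1", crash_after_sync=True)  # first attempt crashes mid-way
except RuntimeError:
    pass
http_workflow("wf-1")  # recovery: re-run the workflow with the same ID
print(calls)  # each step ran exactly once despite the crash
```

So "atomic" here means atomic under recovery: the pair of steps always converges to both-ran, rather than both committing in one database transaction.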


It's kind of weird to do that though; it mixes up concerns. Why should an ordinary operation be coupled to a specific DB integration?



