A feature that may very well make me finally jump over to RoR. I've recently built quite a large site, and the only current bottleneck is when a few emails need to be sent off at the same time with attachments. Being able to add that to a "queue" and let the user continue browsing the site, instead of being stuck on a loading page (if only for a few seconds), would make the current setup ideal.
Incidentally - if anyone has any way of doing this in PHP without having to set up cron jobs (and not using node or its derivatives), I'm really open to any ideas!
I've got great news then: you can make the jump to RoR today! :)
This news isn't about Rails implementing its own background queue, but rather about creating a unified API for interacting with background queuing systems, of which there are many. Resque (crafted at GitHub) is probably the most popular: https://github.com/defunkt/resque.
Although certainly not without its issues, the most popular solution for that platform is Gearman (http://gearman.org). It's fairly ops-intensive, but it's the friendliest option for PHP that doesn't force you to resort to something like Stomp to interface with messaging (MQ) systems, which aren't optimally designed for job enqueueing, per se.
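For anyone who hasn't seen it, the fire-and-forget pattern with the pecl gearman extension is pretty small. A rough sketch, assuming a gearmand running on localhost and a made-up 'send_email' job name:

    <?php
    // Producer side (your web request): enqueue and return immediately.
    $client = new GearmanClient();
    $client->addServer('127.0.0.1', 4730);            // default gearmand port
    $client->doBackground('send_email', json_encode(array(
        'to'      => 'user@example.com',
        'subject' => 'Welcome',
    )));

    // Worker side (in practice a separate long-lived process, e.g. under supervisord):
    $worker = new GearmanWorker();
    $worker->addServer('127.0.0.1', 4730);
    $worker->addFunction('send_email', function (GearmanJob $job) {
        $payload = json_decode($job->workload(), true);
        // do the slow work here: build attachments, send, log failures
    });
    while ($worker->work());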
With this commit, you can if you want. This code decouples Rails from external queue solutions. If your application needs to interact with a queue, you only have to write it once and you can use a standard API to do it. If your external queue solution (your DB queue code) conforms to the API, you can switch it out with another conforming solution when your needs call for it.
As someone pointed out in the OP comments, this is like Rack for queues.
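To make that concrete for the PHP folks upthread (the Rails API itself is Ruby, and these names are invented), the pattern is simply: application code talks to one small queue interface, and the backend behind it can be swapped without touching the callers.

    <?php
    // The one interface the application codes against.
    interface JobQueue {
        public function push($job, array $payload = array());
    }

    // Development/test backend: run the job inline, right now.
    class SynchronousQueue implements JobQueue {
        private $handlers;
        public function __construct(array $handlers) { $this->handlers = $handlers; }
        public function push($job, array $payload = array()) {
            call_user_func($this->handlers[$job], $payload);
        }
    }

    // Production backend: persist the job for a separate worker to pick up.
    class DatabaseQueue implements JobQueue {
        private $pdo;
        public function __construct(PDO $pdo) { $this->pdo = $pdo; }
        public function push($job, array $payload = array()) {
            $stmt = $this->pdo->prepare(
                'INSERT INTO jobs (name, payload, created_at) VALUES (?, ?, NOW())'
            );
            $stmt->execute(array($job, json_encode($payload)));
        }
    }

    // Callers never change when the backend does:
    // $queue->push('send_welcome_email', array('user_id' => 42));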
A queue is FIFO-oriented, while a database is closer to least-recently-used (LRU). Using a database as a queue works, but it's not going to be the most efficient tool.
Where a queue is really useful is in converting foreground work to background work, so that you can optimize for throughput rather than having to leave free capacity on your foreground servers for 'random arrivals'. Think of it as the same problem as the bursty traffic a bank machine gets, and why you always seem to have to line up.
I too rolled my own, and while trivial to create, it's always made me uneasy. If there's a bug, I won't know about it; Amazon SES will reject the emails if they're sent all at once, or perhaps the calls won't be made at all.
I ended up doing a little status page for my newsletter; I set it up to auto-refresh in Opera, and each refresh sends 10 emails and prints their statuses/destinations/titles as it goes (it's also rate-limited in memcached). I chuck that onto the laptop or a third monitor and leave it for a couple of hours, keeping an eye on it as it goes.
Using something off the shelf I could trust would be much nicer.
Incidentally, Amazon SES has limits on how many mails you can send a second - even after your account is confirmed by them. You can see this limit on your control panel. Mine shows around 5 mails per second.
So you will have to add some kind of throttling to make it work.
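The bluntest way to stay under that ceiling is simply to pace the sends. A sketch (the 5/sec default mirrors the limit mentioned above; a real setup should also catch SES throttling errors and back off, since the limit may be evaluated over a longer window than one second):

    <?php
    // Space the sends out so we never exceed $maxPerSecond calls per second.
    // $send is whatever actually talks to SES (or mail(), or an SMTP library).
    function send_throttled(array $messages, $send, $maxPerSecond = 5)
    {
        $gap = (int) ceil(1000000 / $maxPerSecond);   // microseconds between sends
        foreach ($messages as $message) {
            call_user_func($send, $message);
            usleep($gap);
        }
    }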
Yep, which was a big reason why I did it the way I did. I was paranoid that I'd make a slip-up in the throttling code and send too many emails (I guess they actually check it over a 10- or 60-second window), and the rest of the batch wouldn't go through properly.
Just move your code to a register_shutdown_function() call and it will execute after the output has been sent, but without having to deal with forking a background PHP process or running out-of-context.
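A minimal sketch of that (the recipient list and mail() call are placeholders; whether the browser stops waiting before the callback runs depends on your SAPI, though under PHP-FPM fastcgi_finish_request() will flush the response out first):

    <?php
    $recipients = array('alice@example.com', 'bob@example.com');

    // Runs after normal script execution finishes.
    register_shutdown_function(function () use ($recipients) {
        foreach ($recipients as $to) {
            mail($to, 'Your report', 'Report attached.');   // the slow part
        }
    });

    echo 'Your emails are on their way.';

    if (function_exists('fastcgi_finish_request')) {
        fastcgi_finish_request();   // PHP-FPM only: send the response now
    }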