Hacker News | tty7's comments

I used PouchDB with CouchDB as the datastore for a digital wallet, used to store zero-knowledge proofs for proof of age, etc.

Worked a treat


The irony being that you think Twitter updates are professional.


My take: I expect a "review" of every code change, whereas an "ask" is more based in the context of what I am currently doing.

Crappy example:

1 - Adding a user to a group, I expect a review - did I get the name correct?

2 - Adding a group, I expect a review & an ask - does the group make sense for our context, and is it named correctly?


Coming into an "enterprise" organisation, this is a decent generalisation of the pains of opening a PR.

Any recommendations on how to implement this? Tags would work, but only if you understand the context of ship/show/ask.

100%. Opening a PR, regardless of whether it's literally just "merge this bad boy", ends up being a "show/ask" instead of a "ship" where I am currently working.


DIDs do not specify that keys should be written to a public chain; another commenter mentioned the VC spec.

Another thing to look at is the did:peer method; it allows you to have a direct connection with a party for secure communication. Either party could root their DID against a public key (e.g. a public organisation), but that is not required.


Let me use the "What programming language should I start with?!" analogy :)

Just start reading the book (any discrete mathematics book!) :)


I'd disagree. They're better off getting some feedback on books that might better suit their learning style and background. Otherwise they may tire quickly and not learn well enough.


Or to put it mathematically, the optimal amount of exploration time in this explore/exploit problem is far less than you might think.


Actually, having gone through multiple programming books, I disagree with that. I highly recommend HTDP because of its design recipes. The book shows you an approach to solving problems with programs rather than just the syntax of a language. This book has had the largest impact on my (very humble) approach to programming.


I get the following error:

  Error on line 1. Unspecified reference named 'HTDP' 
  in the expression 'I highly recommend HTDP'
Any help?



That doesn't really make any sense; Backblaze were not limiting you to a single thread - you were...


You do need access to an index/DB of all files in a bucket in order to delete them in parallel. Otherwise you're stuck paginating with the B2 API.


You need a DB of all of the dead entries that need to be deleted, and that’s a fine thing to have.

There are lots of problem spaces where deletion is expensive and so is time shifted not to align with peak system load. Some sort of reaper goes around tidying up as it can.

But I think by far my favorite variant is amortizing deletes across creates. Every call to create a new record pays the cost of deleting N records (if N are available). This keeps you from exhausting your resource, but also keeps read operations fast. And the average and minimum create time is more representative of the actual costs incurred.

Variants of this show up in real-time systems.
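A minimal sketch of the amortized-delete idea above (the class and its fields are hypothetical, just to illustrate the pattern): each create pays for reclaiming up to N logically deleted records, so cleanup cost is spread evenly across writes instead of spiking in a separate reaper pass.

```python
import itertools

class AmortizedStore:
    """Toy record store (illustrative, not a real library): delete() is a
    cheap logical delete, and each create() reclaims up to `batch` dead
    records, amortizing deletion cost across creates."""

    def __init__(self, batch=2):
        self.batch = batch
        self.live = {}                 # id -> value
        self.dead = []                 # ids marked dead, not yet reclaimed
        self._ids = itertools.count()

    def delete(self, rec_id):
        # Cheap logical delete: just mark the record dead.
        self.dead.append(rec_id)

    def create(self, value):
        # Amortize: reclaim up to `batch` dead records per create.
        for _ in range(min(self.batch, len(self.dead))):
            self.live.pop(self.dead.pop(), None)
        rec_id = next(self._ids)
        self.live[rec_id] = value
        return rec_id
```

Because the reclaim work is bounded per call, create latency stays predictable and reflects the true cost of keeping the store tidy.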


My case was really simple. I was done with my ML pipeline and nuked the database, but pics in B2 remained with no quick way to get rid of them and/or to stop the recurring credit card charges.

IMO an "Empty" button should have been implemented by Backblaze.


Would this technique have been faster?

A single pass: paginating through all entries in the bucket without deletion, just to build up your index of files. And then using that index to delete objects in parallel.
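The two-phase approach might look like the sketch below. `list_page` and `delete_object` are hypothetical stand-ins for the storage API's pagination and delete calls (this is not the actual B2 SDK interface), injected so the phases are easy to see:

```python
from concurrent.futures import ThreadPoolExecutor

def empty_bucket(list_page, delete_object, workers=16):
    """Sketch: paginate once to build an index of keys, then delete in
    parallel. `list_page(cursor)` must return (keys, next_cursor), with
    next_cursor None on the last page; `delete_object(key)` deletes one
    object. Both are assumed placeholders, not real B2 calls."""
    # Phase 1: sequential pagination just to collect every key.
    keys, cursor = [], None
    while True:
        page, cursor = list_page(cursor)
        keys.extend(page)
        if cursor is None:
            break

    # Phase 2: fan the deletes out across a thread pool.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(delete_object, keys))
    return len(keys)
```

The listing pass stays sequential because pagination cursors force ordering; only the deletes, which are independent, parallelize.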


I believe S3 is the same way.


S3 has an "Empty bucket" button, unlike B2.


Disclaimer: I work at Backblaze.

> no way to empty a bucket.

Backblaze currently recommends you do this by writing a “Lifecycle rule” to hide/delete all files in the bucket, then letting Backblaze empty the bucket for you on the server side within 24 hours: https://www.backblaze.com/b2/docs/lifecycle_rules.html
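As I read the linked lifecycle-rules page, an "empty this bucket" rule would look roughly like the dict below (field names are per that documentation; treat the exact values as a sketch to verify against the docs before applying):

```python
# Lifecycle rule that hides every file ~1 day after upload and deletes
# it ~1 day after hiding, effectively emptying the bucket server-side.
# Apply it to the bucket via the B2 console or API.
empty_bucket_rule = {
    "fileNamePrefix": "",              # empty prefix = match every file
    "daysFromUploadingToHiding": 1,    # hide files one day after upload
    "daysFromHidingToDeleting": 1,     # delete hidden files a day later
}
```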


Check. It. In. Anyway.

Amen


Huh, what do you chroot into to apt-get/dnf?


I think you can do that bootstrapping with debootstrap, for example. (I'm going from memory.)


Absolutely correct. I've got a cross-compile toolchain set up right now that needs Debian (Ubuntu won't do because of different glibc versions in the cross-compile toolchains). I used debootstrap to make a basic Debian Stretch filesystem, chroot into that, and apt-get the remaining pieces and run the build. Works like a charm, no Docker required. And, unlike containers, it's intentionally persistent so future builds go very quickly.
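The workflow described above, as a rough transcript (the target directory, mirror URL, and toolchain package are illustrative, and both debootstrap and chroot need root):

```shell
# Build a persistent Debian Stretch root filesystem (path is an example).
sudo debootstrap stretch /srv/stretch-build http://deb.debian.org/debian

# Enter the chroot and install the remaining build dependencies.
sudo chroot /srv/stretch-build /bin/bash
apt-get update
apt-get install -y build-essential crossbuild-essential-armhf  # example toolchain

# Run the build inside the chroot. Because the filesystem persists
# between builds, later runs skip all of the setup above.
```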


I think you missed my point :)


That's what it seems like :o :P


You need to read the _whole_ comment. dingaling said it was a technical failure and didn't even imply it was due to staff; they were just commenting on the general public's opinion at the time.

I know people don't read the article, but at least read the comment lol

