My VPS was purged due to a platform hack. I did not keep a backup, and I am trying to figure out what to do. There is no plug-and-play solution for backup. From what I understand, I have to set up rsync and dump files via cron to a Raspberry Pi. But there is no snapshot-like feature.
I am using KVM from CloudCone (their virtualization software was hacked about a week ago), and I am using an RPi 4.
Then I need to set up my old website again, which is a pain in the butt. I hard-coded cron jobs and a git-based auto-deployment feature (I think).
You mention rsync which can be fine. But there are tons of other solutions, many that will have snapshot features. I use borg backup, for instance. https://www.borgbackup.org/
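The snapshot-like behaviour missing from plain rsync usually comes from hard-linking unchanged files against the previous run (rsync's `--link-dest` flag; borg does this more cleverly with deduplication). A minimal sketch of the idea in Python, with hypothetical names, would be:

```python
import filecmp
import os
import shutil
import time

def snapshot_backup(src: str, dest_root: str) -> str:
    """Copy `src` into a new dated folder under `dest_root`,
    hard-linking any file that is unchanged since the previous
    snapshot (the same trick as rsync --link-dest)."""
    snaps = sorted(os.listdir(dest_root)) if os.path.isdir(dest_root) else []
    prev = os.path.join(dest_root, snaps[-1]) if snaps else None
    new = os.path.join(dest_root,
                       f"{time.strftime('%Y%m%d-%H%M%S')}-{len(snaps):03d}")
    for dirpath, _dirs, files in os.walk(src):
        rel = os.path.relpath(dirpath, src)
        os.makedirs(os.path.join(new, rel), exist_ok=True)
        for name in files:
            s = os.path.join(dirpath, name)
            d = os.path.join(new, rel, name)
            p = os.path.join(prev, rel, name) if prev else None
            if p and os.path.exists(p) and filecmp.cmp(s, p, shallow=False):
                os.link(p, d)       # unchanged: hard link, near-zero space
            else:
                shutil.copy2(s, d)  # new or changed: real copy
    return new
```

Each dated folder under the destination looks like a full copy, but unchanged files share the same inode, so keeping weeks of snapshots on the Pi stays cheap.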
Also, look into scripting your server setup with tools like Ansible or PyInfra. There is always the risk that bad things happen to servers, and when you need a new server it's great to be able to spin things up in a matter of minutes. Tools like these are professional best practice these days.
In fact, if you have a scripted server setup and a server that doesn't collect data itself you may not even need backups. What is there to backup? Just spin up a new server with your scripts and carry on.
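To make that concrete, a scripted setup can be as small as a single playbook. A hypothetical Ansible sketch (package names, repo URL, and paths are all illustrative):

```yaml
# Hypothetical playbook: packages, repo, and paths are illustrative.
- hosts: web
  become: true
  tasks:
    - name: Install nginx and git
      ansible.builtin.apt:
        name: [nginx, git]
        state: present
    - name: Deploy the site from git
      ansible.builtin.git:
        repo: https://example.com/me/site.git
        dest: /var/www/site
    - name: Ensure nginx is running
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Run it against a fresh VPS with `ansible-playbook` and the machine is rebuilt from scratch; the playbook itself lives in git, so it survives the server.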
I am sorry for the late reply. Everyone says "you need backup", but that is the thing: I just need scripts. But I will look into your recommendation for sure. Thank you!
I think the "Heroku story" was less about technical limitations and more about everything except technical limitations. More than a decade ago, I started learning and building on Heroku and hosted all my side projects and client projects there. Then they got acquired, and I was naive; then they removed their free tier, and that broke my trust.
I primarily worked on PoC/MVP development, bringing ideas to something barely tangible. Heroku's free-tier decisions put up a barrier for developers building on their platform: pay first, develop later, just like the rest of the industry.
After that, I exited containerized platform-based application development entirely, because sustainability mattered more than convenience or that weird developer philosophy of "I must not pay because I can find a way". For me, containerized application platforms were about PoCs and MVPs. If there was growth, then I or the client could pay for the convenience. But if there was nothing, it was pretty easy to delete the project.
Then I committed to replicating the Heroku experience with a small VPS, backing up via rsync, and moving from PostgreSQL to SQLite. I can even charge clients for hosting (+ maintenance) on my VPS.
I do not know; to me, containerized application platforms are limited by commercial challenges rather than technical ones. I see tons of containerized application platforms, but the trust has been eroded by a single company.
I have changed my development setup and laid the groundwork to avoid committing to these platforms. Sustainability over convenience.
Sure, I understand and respect the folks at fly.io, Render, Railway, and even the open-source variants of these companies (Caddy etc.). But there is no sustainability guarantee for these platforms. It was not just about the "free tier"; to me it extends to a philosophical point about building applications in general. Sure, there could be a new era in which AI makes MVP/PoC development easy through hosting on containerized application platforms, but that is a tangent.
If Heroku were doing everything right, there would not be a dozen application platforms out there. But they made mistakes and, in my opinion, made the entire containerized application platform model untrustworthy.
How does the Govdirectory project work? Does it use web scrapers to collect the contact details? I checked the bot repository, and it was empty. https://github.com/govdirectory/bots I would like to know about their methodology and, to be honest, how it helps me.
There are only 46 countries listed there. I do not intend to be critical of the idea, but having the contact information of government institutions is not an effective way to get things done, in my opinion. Most government websites and contact details are already quite accessible because they are centrally built on a national IT system and a unified software service (usually). In second- and third-world countries, the contact details are usually quite useless: you have to find the right person and sit in front of their office.
The issue with the government contact repository is that it does not connect 'I have this problem' and 'who do I reach out to'. From my experience, you have to invest time in doing research and finding out who to reach out to.
I'm glad the name of my native language is written correctly. In many cases, people say "Farsi", which is offensive to many Iranians because it's the Arabic version of the word "Parsi" (unlike Persian, Arabic doesn't have "p", "g", "ch", "zh").
It's like someone calling English "Anglaise" because that's how the French say it.
PS: Contrary to common belief, Persian and Arabic are totally different languages, though they have borrowed words from one another (think English and French). Persian is an Indo-European language whereas Arabic is Aramaic (same roots as Hebrew).
> It's like someone calling English "Anglaise" because that's how the French say it.
That is the case for some other languages, though. We call the language German rather than Deutsch because Germani was the Latin name for tribes in the area, for example.
Or native names get modified too -- in English we don't call it Espanish, just Spanish, even though it comes from español.
The names of languages in other languages tend to get modified in tons of different and random ways for lots of reasons. Is there really a reason to take offense at it?
It doesn't bother me that Italians call me an americano instead of an American. It's just a letter change. So why is it so bothersome that it's called Farsi rather than Parsi? Can't the change from "p" to "f" be seen as an interesting historical quirk, due to the fascinating effect of Arabic on European languages in the Middle Ages? At the same time that we got Arabic words like "algebra" and "alcohol"?
Interesting. This is the first time I’m hearing that Farsi is offensive to Iranians. None of my Iranian friends have objected, so I’m curious if I’m missing something.
Wikipedia says "Farsi" should be avoided in Western languages, but what about others? Persian is called Farsi in the Indian subcontinent due to the deep historical connections we share. We have proverbs saying Farsi is the sign of a learned person, etc.
Small nitpick: Arabic is from a different branch of the Semitic languages than Aramaic or Hebrew (which are very similar to each other).
And TIL that Aramaic replaced Hebrew in Judea because the Persian Empire maintained Aramaic as the official administrative language, and Jews brought it back with them on returning from the Babylonian captivity.
Looking at the Persian IPA table[0] for the letters you wrote, we get `/ʒ/` for `ژ` and `/tʃʰ/` for `چ`
In Arabic[1], there are two close phonemes: `/dʒ/` for `ج` and `/ʃ/` for `ش`
The difference between these phonemes is minimal; they are practically affricated[2] versions of each other (a `d` or `t` preceding a `ʒ` or `ʃ`), so it seems these sounds are present in both Arabic and Persian.
These variations also fall within the dialectal distribution of either language. For example, `ج` is pronounced `/dʒ/` in Algeria and `/ʒ/` in Morocco.
You have to scroll down a couple pages' worth before you even realize this might be SO long you need to collapse it. So then you've got to scroll back UP a couple pages, find the teensy [-] link...
It's enough to just post the link to the list of languages. The list itself doesn't belong in a comment here, when it's that long.
Thorium power plants, nuclear fusion power plants, solid-state batteries, quantum computing, and TSMC producing chips in the US are news that should be considered speculative until they go into production. There has been plenty of coverage, but little real-world utility or impact from these technologies so far.
The projects I worked on always had a mechanism to dump data into a proper database. For example, I built a daily scraper to collect some of the inventory. I didn’t keep all the data in a single sheet. Instead, all the data was stored in a cloud-based managed SQLite or PostgreSQL database, or sometimes a local SQLite database. Only the day's data was stored in Google Sheets because the client wanted to see the spreadsheet themselves and have the UI be accessible for their users.
In a separate comment, I mentioned how Mozilla should have been more like Proton with their cloud storage, VPN, password manager, and cloud office suite.
In fact, they should have done that a decade ago.
Mozilla has been around since the late '90s and should have evolved beyond just being a browser company. They launched a VPN service when VPNs were already everywhere, and they did the same with a bookmark manager when others were already offering similar solutions. Mozilla is always catching up, never leading, and that's a common issue with many big open-source and free software companies. They often pretend to be a business that isn't heavily propped up by big tech donations.
If I were leading a browser company, my focus would have been aggressively directed towards small business software. I’d create an internet and privacy-focused affordable minimal business software suite that lives within the browser — a combination of Proton and Zoho. And I’d strongly avoid building things that should be browser extensions.
I've built quite a few dashboards while working on proof-of-concept feature/product engineering. Even though people often think I'm joking, almost always the backend database was a Google Sheet. Google Sheets has great API support, easy-to-add functionality, convenient reads and file dumps, works well with pandas/SQL, and has a universally appreciated UX. Data validation can be annoying if the "admin" enters data directly, but for building a proof of concept, nothing beats Google Sheets.

The data from Google Sheets would be passed through an API to a web UI/dashboard. The dashboard I built for end users was a simple Bootstrap (Vue-Bootstrap) table fed from the API, with enough easy-to-use filters and views to work out of the box. If the data was too large, I would use a templated snippet to convert the JSON into a card view. Ignoring long-term maintainability, this was one of the most foolproof ways to build dashboards. After that, I'd slap Firebase Auth on top, and that was it.
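The glue between the sheet and the table/card view is genuinely small. A hypothetical sketch of that layer in Python, assuming the rows have already been fetched from the Sheets API and zipped with the header row into dicts:

```python
def rows_to_cards(rows, filters=None):
    """Filter sheet rows (dicts: header cell -> value) and shape
    them for a card-view UI. Field names are illustrative."""
    filters = filters or {}
    selected = [r for r in rows
                if all(r.get(k) == v for k, v in filters.items())]
    return [{"title": r.get("name", "(untitled)"),
             "fields": {k: v for k, v in r.items() if k != "name"}}
            for r in selected]
```

The front end then just renders each dict as a card; the same function backs a table view if you skip the reshaping.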
I haven't worked on these types of projects for nearly two years now. Folks I know use Retool or Softr with an API connectivity platform like Portable, Pipedream, or Zapier. If you're staring at a spreadsheet on a daily basis, the next step should be looking into an app builder combined with an API connection platform.
Monzo (an online bank in the UK) lets you see all your transactions in a Google Sheet (with some caveats, e.g. interest earned is skipped). I wanted to make a total-wealth chart over all the years.
It's so damn complicated in Google Sheets. In MS Excel I could simply make a pivot chart, apply any aggregation over days/weeks, and be done with it. But in Sheets I had to make a new aggregate column, filter the data in another new column, and then make a chart from that.
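For what it's worth, once the data leaves the sheet, a pivot-style monthly rollup is a couple of lines of pandas (toy data; column names assumed from a Monzo-style export):

```python
import pandas as pd

# Toy stand-in for the exported transactions sheet.
tx = pd.DataFrame({
    "date": pd.to_datetime(["2024-01-05", "2024-01-20", "2024-02-03"]),
    "amount": [100.0, -40.0, 60.0],
})

# Net flow per calendar month, then a running total = wealth over time.
monthly = tx.groupby(tx["date"].dt.to_period("M"))["amount"].sum()
wealth = monthly.cumsum()
```

`to_period("M")` plays the role of the pivot's date grouping, and `cumsum()` turns net monthly flow into the running wealth figure; switching to weeks is just `to_period("W")`.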
Your comment does explain that the API is why they export to Sheets and not Excel, but Google Sheets is way behind in ease of use.
This is the textbook answer, not something you can actually do on a live sheet with all transactions. If you want a different aggregate, e.g. week instead of month, or to skip the first year, or something like that, you will be making these columns all over again.
As I understand it, the appeal of Monzo's way is that it's live-updating, rather than having to log in to some horribly outdated online banking portal and manually export.
Yes, exactly. The Google Sheet is live. I couldn't just add a column to the live sheet either; I had to copy the sheet using a formula so that it's always up to date, and I still had to use two other sheets. The chart is in the fourth sheet.
You're fine with Google seeing complete information about your income and spending? I'm not one of those truly paranoid, but this seems like a bit too much.
What’s your risk model here? Google are not manually or automatically inspecting the contents of your spreadsheets in order to profile you and show you better-personalised ads; not enough people have such a spreadsheet for it to be worth it for them, I’m sure, and they also probably already know approximately how wealthy you are. Unless you’re under investigation for fraud or something, I doubt the government could get much use out of accessing your financial spreadsheets via Google compared to the information they already get directly from banks etc. So that only really leaves criminals (for whom I still can’t see a great incentive to read your spreadsheets), and I also don’t really think your Google account is much less secure than your computer’s local storage (if that’s the other place you would keep it).
We’ve learned over and over that tech companies do things which seemingly make no sense to us, but do to them. Not being able to imagine it is not sufficient. Nothing is deleted anymore, either, in hopes it will be useful later.
That said I don’t see a huge risk here, unless combined with lots of other data. Would probably avoid though.
I simply want minimal exposure of my personal data since I don't want to go through life thinking about risk models. Not only is it time consuming, but my creative juices simply don't flow when it comes down to exploiting people (nor do I want them to flow in that direction).
Most big tech companies very rarely/never let a human review private customer data. Therefore I'm fine handing all the data over to big tech companies.
I don't really understand any other point of view - if you're worried about a machine seeing your private stuff, why did you type it into a keyboard in the first place?
> if you're worried about a machine seeing your private stuff, why did you type it into a keyboard in the first place?
Who owns that machine is important though. I don't mind putting my card details in via my phone to buy something because there's a level of trust in the whole system (it's my phone, I have a level of trust that google is not going to steal my card details via android, and a level of trust that the shop is legit and will process my order.)
If some random person off the street asked me to type my card details into their phone that's a very different ball game as I don't inherently trust them.
I don’t care that much about card details. If I see a fraudulent transaction, I dispute it and it’s done. But my personal financial details/etc — that’s much different than a payment mechanism.
> Most big tech companies very rarely/never let a human review private customer data. Therefore I'm fine handing all the data over to big tech companies.
This isn't true. Google and Apple and others turn over user data to human analysts at NSA and FBI and others without search warrants all of the time, on hundreds of thousands of user accounts per year.
To be fine handing all the data over to big tech companies, you have to be fine handing all of the data over to US federal cops and intelligence services, too, because that's what giving the data (in non-e2ee form) to big tech means.
Actually, I have heard the exact opposite of what you are stating. Both Google and Apple fight very hard to avoid handing data to the authorities. They don't want to be seen as an easy conduit for government surveillance, or as a shill. How would that benefit their reputation? I know of one case where Google spent millions on lawyers fighting a government demand for access to an activist's email. Their FAQ here makes their policy pretty clear: https://support.google.com/transparencyreport/answer/9713961...
Apple’s own transparency report indicates they turn over data to the USG for over 100,000 different apple IDs each year in the no-warrant-or-probable-cause (FISA orders and NSLs) category.
(Mind you; this includes device location histories due to geoip logs, unique identifiers, iMessage histories, photos, documents, everything.)
The cases they are allowed to tell you about aren’t in this category. They aren’t even allowed to say exactly how many of the secret warrantless orders they received, or exactly how many users were affected, only 500-count ranges.
For just Apple, for just January 2023 to June 2023 (six months):

- National Security - FISA Non-Content Requests: 0-499 requests received, covering 40,500-40,999 users/accounts
- National Security - FISA Content Requests: 500-999 requests received, covering 50,500-50,999 users/accounts
- National Security Letter Requests: 0-499 requests received, covering 1,000-1,499 users/accounts
- National Security Letters where Non-disclosure Order Lifted
> Apple’s own transparency report indicates they turn over data to the USG for over 100,000 different apple IDs each year in the no-warrant-or-probable-cause (FISA orders and NSLs) category.
FISA “orders” are warrants and have the same requirement for probable cause as any search or seizure warrant (they aren't criminal warrants so the probable cause is not of there being evidence of a crime, but of the target being an agent of a foreign power.)
NSLs are administrative subpoenas accompanied with gag orders, not warrants, and correspondingly do not have a probable cause requirement; unlike warrants (and like other subpoenas), they are subject to precompliance challenge (and the associated gag order is challengable separately.)
> FISA “orders” are warrants and have the same requirement for probable cause as any search or seizure warrant (they aren't criminal warrants so the probable cause is not of there being evidence of a crime, but of the target being an agent of a foreign power.)
You put orders in quotes, but that’s what they are called, because it is illegal and inaccurate to call them warrants, because warrants per 4A are issued only upon probable cause. FISA orders are warrantless and do not require probable cause.
Snowden was very clear when he released the data on FAA702. No probable cause is required. They are not warrants. There is nobody in the room except a government petitioner and a government judge who rubber stamps them.
They are the #1 most used source in the US IC, and they make it possible for the FBI and DHS et al to read all of your gmail, all of your google docs, and all of your iMessages and phone photos without so much as a shred of criminal wrongdoing.
The idea that they are used only for foreign surveillance is patently false. There is ample hard documentation (again, thanks to Snowden) that they routinely use these to spy on americans. Their twisted logic is that if the data is replicated outside of the US (to say, a datacenter in Europe) then they are legally permitted to access it under the way the unconstitutional FISA Amendments Act (Section 702) is written.
> You put orders in quotes, but that’s what they are called, because it is illegal and inaccurate to call them warrants, because warrants per 4A are issued only upon probable cause.
Orders authorizing foreign intelligence surveillance purposes under FISA are warrants, and are often called warrants, and they, like all warrants, are issued only on probable cause. (It is not improper to call them “orders”, and they are often referred to that way, as well, it is just less specific; all warrants are [court] orders, but not all court orders, much less all orders more generally, are warrants.)
Subchapter I of FISA established procedures for the conduct of foreign intelligence surveillance and created the Foreign Intelligence Surveillance Court (FISC). The Department of Justice must apply to the FISC to obtain a warrant authorizing electronic surveillance of foreign agents. For targets that are U.S. persons (U.S. citizens, permanent resident aliens, and U.S. corporations), FISA requires heightened requirements in some instances.
* Unlike domestic criminal surveillance warrants issued under Title III of the Omnibus Crime Control and Safe Streets Act of 1968 (the “Wiretap Act”), agents need to demonstrate probable cause to believe that the “target of the surveillance is a foreign power or agent of a foreign power,” that “a significant purpose” of the surveillance is to obtain “foreign intelligence information,” and that appropriate “minimization procedures” are in place. 50 U.S.C. § 1804.
* Agents do not need to demonstrate that commission of a crime is imminent.
* For purposes of FISA, agents of foreign powers include agents of foreign political organizations and groups engaged in international terrorism, as well as agents of foreign nations. 50 U.S.C. § 1801
FISA warrants do not have the check-and-balance safeguards that other warrants have, and the system for getting FISA warrants has been extensively and egregiously abused.
>they are subject to precompliance challenge
and it's weird that you go to the trouble of mentioning this but gloss over the problems with FISA warrants. You are not arguing honestly here.
+1 for Google Sheets. It is stable, and provided you don't exceed the API limits, you can run a lot of data through it.
Apps Script is also incredibly powerful if you work with tools that have decent APIs. With a few lines of JS and the built-in trigger functions, you can have an automatically updating dashboard with email notifications when certain conditions are met (e.g. balance too low, user engagement metric does X, etc.).
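The trigger body is usually just threshold checks. Here's a language-neutral sketch of that check-and-notify logic in Python (the rule format and function names are hypothetical; in Apps Script the same shape sits inside a time-driven trigger):

```python
import operator

def check_alerts(metrics: dict, rules: dict) -> list:
    """Return one alert message per rule whose threshold is crossed.
    `rules` maps a metric name to (comparator, threshold, message)."""
    alerts = []
    for name, (comparator, threshold, message) in rules.items():
        value = metrics.get(name)
        if value is not None and comparator(value, threshold):
            alerts.append(f"{message} ({name}={value})")
    return alerts

# Example rules: alert when balance drops below 100.
RULES = {"balance": (operator.lt, 100, "balance too low")}
```

The trigger then emails whatever `check_alerts` returns; keeping the rules as data means non-developers can tweak thresholds without touching the logic.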
I've built a Scrabble tournament management app on Google Sheets, because it was the easiest way I could think of to let multiple people edit it while all the others see live updates. From a developer's point of view it kind of sucks: maintaining everything in one large JavaScript file, using spreadsheet tabs as input and output, not to mention manually attaching the script to a new sheet for every tournament. But the user experience has been excellent, modulo a few permissions glitches.
I'm still trying to rewrite it as a self-hosted web app this year, because the aforementioned permissions glitches and the difficulty of doing versioned deployments have made me reluctant to continue relying on Google. But overall Sheets has been a huge boon, and we have gotten years of use out of it.
The article mentions AppSheet, which I absolutely loved in my last role.
Brilliant for knocking up very simple internal tooling off the back of a Google Sheet. Most importantly, there was zero fucking around with IT on it, as it was already included in our workspace package.
Back in 2010, I was in a work meeting making small talk, and we were discussing how cringeworthy some advertising is nowadays. One of us, a program manager about 15 years my senior, told me it's because we're not the target demographic: millennials are targeted differently, mainly through word of mouth and guerrilla marketing tactics.
Whenever I see articles like these and comments like yours, I can't stop thinking about that meeting.
Edit: to clarify, I am not accusing you of anything. But I do suspect the article to be part of a Google marketing campaign.
If I want a database set up in minutes just to focus on the business side and nothing else, I simply fire up a new Django project (I suspect any other framework with an ORM and auto-generated CRUD UIs would be equally competent for that).
I have nothing against Google Sheets, I really haven't put much thought into it in this context, but I would need some convincing that it's a better and easier way to kick things off.
That Django project needs a host to run on, even if it's just your local machine.
If you want others to interact with it, you need to expose it to the internet through some means and think through the security implications of doing so.
The Google Sheets approach doesn't need you to manage a process or web server. It is instantly shareable while letting Google worry about security, performance, availability, etc.
The specifics of the "what's better?" ratio can shift depending on comfort level / experience with either product, or what infrastructure you may already have available to deploy to. But building on top of GApps does mean you get a lot of useful aspects "free".
Thanks, that's a valid point. I have never thought of publishing a PoC web app as more than pushing some files on a server and editing a config file, but now I see how not everyone can be familiar with that.
You can just query a database and work with the results directly. In a dynamic language like Python there is little advantage to loading rows into classes. The Django query language is so painful and opaque.
Practically, Django's admin *becomes* the SQLite UI editor you are referring to, I suspect. Starting with it, the amount of UI code, backend code, or SQL to be written becomes a matter of how far you want to stretch the PoC in each direction: more effort on the views for a prettier/more unique front end, more controller code for more business logic/cases covered, more database work to assess how the data model will scale.
In small-business accounting, I have a persistent need for a FileMaker Pro-like solution for invoices. FMP was conscious of both on-screen layout and print.
Retool has been recommended as a similar replacement for FMP, but it doesn't have the idea of creating print-ready documents such as receipts and invoices. You don't even need to print them; you just need to keep them for the IRS.
I love the idea of using Google Sheets and turnkey app-building apps. But I still need the documents.
> "There's gotta be a ton...Among them FileMaker pro?"
You'd think. I don't know what Claris did last week, but when I last dove into this small pool about 12 months ago, their license + hosting was something like $2k/year (looking now, it might have gone down, or their pricing page is hiding the fees?). Add on top that you need to develop, maintain, and support your project. These costs are not insignificant for a "small business", and by the SBA definition my clients are in the bottom 2-18% by employee count (size) and revenue.
Accounting for your holiday expenses or a hobby project? Sure. For a whole business and all the legal and practical obligations that entails? Nope. Not a chance. Just give me proper accounting software so I don't have to second-guess myself at every turn as to whether my code or my interpretation of the law is correct.
The only useful (or even paid) browser-integrated AI service I can imagine using would be a browsing-history-aware AI chatbot. Essentially, it would just spit out a link from my history based on the context or prompt I give it. Since privacy would be a crucial factor, I can imagine building an extension that reads page contents, stores them in a database, and connects to a self-hosted LLM.
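A minimal sketch of the storage half of that extension, using stdlib sqlite3 with naive keyword matching as a stand-in for the embedding search a self-hosted LLM setup would actually use (all names hypothetical):

```python
import sqlite3

def open_history(path=":memory:"):
    """Open (or create) the page store the extension would write to."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS pages
                  (url TEXT PRIMARY KEY, title TEXT, body TEXT)""")
    return db

def remember(db, url, title, body):
    """Upsert one visited page's extracted text."""
    db.execute("INSERT OR REPLACE INTO pages VALUES (?, ?, ?)",
               (url, title, body))

def recall(db, query):
    """Return URLs whose stored text matches the prompt.
    Keyword LIKE here; a real build would embed and rerank via the LLM."""
    like = f"%{query}%"
    return [row[0] for row in db.execute(
        "SELECT url FROM pages WHERE body LIKE ? OR title LIKE ?",
        (like, like))]
```

Everything stays on disk locally, which is the whole point; the LLM only ever sees the handful of candidate pages `recall` surfaces for a given prompt.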