Hacker News | beart's comments

> nearly every other project uses it for some reason instead of fetch (I never understood why).

Fetch wasn't added to Node.js as a built-in until version 18, and wasn't considered stable until version 21. Axios has been around much longer and was made part of popular frameworks and tutorials, which helps continue to propagate its usage.


Also, it has interceptors, which let you build easily reusable pieces of code: loggers, OAuth handling, retry logic, execution-time trackers, etc.

These are so much better than the interface fetch offers you, unfortunately.


You can do all of that in fetch really easily with the init object.

    fetch('https://api.example.com/data', {
        headers: {
            'Authorization': 'Bearer ' + accessToken
        }
    })

There are pretty much two usage patterns that come up all the time:

1- automatically add bearer tokens to requests rather than manually specifying them every single time

2- automatically dispatch some event or function when a 401 response is returned to clear the stale user session and return them to a login page.

There's no reason to repeat this logic in every single place you make an API call.

Likewise, every response I get is JSON. There's no reason to manually unwrap the response into JSON every time.
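Both patterns, plus the JSON unwrap, fit in one small wrapper. A minimal sketch, assuming Node 18+ (global `fetch` and `Response`); `getToken`, `onUnauthorized`, and `fetchImpl` are made-up names, and `fetchImpl` is injectable only so the sketch is easy to unit test:

```javascript
// Sketch of the two usage patterns as a plain fetch wrapper.
async function apiFetch(url, options = {}, { getToken, onUnauthorized, fetchImpl = fetch } = {}) {
  const headers = { ...options.headers };
  // Pattern 1: automatically attach the bearer token.
  if (getToken) headers['Authorization'] = 'Bearer ' + getToken();

  const res = await fetchImpl(url, { ...options, headers });
  // Pattern 2: on 401, notify the app so it can clear the session / show login.
  if (res.status === 401) {
    if (onUnauthorized) onUnauthorized();
    throw new Error('Unauthorized');
  }
  // Every response is JSON, so unwrap it here once.
  return res.json();
}
```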

Finally, there are some nice mocking utilities for axios for unit testing different responses and error codes.

You're either going to copy/paste code everywhere, or you will write your own helper functions and never touch fetch directly. Axios... just works. No need to reinvent anything, and there are a ton of other handy features, as the GP mentioned, that you may or may not find yourself needing.


Interceptors are just wrappers in disguise.

    const myfetch = async (req, options) => {
        options = options || {};
        options.headers = options.headers || {};
        options.headers['Authorization'] = token;

        const res = await fetch(new Request(req, options));
        if (res.status === 401) {
            // do your thing
            throw new Error("oh no");
        }
        return res;
    };
Convenience is a thing, but it doesn't require a massive library.

That fetch requires so many users to rewrite the same code (something every existing Node HTTP client already handled well) says something about the standards process.

It could also be trivially written for XMLHttpRequest or any Node client if needed. It would be nice if they had always been the same, but oh well; having a server and a client version isn't that bad.

Because it is so few lines, it is much more sensible to have everyone duplicate that little snippet manually than to import a library and write interceptors for it...

(Not only because the integration with the library would likely be more lines of code, but also because a library is a significant liability on several levels that must be justified by significant, not minor, recurring savings.)


> Because it is so few lines it is much more sensible to have everyone duplicate that little snippet manually

Mine's about 100 LOC. There's a lot you can get wrong. Having a way to use a known working version and update that rather than adding a hundred potentially unnecessary lines of code is a good thing. https://github.com/mikemaccana/fetch-unfucked/blob/master/sr...

> import a library and write interceptors for that...

What are you suggesting people would have to intercept? Just import a library you trust and use it.


Your wrapper does do a bunch of extra things that aren't necessary, but pulling in a library here is a far greater maintenance and security liability than writing those 100 lines of trivial code for the umpteenth time.

So yes, you should just write and keep those lines. The fact that you haven't touched that file in 3 years is a great anecdotal indicator of how little maintenance such a wrapper requires, and so the primary reason for using a library is non-existent. Not like the fetch API changes in any notable way, nor do the needs of the app making API calls, and as long as the wrapper is slim it won't get in the way of an app changing its demands of fetch.

Now, if we were dealing with constantly changing lines, several hundred or even thousand lines, etc., then it would be a different story.


But you said yourself that they are necessary… otherwise you would just use fetch. This reasoning is going around in circles.

Why the 'but'? Where is the circular reasoning? What are you suggesting we have to intercept?

- Don't waste time rewriting and maintaining code unnecessarily. Install a package and use it.

- Have a minimum release age.

I do not know what the issue is.


but it does for massive DDoS :p

> Likewise, every response I get is JSON.

fetch responses have a .json() method. It's literally the first example in MDN: https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/U...

It's literally easier than not using JSON, because otherwise I have to think about whether I want `response.text()` or the `response.body` stream.
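For reference, the `.json()` method is part of the standard Body interface on a fetch Response. A tiny sketch, constructing a `Response` locally (available globally in Node 18+ and browsers) so no network is needed:

```javascript
// .json() parses the response body as JSON and returns a promise for the result.
const res = new Response('{"id": 7, "name": "widget"}', {
  headers: { 'Content-Type': 'application/json' },
});
res.json().then((data) => {
  console.log(data.name); // prints "widget"
});
```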


That's such a weak argument. You can write about 20 lines of code to do exactly this without requiring a third-party library.

Helper functions seem trivial and not like you’re reimplementing much.

Don't be silly, this is the JS ecosystem. Why use your brain for a minute and come up with a 50 byte helper function, if you can instead import a library with 3912726 dependencies and let the compiler spend 90 seconds on every build to tree shake 3912723 out again and give you a highly optimized bundle that's only 3 megabytes small?

> usage patterns

IMO interceptors are bad: they hide, at the call site, what might get transformed along with the API call.

> Likewise, every response I get is JSON. There's no reason to manually unwrap the response into JSON every time.

This is only true if you are exclusively interfacing with your own backends. Even then, why not just make a helper that unwraps JSON by default but can be passed an arg to parse the body as something else?
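A minimal sketch of that helper, assuming Node 18+; the `request` and `parse` names are made up, and `fetchImpl` is injectable only so the sketch can be exercised without a network:

```javascript
// Unwraps JSON by default; pass `parse` to get another body type instead.
async function request(url, { parse = 'json', fetchImpl = fetch, ...options } = {}) {
  const res = await fetchImpl(url, options);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  switch (parse) {
    case 'json': return res.json();
    case 'text': return res.text();
    case 'arrayBuffer': return res.arrayBuffer();
    default: return res; // hand back the raw Response for anything else
  }
}
```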


One more use case for Axios: it automatically follows redirects, forwarding headers, and more importantly, omitting or rewriting the headers that shouldn't be forwarded for security reasons.

fetch automatically follows redirects, and fetch will forward your headers. Omitting or rewriting headers is how security breaks… now a scraper got through because it's masquerading as Chrome.

What does an interceptor in the RequestInit look like?

A wrapper function around fetch… that’s what interceptors are…
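That equivalence is easy to make concrete: an interceptor "chain" is just function composition over fetch. A minimal sketch (all names here are made up, assuming Node 18+ with global `fetch`):

```javascript
// An "interceptor" takes a fetch-like function and returns a new fetch-like function.
const withAuth = (token) => (next) => (url, opts = {}) =>
  next(url, { ...opts, headers: { ...opts.headers, Authorization: 'Bearer ' + token } });

const withLogging = (log) => (next) => async (url, opts) => {
  log(`-> ${url}`);
  const res = await next(url, opts);
  log(`<- ${res.status}`);
  return res;
};

// Chaining interceptors is plain right-to-left composition around a base fetch.
const compose = (base, ...interceptors) =>
  interceptors.reduceRight((next, wrap) => wrap(next), base);

const myFetch = compose(fetch, withLogging(console.log), withAuth('abc'));
```

Calling `myFetch(url)` then logs the request, attaches the token, and delegates to the real fetch, which is exactly what an interceptor stack does.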

It also supports proxies which is important to some corporate back-end scenarios

fetch supports proxies

Before that we had node-fetch. If you already use a dependency, why not one that's pretty much what will soon come natively to every JS runtime?

The fetch API is designed for browsers. It's not designed for servers. Fetch may work for a particular use case on the server, it may not. Servers have needs over and above what a browser allows the client to do.

Now I'm curious, because we have a big server side code base using fetch(). What are you using that doesn't work with fetch? Especially since axios nowadays has a fetch adapter.

Right. Though I would've used the built-in XHR then. Not going to install a dep just to make HTTP calls.

Firefox is still ahead of Chrome in several areas.

    - multi account containers
    - ublock origin (and extensions in general)
    - extensions on Android
Firefox has also recently improved tabs with a number of features. I haven't used Chrome in a long time, so I don't know if these exist there.

Firefox just works, and blocks ads, and doesn't randomly decide I'm not allowed to do things it doesn't approve of anymore (like block ads with uBlock Origin).

What features does Chrome provide in the last year (that presumably would not yet be copied by Firefox)?


I think multi-account containers are too niche to move the needle, but mobile extensions and better ad blocking are fair points.

> What features does Chrome provide in the last year (that presumably would not yet be copied by Firefox)?

That's the thing, though: when you are ahead you can coast on being the same. When you are behind you don't have that luxury. You have to be better in some way, and not just mildly (i.e., you need some killer app). If marginally better were enough, we would all be using Plan 9.

Maybe the ad blocking story can become that for Firefox, but I think Chrome would have to get a lot more heavy-handed before that can really become a marketing win for Firefox.


It had problems in 8. I would frequently type my search term and see that it was the number one result. I would then arrow or tab down and hit enter to launch that result. Between arrowing down and hitting enter, the result list would update/reorder and suddenly I'd be launching some unknown program. It happened all the time.

This entire article is based on a one sentence tweet with zero details provided.

"Ya I hate that. Working on it." - Could mean anything, which I would argue in this case, is equivalent to being meaningless. Does this mean Hanselman has a team with tickets lined up for the next sprint to allow offline accounts as a first-class workflow? Or does it mean he sent an email to the relevant stakeholders asking, "Hey guys, what can we do about this"?

I am not encouraged that we will see a change in momentum from Microsoft on this issue.


Hey, sorry I'm a bit lost trying to follow your comment. Who are "We" that you are referring to?

I think the we is a bot.. they already posted this: https://news.ycombinator.com/item?id=47497129

And suggested a mod should read comment history: https://news.ycombinator.com/item?id=47497296


Not a bot. Anyway if you have questions about router security rather than moderation happy to "delve" into that.

Yes, please share more of what you've found about wifi security.

Supernetworks -- I'll update. Our initial comment got moderated for too much self-promotion, so apologies there, and again to anyone who was offended.

How about Sonar as in SOund Navigation And Ranging?


Outside of sqlite, what runtimes natively include database drivers?


Bun, .NET, PHP, Java


For .NET, only the old legacy .NET Framework; SqlClient was moved to a separate package with the rewrite (from System.Data.SqlClient to Microsoft.Data.SqlClient). They realized it was a rather bad idea to have that baked into your main runtime, as it complicates your updates.


It's still provided by Microsoft. They are responsible for those first party drivers.


For Bun you're thinking of simple key / values, hardly a database. They also have a SQLite driver which is still just a package.


I think you're confusing the database engine with the driver?


The switch from plan mode to build is not always clearly defined. On a number of occasions, I've been in plan mode and entered a follow-up prompt to modify the plan. However, instead of updating the plan, the follow-up text is taken as approval to build, and it automatically switches to building.

Ask mode, on the other hand, has always explicitly indicated that I need to switch out of ask mode to perform any actions.

This is my experience with Cursor CLI.


The first time I recall encountering this sort of feature was in one of the early sim city games. I wonder if this being a feature of Claude indicates the humanity of some engineer behind it, or if it is a deliberate effort to apply humanity to the agent.


In fact, ‘Reticulating splines’ from SimCity 2000's load screen is one they use.


The server edition contemporary with Windows 8 also included the upgrade to the Metro UI. I don't know; I guess MS figured IT wanted to provision Windows services using a Surface tablet?

I actually really did like Windows Phones, though. I can imagine a world with a third competitor in that space today... But MS didn't seem to have any understanding of, or ability to develop, an ecosystem that works. Even when they were literally paying people to write apps for their app store, it was just terrible.

