Hacker News | new | past | comments | ask | show | jobs | submit | sudo_navendu's comments | login


Weights on different metrics. From https://github.com/twitter/the-algorithm/blob/ec83d01dcaebf3...

    private def getLinearRankingParams: ThriftRankingParams = {
      ThriftRankingParams(
        `type` = Some(ThriftScoringFunctionType.Linear),
        minScore = -1.0e100,
        retweetCountParams = Some(ThriftLinearFeatureRankingParams(weight = 20.0)),
        replyCountParams = Some(ThriftLinearFeatureRankingParams(weight = 1.0)),
        reputationParams = Some(ThriftLinearFeatureRankingParams(weight = 0.2)),
        luceneScoreParams = Some(ThriftLinearFeatureRankingParams(weight = 2.0)),
        textScoreParams = Some(ThriftLinearFeatureRankingParams(weight = 0.18)),
        urlParams = Some(ThriftLinearFeatureRankingParams(weight = 2.0)),
        isReplyParams = Some(ThriftLinearFeatureRankingParams(weight = 1.0)),
        favCountParams = Some(ThriftLinearFeatureRankingParams(weight = 30.0)),
        langEnglishUIBoost = 0.5,
        langEnglishTweetBoost = 0.2,
        langDefaultBoost = 0.02,
        unknownLanguageBoost = 0.05,
        offensiveBoost = 0.1,
        inTrustedCircleBoost = 3.0,
        multipleHashtagsOrTrendsBoost = 0.6,
        inDirectFollowBoost = 4.0,
        tweetHasTrendBoost = 1.1,
        selfTweetBoost = 2.0,
        tweetHasImageUrlBoost = 2.0,
        tweetHasVideoUrlBoost = 2.0,
        useUserLanguageInfo = true,
        ageDecayParams = Some(ThriftAgeDecayRankingParams(slope = 0.005, base = 1.0))
      )
    }
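To see what these numbers mean in practice, here's a rough sketch of how a linear scorer combines per-feature weights with multiplicative boosts. This is my own reconstruction for illustration, not Twitter's actual scoring code; the feature names and the way boosts are applied are assumptions based on the parameter names above.

```python
# Hypothetical sketch of a linear ranker: weighted sum of feature values,
# then multiplicative boosts. Weights/boosts taken from the snippet above;
# the combination logic itself is an assumption, not Twitter's real code.

WEIGHTS = {
    "retweet_count": 20.0,
    "reply_count": 1.0,
    "fav_count": 30.0,
    "reputation": 0.2,
    "lucene_score": 2.0,
    "text_score": 0.18,
}

BOOSTS = {
    "in_direct_follow": 4.0,
    "in_trusted_circle": 3.0,
    "self_tweet": 2.0,
    "has_image_url": 2.0,
    "has_video_url": 2.0,
    "offensive": 0.1,  # < 1.0, so this is a penalty
}

def linear_score(features, active_boosts):
    """Weighted sum of features, scaled by any active multiplicative boosts."""
    score = sum(WEIGHTS[name] * value for name, value in features.items())
    for boost in active_boosts:
        score *= BOOSTS[boost]
    return score

# With these weights, 10 favs from a direct follow (30 * 10 * 4 = 1200)
# vastly outranks 10 retweets on a tweet flagged offensive (20 * 10 * 0.1 = 20).
a = linear_score({"fav_count": 10.0}, ["in_direct_follow"])
b = linear_score({"retweet_count": 10.0}, ["offensive"])
```

Note how favs (30.0) outweigh retweets (20.0), and how a sub-1.0 "boost" like offensiveBoost (0.1) acts as a penalty in a purely multiplicative scheme.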


I think humans will be able to innovate much faster by building on top of the existing knowledge that tools like GPT give access to. It might look different from what we have experienced so far, but there is very little chance that humans stop innovating.


It is perfectly fine if the code actually works. At least for now, getting the exact code you want from an AI is a legit skill. This was pure spam.

I wrote more about what actually happened here: https://navendu.me/posts/ai-generated-spam-prs/

It can help set some context to the discussions.



TLDR:

Recently, a person has been using AI tools to generate code and open pull requests to open source projects I contribute to.

The code is entirely wrong and doesn’t work, and it is evident that the person making these pull requests doesn’t understand the code.

The person also copied explanations (which was an obvious giveaway as it sounded like a typical <popular AI tool> response) into the pull request and attempted to explain the code and answer questions from the reviewers.

We were polite and when it didn’t work, reported the person to GitHub.

I don’t want to shame the person publicly. But I want to make other open source maintainers aware that this is a thing and prevent them from wasting time and effort chasing such people down.


Actually, we were sure that it was spam. GitHub gives the option to "approve workflow runs for first-time contributors". I guess none of the maintainers approved it because they suspected it was spam. Still, it took a lot of time and effort to review.


That button was added to GitHub to protect against new bot accounts creating PRs against random projects, adding a CI step that runs a cryptominer. Now that the CI doesn't run automatically for new users without a button click, these attackers have a much harder time.

So tell your maintainers to use that button more liberally -- it mostly just exists to save GitHub money / discourage these attacks. It doesn't hurt to click it for these "CV improvement" spam PRs, and it makes rejecting the PR a lot simpler if there's a red X.

I usually just scan the list of files changed by the PR, and if it doesn't touch CI config, I let the actions run before starting the actual code review.


Now they clearly won't because we will be reporting them to GitHub.


This is how you use AI tools to write code.

In our project, the user just copy-pasted the output from the AI tool and called it a day. They did not even bother to build the project and test it.

I have also started using AI tools, and they have made me much more efficient. I could have done the same tasks without AI, but with it I'm much faster.


That makes no sense. The one thing ChatGPT does _really_ well is set up unit tests, which is the part of writing unit tests I hate.

My ChatGPT workflow is give requirements -> have it create unit tests -> give it test results until it passes.
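As a hypothetical illustration of that loop: the requirements go in, the model produces tests roughly like the ones below, and the implementation gets revised against the test results until everything passes. The `slugify` function and its spec are my invented example, not from any real project mentioned here.

```python
# Invented example of the test-first workflow described above.
# Step 1: requirements -> model generates tests like test_basic below.
# Step 2: feed failing test output back until the implementation passes.

import re

def slugify(title: str) -> str:
    """Lowercase the title and replace runs of non-alphanumerics with '-'."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# Tests of the kind the model might generate from the requirements:
def test_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_collapses_whitespace():
    assert slugify("  AI   Generated  Spam  ") == "ai-generated-spam"
```

The point of the loop is that the tests, not a close reading of the code, are what tell you whether the generated implementation is acceptable.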

Been playing around with a generated-code-only project: https://github.com/JerkyTreats/scrivr/

In that workflow I don't really look closely at the code. In most cases I've found it isn't really necessary.


As long as it does what it intends to do, plain old HTML would suffice.


I love using RSS to subscribe to blogs. Many of my favorite personal developer blogs have RSS feeds, and I have them on my blog. I'm currently using Readwise to get all my feeds in one place. This filters out a lot of noise prevalent in social media like Twitter.

