A simple translation of keywords seems straightforward; I wonder why it's not standard.
# def two_sum(arr: list[int], target: int) -> list[int]:
펀크 투섬(아래이: 목록[정수], 타개트: 정수) -> 목록[정수]:
# n = len(arr)
ㄴ = 길이(아래이)
# start, end = 0, n - 1
시작, 끝 = 0, ㄴ - 1
# while start < end:
동안 시작 < 끝:
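For reference, the untranslated version the snippet is sketching (a two-pointer search, which assumes `arr` is sorted) might be completed like:

```python
def two_sum(arr: list[int], target: int) -> list[int]:
    """Two-pointer search over a sorted array; returns indices of a pair summing to target."""
    n = len(arr)
    start, end = 0, n - 1
    while start < end:
        s = arr[start] + arr[end]
        if s == target:
            return [start, end]
        if s < target:
            start += 1  # sum too small: advance the left pointer
        else:
            end -= 1    # sum too large: retreat the right pointer
    return []           # no pair found

print(two_sum([1, 3, 4, 6, 9], 10))  # [0, 4], since 1 + 9 == 10
```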
Code would be more compact, allowing things like more descriptive identifiers, e.g.
AbstractVerifiedIdentityAccountFactory vs 실명인증계정생성, but we'd lose out on the nice upper/lowercase distinction (Hangul has no letter case).
I hear that information processing speed is nearly the same across languages regardless of their density, though, so in terms of reading speed it may not make much difference.
It never really took off. I think because computers already require users to read and type Latin letters in lots of other situations, and it's not that hard to learn what a few keywords mean, so you might as well stick with the English keywords everyone else is using.
People act like the keywords are the hard part. They aren't, and once you get past 'for' and 'if' the rest of the toolchain still lands in English: package docs, compiler errors, logs, blog posts, and ten years of answers on Stack Overflow.
Changing syntax doesn't change the surrounding world. Unless you plan to translate half of pip and npm you mostly end up with a teaching language or a local curiosity.
That probably pops up all over the place, like how there's no real progress on making terminals support different keyboards/languages (e.g. sending raw key codes to terminal apps).
Technical people already have to make concessions to deal with ascii chars and English in computing by the time they use a terminal, so the upside of changing any one thing kinda peters out.
I think the actual reason it has not taken off is because of the ecosystem. You can translate Python, but nobody using a high-level language like Python uses just Python. The first time you pull in a library and are met with a wall of English documentation, error codes, and APIs, you're right back to where you started. Low-level OS APIs are also in English. I suspect there is a lot of potential being left on the table here, but it's a massive undertaking because you need to translate not only a language but also enough of an ecosystem for people to be able to make programs.
Good point about compactness — 실명인증계정생성 vs AbstractVerifiedIdentityAccountFactory is a real example where Korean shines.
One distinction though: Han uses actual Korean words, not transliterations. 함수 means "function" in Korean, 만약 means "if" — they're real words Korean speakers already know.
Your example uses transliterations like 펀크 and 아래이 which would look odd to a Korean reader. That difference matters for readability.
Using actual Korean words rather than transliterations greatly aids readability --- I can still remember stumbling over the transliteration of "Walker Hotel" when taking Korean at the Presidio of Monterey, and pretty much everyone else had the same problem.
Scratch supports Korean, but Scratch benefits from using JSON instead of bytes or code points to serialize programs, which allows the user to change their display language (similar to how hard tabs let users set indentation size).
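A rough sketch of that idea (hypothetical block names and translations, not Scratch's actual project schema):

```python
# Hypothetical sketch: store the program as language-neutral opcodes,
# and only translate keywords at display time.
import json

SERIALIZED = '[{"op": "repeat", "arg": 10}, {"op": "move", "arg": 5}]'

# Display strings per language; the Korean picks here are illustrative guesses.
KEYWORDS = {
    "en": {"repeat": "repeat", "move": "move"},
    "ko": {"repeat": "반복", "move": "움직이기"},
}

def render(blocks: list[dict], lang: str) -> list[str]:
    # The stored program never changes; only the display layer does.
    return [f'{KEYWORDS[lang][b["op"]]} {b["arg"]}' for b in blocks]

program = json.loads(SERIALIZED)
```

Switching `lang` re-renders the same serialized program in another display language, which is the analogy to hard tabs: the stored form is neutral, the presentation is per-user.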
There's probably a lot of reasons why non English programmers stick with English keywords, beyond just language/tooling support. Learning new keywords is already part of learning a programming language, and much of the documentation and resources available for languages and libraries are only in English. ASCII-only strings are still ubiquitous in software, like URLs and usernames. And in international teams, English is the go-to lingua franca.
Could this change with LLMs? Maybe, but most code in its training data is in English, so LLMs likely work most effectively in English.
I can't speak to Korean, but thinking about Japanese, one probable reason it wouldn't catch on is how tedious and inefficient it would be to constantly switch between input modes. Japanese input mode is designed for prose, and isn't well-suited to efficiently entering the symbols commonly used in programming. Even spaces. It results in needing a lot of extra keystrokes.
I was going to suggest emotional leetcode, but LLMs do well on this.
When given a one-upmanship conversation between Alice and Suzy ("my husband is rich", "my kid is a genius") and asked what emotions they are feeling, and what Suzy could have said instead to improve the conversation, it gave accurate responses (e.g. they're feeling insecure, competitive, envious).
That type of question could also turn people off. We already have too many discussions where people are quick to jump to conclusions and attribute intent, rather than asking basic questions.
Some say that when encountering difficulties, one should either change them or adapt to them, and be able to distinguish between the two. It is obvious that we cannot reverse climate change; even if individuals can do some environmental protection, they still cannot change it.
Yeah, I couldn't figure out how to set billing caps on the gemini API. Here's what the chatbot said:
Me: Help me cap gemini API request costs ... limit total billing for this project to max $100 a month
GC: Hello! While it's not possible to set a hard spending cap on Gemini API requests, you can set up billing alerts to monitor your costs and avoid surprises.
Me: How to set hard budget limit tied to billing account
GC: Based on your account information, it is not possible to set a hard budget limit that automatically stops charges for a billing account.
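This doesn't fix the platform gap, but a minimal client-side guard (a sketch with estimated costs you track yourself, not a real billing hook) can at least stop your own code from overspending:

```python
# Hypothetical client-side workaround: since the platform offers alerts but no
# hard cap, track estimated spend locally and refuse calls once a budget is hit.
class BudgetGuard:
    def __init__(self, monthly_limit_usd: float):
        self.limit = monthly_limit_usd
        self.spent = 0.0

    def charge(self, estimated_cost_usd: float) -> bool:
        """Record a request's estimated cost; return False if it would exceed the cap."""
        if self.spent + estimated_cost_usd > self.limit:
            return False  # caller should skip the API call
        self.spent += estimated_cost_usd
        return True

guard = BudgetGuard(100.0)
```

The obvious caveat: estimates drift from actual billing, so this complements rather than replaces billing alerts.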
Why is this sentiment expressed so often ("it was written/edited by AI")?
It seems to bother people, perhaps since it may have been low-effort.
Shouldn't it not matter, as long as the content is good? Otherwise, it seems no different from a standard low-quality post.
The formulaic style/cadence/structure/tone is annoying, partly due to its LLM-induced prevalence, but also because it is padded and stretched without adding substance, steeped in superficialities, and has a weird tendency to meander through its thematic territory, as if the author were slightly distracted, writing the same thing for the 20th time, or missing a good editor. Pre-LLM, it might have been an okay-ish, but not great, article. Now it's just grating and makes you feel like you're wasting your time reading it.
When I want to read AI writing (which is not never), I chat with it myself; I prompt it better and get more interesting stuff than this generic insight blogspam.
The important factors seem to be intrinsic motivation and strong mental faculties like understanding and a great memory for concepts and formulas.
It's hard to say whether the motivation came from the good skills (understanding, memory) e.g. "I'm good at this, I like it!", or that the good skills came from the motivation. I believe both are important though, and that they are intertwined.
Make sure you can return the keyboard if you decide to try one out.
I found it wasn't for me (too big, uncomfortable, keys too far apart, harder to type without looking IIRC) and the company (in Canada) refused to issue a refund and I was SOL.
Translating the confusing science speak, basically:
Appearance self-esteem takes a hit when they don't fit in a size. They take it out on the clothes: "I hate their stuff, they suck." They buy more of other stuff to compensate for the hit, whether non-sized accessories (I am pretty) or book/tech (I am smart even if I don't fit).
People confident in their appearance are immune to the effect, and simply think it's sized wrong or runs small.
They use precise but indirect terminology, e.g. "heightened level of appearance self-esteem" rather than "confident in their appearance".
More indirect phrasing: "they respond more favorably to products that can help to repair their damaged appearance self-esteem" rather than something direct and easy to understand like "they feel bad that they don't fit, so they end up buying other things like makeup/jewelry to feel better about their appearance".
Being able to easily, and quickly read scientific literature is not a universal trait. You're in the top 1% (probably top 0.1%?) if you're able to do that and actually understand the source material.
The average person has a hard time reading and fully understanding a newspaper article or cooking instructions on a pre-prepared meal.
The first paragraph is fine -- I agree. The second paragraph is a silly hyperbole that comes up over and over again on HN and needs to die. Major newspapers are written for about 8th grade level reading comprehension. Cooking instructions on a prep'd meal are probably much lower -- maybe 5th grade. The "average person" (whatever that term means) living in highly developed nations can read at 8th grade level or above.
Why couldn't you write that? It is much more accurate than their version: what they wrote is very suggestive, while this just describes what happened.
I think they read the full paper rather than the snippets and agree most couldn't tell you what Cronbach's alpha is, how ANOVA works, or otherwise accurately interpret the meaning of the results sections in a casual read through. One can grab the full paper on resources such as Anna's Archive if they don't have access via a university or such.
Of course, the trick (once you know) is you don't need a comment summarizing it for you. The abstract is alright in a pinch, but the "General Discussion" in psychology papers is the equivalent of "Conclusion" and aims to discuss the results directly. It's still a bit verbose... but the language should at least be very familiar in comparison.