Hacker News | new | past | comments | ask | show | jobs | submit | rancar2's comments

This isn’t the case with 3D-printed bracket systems. Studies have shown that the personalized 3D-printed bracket system from LightForce cuts treatment time by over 40%. The refinement phase is also spot on and finishes much faster, and recent studies have shown further improvements. The more complex the case, the larger the performance delta. Not all orthos are practicing this way, but this is where the future is going, whether with LightForce or another company. The physics just works out better than the best plastics.

Disclaimer: I’m a health nerd and was an employee there a couple of years ago.


LightForce still requires a lot of visits to the doctor, which is a nice source of revenue for orthos. Aligners may become more accessible to patients, because that tech is more scalable by design.


Ivan worked at Align and was CTO of a European DTC competitor. I agree with Ivan that both technologies have their sweet spots. I disagree on the scale piece and the frequency of visits, as there are practices doing both of those things differently using placement and remote-monitoring technology.

For younger patients who haven’t had braces before, which is the majority of historical and future patient populations, 3D-printed personalized braces systems are better, as the tooth movement needed is more clinically substantial and at tougher angles. Many younger patients also forget and lose their plastic aligners.

That said: what is chosen should be what works for the whole person, at multiple levels.


This sounds like an AI-written comment, at least the first part. What I don’t get is why you are dismissing the argument about the number of visits. 3D-printed braces are still braces: they require manual adjustment at each treatment stage. Aligners don’t need that. They may need a check-up in the middle of treatment, maybe one or two mid-course corrections, maybe an appointment to install attachments, but that’s it. You change aligners at home without visiting the doctor.

>Many younger patients forget and lose their plastic aligners.

They can get a new one easily by post.


Taking a quick look at the BOM, it lacks the correct sensor selection.


Even if the correct sensor could be chosen (whatever it is), it’s unlikely to be attainable by consumers, and the technology would definitely be export-controlled in the US.


You'd be AMAZED what you can find on eBay.

I saw this pop up alongside its video thumbnail and nearly shit myself watching it and going "damn, that looks exactly like what's on those RU/UA drones going at each other"... https://www.ebay.com/itm/197224214645

"HS AI Vision Cube For Ultra-long-range Target Recognition tracking & Thermal" for as low as $175. I am feeling the potential ITAR violations straight through my screen.


The funny thing is, at least as I understand it, ITAR only applies to things produced in the United States. For example, you can’t buy very good FLIR IR cameras in the United States without a lot of paperwork, but you can trivially buy much better (higher-resolution, faster-frame-rate) and cheaper IR cameras produced in China.


> I am feeling the potential ITAR violations straight through my screen

And possibly landing on all kinds of watch lists.

I wouldn’t be surprised if some of the sellers there are just honeypots.

A name like “Ultra-long-range Target Recognition tracking” just screams “Hey, FBI, please come visit me and ask what I am building in the basement”


Are items made, located in, and sold from China covered by ITAR?


I think MEMS gyroscopes and accelerometers used in consumer drones should be just about good enough to measure orientation and acceleration, and those are cheap and easy to get.

You could integrate acceleration to get speed - the flight is short enough to make compounding errors easy to ignore.

I think thanks to drones and RC hobbyists, there's a generally nice body of knowledge on how to get good enough data from consumer hardware to keep things flying.
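To make the drift concrete, here’s a toy sketch (not a flight-grade filter) of integrating a noisy accelerometer to velocity. The noise density is an assumed ballpark for a consumer-grade MEMS IMU, not any specific part’s datasheet value:

```python
import random

random.seed(0)
dt = 0.001                        # 1 kHz sample rate
density = 0.02                    # assumed noise density, (m/s^2)/sqrt(Hz)
noise_sigma = density / dt**0.5   # per-sample standard deviation

true_a = 50.0                     # pretend constant 50 m/s^2 boost phase
v_true = v_est = 0.0
for _ in range(int(60 / dt)):     # 60 s flight
    meas = true_a + random.gauss(0, noise_sigma)
    v_true += true_a * dt         # ideal integration
    v_est += meas * dt            # integration of the noisy measurement

print(f"true {v_true:.1f} m/s, estimated {v_est:.1f} m/s, "
      f"drift {abs(v_est - v_true):.3f} m/s")
```

With these numbers the velocity error is a random walk with standard deviation density * sqrt(t), so after 60 s you’d expect drift on the order of a few tenths of a m/s from white noise alone; uncorrected bias is usually the bigger problem.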


> You could integrate acceleration to get speed - the flight is short enough to make compounding errors easy to ignore.

‘Easy to ignore’ is not a term I would use here, especially given the motion environment of a rocket. It seems borderline possible, at best.


> You could integrate acceleration to get speed - the flight is short enough to make compounding errors easy to ignore.

False, given how noisy MEMS IMUs are, and the accuracy required. Even Ring Laser Gyros drift quickly.


I did a bit of googling and this was the first result:

https://www.h4-lab.com/store/p/qmu102

This sensor has a 16 g limit, which is well above what an amateur rocket could hit, and the compounding velocity error at 10 g would be something like 0.0002 (m/s)/s. That’s way more than good enough, at least for short flights measured in minutes at most.
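To sanity-check that order of magnitude: a constant accelerometer bias integrates into velocity error linearly with time. The 20 µg bias below is my assumption for illustration, chosen because it works out to roughly the 0.0002 (m/s)/s figure, not a spec from the linked page:

```python
G = 9.80665              # m/s^2 per g
bias_g = 20e-6           # assumed constant bias: 20 µg
bias = bias_g * G        # ~0.000196 m/s^2, i.e. ~0.0002 (m/s)/s of drift

for t in (60, 300, 600):             # flight durations in seconds
    v_err = bias * t                 # velocity error from integrating bias
    print(f"{t:4d} s -> {v_err:.4f} m/s of drift")
```

Even over a 10-minute flight that’s about 0.12 m/s of velocity error, which supports the “good enough for short flights” conclusion.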


8GB RAM means bye-bye Electron apps and Chrome running at the same time.


I had to check because I'd genuinely forgotten, but the Mac Mini I use all day only has 8 GB. Chrome, Slack, and Spotify are running on it 99.9% of the time, along with several other apps.


Not true, but good riddance if it was.


It's great to see that others have had better experiences than me. I had to upgrade from my M1 Air because I kept hitting issues. Note that I'm more of a power user than a typical light user in my day-to-day computer/software use.


No, it doesn't. macOS runs fine for this with 8 GB.


Define fine. Tahoe, Chrome, and Electron apps running alongside pretty much anything else already push things over 4 GB, which is where things start to get laggy and usability becomes more problematic, at least for me. You could theoretically run a lot of things ‘fine’ the way you describe. And for the college student who hopefully doesn’t already run Spotify and Discord, it’ll hopefully be “fine”.

I just don’t get arguing that it’s the same experience as what people actually consider fine.


This is the cheapest MacBook available new - you are already compromising. Do you expect an economy car to outrun a Porsche?


I have an 8GB M2 as my primary laptop and I never experience noticeable lag on it, despite doing work on it a normal person would never do.


I would recommend a more systemic mapping, like the Power Scorecard from Demos.org [1], to help ground the mappings in impacts on populations. Ideally, one would be able to swap the system mappings based on the preferred context, as different policies should be looked at in different ways to ensure optimal positive impact with minimal secondary negative impact. I haven’t seen a perfect way of doing this yet, but this will be directionally correct on the pathway to something even better for all of us. Thank you for your labor of love and efforts!

[1] https://power.demos.org/


The founders are all on a first-name basis. I’m surprised no one has noted that Anthropic and OpenAI win together by giving the world two different choices, just like the US does in its political landscape. In this circumstance, OpenAI wins the local market for its government and aligned entities (while keeping the free consumer via cost dynamics, an ideal customer profile that is very broad and similar to Google’s search audience, on which most of Google’s revenue still depends), while Anthropic gets the global and prosumer markets, where people can afford choice by paying for it.


"B0tH SiDeS ArE ThE SAme!"


My attempt at trying one of their OOTB prompts in the demo (https://chat.inceptionlabs.ai) resulted in: "The server is currently overloaded. Please try again in a moment."

And a pop-up error of: "The string did not match the expected pattern."

That happened three times, then the interface stopped working.

I was hoping to see how this stacked up against the Taalas demo, which worked well and has been fast every time I've hit it this past week.


If we don't see a huge gain in long-horizon thinking reflected in Vendor-Bench 2, I'm not going to switch away from CC. Until Google can beat Anthropic on that front, Claude Code paired with the top long-horizon models will continue to pull away with full-stack optimizations at every layer.


Having sent billions of emails across multiple startups:

RE setup and testing: Trust it (as with most devops one-time setups). Once the initial email setup is complete, you typically aren’t playing with it much. Black-swan outages aren’t really an active concern.

RE PII: email is non-secure and shouldn’t carry sensitive data in production either. Also, dev/test shouldn’t have PII in regulated industries as a good hygiene practice (I’ve worked in healthcare, finance, and national-security contexts).

RE licensing: I appreciate your openness and clarity on licensing the gateway engine as AGPL vs MIT for the rest. There’s a more modern licensing approach in FSL-1.1-MIT. It may be a better fit for customers (i.e., clear licensing terms when using a paid license and fewer concerns if the business goes defunct or pivots) and for your business plans.


Thanks, someone who has sent billions of emails is exactly who I need to ask.

Regarding 'set and forget': I agree once infra is stable, it stays. But I see the value when the application layer changes—tweaking templates, switching providers, or DNS updates. Do you still feel mocks are enough there?

Regarding PII: You're 100% right on hygiene. The encryption (ML-KEM-768) is just a 'safety net' for the human errors.

Regarding FSL-1.1-MIT: Very interesting suggestion. I will investigate it.

Honest question: At your scale, is this a niche tool or is 'mock and pray' just the industry standard for a reason? Don’t worry about hurting my feelings, I just need to know if I'm solving a real problem.


For a bit more context, most email infrastructures I’ve worked with are for transactional and marketing at DTC and B2B companies. Please read my response in that context.

Re one-time setups and one-time changes: I think this answers both questions, and the implied PMF question as well. Internal FTE staff will consistently handle this as a one-off exception (it’s really no one’s full-time job or responsibility). You may wish to speak with teams that offer professional services / SaaS, including self-hosted, where this infrastructure would be helpful. Their jobs are made easier by additional predictable/dependable infrastructure software (e.g., chat with (a) Twilio’s messaging team, which remains from the SendGrid acquisition, or (b) Red Hat / IBM), versus more work for an individual who is just doing this as a one-off. You may wish to consider a revenue share and/or white-labeling as they co-install the infrastructure for your business.


Thanks for that perspective. My goal right now is not money, I just want to build something super helpful. If I can make some cash later, in a way that helps everyone, like with white-label or pro-services, that is great. If not, I am cool with that too.

Building the community is the priority. If I do not solve a real problem for people, then the rest does not matter anyway.

Really appreciate you taking the time to share that 'pro-services' angle. It has given me a lot to think about.


I wasn’t able to find future-state plans beyond what’s noted in the V2 roadmap:

https://github.com/VibiumDev/vibium/blob/main/V2-ROADMAP.md

What do the next 5 years look like, given that you are very good at building long-term projects that last and evolve over time? And for a very specific example, what’s the plan for incorporating new standards like Agent Skills as they quickly evolve and launch?


short term: yeah, we should totally add agent skills asap! new year's eve goal?

as far as long term plans go, i like the tim o'reilly quote: "create more value than you capture".

with selenium, we created an entire ecosystem of tools, users, companies, and economic activity. (literally billions of usd -- it's a story frequently ignored by the tech press when looking for "open source success stories".) i hope to do the same with vibium. there will likely be a hosted "vibium.cloud" service. i also hope there will be lots of them. similarly, there weren't many hosted selenium services when i started sauce labs. now there's a bunch: browserstack, lambdatest, etc.

it was also not really an accident that we did that with selenium. there is a lot of behind-the-scenes consensus building that happens to make things like the w3c webdriver standard happen. (funfact: vibium relies heavily on the new w3c standard "webdriver bidi" protocol, inspired by the chrome devtools protocol used by playwright. tl;dr: it's just json over websockets.)
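to show how simple the wire format is, here's a minimal sketch of a bidi command as it goes over the websocket. the shape (id / method / params) follows the w3c webdriver bidi spec; the id value is arbitrary, chosen by the client:

```python
import json

# every bidi command is a json object with a client-chosen id, a method
# name, and params; the browser replies with a message carrying that same
# id. session.new is the command that starts a bidi session.
cmd = {"id": 1, "method": "session.new", "params": {"capabilities": {}}}
wire = json.dumps(cmd)
print(wire)  # this string is what actually gets sent over the websocket
```

that's the whole trick: no binary framing, no custom transport, just json messages correlated by id.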

i'm betting on industry cooperation, standards, and shared prosperity. that's my 5 year plan!


Nice work; I’d love to see a V2. Quick tip: try Flux AI to help accelerate the V2 work!

