Lightform's Sunset (lightform.com)
69 points by xbryanx on Feb 7, 2022 | hide | past | favorite | 19 comments


I bought a Lightform unit early on, and frankly I was underwhelmed with the overall deal. In particular, for some reason, I had the impression that I could use the camera/scanning features interactively, but unfortunately no: the scanning/detection features were purely for scene setup, and then it's running a video file from there. It seemed like there was only nominal benefit compared to just setting up a projector and manually cutting up the desired scene. Moving the projector or moving new props into the scene requires a complete rescan. Lightform saved time for sure, but it didn't seem to take any leaps in terms of possibilities or 'magic'.

On top of that, once the scene is scanned and the program is set up to run, I assumed I could keep that video file / program running on a raspberry pi or something and move the Lightform on to the next project... nope. Had to keep the unit tethered to the scene.

The few times I set it up for demos for people, the consensus was, "huh, that's kinda neat, is that all it does?"

It's possible some features got updated and more interesting stuff is possible. For now, my Lightform is sitting in the closet, and I'm sorta surprised they survived this long, to be honest.


Same, I hoped that it had some real-time capabilities. I really wanted to have it tracking an object; heck, I would settle for mapping a rotating display, but it turned out to be rather limited. I kind of liked my LF2 as an all-in-one unit, though. I do realize that structured light has limitations too.

My biggest use case was that I wanted to render output in real time, which the software didn't allow me to do at the time.


Lightform is sick. I wish they'd open source everything instead of (I assume) selling all IP to a patent troll to recoup sunk costs.

If I were a VC, I would make it a covenant in any seed investment that the company had to open-source everything if they didn't make it to profitability.

I honestly feel, in this day and age, patents don't really help you until you reach a scale where they do (and then purely defensively), unless you're a troll.

Don't feed the trolls.


I completely agree.

I think in the case of Lightform, their core projection mapping algorithm is already public and has open source implementations. It's called "structured light". A quick search turned up this open source implementation: https://github.com/jhdewitt/sltk
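For the curious, the core idea behind structured light is simple to sketch. Here's a minimal, illustrative Gray-code pattern generator and decoder in numpy (function names are my own; this ignores the real-world parts like camera capture, thresholding, and noise):

```python
import numpy as np

def gray_code_patterns(width, n_bits):
    """Generate n_bits Gray-code stripe patterns for a projector of the
    given width. Each pattern is a length-width array of 0/1 values;
    projecting these in sequence and photographing the scene lets the
    camera recover which projector column lit each camera pixel."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)  # binary-reflected Gray code per column
    # Pattern k is bit k (from the most significant) of every column's code.
    return [((gray >> (n_bits - 1 - k)) & 1).astype(np.uint8)
            for k in range(n_bits)]

def decode_gray(bits):
    """Invert the Gray code: given the observed bit planes, recover the
    projector column index for each pixel."""
    n_bits = len(bits)
    gray = np.zeros_like(bits[0], dtype=np.int64)
    for k, b in enumerate(bits):
        gray |= b.astype(np.int64) << (n_bits - 1 - k)
    # Gray -> binary via prefix XOR with doubling shifts.
    binary = gray.copy()
    shift = 1
    while shift < n_bits:
        binary ^= binary >> shift
        shift *= 2
    return binary
```

Round-tripping the patterns through the decoder recovers each column index, which is the per-pixel correspondence that projection mapping is built on.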


Maybe that's why they failed; they should have kept it closed and private with tight controls, patents, etc., in order to really profit.

Couldn't they even get in trouble with their investors for having failed to achieve profitability with this misguided "open" strategy?

/devil's-advocate


I could be wrong but I think they built their product on research that was already open and public. I believe the technique predates the company.

Their "moat" was never the mapping algorithm itself; it was the suite of tools (hardware and software) that allowed users to easily map content without understanding the details of that algorithm.

I don't think their problem was that everyone was flocking to the open source tools. I think their problem is that the market for automatic projection mapping is extremely small.


I've followed this company since it was a research project at UIUC. I really admire what they built and feel sad it's shutting down.

Good luck to all, and I look forward to whatever you do next!


Wait it was from UIUC? Damn I never knew. Which lab was it from?


Was Lightform solving the right problem? Once the optical mapping is done, I would guess the hard part remains making custom art for that specific topography. Can someone correct me here?


Per my other comment, I think you are correct; they weren't really solving the right problem. I don't know what problem they should have been focusing on, but the end result didn't seem to hit on anything for me, understanding that maybe I wasn't the target customer.


Yet another company shutting down but keeping their proprietary products proprietary for some bizarre reason. "Since we won't benefit from it any more, no one else should either"?


nobody wants people to get things as valuable* as their hard-earned IP without either paying up, or putting in the pain and sweat just like they did.

also, you wouldn't want the Chinese to get this for free either, right?

*potentially valuable

/angry-snark

I just dunno what to do to counteract this tendency I perceive, which ends up eroding the public domain in favor of private, tradeable "IP".


In my world it's valuable for humanity merely by the fact that it's already been invented. The progress has already been made. Other people could improve upon it instead of reinventing the same thing over and over again. Yes, the Chinese will get it for free. But your company is shutting down; it's not going to earn you a cent more either way. You aren't going to incur any additional losses by open-sourcing your product, are you?


Lightform's LFX project is really cool. Unfortunate that the company is shutting down.


This is a mistake.

They need to pivot to AR/VR


Damn, this is heartbreaking... Lightform is incredible. It's mind-blowing how much damage COVID-19 did to the industry.


Anyone know of any alternatives? This honestly doesn’t seem like a hard technical challenge if you’ve got a high quality webcam.


If you're looking for an all-in-one, end-user like product, the mk360+ ( https://broomx.com/products/mk360 ) might be an option, but it's quite pricey.

If you prefer a more DIY approach, take a projector + PC as hardware and a projection mapping software like MadMapper ( https://madmapper.com/madmapper/features ), which also includes 3D scanning via structured light, exactly like the Lightform devices do. For even more complex scenarios, you could dig into creative coding environments like vvvv ( https://visualprogramming.net/ ), for example.


MadMapper is definitely the best for entry level into this field. It's user friendly, cost effective and multi-platform. The MiniMad is quite a cool little thing too.

With projection mapping there are a couple of different techniques which you can use to achieve a good mapping.

The first is to simply take points on a mesh or grid and warp your video. You can corner pin first via a perspective corner-pin tool and then warp the inner points linearly on a finer level. This works well for simple objects like walls, boxes, etc.
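The perspective corner pin described above amounts to solving a four-point homography; a minimal numpy sketch of that solve (my own illustrative code using the standard DLT formulation — tools like MadMapper handle this internally, plus the finer mesh warp):

```python
import numpy as np

def corner_pin_homography(src, dst):
    """Solve for the 3x3 homography mapping four src corners to four dst
    corners (each a list of (x, y) pairs). Standard direct linear
    transform: stack two equations per point and take the null vector."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the singular vector for the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Map a single (x, y) point through homography H."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w
```

The actual pixel warp is then just this matrix applied per-pixel (usually by the GPU).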

The second is to use a 3D model of the physical object and then treat the projector as a camera. With this method you can calculate the projection by just dragging points in the projector's camera space onto the 3D world space and then calculating the perfect position of your "camera" from that. This second method is what people use when doing large-scale building mappings or complex models.
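That "projector as camera" step boils down to estimating a projection matrix from the 2D-3D point correspondences you drag into place. A rough numpy sketch of the classic DLT version (my own illustrative code; real tools like CamSchnappr refine this further, e.g. with nonlinear optimization):

```python
import numpy as np

def projection_matrix_dlt(pts3d, pts2d):
    """Estimate the 3x4 projection matrix of the projector-as-camera from
    six or more correspondences between 3D model points (X, Y, Z) and the
    2D projector pixels (u, v) dragged onto them. Points must be in
    general position (not all coplanar)."""
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # Null vector of the stacked system gives P up to scale.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 4)

def project(P, pt3d):
    """Project a 3D point through P into projector pixel coordinates."""
    u, v, w = P @ np.append(np.asarray(pt3d, dtype=float), 1.0)
    return u / w, v / w
```

Once you have P, rendering the 3D model through it produces imagery that lands exactly on the physical object.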

Both methods are available in MadMapper. If you wanted to go even more DIY and build your own tools and real-time visuals, then I'd recommend Derivative's TouchDesigner, as it has two tools (Kantan and Stoner) for the 2D approach and a tool called CamSchnappr for the 3D approach.



