Hacker News | yihlamur's comments

The short answer is yes.

At the moment, at Vela, we run the model on all newly created startups and flag the highest-probability ones for investment, at scale.

However, we'll also incorporate it into our product so that entrepreneurs can submit their LinkedIn profiles and get immediate feedback.


Thanks, @djusby!

Same here. It's a game-changing technology that enables investors to allocate capital to world-changing projects faster.


Hi @yihlamur, Thanks for being so prolific in the comments here.

Looks like the OP also posed a few questions. Would love to hear any thoughts you have on those! Very curious to hear your thoughts.


Thanks!

You're right - the trained model is proprietary to our business and currently in production.


Today we're excited to announce our provisional patent application and paper on arXiv: GPTree

GPTree outperforms random selection by ~10x and the world's best experts by ~3x.

The cool part is that:

1. It's explainable, not a black box: a decision tree that humans can understand.

2. Human-machine harmony beats both machine-only and human-only decision-making.

3. It's potentially applicable, with high performance, to any decision-making use case.

We fine-tuned the model for our own use-case at Vela Partners, picking outlier startups at their inception stage.

The reason why we love this research problem is that humans are so bad at picking startups at their inception stage.

For context, only ~2% of US-based, investor-backed startups picked at the inception stage become outlier returns. Y Combinator and tier-1 VCs hover around ~3% and ~6%, respectively. Ten-fold cross-validated GPTree is at ~8%, and the most fine-tuned version is at ~18%.

Please take a moment, take a deep breath, and let that sink in...

GPTree can find 1 outlier startup out of every 5 of its investments at the inception stage. That may translate into a 10x+ return fund for whoever uses it, if the future behaves as forecast.

Excited to hear HN's feedback.


Curious how the human intervention worked?

Is there a risk of bias? When VCs make decisions, there is no hindsight. But when you ask humans to evaluate predictions, you are necessarily using past (training) data, and the humans may recognize generally successful patterns or even the examples themselves.

For example: the model outputs that it's considering an investment in a company that lets drivers pick up passengers on the way to their destination and earn some money as well. A person may think, "Duh, this is Uber! Invest!", thus inflating the "success" rate.


Great question.

We built a dataset of past successes and failures, with training, validation, and test sets. The model was first trained with some context from human experts, such as "Being a repeat entrepreneur is positive". Then the model built out an explainable decision tree without human intervention. Lastly, a human looked into the trees the machine came up with and improved their logic further. For instance, the model might ask a vague question such as "Is the entrepreneur based in an innovation hub?", and the expert may prompt it: "Be specific - name particular cities or regions to improve this question".

Then the model would re-run and try to improve that question, with the goal of increasing precision.

So this goes on and on with expert-in-the-loop.

Sometimes the expert may give wrong advice! :) And in that case, the performance would decrease...
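To make the loop concrete, here's a minimal sketch of expert-in-the-loop refinement as described above. All function names, the keyword-based scoring, and the rewrite rule are hypothetical stand-ins, not Vela's actual system; the point is the alternation between model evaluation and expert rewrites, with wrong advice rejected when the score drops.

```python
def evaluate_precision(questions, labeled_keywords):
    """Toy scoring: a question 'matches' when it mentions a keyword the
    labeled data says is predictive. Specific questions score higher."""
    hits = sum(1 for q in questions for kw in labeled_keywords if kw in q)
    return hits / max(len(questions), 1)

def expert_feedback(question):
    """Stand-in for the human expert: rewrite vague questions to be specific."""
    if "innovation hub" in question:
        return "Is the entrepreneur based in SF, NYC, London, or Berlin?"
    return question  # the expert is satisfied with this question

def refine(questions, labeled_keywords, rounds=5):
    """Alternate model evaluation and expert rewrites until precision stops
    improving; wrong expert advice is rejected by keeping the old tree."""
    best = evaluate_precision(questions, labeled_keywords)
    for _ in range(rounds):
        revised = [expert_feedback(q) for q in questions]
        score = evaluate_precision(revised, labeled_keywords)
        if score <= best:  # expert advice didn't help: stop and keep old tree
            break
        questions, best = revised, score
    return questions, best

questions = [
    "Is the entrepreneur based in an innovation hub?",
    "Is the founder a repeat entrepreneur?",
]
refined, precision = refine(questions, labeled_keywords=["SF", "repeat"])
```

In this toy run, the vague "innovation hub" question gets rewritten to name specific cities, and precision improves, mirroring the back-and-forth described in the comment.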

Lastly, your example is a great one. Humans may introduce bias into the process. For that reason, we also built a model with no human intervention, using 10-fold cross-validation. And that model still outperforms all humans...

Given that expert-in-the-loop is a time-intensive and expensive process, we did not do 10-fold cross-validation for it. However, our initial observations indicate that the magic happens when humans and machines work together.
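For readers unfamiliar with the 10-fold cross-validation mentioned above, here is a minimal, dependency-free sketch: split the data into 10 folds, hold one out as a test set each round, and average the held-out precision. The tiny rule-based "model" and the toy data are made up for illustration only.

```python
def k_fold_indices(n, k=10):
    """Yield (train_idx, test_idx) index pairs for k-fold cross-validation."""
    fold = n // k
    for i in range(k):
        test = list(range(i * fold, (i + 1) * fold if i < k - 1 else n))
        train = [j for j in range(n) if j not in test]
        yield train, test

def precision(preds, labels):
    """Fraction of positive predictions that were actually positive."""
    tp = sum(p and y for p, y in zip(preds, labels))
    pp = sum(preds)
    return tp / pp if pp else 0.0

# Toy data: feature = "is a repeat founder", label = "became an outlier".
X = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1] * 3
y = [1, 0, 0, 0, 1, 0, 1, 0, 0, 1] * 3

scores = []
for train, test in k_fold_indices(len(X), k=10):
    # "Training" for this rule-based stand-in is a no-op:
    # predict outlier iff repeat founder.
    preds = [X[i] for i in test]
    labels = [y[i] for i in test]
    scores.append(precision(preds, labels))

mean_precision = sum(scores) / len(scores)
```

The averaged held-out precision is the kind of number being compared against the 2%/3%/6% human baselines in the thread.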


What about return on investment? You can pick more startups, but your average return on investment might be lower.


We calculated what would happen if we ran a fund with the model's precision and recall metrics.

The simulated fund with GPTree returns 10x over 10 years, versus the 3x that typical VC funds return.
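A back-of-the-envelope check of how hit rate translates into a fund multiple, under assumptions that are mine, not the thread's: equal-sized checks, an "outlier" returns ~50x invested capital, and everything else returns roughly nothing.

```python
def fund_multiple(hit_rate, outlier_return=50.0, other_return=0.0):
    """Expected fund return multiple given the model's precision (hit rate),
    assuming equal-sized checks across all investments."""
    return hit_rate * outlier_return + (1 - hit_rate) * other_return

typical_vc = fund_multiple(0.06)  # ~6% hit rate: tier-1 VC, per the thread
gptree = fund_multiple(0.18)      # ~18% hit rate: fine-tuned GPTree
```

Under these assumed payoffs, a ~6% hit rate lands near 3x and a ~18% hit rate near 9x, roughly in line with the 3x-vs-10x claim above; the exact multiple obviously depends on the outlier payoff you assume.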


Today we are excited to launch Vela Terminal.

Overview

Vela Terminal helps VCs identify growing market trends, map out relationships, and find fast-growing startups.

Our story

In 2016, we left Google to pursue deep learning to address a major global problem.

After exploring various industries, we discovered that early-stage investors who invest in innovation are not innovating in their very own businesses.

Isn't that an oxymoron?!

VCs were simply using their networks to source companies and their intuition to make investment decisions. We perceived this as suboptimal and embarked on research: how can one allocate resources in the most efficient way to the highest-impact projects in the world?

Over the years, we published 20+ pre-peer-review articles and open-source machine learning models on our GitHub page, in collaboration with the University of Oxford.

We raised $25M and battle-tested our product by partnering with a handful of excellent VC firms and by making various outlier investments.

Our objective with this launch is to help everyone get access to a product-led, AI-native VC partner and enable others to be inspired by our research.

Our vision is to accelerate innovation. That is the only way we can advance our world and humanity.

Introducing Vela Terminal: Co-pilot for VCs

We are the world's first product-led and AI-native VC.

VC has inherently been a service business. So far, no VC out there has built a product-led strategy to scale its business. To the best of our knowledge, Vela Terminal is the only crazy and unconventional approach out there.

Though we built Vela Terminal to scratch our own itch, other fellow VCs, entrepreneurs, and LPs find what we are building highly valuable.

Here's how this baby works:

1. Identify growing market trends

You can get a holistic view of what's growing and identify specific ideas in markets such as developers, sustainability, and e-commerce. You can ask any question to deep-dive and have a conversation.

2. Map out hidden relationships

As much as we would like to automate the whole value chain, the reality is that relationships matter in startups. You can understand who invests with whom, gauge the strength of connections, and find introduction paths to build relationships with the right people.

3. Analyze fast-growing startups

Getting a final list of soon-to-be-hot startups earlier than others is key in VC. That is exactly what our core IP signals to us. Afterward, our AI agents do the hard work of an associate: they analyze, generate pros and cons and due-diligence questions, and score each startup.
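The "introduction paths" in step 2 above can be sketched as a shortest path through a co-investment graph. This is a generic breadth-first search, not Vela Terminal's actual implementation, and the graph data below is made up.

```python
from collections import deque

def intro_path(graph, start, target):
    """Breadth-first search: shortest chain of introductions from start
    to target, where graph maps each person to the people they know."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no introduction path exists

# Hypothetical co-investment network.
graph = {
    "you": ["angel_a", "vc_b"],
    "angel_a": ["founder_x"],
    "vc_b": ["vc_c"],
    "vc_c": ["founder_x"],
}
path = intro_path(graph, "you", "founder_x")
```

BFS guarantees the path with the fewest hops, which is what you want when each hop is an introduction you have to ask for.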

Business model

We are disrupting VC like YC did with the accelerator model and a16z did with its agency and PR model. Our model is product-led, AI-native, and open-source-first.

Our limited partners and entrepreneurs in residence get access to our premium features such as proprietary sourcing algorithms. The rest is free and open to everyone.

We consciously keep a certain fund size ($25M) to remain collaborative while being assertive and high-conviction enough to lead with $500K-1M first checks.

Vela Terminal is just the first step toward our vision of accelerating innovation.

Please share any and all feedback. We are all ears.


This is an exciting product - but it is challenging to convince decision makers to try out your solution in the first place.

How do you overcome the customer's build-versus-buy mindset, and the internal competition/enemies it creates within your customers' organizations?

It might be a more straightforward decision when the customer is starting from scratch. However, when the customer is invested in an in-house solution, what does it take to convince them to try yours?


Thanks! For build-versus-buy, we have a 3-part strategy:

1) Win ICs: Do the "crappy" work of running marketplace search really well. This is ops, data logging and correctness, A/B testing, and managing the complexity of requirements from all competing teams who want to manipulate search results and boost things. These are things that backend search teams usually don't love, but we solve their problems so that they can focus on their expertise and ship features.

2) Don't compete, combine: Our approach lets us combine all competing recommendation systems into a unified model. There is never a this-or-that decision, or a feeling of losing out. This also applies to other vendors. It's a pain for ML ops, but it's worth it. From an ML standpoint, mixing different systems typically outperforms any component system, so long as you have the infrastructure and parameter-complexity management to handle them.

3) Build a brand of being the best: Not everything in big companies is engineering experience and metrics. Decisions get made when you're the hot solution used by the cool people you want to be like. We deliberately focus on working with hot marketplaces and hiring awesome engineers with top experience to build this brand.


Design is slick and mobile friendly.

I specifically like that I can select topics and share that as a link.

