Hacker News | thorgaardian's comments

A lot of people will argue that state helps protect against drift, but the real reason I find you have to have state is to store values that won't be returned a second time, while still constructing and connecting the graph of resources in the IaC templates. For example, if you declare the need for an RDS database and wire its output credentials into another application, you'll need state for the apply to work a second time, because you'll never be able to retrieve those values from the target provider again.
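A toy sketch of that write-once behavior (all names here are hypothetical; this is not the real Terraform or AWS API, just a model of why the state file must keep a copy of the secret):

```python
# Hypothetical provider: the master password is returned exactly once,
# at creation time, mirroring how RDS hands back credentials.
class FakeProvider:
    def __init__(self):
        self._db = None

    def create_database(self):
        self._db = {"endpoint": "db.example.internal", "password": "s3cret"}
        return dict(self._db)

    def read_database(self):
        # Subsequent reads expose the endpoint but never the secret.
        return {"endpoint": self._db["endpoint"]}

STATE = {}  # stands in for terraform.tfstate

def apply(provider):
    if "db" not in STATE:
        # First apply: create the DB and persist its one-time credentials.
        STATE["db"] = provider.create_database()
    # Any later apply can only re-read the endpoint; wiring the password
    # into another resource works solely because state kept a copy.
    live = provider.read_database()
    return {"endpoint": live["endpoint"], "password": STATE["db"]["password"]}

provider = FakeProvider()
first = apply(provider)
second = apply(provider)
assert first["password"] == second["password"] == "s3cret"
```

Without `STATE`, the second `apply` has no way to recover the password, which is the failure mode described above.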


Yeah, and now all your creds are available for everyone to see. Instead, use IAM authentication for RDS, or if that's not possible, store the creds in SSM or Secrets Manager.

Yeah, it's not fully transactional, but it will work fine in practice.

State is just a poor crutch.


Looks awesome! I haven't had the chance to dive into eBPF yet, but I had hoped someone would be able to use it in a clever way like this!

I was digging through the docs and it looks like you have custom language detection. Did you consider extracting the language detection features from buildpacks to do this? I imagine you'd get more reliable results and have less to maintain if you used that as the basis.


Yes, we're actually using a combination of env vars, process names, linked libraries, and container metadata to detect the language.


Interesting use case for it. Without prior knowledge of a solution like this, I would have suggested sending the webhooks to a queue-backed notification system (e.g. SNS backed by SQS) and subscribing to the event topic, but it sounds much easier to configure and manage the way you instrumented it. Might be a good use case for me to try out!


This is something you can easily configure with our automatic retry function. We have an option to return a pre-configured response to the caller, and put the request in a queue to be retried until successful. This allows you to have a sustained outage while making sure all calls are eventually delivered.
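A minimal sketch of that pattern (function and field names are illustrative, not the vendor's actual API): acknowledge immediately with a pre-configured response, enqueue the request, and re-drive the queue until the destination accepts it.

```python
from collections import deque

def receive_webhook(event, retry_queue):
    # Return a canned response to the caller and defer real delivery.
    retry_queue.append(event)
    return {"status": 202, "body": "accepted"}

def drain(retry_queue, deliver):
    delivered, attempts = [], 0
    while retry_queue:
        event = retry_queue.popleft()
        attempts += 1
        try:
            deliver(event)
            delivered.append(event)
        except ConnectionError:
            retry_queue.append(event)  # retry later until successful
        if attempts > 100:  # safety valve for this sketch
            break
    return delivered

# Simulate a destination that is down for the first two attempts.
calls = {"n": 0}
def flaky_deliver(event):
    calls["n"] += 1
    if calls["n"] <= 2:
        raise ConnectionError("destination still down")

q = deque()
resp = receive_webhook({"id": 1}, q)
assert resp["status"] == 202
assert drain(q, flaky_deliver) == [{"id": 1}]
```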


> This allows you to have a sustained outage while making sure...

Re-driving queue backlogs at services recovering from sustained outages almost always ends in tears. Tread carefully. :)


Typically people use two pools for circuit breaking, with the limit set lower on retries: https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overv...
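A minimal sketch of that two-pool idea (a simplified model, not Envoy's actual implementation): first-try requests and retries draw from separate concurrency budgets, with the retry budget capped much lower so a retry storm can't exhaust the recovering service.

```python
# Each pool is a simple concurrency limiter acting as a circuit breaker.
class Pool:
    def __init__(self, limit):
        self.limit, self.active = limit, 0

    def acquire(self):
        if self.active >= self.limit:
            return False  # over budget: shed this request
        self.active += 1
        return True

    def release(self):
        self.active -= 1

first_try = Pool(limit=100)
retries = Pool(limit=3)  # deliberately lower cap for retries

def admit(is_retry):
    pool = retries if is_retry else first_try
    return pool.acquire()

# Retries beyond the retry budget are shed even though the
# first-try pool still has plenty of headroom.
assert all(admit(is_retry=True) for _ in range(3))
assert admit(is_retry=True) is False
assert admit(is_retry=False) is True
```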


Yeah, this is what I've seen most services that rely on webhooks from another service do. Add in some monitoring of how many events are not yet processed (set an alarm when there are X events in the queue) and you're done!


We're currently building a GitHub integration which receives webhooks and kicks off a bunch of processing actions based on the event type. Your suggestion sounds like a great way to add some observability to the service -- thanks!


What you described in the first sentence is commonly referred to as an API gateway: protecting ingress traffic into a publicly accessible service/app (e.g. Kong, AWS API Gateway, Ambassador, etc.). Lately there have been more generalized solutions in this category for inter-process communication via service meshes like Istio, Gloo, AWS App Mesh, and others, all of which seem to offer a solution that works for both internal traffic routing and external (when whitelisted).

Can you offer a description of your product that differentiates it from service mesh solutions? Did you build your own proxy software, or are you built on top of Envoy like many of the other available solutions?


We are not built on top of Envoy and have built our own proxy.

Many of the service mesh solutions require you to deploy and manage them as an on-premise installation. Our primary offering is a hosted solution, but we also offer a managed service for on-premise installations.

As you've correctly pointed out, the service mesh solutions can allow routing of external traffic, but by focusing on external calls there are features that make sense for us to build that wouldn't make sense in something like Istio/Gloo/App Mesh. For example, we can build an enhanced experience around third-party APIs to better understand the calls, errors, quotas, etc. that are specific to each provider.


That last paragraph is an interesting addition I hadn't considered, actually, so great answer! While I'd be hesitant to use a third-party, hosted solution for this use case, I can also see how that affords you the ability to optimize fulfillment of requests per destination across all your users. Is it safe to assume that long term you'll offer this to larger customers via private installation to alleviate security and latency concerns, while still benefiting from the destination knowledge of the central hub to configure routing rules?


Congratulations on the launch!

And thanks for your explanations on how your proxy is similar to and different from API gateway or service mesh solutions.

Having worked on both production monitoring and an API gateway for a Fortune 100 company, I consider monitoring and the proxy each valuable in its own right, and I can envision scenarios where I'd want a standalone product offering for one but not the other.


Why did you build your own proxy instead of using Envoy? What shortcomings did Envoy have?


We wanted to architect a system that made it easy to deploy proxy nodes to multiple regions and clouds. We also wanted it to be easy to add functionality specific to our feature set. While we might have been able to achieve our goals by modifying an existing proxy, it made more sense to us to build our own. I have built proxies in previous companies and this was something I was very comfortable doing.


Can you expand on which specific part of Envoy prohibited that?

Additionally, as other commenters mentioned, almost every company has rallied around Envoy and is spending considerable time/money making it better. If your solution isn't as performant as Envoy, rolling your own seems like a poor architectural choice, especially given the time/money constraints startups have.


I’d speculate this is likely a result of one or both of the following:

1. A de-risking strategy. Investing early effectively discounts a future acquisition, but doesn't go so far as to bet the farm on the business's success. They would also gain very early and regular access to financials that would tell them whether it's worth the follow-up plunge or whether it's on its way to bust.

2. The founders thought there was more room to grow independently, but wanted to keep Visa friendly and close to home. The best way to do that is to turn an acquisition conversation into a partnership/investment conversation.


This is a problem I've seen at every company I've ever worked at. Everyone starts off thinking they'll have independence, but it's only natural to build on what already exists and thus you end up with complex dependency topologies.

Large software companies like Google/Facebook build their own opinionated frameworks for publishing services that include citing dependencies via config files. Internal engines then scrape these configs and manage the relationship topology across environments. As far as SWEs are concerned, it's like magic.
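A toy sketch of what that scraping engine does (config shape and service names are made up for illustration): each service cites its dependencies in a small config, and the engine assembles them into a topology and derives a safe deploy order.

```python
# Hypothetical per-service configs citing their dependencies.
configs = {
    "web":    {"depends_on": ["auth", "orders"]},
    "orders": {"depends_on": ["db"]},
    "auth":   {"depends_on": ["db"]},
    "db":     {"depends_on": []},
}

def build_topology(configs):
    # Edges point from each service to the dependencies it cites.
    return {name: set(cfg["depends_on"]) for name, cfg in configs.items()}

def deploy_order(graph):
    # Depth-first topological sort: dependencies come up before consumers.
    order, seen = [], set()
    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for dep in graph[node]:
            visit(dep)
        order.append(node)
    for node in graph:
        visit(node)
    return order

graph = build_topology(configs)
order = deploy_order(graph)
assert order.index("db") < order.index("orders") < order.index("web")
```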

I'm working on trying to standardize such a framework, https://docs.architect.io, and would love preliminary feedback.


Confirm.io | Backend Engineer | Boston, MA (ONSITE) | Full-time

Confirm.io is an 18-month-old, Series B funded SaaS startup providing APIs and SDKs to authenticate state- and federally-issued identity documents. Our team specializes in machine learning techniques to detect differences between real documents and forgeries, and offers those capabilities to customers to embed in their mobile apps. Being able to reliably trust the identity of users is crucial for high-risk mobile transactions, and we aim to deliver that trust with as little friction as possible.

Our distributed architecture is powerful, but requires top-notch developers to manage and proactively contribute to it. We're seeking a backend engineer to join our platform team and deliver the APIs that feed intelligent and curated results to customers, so they can reliably prevent fraud within their business.

More details: https://www.confirm.io/careers#job-35483


Looks just like that. As dstaten said, "just Promise.all but for microservices". I've been contributing to an open-source project called snappi.io, a microservice blueprint that includes very comparable dynamic RPC functionality. Combining RPC with standardized service contracts allows us to create and inject stubs for distributed systems as they get shipped, solving the problems described in the article. Happy to see companies like Twitter following a similar path, and we're hoping to bring those patterns to more applications powered by a wide variety of tech stacks.
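For concreteness, the "Promise.all but for microservices" idea maps directly onto `asyncio.gather` in Python: fan out calls to several services concurrently and collect all the results. The service calls below are stand-ins, not a real RPC client.

```python
import asyncio

async def call_service(name, delay):
    # Pretend network round trip to a downstream service.
    await asyncio.sleep(delay)
    return f"{name}: ok"

async def fan_out():
    # Like Promise.all: all calls run concurrently, results keep order.
    return await asyncio.gather(
        call_service("users", 0.01),
        call_service("orders", 0.02),
        call_service("billing", 0.01),
    )

results = asyncio.run(fan_out())
assert results == ["users: ok", "orders: ok", "billing: ok"]
```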


That's very interesting. Have you looked at linkerd.io (used in production at large Kubernetes deployments)?


I have, and personally I'm quite fond of it, but it doesn't quite go far enough imo. Linkerd is very lean and requires virtually zero knowledge of how the underlying service it manages works, but knowledge of services' interface contracts is important.

Snappi introduces some mild requirements for how a service exposes its functions, but once that's done the rest can be automated at deploy time. Not only can service-based load balancers be created dynamically, but RPC stubs can be created for each service and injected into the peer services that depend on them. For example, if ServiceA needs to access ServiceB, it can do so by referencing this RPC stub, knowing that the location of ServiceB will be injected at deploy time to ensure requests are fulfilled for the environment.
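A minimal sketch of that stub-injection idea (the class, method, and env var names are all hypothetical, not snappi's actual API): the stub is generated against ServiceB's contract, and the concrete address is resolved from the environment at deploy time.

```python
import os

class ServiceBStub:
    """Generated stub for ServiceB; its location is injected per environment."""

    def __init__(self):
        # The deploy tooling sets this env var; the default is a placeholder.
        self.base_url = os.environ.get("SERVICEB_ADDR", "http://serviceb.local")

    def get_user(self, user_id):
        # A real stub would issue the RPC; here we just show the routing.
        return f"GET {self.base_url}/users/{user_id}"

# Simulate the deploy-time injection for a staging environment.
os.environ["SERVICEB_ADDR"] = "http://serviceb.staging:8080"
stub = ServiceBStub()
assert stub.get_user(42) == "GET http://serviceb.staging:8080/users/42"
```

ServiceA's code never hard-codes ServiceB's location; the same stub works in every environment because only the injected address changes.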

There are a number of benefits to this approach for a single application, but one of the larger benefits of consistent specs for contract creation is that individual services will be far easier to share. We're trying to create a structure that will not only empower us to re-use our own services, but also to consume services published by others by simply specifying them as a dependency.


You should get in touch with @beeps of Kubernetes. There have been quite a few discussions about replacing the iptables-based Service architecture of k8s with something like Ingress (ingress-all-the-way).

Your work could be perfectly integrated into k8s


Thanks for the suggestion! We're actually using a lot of k8s inspired processes to power the tool, and may end up building on top of k8s as we learn more about how other developers use snappi. Definitely a lot of synergies, and I'll be sure to reach out to @beeps to talk about them.


> I think that standardisation should happen at the level of the stack/component (not at the application level). Most application developers don't know enough about specific components like app servers, databases, message queues, in-memory data stores... to be able to effectively configure them to run and scale on K8s (it's difficult and requires deep knowledge of each component).

Can't agree more with this, but I would add that it's not limited to the specific components listed, like databases, message queues, and others. Getting any component or service configured to autoscale on K8s and fit into a larger infrastructure can often require far more working knowledge than should be necessary. Standardizing the interface these components use to publish themselves would help K8s take on this responsibility more fully. I can only speak for myself, but I for one would happily adopt an interface like this if it meant seamless distribution, autoscaling, and consumption by peer components.

The last part about consumption by peers is important as well. Though the standardized interface would enable a higher level of scale automation, the standardization of that automation could be translated into interface assumptions for external components too. In the Redis example above, a standardized interface for the service would mean not only that K8s can deploy it automatically, but also that other services can make similar assumptions about its location in a deployed environment.

