Thanks for the questions! Vault doesn't have a deep integration for generating Kubernetes credentials, whereas Infra plugs into users' existing tooling (e.g. kubectl and kubeconfig files) to keep credentials up to date automatically.
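For readers unfamiliar with the mechanism: kubectl supports exec credential plugins, which is the standard hook a tool like Infra can use to refresh credentials transparently. A minimal kubeconfig sketch (the helper command and its arguments here are hypothetical, not Infra's actual CLI):

```yaml
apiVersion: v1
kind: Config
users:
- name: dev
  user:
    exec:
      # kubectl runs this helper on demand; the helper prints an
      # ExecCredential JSON object containing a short-lived token
      # or client certificate, which kubectl then uses and caches
      apiVersion: client.authentication.k8s.io/v1beta1
      command: infra-credential-helper   # hypothetical helper binary
      args: ["get-token"]                # hypothetical subcommand
      interactiveMode: IfAvailable
```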
Infra's different from Teleport in a few ways. Teleport doesn't provide identity provider integrations beyond GitHub (e.g. Okta) in their open source project. They have a different architecture that involves deploying a centralized proxy service (whereas Infra verifies credentials at the destination infrastructure vs at a central proxy). Further, we've designed Infra around an extensible REST API from the start, whereas Teleport uses gRPC.
Infra does work with managed Kubernetes services like AKS/EKS/GKE! It will also work with self-hosted Kubernetes clusters.
Teleport does not require a centralized proxy, because it is based on certificate authorities. You can issue a certificate with or without the Teleport proxy and directly access any cluster that trusts the issuing CA.
Because of this design you can have a completely decentralized system, with cold storage for your CA, an HSM, or any parallel system issuing certificates. There is also no need to revoke credentials, because the certs are short-lived and bound to the device and cluster, so there is less opportunity for pivot attacks.
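To make the cert-based model concrete (a generic Kubernetes sketch, not Teleport-specific): an API server started with `--client-ca-file` trusts any client certificate signed by that CA, mapping the certificate's CN to the username and its O fields to groups. A kubeconfig using such a short-lived certificate might look like this (paths and names are placeholders):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: prod
  cluster:
    server: https://prod.example.com:6443     # placeholder endpoint
    certificate-authority: /etc/ssl/cluster-ca.crt
users:
- name: alice
  user:
    # Short-lived cert issued by the CA the API server trusts;
    # CN=alice becomes the username, O=dev-team becomes a group
    client-certificate: /home/alice/.certs/alice.crt
    client-key: /home/alice/.certs/alice.key
contexts:
- name: prod
  context:
    cluster: prod
    user: alice
current-context: prod
```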
Re: gRPC
The first version of Teleport also had an HTTP/JSON REST API, but we migrated to gRPC to support event streaming and to have one type system across multiple languages and service boundaries.
Re: Managed clusters
Teleport supports all CNCF-compatible clusters, including AKS, EKS and GKE out of the box.
Great point on gRPC having better support for event streaming! We originally built Infra with a gRPC API, but many users we spoke to didn't yet have load balancers or ingress controllers that supported the gRPC protocol (e.g. one user had to consider upgrading their AWS Load Balancer controller to put Infra behind it).
We wanted to remove as many hurdles as possible for teams deploying Infra in their environments. Event streaming will inevitably become an important part of the API (e.g. for features like audit logs), and we'll consider gRPC again for internal components of Infra.
Re: using Teleport without the proxy: how would a target cluster's Kubernetes API server (e.g. an EKS cluster) verify certificates without Teleport's proxy?
> one user had to consider upgrading their AWS Load Balancer controller to put Infra behind it
Huh?
The AWS load balancer for which gRPC is relevant is their Application Load Balancer (ALB), which would require you to terminate TLS at the ALB and does not support mutual TLS (which is how short-lifetime client certificates work in this case). To the best of my knowledge, you can't pass a client-certificate-authenticated gRPC session through an ALB (maybe I'm wrong?).
Typically this requires an NLB, which will treat all TCP traffic (REST and gRPC) the same, so gRPC wouldn't require an upgrade?
My bet is that you'll migrate to gRPC eventually as you scale :) I like the simplicity of an HTTPS/JSON API as well, but it just broke down for us at a certain scale.
Re: Teleport with EKS
True, CNCF clusters support mTLS out of the box, but EKS hides the endpoint and does not let you provision a CA to trust. You will have to run a Teleport proxy inside the EKS cluster to translate mTLS to EKS IAM auth. However, you don't have to have a centralized proxy; you can just deploy a Teleport proxy agent in each cluster and hide your K8s endpoint.
You also don't have to have a single Teleport proxy to do that.
Thanks! Curious, where did HTTP+JSON break down for you? Was it specifically around audit/event streaming? This would be helpful as we consider future updates to Infra, especially since tools like Kubernetes have put HTTP+JSON APIs to the test (at least in their user-facing APIs).
Indeed! EKS and others don't allow custom authentication methods or let you use an external CA for the cluster. Running a proxy agent in each cluster makes sense and is similar to how Infra approaches it; I hadn't seen that configuration in your architecture pages!
Have you considered distributing certificates signed by the cluster CA itself (to avoid proxies altogether)? From 1.22 onwards there's a new expirationSeconds field when creating a certificate signing request: https://github.com/kubernetes/enhancements/issues/2784 . I imagine this will eventually be supported by all the hosted Kubernetes services - we've been watching this closely.
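For reference, a sketch of such a CSR object using the new field (the request value is a placeholder for a base64-encoded PEM CSR generated out of band, e.g. with openssl):

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: alice-short-lived
spec:
  request: <base64-encoded PEM CSR>              # generated out of band
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 3600                        # one hour; Kubernetes 1.22+
  usages:
  - client auth
```

Once approved and signed, the issued certificate expires after the requested interval, so no revocation machinery is needed for routine credential rotation.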
Are you saying that because you can have multiple proxies, they aren't centralized? Or that this is at least one mode you can use, but the standard one uses a proxy?
* The proxy handles SSO and the Web UI, and intercepts traffic for session capture. You can have one proxy for your organization, multiple proxies, or, if you don't want to intercept traffic, no proxies at all.
* The auth server issues certificates and sends audit logs and session recordings to external systems.
* Nodes (end-system agents) are sometimes helpful, but not required. For example, if you want to capture system calls in your SSH sessions, you can deploy a node. Or you can use OpenSSH with Teleport if you wish.
Because Teleport is based on certificate authorities, the following deployments are possible:
* One "centralized" HA pair of proxies intercepting all your traffic (K8s, databases, web, etc.). This is actually helpful in many cases, as you have just one entry point in your system to protect instead of many.
* Multiple, "decentralized" proxies in multiple datacenters. This is helpful for large organizations with many datacenters all over the world.
* No proxies at all. You can issue certificates with or without Teleport and reach your target clusters directly, as long as they trust the CA. It's a bit harder for managed K8s, but easy to do with self-hosted K8s, SSH, databases, etc. that support mTLS cert auth. This is super helpful for integrations with the larger ecosystem - any system that supports cert auth should work with Teleport out of the box.
* You can have one auth server HA pair managing a single certificate authority.
* You can have multiple, independent auth servers (teleport clusters) with certificate authorities and trust established between them.
* You can use your own CA tooling with Teleport.
The way we think about Teleport is that it's a combination of a certificate authority management system, proxies (intercepting traffic and recording sessions), and nodes (for some services, like SSH, providing advanced auditing capabilities with BPF).
You can combine those components, or replace them with whatever makes sense.
As someone who is a big fan of Teleport, sorry, I just don't get it.
> Teleport doesn't provide identity provider integrations beyond GitHub (e.g. Okta) in their open source project
Right, and if you're a small team (5-10 people, like you're targeting) you don't really need SSO on the infra layer. It's a nice-to-have and best practice, but the truth is, by the time you really need it (enough engineers that account management is a pain), you typically have the budget for an Enterprise license.
> They have a different architecture that involves deploying a centralized proxy service (whereas Infra verifies credentials at the destination infrastructure vs at a central proxy).
So either way you need to deploy something central to issue certificates. And if, to quote you, "We plan to make money by running a managed service version of Infra so teams don't need to host and upgrade Infra manually.", isn't that the central proxy service? Yet the open-source version somehow avoids it?
> We plan to make money by running a managed service version of Infra so teams don’t need to host and upgrade Infra manually
So you want to sell to teams that a) are too small to afford the license for a product like Teleport Enterprise, b) have enough money that they can afford a premium product above and beyond the free offering provided by their Kubernetes vendor, like https://github.com/kubernetes-sigs/aws-iam-authenticator (for EKS), c) are willing to install and maintain another agent on their cluster (infra), but aren't willing to install and maintain the central proxy point?
> we've designed Infra around an extensible REST API from the start whereas Teleport uses GRPC.
This isn't really important from a product perspective. For what it's worth, Teleport started with a REST API; they moved to gRPC because, if I recall correctly, gRPC helped them scale to support larger infrastructure better.
If you're launching a product competing with Teleport, which is now by far the most mature product in the space, then currently, at least from where I'm sitting, you aren't offering sufficient added value compared to the incumbent offerings, which also include Cloudflare Access, Check Point Harmony Connect SASE, HashiCorp Boundary (their offerings aren't quite Kubernetes-native, but it's the same idea)...
> you typically have the budget for an enterprise license.
Not all enterprises are the same, and not all companies with more than 100 engineers are ready to dedicate significant capital to yearly access-control costs. Especially when you can "make do" with an open source solution and spend the cash on a product that is less replaceable or more necessary. I would also add that this is the only open-source solution I've seen that actually supports blanket OIDC integration, and more specifically with Google Workspace etc. Most competitors like Teleport, Cloudflare, etc. have proper OIDC integration for an IdP locked behind a paywall. (Would love to know of any that don't.)
> isn't that the central proxy service?
Teleport offers authentication AND a proxy that lets you connect back to your services. The certificates issued for those backend services are usable as long as you can reach the service, but the proxy acts as an identity-aware proxy locked behind your IdP or whatever authentication you are using with Teleport. From what I can tell, Infra does not offer a proxy to connect you back to your network; you would host it somewhere and expect users to be able to route directly to infra.internal.company and k8s.internal.company.
IMO the fact that they are actually offering a fully open source product without locking any features behind a paywall makes them worth watching. Obviously they aren't at parity with Teleport, and they don't currently support SSH or other protocols, but I expect they'll have a lot of support in the community.
> Especially when you can "Make do" with an open source solution and spend the cash on a product that is less replaceable or more necessary
Ah, but you're getting to the crux of my (hopefully constructive) criticism. Ultimately the goal here isn't to create a useful open-source project and offer it for free. The goal is to build a business (OP is YC W21). That means a business model where a) teams actually pay you, and b) the number of teams and the amount of money they are willing to pay, in aggregate, exceed the cost of developing the product.
If offering SSO as part of the open-source core provides enough value that customers do not need to pay you, then your business will fail. And then the open-source project will, in all likelihood, fail, without commercial backing behind it.
If the revenue plan is to sell a managed SaaS tenant, then the price for that managed SaaS tenant must be competitive with established offerings. Which means that it must be competitive with Teleport's managed offering, Cloudflare Access, cloud vendor tie-ins (e.g. IAM authenticator), etc. This sector has enough offerings that it is competitive and the price is quickly getting commoditized. That is not a good strategy for a startup that is not showing a 10x better product than the competition.
SSO on infrastructure is not a must for everyone, but it's a very nice thing to have. Teleport's pricing for small teams doesn't make sense; it's more expensive than GitHub Enterprise, which provides SSO. Infra is very welcome to provide basic features to everyone rather than locking them behind a "contact us" price.
We plan to build a managed service to provide a 'centralized experience'. This is where we'd issue certificates/tokens for users and machines. That said, many of our users want to make sure that, should Infra's server go down, their access continues to work for a configured time interval. This is why we validate credentials on the destination side.
Regarding gRPC, we actually started with that, and based on feedback added REST API support to work with users' existing systems.
Great question! We looked heavily into Dex before creating Infra, and even spoke with their maintainers.
Dex is a federated OIDC provider. Most managed Kubernetes services (e.g. Azure AKS) don't support using custom OIDC providers for authentication and therefore can't easily be wired up to use Dex. Infra is designed to work with any Kubernetes distribution regardless of where it's hosted.
Even with self-hosted clusters that do support Dex, Dex doesn't manage authorization mappings (i.e. Kubernetes RBAC) for users and groups. Teams still need to manually create & remove RBAC roles for users and groups as they are added and removed from identity providers such as Okta. Infra can be configured to map roles for users and groups to Kubernetes clusters, and we're working to support dynamic provisioning protocols such as SCIM to make sure users are automatically revoked as they are removed from identity providers.
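To illustrate the kind of mapping involved (a plain Kubernetes sketch; the binding and group names are made up), this is what teams otherwise maintain by hand for each identity-provider group:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: platform-team-view       # hypothetical binding name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view                     # built-in read-only role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: platform-team            # group claim asserted by the IdP
```

When someone is removed from the group in the identity provider but the binding isn't updated, access lingers; automating this mapping (and revocation via SCIM) is the gap described above.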