Local development against a remote Kubernetes or OpenShift cluster

Telepresence 2: fast, efficient local development for Kubernetes microservices

Telepresence gives developers infinite scale development environments for Kubernetes.

Slack: Discuss in the #telepresence channel.

With Telepresence:

  • You run one service locally, using your favorite IDE and other tools
  • You run the rest of your application in the cloud, where there is unlimited memory and compute

This gives developers:

  • A fast local dev loop, with no waiting for a container build / push / deploy
  • Ability to use their favorite local tools (IDE, debugger, etc.)
  • Ability to run large-scale applications that can't run locally

Quick Start

A few quick ways to start using Telepresence

  • Telepresence Quick Start
  • Install Telepresence
  • Contributor's Guide
  • Meetings: Check out our community meeting schedule for opportunities to interact with Telepresence developers


Telepresence documentation is available on the Ambassador Labs website.

Telepresence 2

Telepresence 2 is based on learnings from the original Telepresence architecture. Rewritten in Go, Telepresence 2 provides a simpler and more powerful user experience, improved performance, and better reliability than Telepresence 1. More details on Telepresence 2 are below.


Install an interceptable service:

Start with an empty cluster:

$ kubectl create deploy hello --image=k8s.gcr.io/echoserver:1.4
deployment.apps/hello created
$ kubectl expose deploy hello --port 80 --target-port 8080
service/hello exposed
$ kubectl get ns,svc,deploy,po
NAME                        STATUS   AGE
namespace/kube-system       Active   53m
namespace/default           Active   53m
namespace/kube-public       Active   53m
namespace/kube-node-lease   Active   53m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP                <none>        443/TCP   53m
service/hello        ClusterIP                <none>        80/TCP    2m

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hello   1/1     1            1           2m

NAME                        READY   STATUS    RESTARTS   AGE
pod/hello-9954f98bf-6p2k9   1/1     Running   0          2m15s

Check telepresence version

$ telepresence version
Client: v2.6.7 (api v3)
Root Daemon: v2.6.7 (api v3)
User Daemon: v2.6.7 (api v3)

Set up the Traffic Manager in the cluster

Install Traffic Manager in your cluster. By default, it will reside in the ambassador namespace:

$ telepresence helm install

Traffic Manager installed successfully

Establish a connection to the cluster (outbound traffic)

Let telepresence connect:

$ telepresence connect
Launching Telepresence Root Daemon
Launching Telepresence User Daemon
Connected to context default

A session is now active and outbound connections are routed to the cluster; in effect, your laptop is "inside" the cluster.

$ curl hello.default
real path=/

server_version=nginx: 1.10.0 - lua: 10001

-no body in request-

Intercept the service, i.e. redirect traffic destined for it to your laptop (inbound traffic)

Add an intercept for the hello deployment on port 9000. Here, we also start a service listening on that port:

$ telepresence intercept hello --port 9000 -- python3 -m http.server 9000
Using Deployment hello
    Intercept name         : hello
    State                  : ACTIVE
    Workload kind          : Deployment
    Destination            :
    Service Port Identifier: 80
    Volume Mount Point     : /tmp/telfs-524630891
    Intercepting           : all TCP requests
Serving HTTP on port 9000 ...

The python3 -m http.server process is now running on port 9000 and will run until terminated with <ctrl>-C. Access it from a browser using http://hello/, or use curl from another terminal. With curl, it returns an HTML listing of the directory where the server was started. Something like:

$ curl hello
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>Directory listing for /</title>
<h1>Directory listing for /</h1>
<li><a href="file1.txt">file1.txt</a></li>
<li><a href="file2.txt">file2.txt</a></li>

Observe that the python service reports that it's being accessed: - - [16/Jun/2022 11:39:20] "GET / HTTP/1.1" 200 -

Since telepresence is now intercepting services in the default namespace, all services in that namespace can now be reached directly by their name. You can of course still use the namespaced name too, e.g. curl hello.default.

Clean-up and close daemon processes

End the service with <ctrl>-C and then try curl hello.default or http://hello.default again. The intercept is gone, and the echo service responds as normal. Using just curl hello will no longer succeed. This is because telepresence stopped mapping the default namespace when there were no more intercepts using it.

Now end the session too. Your desktop no longer has access to the cluster internals.

$ telepresence quit
Telepresence Network disconnecting...done
Telepresence Traffic Manager disconnecting...done
$ curl hello.default
curl: (6) Could not resolve host: hello.default

The telepresence daemons are still running in the background, which is harmless. You'll need to stop them before you upgrade telepresence. That's done by passing the options -u (stop user daemon) and -r (stop root daemon) to the quit command.

$ telepresence quit -ur
Telepresence Network quitting...done
Telepresence Traffic Manager quitting...done

What got installed in the cluster?

Telepresence installs the Traffic Manager in your cluster if it is not already present. This deployment remains unless you uninstall it.

Telepresence injects the Traffic Agent as an additional container into the pods of the workload you intercept, and will optionally install an init-container to route traffic through the agent (the init-container is only injected when the service is headless or uses a numerical targetPort). The modifications persist unless you uninstall them.
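Injection can also be requested per workload through a pod-template annotation that the agent-injector webhook looks for. A minimal, illustrative sketch (the annotation name is the one documented for the Telepresence agent-injector; the container spec below is hypothetical and should be adapted to your workload):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
      annotations:
        # Ask the agent-injector webhook to inject the Traffic Agent
        # proactively (set "disabled" to opt the workload out instead).
        telepresence.getambassador.io/inject-traffic-agent: enabled
    spec:
      containers:
        - name: echoserver
          image: k8s.gcr.io/echoserver:1.4   # example image, not prescribed
          ports:
            - containerPort: 8080
```

With the annotation set to enabled, the agent is injected as soon as the pod is (re)created, rather than on the first intercept.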

At first glance, we can see that the deployment is installed ...

$ kubectl get svc,deploy,pod
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP                <none>        443/TCP   7d22h
service/hello        ClusterIP                <none>        80/TCP    13m

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hello   1/1     1            1           13m

NAME                         READY   STATUS    RESTARTS        AGE
pod/hello-774455b6f5-6x6vs   2/2     Running   0               10m

... and that the traffic-manager is installed in the "ambassador" namespace.

$ kubectl -n ambassador get svc,deploy,pod
NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/traffic-manager   ClusterIP   None           <none>        8081/TCP   17m
service/agent-injector    ClusterIP                <none>        443/TCP    17m

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/traffic-manager   1/1     1            1           17m

NAME                                  READY   STATUS    RESTARTS   AGE
pod/traffic-manager-dcd4cc64f-6v5bp   1/1     Running   0          17m

The traffic-agent is installed too, in the hello pod, here together with an init-container because the service uses a numerical targetPort.

$ kubectl describe pod hello-774455b6f5-6x6vs 
Name:         hello-774455b6f5-6x6vs
Namespace:    default
Priority:     0
Node:         multi/
Start Time:   Thu, 16 Jun 2022 11:38:22 +0200
Labels:       app=hello
Annotations: enabled
Status:       Running
Controlled By:  ReplicaSet/hello-774455b6f5
Init Containers:
  tel-agent-init:
    Container ID:  containerd://e968352b3d85d6f966ac55f02da2401f93935f6df1f087b06bbe1cfc8854d5fb
    Image ID:      …@sha256:2652d2767d1e8968be3fb22f365747315e25ac95e12c3d39f1206080a1e66af3
    Port:          <none>
    Host Port:     <none>
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 16 Jun 2022 11:38:39 +0200
      Finished:     Thu, 16 Jun 2022 11:38:39 +0200
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/traffic-agent from traffic-config (rw)
      /var/run/secrets/ from kube-api-access-wzhhs (ro)
Containers:
  echoserver:
    Container ID:   containerd://80d4645769a06b8671b5a4ce29d28abfa72ce5659ba96916c231bb9629593a29
    Image ID:       sha256:523cad1a4df732d41406c9de49f932cd60d56ffd50619158a2977fd1066028f9
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 16 Jun 2022 11:38:40 +0200
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/ from kube-api-access-wzhhs (ro)
  traffic-agent:
    Container ID:  containerd://ef3605a60f7c02229f156e3dc0e99f9b055fba1037587513871e64180670d0a4
    Image ID:      …@sha256:2652d2767d1e8968be3fb22f365747315e25ac95e12c3d39f1206080a1e66af3
    Port:          9900/TCP
    Host Port:     0/TCP
    State:          Running
      Started:      Thu, 16 Jun 2022 11:38:41 +0200
    Ready:          True
    Restart Count:  0
    Readiness:      exec [/bin/stat /tmp/agent/ready] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      _TEL_AGENT_POD_IP:   (v1:status.podIP)
      _TEL_AGENT_NAME:     hello-774455b6f5-6x6vs (v1:metadata.name)
    Mounts:
      /etc/traffic-agent from traffic-config (rw)
      /tel_app_exports from export-volume (rw)
      /tel_pod_info from traffic-annotations (rw)
      /var/run/secrets/ from kube-api-access-wzhhs (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-wzhhs:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  traffic-annotations:
    Type:  DownwardAPI (a volume populated by information about the pod)
      metadata.annotations -> annotations
  traffic-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      telepresence-agents
    Optional:  false
  export-volume:
    Type:        EmptyDir (a temporary directory that shares a pod's lifetime)
    SizeLimit:   <unset>
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations: op=Exists for 300s
        op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  13m   default-scheduler  Successfully assigned default/hello-774455b6f5-6x6vs to multi
  Normal  Pulling    13m   kubelet            Pulling image ""
  Normal  Pulled     13m   kubelet            Successfully pulled image "" in 17.043659509s
  Normal  Created    13m   kubelet            Created container tel-agent-init
  Normal  Started    13m   kubelet            Started container tel-agent-init
  Normal  Pulled     13m   kubelet            Container image "" already present on machine
  Normal  Created    13m   kubelet            Created container echoserver
  Normal  Started    13m   kubelet            Started container echoserver
  Normal  Pulled     13m   kubelet            Container image "" already present on machine
  Normal  Created    13m   kubelet            Created container traffic-agent
  Normal  Started    13m   kubelet            Started container traffic-agent


You can uninstall the traffic-agent from specific deployments or from all deployments, or you can uninstall everything, in which case the traffic-manager and all traffic-agents are removed.

$ telepresence helm uninstall

will remove everything that was automatically installed by telepresence from the cluster.


The telepresence background processes, the daemon and the connector, both produce log files that can be very helpful when troubleshooting. The files are named daemon.log and connector.log. The location of the logs differs depending on the platform:

  • macOS ~/Library/Logs/telepresence
  • Linux ~/.cache/telepresence/logs
  • Windows "%USERPROFILE%\AppData\Local\logs"
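The platform-specific locations above can be resolved in a small helper. A convenience sketch (not part of the telepresence CLI) that prints the log directory for the current platform:

```shell
#!/bin/sh
# Print the Telepresence log directory for the current platform,
# using the per-platform paths listed above.
case "$(uname -s)" in
  Darwin)       tp_logs="$HOME/Library/Logs/telepresence" ;;
  Linux)        tp_logs="$HOME/.cache/telepresence/logs" ;;
  MINGW*|MSYS*) tp_logs="$USERPROFILE/AppData/Local/logs" ;;
  *)            tp_logs="" ;;
esac
echo "$tp_logs"
```

For example, `tail -f "$(sh tp-logs.sh)/connector.log"` follows the connector log while reproducing a problem.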

Visit the troubleshooting section in the Telepresence documentation for more advice.
