Istio with Kubernetes

Using Istio as an API gateway with Kubernetes

We previously looked at Kong as an API gateway, allowing us to manage access to our services within K8s.

Istio can provide a similar function and comes with other useful features in its toolkit, such as broad traffic management, circuit breaking, intelligent load balancing, and tracing and monitoring with Kiali.

Rather than being a single application, Istio includes its own discovery (istiod) and load balancing (Envoy) deployments. Envoy acts as a proxy for any selected service, allowing access to it to be managed.

For more details see the architecture link below.

Installing Istio

Get the istioctl binary.

curl -L https://istio.io/downloadIstio | sh -

Install the demo profile which will include everything we need.

istioctl install --set profile=demo
✔ Istio core installed
✔ Istiod installed
✔ Egress gateways installed
✔ Ingress gateways installed
✔ Addons installed
- Pruning removed resources
 Pruned object HorizontalPodAutoscaler:istio-system:istiod.
 Pruned object HorizontalPodAutoscaler:istio-system:istio-ingressgateway.
✔ Installation complete          

Check the version.

istioctl version
client version: 1.6.4
control plane version: 1.6.4
data plane version: 1.6.4 (2 proxies)

Notice the services that have been deployed.

kubectl get svc -n istio-system
NAME                        TYPE           EXTERNAL-IP   PORT(S)
grafana                     ClusterIP      <none>        3000/TCP
istio-egressgateway         ClusterIP      <none>        80/TCP,443/TCP,15443/TCP
istio-ingressgateway        LoadBalancer   localhost     15020:31891/TCP,80:32309/TCP,443:31967/TCP,31400:30096/TCP,15443:32721/TCP
istiod                      ClusterIP      <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP,53/UDP,853/TCP
jaeger-agent                ClusterIP      <none>        5775/UDP,6831/UDP,6832/UDP
jaeger-collector            ClusterIP      <none>        14267/TCP,14268/TCP,14250/TCP
jaeger-collector-headless   ClusterIP      <none>        14250/TCP
jaeger-query                ClusterIP      <none>        16686/TCP
kiali                       ClusterIP      <none>        20001/TCP
prometheus                  ClusterIP      <none>        9090/TCP
tracing                     ClusterIP      <none>        80/TCP
zipkin                      ClusterIP      <none>        9411/TCP

Included are Grafana, Jaeger, Kiali, Prometheus and Zipkin. We will also briefly look at Grafana and Kiali here.

Sidecar Proxies

Set the sidecar proxies to be automatically injected for any pods in the vadal namespace.

Create the namespace.

kubectl create ns vadal
namespace/vadal created

Enable istio injection.

kubectl label namespace vadal istio-injection=enabled
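To confirm the namespace is now marked for injection, the label can be listed as a column (a quick sanity check, not part of the original walkthrough):

```shell
# Show the vadal namespace with its istio-injection label
kubectl get namespace vadal -L istio-injection
```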

Deploy our vadal-echo image (see previous blog) to the vadal namespace.

kubectl create deployment -n vadal vecho --image=vadal-echo:0.0.1-SNAPSHOT
kubectl expose deploy -n vadal vecho --port 80 --target-port=8080
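If injection is working, each pod in the namespace should now carry an istio-proxy sidecar alongside the application container. A quick check, assuming the deployment above:

```shell
# READY should read 2/2: the vadal-echo container plus the injected istio-proxy sidecar
kubectl get pods -n vadal
```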

Istio Ingress

kubectl get svc istio-ingressgateway -n istio-system
NAME                   TYPE           EXTERNAL-IP   PORT(S)
istio-ingressgateway   LoadBalancer   localhost     15020:31891/TCP,80:32309/TCP,443:31967/TCP,31400:30096/TCP,15443:32721/TCP

First we need a gateway configuration.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: vadal-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - vadal.local

Note: set the hostname vadal.local (for example) to point to your host IP in /etc/hosts.
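For a local cluster (the ingress gateway above reports localhost as its external IP), the hosts entry might look like this; adjust the IP for your own setup:

```shell
# Append an entry so vadal.local resolves to the node running the ingress gateway
echo "127.0.0.1 vadal.local" | sudo tee -a /etc/hosts
```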

Then we need a virtual service.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: echo
  namespace: vadal
spec:
  hosts:
    - vadal.local
  gateways:
    - vadal-gateway.istio-system.svc.cluster.local
  http:
    - match:
        - uri:
            prefix: /echo
      rewrite:
        uri: /
      route:
        - destination:
            host: vecho.vadal.svc.cluster.local
            port:
              number: 80
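Assuming the two manifests above are saved as gateway.yaml and virtualservice.yaml (hypothetical file names), they can be applied and confirmed with:

```shell
# Apply the gateway and virtual service, then list them to confirm creation
kubectl apply -f gateway.yaml
kubectl apply -f virtualservice.yaml
kubectl get gateway -n istio-system
kubectl get virtualservice -n vadal
```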

Try it out:

curl -i vadal.local/echo
HTTP/1.1 200 OK
content-type: application/json
date: Thu, 09 Jul 2020 21:37:31 GMT
x-envoy-upstream-service-time: 7
server: istio-envoy
transfer-encoding: chunked


Grafana

Although we hand-crafted Grafana/Prometheus before, Istio's demo profile installs them for us, with the two already connected to each other.

Expose it from the node.

kubectl -n istio-system edit svc/grafana

Change the type from ClusterIP to NodePort and add nodePort: 30003.

  ports:
  - name: http
    nodePort: 30003
    port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: grafana
  sessionAffinity: None
  type: NodePort
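As an alternative to editing the service interactively, the same change can be sketched as a non-interactive patch (the port values match the edit above; a strategic merge patch matches the ports entry by its port number):

```shell
# Switch the grafana service to NodePort and pin the node port to 30003
kubectl -n istio-system patch svc grafana -p \
  '{"spec":{"type":"NodePort","ports":[{"port":3000,"nodePort":30003}]}}'
```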

kubectl get svc grafana istio-ingressgateway -n istio-system
NAME                   TYPE           EXTERNAL-IP   PORT(S)
grafana                NodePort       <none>        3000:30003/TCP
istio-ingressgateway   LoadBalancer   localhost     15020:31891/TCP,80:32309/TCP,443:31967/TCP,31400:30096/TCP,15443:32721/TCP

Check out the various Istio dashboards.




Kiali

A GUI to manage Istio and your services.

kubectl -n istio-system edit svc/kiali

Change type to NodePort and add nodePort: 30004

  ports:
  - name: http-kiali
    nodePort: 30004
    port: 20001
    protocol: TCP
    targetPort: 20001
  selector:
    app: kiali
  sessionAffinity: None
  type: NodePort
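Alternatively, istioctl can port-forward the Kiali dashboard without changing the service type at all:

```shell
# Opens a local tunnel to the Kiali service and launches it in the browser
istioctl dashboard kiali
```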

Check it out:




We installed Istio and used its gateway and virtual service architecture to serve up our vadal-echo service in its own namespace.

We could also observe our services' behaviour in Grafana and in Kiali.

Next time we will secure our vadal-echo service.

Further details

Istio Architecture:

Comparison with Kong:
