Microservice spike part 4 - run in an Istio service mesh
This is part 4 in a series of posts where I spike out a cloud microservice app on GCP. In this post I set up an Istio service mesh to run my app.
- Part 1 - the CryptoTracker app introduces my prototype microservice app
- Part 2 - run in GCP gets the app running in Docker in Google Compute Engine
- Part 3 - run in GKE gets the app running in Google Kubernetes Engine
- Part 4 - run in an Istio service mesh (this page) gets the app running in an Istio service mesh
- Part 5 - secure the app via TLS secures the app by using TLS over HTTPS
Install Istio
There are many articles which describe how to install Istio but they all seem to add the Bookinfo sample application, even if you deselect it! Given that I’m paying for GCP (well, I have a limited budget with my free trial) I want to keep my usage as small as possible.
Since I want to play with the Helm package manager anyway, I decided to install with Helm via `helm template`.
Install helm client
Pretty straightforward instructions on the Helm GitHub site:
- In the GCP Cloud Shell I downloaded the Helm package using `wget https://storage.googleapis.com/kubernetes-helm/helm-v2.10.0-linux-amd64.tar.gz`
- Following the instructions I extracted and moved the `helm` client binary to my `$HOME` folder so it's accessible on my path
Download and set up Istio
Instructions are on the Istio site, but I simply ran `curl -L https://git.io/getLatestIstio | sh -`, which downloaded and unpacked the latest Istio release (1.0.1 for me). It even gives the `export PATH` command needed to add `istioctl` to the path.
Istio’s GKE-specific instructions detail how to get my k8s cluster ready:
- I set my Cloud console up to talk to k8s using `gcloud container clusters get-credentials ctmd-cluster --zone europe-west1-b`
- I gave my account cluster admin permission using `kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=$(gcloud config get-value core/account)`
- I then created the namespace which Istio will run under: `kubectl create namespace istio-system`
Finally, I can install Istio. First I used the `helm` client to create my k8s manifest. From `$HOME` I ran `./helm template istio-1.0.1/install/kubernetes/helm/istio --name istio --namespace istio-system > $HOME/istio.yaml`. This created a ~120KB `istio.yaml` file which I can then install into my cluster with `kubectl apply -f $HOME/istio.yaml`.
A whole bunch of Istio-related stuff was installed into my cluster (but not the Bookinfo sample app!) and, after a while, Istio was up and running!
Install my app into Istio
The next thing is to get my application up and running. Rather than use the GKE UI, I'm using yaml manifest files; these will be used by Istio to inject its sidecar proxies.
k8s manifest
In my previous post I used the GKE UI to deploy my app. Once it was running I could view the generated yaml manifest for the deployment and service and use this to create a k8s manifest. I can then deploy this manifest into my Istio-controlled cluster via `kubectl apply -f <(istioctl kube-inject -f ctmd.yaml)`. This uses `istioctl` to inject the sidecars and other Istio config into my manifest, then applies the whole lot to my cluster via `kubectl`.
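For reference, a rough sketch of the shape of `ctmd.yaml` (the deployment name, labels, image and replica count below are placeholders; only the `marketdata` service name and port 80 come from my setup):

```yaml
# ctmd.yaml - sketch of the deployment and service; placeholder values marked below
apiVersion: apps/v1
kind: Deployment
metadata:
  name: marketdata                 # placeholder name
spec:
  replicas: 1                      # placeholder replica count
  selector:
    matchLabels:
      app: marketdata
  template:
    metadata:
      labels:
        app: marketdata
    spec:
      containers:
      - name: marketdata
        image: gcr.io/my-project/marketdata:latest   # placeholder image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: marketdata
spec:
  selector:
    app: marketdata
  ports:
  - port: 80
    targetPort: 80
```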
My app is up and running in Istio! It crashes, of course, but that’s because I’m missing a few bits of config.
Mongo connection string
My app expects the environment variable `MongoDB__ConnectionString` to be present. While I could set this via the GKE UI before, now it needs to be set via the command line / a manifest. Since the environment variable represents a connection string and contains the username and password needed to connect to Atlas, it should be considered a "secret". I decided to configure my app manifest to look for the environment variable in a kubernetes secret. Rather than permit the secret to be stored in a manifest or source control, I set it manually in the Cloud Shell using `kubectl create secret generic ctmd-config --from-literal=MongoDB__ConnectionString=<my-mongo-connection-string>`. I can then instruct my deployment to source this environment variable from my secret:
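The relevant fragment of the deployment's container spec looks something like this (a sketch; the container name and image are placeholders):

```yaml
# Fragment of the pod spec in ctmd.yaml - the env var is sourced from the ctmd-config secret
containers:
- name: marketdata                              # placeholder container name
  image: gcr.io/my-project/marketdata:latest    # placeholder image
  env:
  - name: MongoDB__ConnectionString
    valueFrom:
      secretKeyRef:
        name: ctmd-config
        key: MongoDB__ConnectionString
```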
Egress to MongoDB Atlas
From the Istio egress docs:
By default, Istio-enabled services are unable to access URLs outside of the cluster because the pod uses iptables to transparently redirect all outbound traffic to the sidecar proxy, which only handles intra-cluster destinations.
I created an istio `ServiceEntry` to permit egress to MongoDB Atlas:
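The manifest looks something like this (a sketch matching the notes below; the resource name and the host value are placeholders):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: mongo-atlas-egress      # placeholder name
spec:
  hosts:
  - mongo.atlas.external        # required, but ignored for non-http protocols
  addresses:
  - 0.0.0.0/0                   # any IP address - I don't want to hardcode the Atlas IPs
  ports:
  - number: 27017               # the default mongo port
    name: mongo
    protocol: MONGO
  location: MESH_EXTERNAL       # the service is external to the mesh
  resolution: NONE
```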
The reference docs for `ServiceEntry` are here. More details, and a good explanation about `ServiceEntry`, are available here. I used both those links to create the yaml.
- `hosts`: This is ignored for non-http protocols
- `addresses`: To keep things simple (and because I don't want to hardcode my Atlas IP addresses) I permit access to any IP address
- `ports`: Egress is permitted to the default mongo port
- `location`: Indicates that this service is external to the mesh. See ServiceEntry.Location
- `resolution`: See ServiceEntry.Resolution
This gets my app up and running. The k8s secret supplies the mongo connection string and the `ServiceEntry` permits egress from the istio mesh to my mongo cluster.
Allowing external traffic to hit my app
By default, no traffic from outside the Istio service mesh is permitted in. In order to allow traffic to enter the cluster we must configure ingress.
A Gateway sits at the edge of the service mesh and describes what traffic may enter (ports, protocols, etc). My gateway simply allows any http traffic on port 80:
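A sketch of the gateway manifest (the selector assumes Istio's default `istio-ingressgateway`):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ctmd-gateway
spec:
  selector:
    istio: ingressgateway       # bind to Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"                       # accept http traffic for any host
```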
The gateway just permits traffic to enter the cluster. A VirtualService is then needed to route that traffic to the correct service. It is bound to the gateway and forwards traffic arriving at the specified port / hosts. In my manifest, all http traffic arriving at the `ctmd-gateway` gateway with the url prefix `/api` is forwarded on to the `marketdata` k8s service on port 80.
Testing
I can get the external IP address of my istio gateway from the GKE UI by looking for the "Load balancer" service "istio-ingressgateway". I can then point my browser to `http://<istio gateway>/api/v1/currencies` and see my list of currencies!
K8s and Istio manifests are available on GitHub.