Yeey! I just got access to the DigitalOcean Kubernetes service (still a limited offer), so my minions there need to get busy now!



$5 for a k8s cluster is the cheapest offer I have seen so far. The first problem I ran into was how to smoothly run kubectl commands and also easily move from one k8s context to another.

The DigitalOcean documentation suggests passing the config file as an argument on each command.

So instead of

kubectl get nodes


you do


kubectl --kubeconfig="cluster1-kubeconfig-dupe.yaml" get nodes


So what is going on?

If you have used GKE or AKS, they let you run a CLI command that sets the cluster and context in your local kubectl client config (stored at $HOME/.kube/config). Other providers, such as DigitalOcean, might not have a nice tool that does that, so they give you the config as a downloadable yaml file instead.
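For comparison, here is what those provider CLIs look like. They write the credentials straight into your kubeconfig (the cluster and resource group names below are placeholders, not real ones):

```shell
# GKE: writes cluster, user and context into $HOME/.kube/config
gcloud container clusters get-credentials my-cluster --zone europe-west1-b

# AKS: same idea
az aks get-credentials --resource-group my-rg --name my-cluster
```

After either command, `kubectl config current-context` already points at the new cluster.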

As per the kubectl Cheat Sheet, you can run the following command to merge the configurations of many files in different locations on your system. All it does is set an environment variable that holds a list of paths to these config files:

In Linux:

export KUBECONFIG=~/.kube/config:~/path/to/your/config/yaml/file

On Windows (PowerShell):

$env:KUBECONFIG = "$HOME\.kube\config;path\to\your\config\yaml\file"

Notice that on Windows we separate paths using semicolons.

Now when you run kubectl config view you will see a merged view of all your config files, and you can run kubectl config use-context on any of the listed contexts.
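The merge rule kubectl applies is worth knowing: files are read in the order they appear in KUBECONFIG, and for named entries (clusters, contexts, users) the first file that defines a name wins. Here is a rough sketch of that rule in Python — a hypothetical helper for illustration, not kubectl's actual code:

```python
# Sketch of kubectl's KUBECONFIG merge rule: earlier files in the path
# list take precedence, and named entries are deduplicated by name.

def merge_kubeconfigs(configs):
    """Merge parsed kubeconfig dicts; earlier files win on name clashes."""
    merged = {"clusters": [], "contexts": [], "users": []}
    for section in merged:
        seen = set()
        for cfg in configs:
            for entry in cfg.get(section, []):
                if entry["name"] not in seen:
                    seen.add(entry["name"])
                    merged[section].append(entry)
    return merged

# Two toy config files: the usual ~/.kube/config plus the DO download.
home = {"contexts": [{"name": "aks-prod", "context": {"cluster": "aks"}}]}
do = {"contexts": [{"name": "do-cluster1", "context": {"cluster": "do"}}]}

merged = merge_kubeconfigs([home, do])
print([c["name"] for c in merged["contexts"]])  # → ['aks-prod', 'do-cluster1']
```

Because of the first-wins rule, putting ~/.kube/config first in KUBECONFIG keeps your existing contexts authoritative if a downloaded file happens to reuse a name.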

Now if you open another shell, you will lose your changes. On Windows PowerShell, to persist the change across sessions you need to store the variable permanently in your system environment variables:

[Environment]::SetEnvironmentVariable("KUBECONFIG", "$HOME\.kube\config;$HOME\.kube\do-api-yeelo-kubeconfig.yaml", "Machine")

Please note that these tokens expire frequently, so you will need to download the config file from DO again and save it under the same name and location.
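If you have doctl installed, it can fetch the credentials and merge them into ~/.kube/config for you in one step (assuming a doctl release that ships the kubernetes subcommand, which may not be available during the limited offer):

```shell
# Re-download the cluster credentials and merge them into ~/.kube/config
doctl kubernetes cluster kubeconfig save cluster1
```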

Hi, my name is Abdi-Rahman Daud. I am a software engineer and... In this blog I will be sharing whatever I find useful in the technology sector.


This blog is running on Azure's managed Kubernetes service (AKS) using the dotnet-based mini blog template.


I followed a couple of other blogs and GitHub issues to make this work. So let me share the Dockerfile:

FROM microsoft/dotnet:latest
# Set the working directory so restore/build/run operate on the copied app
WORKDIR /app
COPY . /app
RUN ["dotnet", "restore"]
RUN ["dotnet", "build", "-c", "release"]
EXPOSE 80/tcp
ENTRYPOINT ["dotnet", "run", "--server.urls", "http://*:80"]


and the yaml file is as follows (a PersistentVolumeClaim for storage, the Deployment itself, and a LoadBalancer Service):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myApp-pv-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: myApp
spec:
  replicas: 1
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: myApp
  template:
    metadata:
      labels:
        app: myApp
    spec:
      containers:
      - name: myApp
        image: my-image-at-azure-acr:latest
        imagePullPolicy: Always
        volumeMounts:
        - name: myApp-pv-storage
          mountPath: /mnt/azure
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: secret-my-register
      volumes:
      - name: myApp-pv-storage
        persistentVolumeClaim:
          claimName: myApp-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: myApp
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: myApp

Docker version is 18, and k8s is 1.8

After each merge to master, this is the hook I use to deploy. (The last two lines are not working as expected, so I manually delete the pod to apply the new image; I am sure there is a better way to do this.)

docker build -t local-image:latest .
docker tag local-image:latest my-image-at-azure-acr:latest
az acr login -n my-acr-at-azure
docker push my-image-at-azure-acr:latest
kubectl apply -f myblog.yaml --validate=false
kubectl get pods
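The reason the last two lines do nothing is that the manifest always points at the same :latest tag, so kubectl apply sees no change in the pod spec and never triggers a rollout. One way around the manual pod delete is to tag each build uniquely and tell the Deployment about it — a sketch reusing the same image and registry names as above:

```shell
# Tag the build with the current git SHA instead of :latest
TAG=$(git rev-parse --short HEAD)
docker build -t my-image-at-azure-acr:$TAG .
az acr login -n my-acr-at-azure
docker push my-image-at-azure-acr:$TAG

# Changing the image tag in the spec makes the Deployment roll out new pods
kubectl set image deployment/myApp myApp=my-image-at-azure-acr:$TAG
```

With unique tags, the RollingUpdate strategy in the yaml above does the pod replacement for you.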