<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<title></title>
<subtitle>Dafuq did I just see?</subtitle>
<link href="http://boranseref.com/feed.php" rel="self" />
<id>http://boranseref.com/feed.php</id>
<updated>2020-01-15T15:31:01+00:00</updated>
<entry>
<title type="html">About Conftest</title>
<content type="html">&lt;p&gt;Conftest is a tool that helps you write tests against structured configuration data. It relies on &lt;a href=&quot;https://www.openpolicyagent.org/docs/latest/policy-language/&quot; target=&quot;_blank&quot;&gt;Rego&lt;/a&gt;, a query language that comes with many ready-to-use built-in functions. With it, you can write tests against the config types below:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;YAML/JSON&lt;/li&gt;
&lt;li&gt;INI&lt;/li&gt;
&lt;li&gt;TOML&lt;/li&gt;
&lt;li&gt;HOCON&lt;/li&gt;
&lt;li&gt;HCL/HCL2&lt;/li&gt;
&lt;li&gt;CUE&lt;/li&gt;
&lt;li&gt;Dockerfile&lt;/li&gt;
&lt;li&gt;EDN&lt;/li&gt;
&lt;li&gt;XML&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As for Conftest&#039;s pros and cons, it has some features that many other testing tools lack.&lt;br /&gt;&lt;br /&gt;Pros:&lt;br /&gt;You can:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;write declarative tests (policies) rather than simple assertions;&lt;/li&gt;
&lt;li&gt;write tests against many kinds of configuration formats;&lt;/li&gt;
&lt;li&gt;use the &lt;span style=&quot;background-color: #c0c0c0;&quot;&gt;--combine&lt;/span&gt; flag to merge several files into one context so their values can be referenced globally;&lt;/li&gt;
&lt;li&gt;use the &lt;span style=&quot;background-color: #c0c0c0;&quot;&gt;parse&lt;/span&gt; command to see how the inputs are parsed;&lt;/li&gt;
&lt;li&gt;combine different input types in one test run and apply a combined policy against them;&lt;/li&gt;
&lt;li&gt;pull/push policies from various sources such as S3, a Docker registry, or a GitHub file;&lt;/li&gt;
&lt;li&gt;find real-world examples in the &lt;span style=&quot;background-color: #c0c0c0;&quot;&gt;examples/&lt;/span&gt; folder.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Cons:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Learning Rego can take some time.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Finally, I encourage you to take a look at both Conftest&#039;s source code and the Rego language.&lt;br /&gt;It&#039;s a simple, single-threaded command-line tool. I recommend integrating it into your organization&#039;s workflow, and PRs are welcome.&lt;br /&gt;&lt;br /&gt;Here&#039;s the repo: &lt;a href=&quot;https://github.com/instrumenta/conftest&quot; target=&quot;_blank&quot;&gt;https://github.com/instrumenta/conftest&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Thanks! &lt;/p&gt;</content>
<link href="http://boranseref.com/index.php?controller=post&amp;amp;action=view&amp;amp;id_post=17" />
<id>http://boranseref.com/index.php?controller=post&amp;amp;action=view&amp;amp;id_post=17</id>
<updated>2020-01-15T15:31:01+00:00</updated>
<category term="Uncategorised"/>
</entry>
<entry>
<title type="html">Run Spark Jobs on Kubernetes</title>
<content type="html">&lt;p&gt;Hello All,&lt;/p&gt;
&lt;p&gt;&lt;br /&gt; In this article, I&#039;ll show how we moved our ETL processes to Spark jobs that run as Kubernetes pods.&lt;br /&gt;&lt;br /&gt;Before that, we used custom Python code for our ETLs.&lt;br /&gt;The problem with that setup was the need for a distributed key-value store: when we picked a solution like Redis, it created too much internal I/O between the slave Docker containers and Redis. Performance with Spark is much better.&lt;br /&gt;Also, the master created a number of slaves and managed the containers. Sometimes the docker-py library failed to communicate with the Docker engine and the master couldn&#039;t delete the slave or Redis containers, which caused idempotency problems.&lt;br /&gt;You also had to distribute the slave containers across your Docker cluster, which meant putting too many cross-functional requirements next to your business code.&lt;/p&gt;
&lt;p&gt;&lt;br /&gt; We looked into Spark&#039;s Kubernetes documentation because we were already using Kubernetes in our production environment.&lt;br /&gt;We use version 2.3.3 of Spark on Kubernetes.&lt;br /&gt;You can have a look at this: &lt;a href=&quot;https://spark.apache.org/docs/2.3.3/running-on-kubernetes.html&quot; target=&quot;_blank&quot;&gt;https://spark.apache.org/docs/2.3.3/running-on-kubernetes.html&lt;/a&gt;&lt;br /&gt;Even though the Spark documentation marks this feature as experimental for now, we started to run Spark jobs on our Kubernetes cluster.&lt;/p&gt;
&lt;p&gt;This feature allows us to run Spark across our cluster. It is:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Easy to use.&lt;/li&gt;
&lt;li&gt;Secure, since you have to create a dedicated user (service account) for the Spark driver and executors.&lt;/li&gt;
&lt;li&gt;Flexible, with enough parameters for Kubernetes (node selector for computation, core limits, number of executors, etc.).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt; &lt;/p&gt;
&lt;p&gt;We bundled the spark-submit invocation with our artifact jar.&lt;br /&gt;After this step, the Docker container can make a request to the K8s master and start the driver pod, and the driver pod creates executors from the same image.&lt;br /&gt;This allows us to bundle everything in one image. If the code changes, CI creates a new bundle and publishes it to the registry.&lt;br /&gt;The image below describes the architecture.&lt;br /&gt;&lt;br /&gt;&lt;img src=&quot;http://boranseref.com/content/public/upload/untitleddiagram_0_o.png&quot; alt=&quot;undefined&quot; /&gt;&lt;br /&gt;First of all, you have to create a base image.&lt;br /&gt;Download &quot;spark-2.3.3-bin-hadoop2.7&quot; from &lt;a href=&quot;https://spark.apache.org/downloads.html&quot; target=&quot;_blank&quot;&gt;https://spark.apache.org/downloads.html&lt;/a&gt;, unzip it, and build an image from it:&lt;/p&gt;
&lt;pre&gt;&lt;code data-language=&quot;shell&quot;&gt;./bin/docker-image-tool.sh -r internal-registry-url.com:5000 -t base build
./bin/docker-image-tool.sh -r internal-registry-url.com:5000 -t base push&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;br /&gt;We created a multi-stage Dockerfile like this:&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code data-language=&quot;shell&quot;&gt;FROM hseeberger/scala-sbt:11.0.1_2.12.7_1.2.6  AS build-env

COPY . /app

WORKDIR /app
ENV SPARK_APPLICATION_MAIN_CLASS Main

RUN sbt update &amp;amp;&amp;amp; \
    sbt clean assembly

RUN SPARK_APPLICATION_JAR_LOCATION=`find /app/target -iname &#039;*-assembly-*.jar&#039; | head -n1` &amp;amp;&amp;amp; \
    export SPARK_APPLICATION_JAR_LOCATION &amp;amp;&amp;amp; \
    mkdir /publish &amp;amp;&amp;amp; \
    cp -R ${SPARK_APPLICATION_JAR_LOCATION} /publish/ &amp;amp;&amp;amp; \
    ls -la ${SPARK_APPLICATION_JAR_LOCATION} &amp;amp;&amp;amp; \
    ls -la /publish

FROM internal-registry-url.com:5000/spark:base

RUN apk add --no-cache tzdata
ENV TZ=Europe/Istanbul
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime &amp;amp;&amp;amp; echo $TZ &amp;gt; /etc/timezone

COPY --from=build-env /publish/* /opt/spark/examples/jars/
COPY --from=build-env /app/secrets/* /opt/spark/secrets/
COPY --from=build-env /app/run.sh /opt/spark/

WORKDIR /opt/spark

CMD [ &quot;/opt/spark/run.sh&quot; ]&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;br /&gt;And our run.sh script looks like this:&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code data-language=&quot;shell&quot;&gt;#!/bin/bash

bin/spark-submit \
   --master k8s://https://${KUBERNETS_MASTER}:6443 \
   --deploy-mode cluster \
   --name coverage-${MORDOR_ENV} \
   --class Main \
   --conf spark.executor.instances=${NUMBER_OF_EXECUTORS} \
   --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
   --conf spark.kubernetes.driverEnv.MORDOR_ENV=${MORDOR_ENV} \
   --conf spark.kubernetes.driver.label.app=coverage-${MORDOR_ENV} \
   --conf spark.kubernetes.container.image.pullPolicy=Always \
   --conf spark.kubernetes.container.image=internal-registry-url.com:5000/coveragecalculator:${VERSION} \
   --conf spark.kubernetes.driver.pod.name=coverage-${MORDOR_ENV} \
   --conf spark.kubernetes.authenticate.submission.caCertFile=/opt/spark/secrets/${CRT_FILE} \
   --conf spark.kubernetes.authenticate.submission.oauthToken=${CRT_TOKEN} \
   --conf spark.kubernetes.driver.limit.cores=${DRIVER_CORE_LIMIT} \
   --conf spark.kubernetes.executor.limit.cores=${EXECUTOR_CORE_LIMIT} \
   local:///opt/spark/examples/jars/CoverageCalculator-assembly-0.1.jar &lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;br /&gt;Notice that you have to place the secrets in the secrets/ folder in order to create the pods from a single image.&lt;br /&gt;After the driver pod is created, it uses the internal executor pod creation scripts that are also shipped in the spark:base image, as described in the Spark-on-Kubernetes documentation.&lt;br /&gt;&lt;br /&gt;We created the pipelines as build-push -&amp;gt; run-on-qa-cluster -&amp;gt; run-on-preprod-cluster -&amp;gt; run-on-prod-cluster.&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src=&quot;http://boranseref.com/content/public/upload/screenshotfrom2019-04-1611-17-58_0_o.png&quot; alt=&quot;undefined&quot; /&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;The run scripts placed in the pipeline pass the parameters to run.sh, and we run it like this:&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code data-language=&quot;shell&quot;&gt;docker run -i --entrypoint /bin/bash \
  -e KUBERNETS_MASTER=&#039;yourkubernetesmasterip&#039; \
  -e NUMBER_OF_EXECUTORS=5 \
  -e MORDOR_ENV=&#039;qa&#039; \
  -e VERSION=$GO_PIPELINE_LABEL \
  -e CRT_FILE=&#039;non_prod_ca.crt&#039; \
  -e CRT_TOKEN=&#039;THE_USER_CRT_TOKEN&#039; \
  -e DRIVER_CORE_LIMIT=2 \
  -e EXECUTOR_CORE_LIMIT=2 \
  -v /etc/resolv.conf:/etc/resolv.conf:ro \
  -v /etc/localtime:/etc/localtime:ro \
  192.168.57.20:5000/coveragecalculator:$GO_PIPELINE_LABEL /opt/spark/run.sh&lt;/code&gt;&lt;/pre&gt;
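&lt;p&gt;As a quick sanity check, you can compute what this run will ask from the cluster; this is just a sketch using the limits passed above:&lt;/p&gt;

```shell
# Values passed to run.sh in the docker run command above
DRIVER_CORE_LIMIT=2
EXECUTOR_CORE_LIMIT=2
NUMBER_OF_EXECUTORS=5

# One driver pod plus the executors, and the summed core limits
PODS=$(( NUMBER_OF_EXECUTORS + 1 ))
CORES=$(( DRIVER_CORE_LIMIT + NUMBER_OF_EXECUTORS * EXECUTOR_CORE_LIMIT ))
echo "pods=$PODS total_core_limit=$CORES"   # pods=6 total_core_limit=12
```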
&lt;p&gt;&lt;br /&gt;This command creates one driver pod with a core limit of 2.&lt;br /&gt;After that, 5 executor pods are created from spark:base, each of which also has a core limit of 2.&lt;br /&gt;&lt;br /&gt;&lt;img src=&quot;http://boranseref.com/content/public/upload/spark-k8s_0_o.png&quot; alt=&quot;undefined&quot; /&gt;&lt;br /&gt;&lt;/p&gt;</content>
<link href="http://boranseref.com/index.php?controller=post&amp;amp;action=view&amp;amp;id_post=16" />
<id>http://boranseref.com/index.php?controller=post&amp;amp;action=view&amp;amp;id_post=16</id>
<updated>2019-04-16T08:03:44+00:00</updated>
<category term="Uncategorised"/>
</entry>
<entry>
<title type="html">Horizontal Pod Autoscaling in Kubernetes using Prometheus</title>
<content type="html">&lt;p&gt;Hi All,&lt;/p&gt;
&lt;p&gt;In this article, I&#039;ll talk about the Kubernetes Horizontal Pod Autoscaler object, the Custom Metrics API, and how we scale our APIs at Hepsiburada.&lt;br /&gt;Before digging into HPA, take a look at &lt;a href=&quot;https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/&quot; target=&quot;_blank&quot;&gt;https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;HPA determines whether we need more Pods and scales the number of Pods accordingly. You can scale on CPU and memory metrics using the &quot;K8s Metrics Server&quot;.&lt;/p&gt;
&lt;p&gt;However, Kubernetes 1.6 added support for custom metrics in the Horizontal Pod Autoscaler. With custom metrics, you can attach InfluxDB, Prometheus, or another third-party time-series database.&lt;/p&gt;
&lt;p&gt;There is a nice project with ready-to-go YAMLs on GitHub, &lt;a href=&quot;https://github.com/stefanprodan/k8s-prom-hpa&quot; target=&quot;_blank&quot;&gt;https://github.com/stefanprodan/k8s-prom-hpa&lt;/a&gt;, which describes the autoscaling mechanism in detail.&lt;/p&gt;
&lt;p&gt;Prometheus collects metrics from your applications/pods and stores them. You can use these annotations in your deployment YAMLs.&lt;/p&gt;
&lt;p&gt;The default path is &quot;/metrics&quot;.&lt;/p&gt;
&lt;pre&gt;&lt;code data-language=&quot;python&quot;&gt;annotations:
	prometheus.io/scrape: &#039;true&#039;
	prometheus.io/path: &#039;/metrics-text&#039;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The Custom Metrics API is responsible for collecting data from Prometheus and passing it to the HPA.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;http://boranseref.com/content/public/upload/k8s-hpa-prom_0_o.png&quot; alt=&quot;undefined&quot; /&gt; &lt;/p&gt;
&lt;p&gt; &lt;/p&gt;
&lt;p&gt;After you connect your HPA, you can test and verify that it&#039;s working properly.&lt;/p&gt;
&lt;pre&gt;&lt;code data-language=&quot;shell&quot;&gt;kubectl get --raw &quot;/apis/custom.metrics.k8s.io/v1beta1&quot; | jq .
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The exposed metrics, which also exist in Prometheus, are shown below.&lt;br /&gt;&lt;br /&gt;&lt;img src=&quot;http://boranseref.com/content/public/upload/screenshotfrom2018-11-1910-04-49_0_o.png&quot; alt=&quot;undefined&quot; /&gt;&lt;/p&gt;
&lt;p&gt;For example, the &quot;application_httprequests_active&quot; metric is exposed by our API. It can be used with an HPA like this.&lt;/p&gt;
&lt;p&gt; &lt;/p&gt;
&lt;pre&gt;&lt;code data-language=&quot;python&quot;&gt;apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: podinfo
  minReplicas: 5
  maxReplicas: 40
  metrics:
  - type: Pods
    pods:
      metricName: application_httprequests_active
      targetAverageValue: 1000&lt;/code&gt;&lt;/pre&gt;
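&lt;p&gt;For intuition, the HPA computes the desired replica count as ceil(currentReplicas * currentMetricValue / targetAverageValue). A quick sketch with made-up numbers against a target of 1000:&lt;/p&gt;

```shell
# HPA scaling rule: desired = ceil(current * metric / target)
# The numbers here are illustrative, not real cluster data.
current=5; metric=1500; target=1000
desired=$(awk -v c="$current" -v m="$metric" -v t="$target" \
  'BEGIN { d = c * m / t; if (d != int(d)) d = int(d) + 1; print d }')
echo "desired replicas: $desired"   # desired replicas: 8
```

The result is then clamped between minReplicas and maxReplicas, so with the spec above it would stay between 5 and 40.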
&lt;p&gt;&lt;br /&gt;Here is one of our Grafana dashboards, which is connected to Prometheus and shows autoscaling in Kubernetes. You can inspect the Pod memory there, and the newly created Pods can be seen. At &quot;07:56&quot; and &quot;08:00&quot; people started to use the Search API more, and after the scaling process the metrics returned to normal.&lt;/p&gt;
&lt;p&gt; &lt;img src=&quot;http://boranseref.com/content/public/upload/screenshotfrom2018-11-1909-57-47_0_o.png&quot; alt=&quot;undefined&quot; /&gt;&lt;/p&gt;</content>
<link href="http://boranseref.com/index.php?controller=post&amp;amp;action=view&amp;amp;id_post=14" />
<id>http://boranseref.com/index.php?controller=post&amp;amp;action=view&amp;amp;id_post=14</id>
<updated>2018-11-19T07:00:43+00:00</updated>
<category term="Uncategorised"/>
</entry>
<entry>
<title type="html">Deploying .NET Core app to Kubernetes using GoCD</title>
<content type="html">&lt;p&gt;      It&#039;s been a long time since I wrote my last post. In this period, I&#039;ve mostly been digging into Kubernetes. Kubernetes is a deployment automation system that manages containers in distributed environments. It simplifies common tasks like deployment, scaling, configuration, versioning, log management, and a lot more.&lt;/p&gt;
&lt;p&gt;In this article, you will see how a .NET Core app can be deployed to Kubernetes using blue-green deployment and pipeline-as-code. In this case, I used GoCD and its YAML plugin: &lt;a href=&quot;https://github.com/tomzo/gocd-yaml-config-plugin&quot; target=&quot;_blank&quot;&gt;https://github.com/tomzo/gocd-yaml-config-plugin&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;First of all, you have to dockerise your .NET Core app. Here is an example snippet.&lt;/p&gt;
&lt;pre&gt;&lt;code data-language=&quot;generic&quot;&gt;FROM microsoft/dotnet:2.0.5-sdk-2.1.4 AS build-env

WORKDIR /workdir
COPY . /workdir

RUN dotnet restore ./WebApp.sln
RUN dotnet test ./src/tests/WebApp.IntegrationTests
RUN dotnet test ./src/tests/WebApp.UnitTests
RUN dotnet publish ./src/WebApp/WebApp.csproj -c Release -o /publish

FROM microsoft/dotnet:2.0.5-runtime
WORKDIR /app
COPY --from=build-env ./publish .

EXPOSE 3333/tcp
CMD [&quot;dotnet&quot;, &quot;WebApp.dll&quot;, &quot;--server.urls&quot;, &quot;http://*:3333&quot;]&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After that, put a &quot;&lt;em&gt;kubernetes&lt;/em&gt;&quot; folder in your project&#039;s root. The folder structure can look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code data-language=&quot;generic&quot;&gt;- kubernetes
    --  deployment.yaml
    --  service.yaml
    --  switch_environment.sh
- src
    ....
- ci.gocd.yaml
- Dockerfile
- WebApp.sln&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Your &quot;deployment.yaml&quot; should look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code data-language=&quot;python&quot;&gt;apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: webapp-${ENV}
spec:
  replicas: ${PODS}
  template:
    metadata:
      labels:
        app: webapp
        ENV: ${ENV}
    spec:
      containers:
      - name: webapp
        image: yourdockerregistry:5000/webapp:${IMAGE_TAG}
        resources:
          requests:
            cpu: &quot;750m&quot;
        ports:
        - containerPort: 3333
        readinessProbe:
          tcpSocket:
              port: 3333
          initialDelaySeconds: 15
          periodSeconds: 5
        livenessProbe:
          httpGet:
            path: /status
            port: 3333
          initialDelaySeconds: 15
          periodSeconds: 10
      terminationGracePeriodSeconds: 30
---
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: webapp-${ENV}
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: webapp-${ENV}
  minReplicas: 10
  maxReplicas: 25
  metrics:
  - type: Pods
    pods:
      metricName: cpu_usage # Metrics coming from Prometheus. List them with: kubectl get --raw &quot;/apis/custom.metrics.k8s.io/v1beta1&quot; | jq .
      targetAverageValue: 0.6 # If average pod CPU usage goes over 60%, Pods will be scaled.&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this snippet, you will see some environment variables for parametric values like the image tag, deployment environment, blue-green colour, etc.&lt;br /&gt;You could also use Helm for rolling deployments and version bump-ups, but I will use a much simpler tool: &quot;envsubst&quot;&lt;/p&gt;
&lt;p&gt;The other mechanism is horizontal scaling in the cluster. You can merge the deployment and the scaling in one YAML.&lt;br /&gt;In this instance, I used the K8s Custom Metrics API.&lt;/p&gt;
&lt;p&gt;Take a look if you want to go this way, or just skip it: &lt;a href=&quot;https://github.com/stefanprodan/k8s-prom-hpa&quot;&gt;https://github.com/stefanprodan/k8s-prom-hpa&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;And the service.yaml should look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code data-language=&quot;python&quot;&gt;apiVersion: v1
kind: Service
metadata:
  name: webapp-svc
spec:
  type: NodePort
  ports:
  - port: 3333
    nodePort: 30333
    targetPort: 3333
    protocol: TCP
    name: http
  selector:
    app: webapp
    ENV: ${ENV}&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We will use K8s selectors in order to get a blue-green switch for deployments. The selector will pick the matching Pods and bind them to the service.&lt;br /&gt;I used NodePort in order to bind the service to an external load balancer.&lt;/p&gt;
&lt;p&gt;You can bind like this : &lt;br /&gt;AGENTIP1:30333 http://servicedns.com&lt;br /&gt;AGENTIP2:30333 http://servicedns.com &lt;br /&gt;AGENTIP3:30333 http://servicedns.com&lt;/p&gt;
&lt;p&gt;You don&#039;t have to give every agent&#039;s IP to the load balancer, because K8s also does internal load balancing. (Still, this is not a great approach; managing the load balancing inside K8s is simply better.)&lt;/p&gt;
&lt;p&gt;Your &quot;switch_environment.sh&quot; file can look like this.&lt;/p&gt;
&lt;pre&gt;&lt;code data-language=&quot;shell&quot;&gt;#!/bin/bash

if [ -z &quot;$1&quot; ]
  then
    echo &quot;No argument supplied&quot;
    exit 1
fi

if ! kubectl get svc $1
  then
    echo &quot;No service found : ${1}&quot;
    exit 1
fi

ENVIRONMENT=$(kubectl describe svc $1 | grep ENV | awk &#039;{print $2}&#039; | cut -d&quot;,&quot; -f1 | cut -d&quot;=&quot; -f2)

if [ &quot;$ENVIRONMENT&quot; == &quot;blue&quot; ]; then
    ENV=green envsubst &amp;lt; service.yaml | kubectl apply -f -
    echo &quot;Switched to green&quot;
else
    ENV=blue envsubst &amp;lt; service.yaml | kubectl apply -f -
    echo &quot;Switched to blue&quot;
fi&lt;/code&gt;&lt;/pre&gt;
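&lt;p&gt;Note that the ENVIRONMENT line relies on kubectl printing the selector with ENV first (selector keys are sorted, and &quot;ENV&quot; sorts before &quot;app&quot;). In isolation, the parsing works like this; the sample line is illustrative:&lt;/p&gt;

```shell
# A sample "Selector:" line, as kubectl describe svc would print it
LINE='Selector:          ENV=blue,app=webapp'
# Same pipeline as in switch_environment.sh, applied to the sample line
ENVIRONMENT=$(echo "$LINE" | grep ENV | awk '{print $2}' | cut -d"," -f1 | cut -d"=" -f2)
echo "$ENVIRONMENT"   # blue
```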
&lt;p&gt;Finally, bind all these pieces together in one &quot;gocd.yaml&quot; file.&lt;/p&gt;
&lt;pre&gt;&lt;code data-language=&quot;python&quot;&gt;format_version: 2
environments:
  WebAPI:
    pipelines:
      - webapp-build-and-push
      - webapp-deploy-to-prod-blue
      - webapp-deploy-to-prod-green
      - webapp-switch-environment

pipelines:
  webapp-build-and-push:
    group: webapp
    label_template: &quot;1.1.${COUNT}&quot;
    materials:
      project:
        git: http://github.com/example/webapp.git
        branch: master
        destination: app
    stages:
      - buildAndPush:
          clean_workspace: true
          jobs:
            buildAndPush:
              tasks:
               - exec:
                  working_directory: app/build-scripts
                  command: /bin/bash
                  arguments:
                    -  -c 
                    - &#039;./build-and-publish.sh&#039;

  webapp-deploy-to-prod-blue:
    group: webapp
    label_template: &quot;${webapp-build-and-push}&quot;
    materials:
      webapp-build-and-push:
        type: pipeline
        pipeline: webapp-build-and-push
        stage: buildAndPush
      project:
        git: http://github.com/example/webapp.git
        branch: master
        destination: app
    stages:
      - build:
          approval:
            type: manual
          clean_workspace: true
          jobs:
            build:
              tasks:
               - exec:
                  working_directory: app/kubernetes
                  command: /bin/bash
                  arguments:
                    -  -c 
                    - &#039;ENV=blue IMAGE_TAG=$GO_PIPELINE_LABEL PODS=10 envsubst &amp;lt; deployment.yaml | kubectl apply -f -&#039;
               - exec:
                  working_directory: app/kubernetes
                  command: /bin/bash
                  arguments:
                    -  -c 
                    - &#039;kubectl rollout status deployment webapp-blue&#039;

  webapp-deploy-to-prod-green:
    group: webapp
    label_template: &quot;${webapp-build-and-push}&quot;
    materials:
      webapp-build-and-push:
        type: pipeline
        pipeline: webapp-build-and-push
        stage: buildAndPush
      project:
        git: http://github.com/example/webapp.git
        branch: master
        destination: app
    stages:
      - build:
          approval:
            type: manual
          clean_workspace: true
          jobs:
            build:
              tasks:
               - exec:
                  working_directory: app/kubernetes
                  command: /bin/bash
                  arguments:
                    -  -c 
                    - &#039;ENV=green IMAGE_TAG=$GO_PIPELINE_LABEL PODS=10 envsubst &amp;lt; deployment.yaml | kubectl apply -f -&#039;
               - exec:
                  working_directory: app/kubernetes
                  command: /bin/bash
                  arguments:
                    -  -c 
                    - &#039;kubectl rollout status deployment webapp-green&#039;

  webapp-switch-environment:
    group: webapp
    label_template: &quot;${COUNT}&quot;
    materials:
      webapp-build-and-push:
        type: pipeline
        pipeline: webapp-build-and-push
        stage: buildAndPush
      project:
        git: http://github.com/example/webapp.git
        branch: master
        destination: app
    stages:
      - build:
          approval:
            type: manual
          clean_workspace: true
          jobs:
            build:
              tasks:
               - exec:
                  working_directory: app/kubernetes
                  command: /bin/bash
                  arguments:
                    -  -c 
                    - &#039;./switch_environment.sh webapp-svc&#039;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now you have 4 pipelines:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;webapp-build-and-push&lt;/li&gt;
&lt;li&gt;webapp-deploy-to-prod-blue&lt;/li&gt;
&lt;li&gt;webapp-deploy-to-prod-green&lt;/li&gt;
&lt;li&gt;webapp-switch-environment&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can define your build script to build and dockerise the application.&lt;br /&gt;If you have Test or Staging environments, put them in the &quot;gocd.yaml&quot; too. (To keep things simple, I removed those lines.)&lt;br /&gt;That&#039;s it! After that, you have:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Dockerised dotnetcore app&lt;/li&gt;
&lt;li&gt;Kubernetes Deployment pipelines&lt;/li&gt;
&lt;li&gt;Blue-Green Switch Pipeline which controls kubernetes service (You have to configure kubectl for gocd agents)&lt;/li&gt;
&lt;li&gt;Horizontal Pod Autoscaler (CPU based autoscale mechanism in the cluster) &lt;/li&gt;
&lt;/ul&gt;</content>
<link href="http://boranseref.com/index.php?controller=post&amp;amp;action=view&amp;amp;id_post=13" />
<id>http://boranseref.com/index.php?controller=post&amp;amp;action=view&amp;amp;id_post=13</id>
<updated>2018-07-17T08:17:33+00:00</updated>
<category term="Uncategorised"/>
</entry>
</feed>