Kubernetes Deployment

1. Introduction

In the following sections we explain how to install the service in a Kubernetes environment.

2. Manual deployment

2.1 Introduction

The OCR service can be deployed in Kubernetes with kubectl:

kubectl apply -f manifest.yaml

Using a manifest.yaml file similar to this:

apiVersion: v1
kind: Namespace
metadata:
  name: facephi-ocr-service
---

apiVersion: v1
kind: Secret
metadata:
  name: ocr-license-secret
  namespace: facephi-ocr-service
stringData:
  license.lic: |-
    {
      CONFIG_DIR=<provided by facephi>
      LICENSE_TYPE=<provided by facephi>
      LICENSE_BEHAVIOUR=<provided by facephi>
      LICENSE_ID=<provided by facephi>
      LICENSE_DATA=<provided by facephi>
      LICENSE_KEY=<provided by facephi>
    }
---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ocr
  namespace: facephi-ocr-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ocr
  template:
    metadata:
      labels:
        app: ocr
    spec:
      volumes:
        - name: license-volume
          secret:
            secretName: ocr-license-secret
            defaultMode: 420
      containers:
        - name: facephi-ocr-service
          image: facephicorp.jfrog.io/docker-pro-fphi/facephi-ocr-service:2.4.6
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 6982
          volumeMounts:
            - name: license-volume
              readOnly: true
              mountPath: /service/license/license.lic
              subPath: license.lic
          resources:
            limits:
              cpu: 1024m
              memory: 2Gi
            requests:
              cpu: 512m
              memory: 1Gi
---

apiVersion: v1
kind: Service
metadata:
  name: ocr-service
  namespace: facephi-ocr-service
spec:
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 6982
  selector:
    app: ocr
  type: ClusterIP
---

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ocr-ingress
  namespace: facephi-ocr-service
  annotations:
    konghq.com/strip-path: "true"
spec:
  ingressClassName: kong
  rules:
    - host: <your own dns for this service>
      http:
        paths:
          - path: /ocr
            pathType: Prefix
            backend:
              service:
                name: ocr-service
                port:
                  number: 80
---
It is important to be logged in to Artifactory beforehand, or to pull the image facephicorp.jfrog.io/docker-pro-fphi/facephi-ocr-service and store it in a Docker image registry from which the cluster can pull it.
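
Once the manifest has been applied, a quick check that all the resources have been created could look like the following (a minimal sketch; the namespace and resource names come from the manifest above):

kubectl -n facephi-ocr-service rollout status deployment/ocr
kubectl -n facephi-ocr-service get pods,svc,ingress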

2.2 Secret

A Kubernetes Secret must be declared to pass the license to the cluster.

apiVersion: v1
kind: Secret
metadata:
  name: ocr-license-secret
  namespace: facephi-ocr-service
stringData:
  # Write here your license content. E.g:
  license.lic: |-
    {
       "key":"XXXXXX-XXXXXX-XXXXXX-XXXXXX-XXXXXX-XXXXXX",
       "type":"NODE_ONLINE"
    }
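
Alternatively, instead of writing the license content inline in the manifest, the Secret can be created directly from the license file provided by FacePhi (a sketch; the local file name ./license.lic is an example, and the namespace is only created here if it does not already exist):

kubectl create namespace facephi-ocr-service
kubectl -n facephi-ocr-service create secret generic ocr-license-secret \
  --from-file=license.lic=./license.lic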

2.3 Deployment

Remember to log in to Artifactory, or to pull the Docker image beforehand and push it to your own registry.
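
For example (a minimal sketch; <your-registry> is a placeholder for your own registry, and the credentials are provided by FacePhi):

# Log in to the FacePhi registry
docker login facephicorp.jfrog.io

# Or mirror the image into a registry the cluster can reach
docker pull facephicorp.jfrog.io/docker-pro-fphi/facephi-ocr-service:2.4.6
docker tag facephicorp.jfrog.io/docker-pro-fphi/facephi-ocr-service:2.4.6 <your-registry>/facephi-ocr-service:2.4.6
docker push <your-registry>/facephi-ocr-service:2.4.6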

2.3.1 Volumes

You need to add a volume with the client's license. By default, the path where the license file is stored is /service/license.

Once the Secret is created, the Deployment mounts it as a volume at the appropriate path with the following lines:

...
spec:
  ...
  template:
    ...
    spec:
      volumes:
        - name: license-volume
          secret:
            secretName: ocr-license-secret
            defaultMode: 420
        ...
      containers:
        - ...
          volumeMounts:
            - name: license-volume
              readOnly: true
              mountPath: /service/license/license.lic
              subPath: license.lic

spec.volumes[0].secret.secretName looks up the previously created Secret in the namespace and exposes it as a volume named license-volume. In volumeMounts, the license-volume associated with the Secret is referenced by name, mountPath sets the path where the file is stored, and subPath selects a specific key of the Secret, in this case license.lic.
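
Once the Deployment is running, it can be checked that the license file is mounted at the expected path (a sketch, assuming the container image includes ls):

kubectl -n facephi-ocr-service exec deploy/ocr -- ls -l /service/license/license.lic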

2.3.2 Resources

After numerous load and stress tests, the following results were obtained, with the goal of ensuring a response time of between 2700 and 3000 ms for ID cards.

Service            CPU     Memory   Avg. time
/api/v1/process/   512m    1Gi      5.4s
/api/v1/process/   1024m   2Gi      2.9s
/api/v1/process/   2048m   4Gi      2.8s

Based on these tests, the following configuration is established for requests and limits.

spec:
  ...
  template:
    ...
    spec:
      ...
      containers:
        ...
          resources:
            limits:
              cpu: 2048m
              memory: 4Gi
            requests:
              cpu: 1024m
              memory: 2Gi
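
To compare actual consumption with these requests and limits while the service is under load, the metrics API can be queried (a sketch; this requires metrics-server to be installed in the cluster):

kubectl -n facephi-ocr-service top pods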

2.4 Service

2.4.1 LoadBalancer

Keep in mind that a load balancer with Kong will be set up in front to access the FacePhi OCR Service. Note that the Service is exposed on port 80 and targets the Pod on port 6982.

apiVersion: v1
kind: Service
metadata:
  name: ocr-service
  namespace: facephi-ocr-service
spec:
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 6982
  selector:
    app: ocr
  type: ClusterIP
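
Before exposing the service through Kong, it can be reached directly from a workstation with a port-forward (a sketch; the local port 8080 is arbitrary):

kubectl -n facephi-ocr-service port-forward svc/ocr-service 8080:80
# The service is then reachable locally at http://localhost:8080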

2.5 Ingress

We set up an Ingress in front so that requests coming through Kong are routed to the Service that we previously exposed on port 80.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ocr-ingress
  namespace: facephi-ocr-service
  annotations:
    konghq.com/strip-path: "true"
spec:
  ingressClassName: kong
  rules:
    - host: <your own dns for this service>
      http:
        paths:
          - path: /ocr
            pathType: Prefix
            backend:
              service:
                name: ocr-service
                port:
                  number: 80
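
With konghq.com/strip-path set to "true", Kong removes the /ocr prefix before forwarding, so a request to /ocr/api/v1/process/ reaches /api/v1/process/ inside the Pod. A routing check could look like the following (a sketch; the endpoint is taken from the resource tests above, and the HTTP method and payload required by the API are not shown):

curl -i "http://<your own dns for this service>/ocr/api/v1/process/"
# Any response other than 404 (for example 405 if the endpoint only accepts POST)
# confirms that the Ingress, Service and Pod routing are in place.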

3. Types of Instances

If the cluster is going to use Kubernetes HPA resources to scale the number of pods, it is recommended to take into account the maximum number of pods supported by each instance type. The maximum number of pods considered appropriate based on the tests is shown in the following table.

Instance type   CPU (vCPU)   Memory (GiB)   OCR Pod Capacity
c5.xlarge       4            8              3
c5.2xlarge      8            16             6
c5.4xlarge      16           32             12

It is possible to schedule more OCR pods per instance, but at the cost of response time.
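
If HPA is used, a minimal HorizontalPodAutoscaler sketch could look like the following (the name ocr-hpa, the CPU target and the replica bounds are assumptions; adjust maxReplicas to the pod capacity of your instance type from the table above, and note that metrics-server must be available in the cluster):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ocr-hpa
  namespace: facephi-ocr-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ocr
  minReplicas: 3
  maxReplicas: 6              # e.g. the c5.2xlarge capacity from the table above
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # assumed CPU target; tune for your workload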