This post walks through two steps commonly taken to move your Kubernetes cluster toward production readiness: mounting alfresco-global.properties as a volume and using an external database. A Kubernetes volume is typically a directory of data that is accessible to the containers running in a pod; for our use case, we will mount a single alfresco-global.properties file as a volume. For the external database, we assume you have already set it up per Alfresco’s documentation. The steps shown in this article are used to run Alfresco Content Services (ACS) in a local Kubernetes cluster such as Docker Desktop, and assume you have read the previous articles in this series.

Vijay Prince,
Solutions Architect, Zia Consulting

The commands shown in this blog have been tested on my Docker Desktop Kubernetes cluster running on macOS Catalina. The Helm subchart created by the steps below can be viewed at https://github.com/ziaconsulting/demo-helm-acs/tree/v3.0.

Introduction

We would like to be able to spin up a custom Alfresco image in multiple environments like Dev, Test, and Prod. The differences between these environments are captured in alfresco-global.properties. If we build a custom image that includes alfresco-global.properties, we are forced to create a new image per environment. To reduce the risk of introducing unexpected changes when building different images, we want to use the same image across all environments. To facilitate this, we inject a volume into the custom repository container instead of baking the properties into the image.

One way to add a volume to the Helm charts is to duplicate the existing deployment template and add our custom volume. This is not ideal, as it could introduce complexities during upgrades. To avoid this, we will use a PodPreset.

A PodPreset is a Kubernetes resource used to inject additional requirements into a pod at creation time. In the PodPreset manifest, we provide a label selector that uniquely identifies the pods these requirements should be injected into. In our use case, we want to inject a volume containing alfresco-global.properties into the repository pod.

Enable k8s v1alpha1 features

PodPreset is still an alpha resource, part of the settings.k8s.io/v1alpha1 API group. Before we can use it, we need to enable v1alpha1 features in our cluster, which is done by updating the API server configuration.

Alpha features are not enabled by most managed cloud providers; only Google Kubernetes Engine (GKE) gives us an option to create a cluster with alpha features. When setting up a Kubernetes cluster using Kops, we can configure the master nodes to enable the alpha features.

When running managed Kubernetes clusters, we will not be able to use this feature to load alfresco-global.properties as a volume. In that case, we can pass these properties in values.yaml or via CI/CD tools. More details on this will be provided in future blog articles.

Docker Desktop on a Mac runs the Docker daemon inside a lightweight virtual machine. To update the cluster’s API server, we need a shell inside this VM, which we can get with the following command.

$ docker run -it --privileged --pid=host alpine nsenter -t 1 -m -i /bin/bash

The above command runs a privileged Alpine container that shares the Docker host’s process tree, then uses nsenter to enter the mount and IPC namespaces of PID 1, giving us a shell inside the VM.
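Once we have a shell on the VM, the static manifest can be edited in place (assuming vi is available in the VM, which it typically is on Docker Desktop):

# Inside the VM shell opened above
$ vi /etc/kubernetes/manifests/kube-apiserver.yaml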

In /etc/kubernetes/manifests/kube-apiserver.yaml, find the block that contains --enable-admission-plugins=NodeRestriction and append PodPreset to it. Then insert --runtime-config=settings.k8s.io/v1alpha1=true into the same block, as shown below.

# Updates to file /etc/kubernetes/manifests/kube-apiserver.yaml

- --runtime-config=settings.k8s.io/v1alpha1=true
- --enable-admission-plugins=NodeRestriction,PodPreset
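The kubelet watches the static manifest folder and automatically recreates the API server pod when the file changes. Once it is back up, we can confirm the alpha API is available:

$ kubectl api-versions | grep settings.k8s.io
settings.k8s.io/v1alpha1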

Set up Helm chart

We will clone https://github.com/ziaconsulting/demo-helm-acs/tree/v2.0 and make the necessary changes. The following commands give you a base from which to make your custom changes.

$ git clone -b v2.0 https://github.com/ziaconsulting/demo-helm-acs.git
$ cd demo-helm-acs
$ rm -rf .git/ # Remove repo references to use in your env

All the commands in this article are run from the demo-helm-acs folder. Now we are ready to update the Helm chart to mount alfresco-global.properties as a volume.

Update Helm chart – alfresco-global.properties as volume 

We will create a new Helm template that stores the properties we need to override in alfresco-global.properties as a Kubernetes ConfigMap. A ConfigMap is a Kubernetes API object that lets you store non-confidential configuration data, keeping environment-specific configuration separate from the container images. This decoupling of configuration data and application allows us to use different configuration values per environment without rebuilding the images. The ConfigMap will be mounted as a volume in the pod targeted by the PodPreset configuration. The yaml file below contains this template.

apiVersion: v1
kind: ConfigMap
metadata:
  name: alfresco-global-config
data:
  alfresco-global.properties: |-
    messaging.broker.username=admin
    messaging.broker.password=admin
    vj.common=common property
---
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: alfresco-global
spec:
  selector:
    matchLabels:
      component: repository
  volumeMounts:
    - name: alf-global-vol
      mountPath: /usr/local/tomcat/shared/classes
  volumes:
    - name: alf-global-vol
      configMap:
        name: alfresco-global-config

The ConfigMap alfresco-global-config can hold multiple properties, which we want loaded into the alfresco-global.properties file when we spin up ACS. The PodPreset named alfresco-global matches any pod with the label component: repository and mounts the ConfigMap as a volume named alf-global-vol.
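Once the chart is installed (see the validation section below), we can confirm the PodPreset exists and was applied; pods modified by a PodPreset carry an annotation of the form podpreset.admission.kubernetes.io/podpreset-&lt;name&gt;:

$ kubectl get podpreset alfresco-global
$ kubectl describe pod -l component=repository | grep podpreset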

Before we used Helm charts for deployment, we had to configure many properties in alfresco-global.properties ourselves. In Kubernetes, these properties are configured in the ConfigMap and populated dynamically based on the Helm release names. More details on this can be found at https://github.com/Alfresco/acs-deployment/blob/master/helm/alfresco-content-services/templates/config-repository.yaml.

Multiple environments – namespace, values 

When we run ACS instances in multiple environments, the method above lets us inject properties into each of them. It becomes especially useful when the properties to inject depend on the environment.

There are a couple of options we can use to inject properties depending on the environment.

  1. Namespace: If we are running our QA and Prod in the same cluster but in different namespaces, we can specify that a property should be injected based on the namespace name in the helm template.
  2. Command line configuration: When we use helm install to spin up ACS, we can pass in command line configuration like --set env=dev.

The yaml configuration below shows two properties, vj.ns.var1 and vj.env.var2, that are injected into the repository based on the namespace name and the Helm configuration values respectively. We will update the Helm template with Go template constructs to inject properties dynamically based on these variables. The Helm template language is a combination of the Go template language with extra functions and wrappers that expose certain objects to the templates.

apiVersion: v1
kind: ConfigMap
metadata:
  name: alfresco-global-config
data:
  alfresco-global.properties: |-
    messaging.broker.username=admin
    messaging.broker.password=admin
    vj.common=common property
    {{- if eq .Release.Namespace "vj-development" }}
    vj.ns.var1=dev ns vj-development
    {{- end }}
    {{- if eq .Release.Namespace "vj-production" }}
    vj.ns.var1=prod ns vj-production
    {{- end }}
    {{- if eq .Values.env "dev" }}
    vj.env.var2=dev values.env
    {{- end }}

If we run helm install in a namespace called vj-development, the property vj.ns.var1 will have the value dev ns vj-development. If we also pass --set env=dev, the property vj.env.var2 will have the value dev values.env.
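To preview how these conditionals render without installing anything, we can use helm template (Helm 2 syntax, matching the rest of this article; the chart path assumes the repository layout used here):

$ helm template helm/zia-acs --namespace vj-development --set env=dev

The rendered alfresco-global.properties block should contain both vj.ns.var1 and vj.env.var2.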

Validation – alfresco-global.properties 

In your Docker Desktop Kubernetes cluster, create a new context with a new namespace and switch to this context.

A Kubernetes ‘context’ is a group of access parameters consisting of a cluster, a user, and a namespace. Using a context directs all our commands to the namespace the context is tied to; without one, we would have to specify the namespace in every command.

A namespace provides the scope for Kubernetes objects. It acts as a virtual cluster, allowing multiple teams and projects to work in the same physical cluster. We can also use namespaces to run multiple environments such as Development, QA, and Production on the same physical Kubernetes cluster.

The commands below create the context vj-dev, switch to it, and create its namespace.

# Create custom context vj-dev
$ kubectl config set-context vj-dev --namespace=vj-development \
  --cluster=docker-desktop \
  --user=docker-desktop
$ kubectl config use-context vj-dev # Switch to new context vj-dev
$ kubectl create namespace vj-development # Create the namespace
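We can verify the active context and namespace before proceeding:

$ kubectl config current-context
vj-dev
$ kubectl config view --minify | grep namespace:
    namespace: vj-development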

We can now spin up our custom ACS helm chart using the command below.

$ helm install helm/zia-acs --name acs-vj \
--set env=dev \
--set global.alfrescoRegistryPullSecrets=quay-registry-zia-secret;

The ConfigMap alfresco-global-config that is loaded should have the values as shown below.

$ kubectl describe configmap alfresco-global-config
Name:         alfresco-global-config
Namespace:    vj-development
Labels:       &lt;none&gt;
Annotations:  &lt;none&gt;

Data
====
alfresco-global.properties:
----
messaging.broker.username=admin
messaging.broker.password=admin
vj.common=common property
vj.ns.var1=dev ns vj-development
vj.env.var2=dev values.env
Events:  &lt;none&gt;

To confirm that these properties are loaded, you can also download the JMX dump and validate. You can access the JMX settings page at https://localhost/alfresco/s/enterprise/admin/admin-jmx-settings.
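We can also read the mounted file directly from the repository container. Find the pod carrying the component: repository label that the PodPreset targets, then cat the file:

$ kubectl get pods -l component=repository
$ kubectl exec -it &lt;repository-pod-name&gt; -- \
  cat /usr/local/tomcat/shared/classes/alfresco-global.properties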

Update Helm chart – External DB  

In many Kubernetes deployments that require database access, the database is maintained outside the cluster. In these cases, we need to configure our custom Helm chart to talk to an external database. In our use case, we will assume we are connecting to an external MariaDB database, and we will update our values.yaml file with the following values.

## helm/zia-acs/values.yaml
  database:
    external: true
    driver: "org.mariadb.jdbc.Driver"
    user: alfresco
    password: alfresco
    url: "'jdbc:mariadb://192.168.27.111:3306/alfresco?useUnicode=yes&characterEncoding=UTF-8'"

  postgresql:
    enabled: false

The example shows us connecting to an external MariaDB running on server 192.168.27.111. Note the nested quoting around the JDBC url: the inner single quotes keep the &amp; in the URL intact when the value is passed through to the JVM options. Since we don’t need the default PostgreSQL DB in the cluster, we set postgresql.enabled=false in the values.yaml file above.
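Before installing, it is worth confirming that the database is reachable from inside the cluster. A throwaway client pod is a quick way to do this (a sketch; the image tag is illustrative, and the host and credentials come from the example values above):

$ kubectl run mariadb-check --rm -it --restart=Never --image=mariadb:10.4 -- \
  mysql -h 192.168.27.111 -u alfresco -palfresco -e 'SELECT 1'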

JDBC Client

Depending on your database, you will need to download the JDBC client that Alfresco uses to connect to it. In our previous article, https://www.ziaconsulting.com/developer-help/kubernetes-4/, we discussed how to integrate a custom SDK project into the custom Helm chart. We will download the MariaDB JDBC client and place it in the SDK project as shown below.

 

$ wget -P zia-custom/zia-custom-platform-docker/src/main/docker/extensions https://downloads.mariadb.com/Connectors/java/connector-java-2.2.5/mariadb-java-client-2.2.5.jar

We do not need to worry about packaging this jar into the custom image: the default Dockerfile generated by the SDK copies every jar found in the extensions/ folder into $TOMCAT_DIR/webapps/alfresco/WEB-INF/lib/ in the image.
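For reference, the relevant instruction in the generated Dockerfile looks roughly like the following (an approximate excerpt; the exact form depends on your SDK version):

# Approximate excerpt from the SDK-generated Dockerfile
ARG TOMCAT_DIR=/usr/local/tomcat
COPY extensions $TOMCAT_DIR/webapps/alfresco/WEB-INF/lib/

With the jar in place, we need to rebuild the SDK project and the Docker images using the commands below before we can spin up ACS.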

$ mvn -f zia-custom/pom.xml clean package
$ docker-compose -f zia-custom/target/classes/docker/docker-compose.yml build --no-cache

Validation – External DB

Before we can spin up the Helm chart, we have to bring down any running release named acs-vj and clean the PersistentVolumes (PVs) backing the content store and the Solr indexes. Cleaning the PVs is destructive: it deletes all existing data so that we can spin up a fresh ACS installation using the external database. We can bring down the release using the command below.

$ helm delete acs-vj --purge

In our local Docker Desktop environment, we can clean the data in the PVs by clearing the contents of the folders backing them, as shown below.

# The folders will vary depending on your setup
$ /bin/rm -rf /Users/vijayprince/helm/pvc/volumes/acs/* ;  \
/bin/rm -rf /Users/vijayprince/helm/pvc/volumes/solr/* 

We can now spin up our custom ACS helm chart using the command below.

$ helm install helm/zia-acs --name acs-vj \
--set env=dev \
--set global.alfrescoRegistryPullSecrets=quay-registry-zia-secret;

If we have successfully set up the external DB, we will be able to validate by navigating to https://localhost/alfresco in a browser.
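We can also verify from the database side that the repository created its schema. Alfresco’s tables are prefixed with alf_, so a quick query (run from any machine that can reach the database host, using the example credentials above) confirms this:

$ mysql -h 192.168.27.111 -u alfresco -palfresco alfresco \
  -e 'SHOW TABLES LIKE "alf_%";'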

Cleanup

You can clean up the Helm installs using the command below. Warning: this command removes every release in the cluster.

$ helm delete $(helm list -aq) --purge

To remove a specific chart, pass your release name as shown below.

$ helm delete acs-vj --purge

Conclusion

In this article, we have seen how to inject custom alfresco-global.properties based on the environment and how to use an external DB. The sample code can be found at https://github.com/ziaconsulting/demo-helm-acs/tree/v3.0. In the next blog post, I will show the different options we have for log aggregation. Until then, be sure to catch up on this series, or contact Zia to continue the conversation.
