
Part 1: Review of Deployment Methods

Alfresco Content Services is a content management product offered by Hyland. It is somewhat unique in that it does not ship with a supported installer. This article is the first in a series providing guidance on the decisions made while installing Alfresco. It gives an overview of the options and discusses the advantages and disadvantages of each. Subsequent articles will discuss the specific deployment options in greater detail.

Hyland documents four deployment methods for Alfresco Content Services (ACS):

  • Manual Installation
  • Ansible Installation
  • Containerized Deployment via Docker Compose
  • Containerized Deployment with Kubernetes and Helm Charts

While Hyland documents each of these methods to some degree, they are reference implementations that are neither supported by the product support organization nor production-ready as provided. To be successful, Hyland recommends having a team that is familiar with the chosen deployment tooling as well as the components of the Alfresco product.

As one of Hyland’s Platinum Partners, Zia Consulting focuses on Enterprise Alfresco. The information in this article applies to the Community Edition as well, but has not been fully validated against it to date.

The following sections outline reasons for choosing each method.

Manual installation

The most rudimentary way to install Alfresco is to copy the components provided by Hyland, place them in the environment, and configure them yourself.

The two primary reasons to use the manual installation method are to deploy to Windows or to use a configuration management tool such as Puppet, Chef, or SaltStack.

TIP: Working through a manual install provides extensive insight into how ACS works. This is highly recommended for anyone wanting to gain experience with the software, even those planning to use one of the other deployment methods for production.

Hyland documents the process of installing the WAR files to Tomcat. However, the documentation covers only the minimum configuration, without the additional configuration needed for a production environment. Furthermore, the instructions alone are not sufficient for establishing a functional production system because they do not address the installation of several other required components, including Search Services, ActiveMQ, the Transform Router (T-Router), and Transform Engines (T-Engines). Alfresco provides documentation on its docs site, on project GitHub pages, and on its community site. It also relies on documentation provided by other projects such as ActiveMQ and Solr.
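To illustrate how minimal that baseline is, here is a sketch of a bare-bones alfresco-global.properties. All values are placeholders, not official defaults; it names only the content store and database, leaving out everything else a production system needs:

```properties
# Illustrative minimal configuration; paths and credentials are placeholders.
dir.root=/srv/alfresco/alf_data
db.driver=org.postgresql.Driver
db.url=jdbc:postgresql://localhost:5432/alfresco
db.username=alfresco
db.password=change-me
# A production system also needs Search Services, ActiveMQ,
# and transform service endpoints configured here.
```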


Ansible installation

Starting with ACS 7, Alfresco provides a reference Ansible playbook for deploying a single ACS stack; the playbook also supports deploying ACS 6.2. Out of the box, it targets specific versions of Red Hat Enterprise Linux/CentOS or Ubuntu. The playbook does not support deploying a cluster or deploying to Windows hosts.

When using containers is not possible, the Ansible playbook automates the steps that would otherwise be performed manually on machines running a Linux distribution supported by Hyland’s playbooks.

TIP: It is recommended to review these playbooks, and all other associated documentation, when planning to perform a manual installation.

NOTE: Zia Consulting uses this playbook to deploy full stacks on several servers, and then joins the servers into a cluster. Neither the cluster configuration nor instructions for deploying multiple ACS instances to multiple servers with a single invocation are provided.
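As a sketch, a run of the reference playbook looks roughly like the following. The repository is Alfresco’s public alfresco-ansible-deployment project; the playbook and inventory file names are assumptions based on its conventions and may differ between releases:

```shell
# Sketch only: assumes Ansible is installed and SSH access to the
# target hosts is already configured.
git clone https://github.com/Alfresco/alfresco-ansible-deployment.git
cd alfresco-ansible-deployment

# Describe the target host(s) in an inventory file, then run the playbook.
ansible-playbook playbooks/acs.yml -i inventory_ssh.yml
```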


Containerized Deployment Options

The next two deployment methods use containers that are built, sealed, tested, and delivered by Hyland. Before diving into these methods, it is helpful to understand the history and nature of containers, and why container-based deployments are preferred whenever possible.

Containers are a modern approach to running several executables in isolated, resource-constrained environments on a single computer. The 1960s brought the ability to use special CPU instructions to run several virtual computers, with multiple operating systems, on a single physical computer at the same time. A single large computer could maintain high utilization while keeping multiple virtual machines isolated with respect to security and performance.

In 1979, Unix V7 added support for chroot. This allowed someone to run an executable with a restricted view of the filesystem. By placing a minimal copy of the operating system libraries and configuration files in a directory, a person could execute a program in a jail that sees the minimal copy as the root of the filesystem and can’t break out of that jail. Over the next quarter century, Unix and Linux kernels added other forms of resource limits and jails. These enabled a person to limit process resources, such as CPU and memory, and to establish specialized networks for jailed processes.

In 2006, Google released Process Containers, which provided a consistent system for limiting resources and jailing processes on Linux; it was later renamed Control Groups (cgroups). By 2008, Linux Containers (LXC) was released, consolidating support for cgroups and Linux namespaces.

In 2013, Docker simplified the use of many of the LXC features. This allowed a person to create immutable and cryptographically signed filesystem images, and then execute a program inside a mutable but ephemeral copy of the image. The running program saw only the filesystem defined by the image, and could have resource limits and its own networking. It was a container. This gave people many of the benefits of virtual machines without requiring a whole new kernel for each service; containers share the kernel with the host OS.

A container-based approach is often preferred when considering which deployment method to use for production. It is important to understand that containers:

  • Are built on supported and tested base operating systems.
  • Include specific supported and tested versions of dependencies and configurations.
  • Have a folder structure that is consistent from one release to another.
  • Allow developers to use identical binaries that are released to live environments.
  • Are immutable, meaning the exact same code runs in each environment (e.g. development, stage, and production).
  • Allow for deployment on a wider variety of Linux kernels, distributions, and versions.
  • Provide isolation and resource sharing advantages similar to virtual machines, while allowing for higher utilization of hardware resources.
  • Often offer simplified upgrades by referencing newer images and restarting.
  • Encourage good DevOps practices like those outlined in the Twelve-Factor App methodology.
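The isolation and resource limits described above map directly onto container runtime flags. A hedged sketch using the Docker CLI (image, limits, and names are illustrative, not recommendations for ACS):

```shell
# Sketch only: assumes the Docker CLI and daemon are installed.
# Create an isolated network for containers to share.
docker network create demo-net

# Run a container with cgroup-enforced CPU and memory limits
# inside its own network namespace.
docker run --detach --name demo \
  --memory 2g --cpus 2 \
  --network demo-net \
  alpine:3 sleep 3600
```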

Hyland/Alfresco does not provide Windows or ARM container images, which is why Zia prefers not to run these containers on Windows or ARM machines for production deployments.

Docker Compose installation

Hyland provides basic Docker Compose files for ACS versions 6.1, 6.2, and 7.x that are intended to let developers spin up a development environment in a few minutes.

For customers that do not have a team experienced in deploying production applications on Kubernetes, running containers directly on physical or virtual servers can be a very beneficial way to deploy ACS.

When Kubernetes is not a good option, extending the Docker Compose files is a practical way to gain the benefits of a container-based deployment in a production environment. Future articles in this series will cover this in greater detail.
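For orientation, here is a sketch of what a heavily trimmed Compose file looks like. It is not the official file: image tags and credentials are illustrative, and the real files define many more services such as Share, Search Services, ActiveMQ, and the transform services:

```yaml
# Illustrative fragment only: repository and database, nothing else.
version: "3"
services:
  alfresco:
    image: alfresco/alfresco-content-repository:7.2.0   # tag illustrative
    environment:
      JAVA_OPTS: >-
        -Ddb.driver=org.postgresql.Driver
        -Ddb.url=jdbc:postgresql://postgres:5432/alfresco
        -Ddb.username=alfresco
        -Ddb.password=alfresco
    ports:
      - "8080:8080"
    depends_on:
      - postgres
  postgres:
    image: postgres:13
    environment:
      POSTGRES_DB: alfresco
      POSTGRES_USER: alfresco
      POSTGRES_PASSWORD: alfresco
```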


Kubernetes via Helm installation

Hyland also provides reference Helm charts for ACS versions 6.1, 6.2 and 7.x. These are intended to be used for clustered deployments.

It is common for customers to struggle to install, configure, maintain, upgrade, and run production workloads on Kubernetes. Customers who are successful with Kubernetes tend to have a dedicated IT team that is responsible for these activities. 

We often see companies using Kubernetes on a cloud provider such as Amazon Web Services, Google Cloud Platform, Microsoft Azure, or on Red Hat’s OpenShift. While these offerings tend to reduce the complexity of running Kubernetes, they still require specialized knowledge. This is especially true around networking, storage, configuration, and maintenance.

The provided Helm charts are a great start, including best practices not covered by any of the other deployment methods. However, they still require a significant amount of customization to run a production environment. For customers that don’t already have strong skills in Helm and Kubernetes, this can add risk compared to using Docker Compose.
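For reference, an installation from the charts typically starts along these lines. The chart repository URL is the one Alfresco publishes; the release name, namespace, and values file are illustrative, and the values file is where most of the production customization mentioned above lives:

```shell
# Sketch only: assumes a working Kubernetes cluster and Helm 3.
helm repo add alfresco https://kubernetes-charts.alfresco.com/stable
helm repo update

# A production install needs a customized values file.
helm install acs alfresco/alfresco-content-services \
  --values custom-values.yaml \
  --namespace alfresco --create-namespace
```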

See our blog series for more detail on running Alfresco on Kubernetes. It is important to note that some things have changed since it was released a couple of years ago. Of course, feel free to contact us with any related questions.

ABOUT THE AUTHOR

Bindu Wavell – Chief Architect

Bindu has 25+ years of consulting experience with enterprise system integration. As the content lead, he provides technical and architectural reviews and guidance. Bindu supports project teams and runs the Alfresco support practice, overseeing issues across multiple customers. Additionally, he’s active in the Alfresco community, including being a member of the Order of the Bees, where he is a contributor to support tools projects. He’s the creator of the Alfresco Yeoman generator. Bindu is a tea connoisseur, and interested in hobby robotics and automation.
