The challenge
A single site deployed on a dedicated AEM installation is a rather rare situation. It might make sense for a very large site serving different markets and available in multiple languages, but even in such a scenario the business will, at some point, want to deploy microsites or campaigns.
Usually, the business treats its AEM installation as a platform on which it can build and deploy more and more sites going forward. Often, more than one company or team works at the same time on sites that will land on the platform.
So, how complicated is it to manage such a situation?
In the pre-Cloud-Manager world, the situation was rather simple: every company or team had a dedicated Git repository, dedicated code, and some CI/CD mechanism to build, test, and deploy the application.
Nowadays, the situation has changed. Why? Mostly because of the specifics of AEM as a Cloud Service.
Why is AEM as a Cloud Service “problematic”?
Due to its architecture, AEM as a Cloud Service requires a single codebase containing the immutable parts of the system (a “mono-repo”), which is used to build a Docker image that is then deployed in a Kubernetes pod. That is why Cloud Manager supports only one production pipeline.
These are the specifics of the architecture, and thanks to them the platform offers many advantages, like autoscaling, automatic updates, etc.
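To picture what that single codebase looks like: a Cloud Manager mono-repo typically follows the layout generated by the AEM Project Archetype. The module names below are the archetype defaults, not something Cloud Manager enforces, so your project may differ:

```
pom.xml          # root Maven build – the single entry point Cloud Manager builds
core/            # Java OSGi bundles (immutable code)
ui.apps/         # /apps code packages (immutable)
ui.content/      # content packages (mutable content)
ui.config/       # OSGi configurations
dispatcher/      # dispatcher/web-tier configuration
all/             # container package aggregating everything deployable
```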
So how do we handle that situation? Are we doomed to the single-site case? Not necessarily.
The requirements
What are the requirements we would like to meet to build the platform on top of AEM as a Cloud Service?
- several different teams should be able to work together, in parallel
- the teams should be able to work independently
- every team should be able to validate its code against quality gates
- the code that each team delivers separately should be validated against the specifics of the platform
A possible solution
The idea is to implement a multi-step process that allows independent teams to meet the requirements set by Cloud Manager. Theoretically, it could look as follows:
- Developers try to recreate (at least some of) the quality gates locally. That includes unit test coverage, SonarQube rules, and OakPal checks – either by recreating Adobe’s rules (listed here) or (I have no idea whether that is possible) by obtaining them from Adobe. A sketch of such a local gate is shown after this list.
- Before merging their code to the master branch of their “local”/project repository, developers should pass the above-mentioned “local” quality checks. That should lower the risk of failing at the Cloud Manager level.
- Once the feature is merged to the master branch of the company’s code repository, CI/CD should automatically push the master branch to Adobe’s repository (to a branch named after the project/company) and trigger a Code Quality Pipeline on that branch (note that, unlike the production pipeline, there is no limit on the number of Code Quality Pipelines). The company is responsible for fixing all the issues reported by Cloud Manager. A sketch of this automation also follows the list.
- When the code is ready to go live, the integration team reviews it for consistency with the platform and merges it to the master branch.
- Executing the Production Pipeline then either deploys the application to production or detects remaining problems, e.g., performance issues.
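Here is a minimal sketch of the “local quality gates” from the first step. Everything in it is an assumption on my side: the Maven goals presume the project is configured with JaCoCo for coverage, the SonarQube scanner, and the OakPal Maven plugin, and none of the commands reproduce Adobe’s exact rule sets:

```python
#!/usr/bin/env python3
"""Hypothetical pre-merge gate: run local stand-ins for Cloud Manager's
quality checks before allowing a merge to the project's master branch."""
import subprocess
import sys

# Each entry is (description, command). The goals assume the build is
# configured with JaCoCo (coverage), the SonarQube scanner, and the
# OakPal Maven plugin -- adjust to whatever your project actually uses.
CHECKS = [
    ("Unit tests + coverage (JaCoCo)", ["mvn", "clean", "verify"]),
    ("SonarQube analysis", ["mvn", "sonar:sonar"]),
    ("OakPal content-package checks", ["mvn", "oakpal:scan"]),
]

def main() -> int:
    for description, command in CHECKS:
        print(f"--- {description}: {' '.join(command)}")
        # Stop at the first failing gate so developers get fast feedback.
        if subprocess.run(command).returncode != 0:
            print(f"FAILED: {description} - fix this before merging.")
            return 1
    print("All local quality gates passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Wiring such a script into a pre-merge hook or a pull-request build keeps obviously failing code away from Cloud Manager in the first place.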
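And a sketch of the third step: pushing the validated master branch to Adobe’s repository and starting a Code Quality Pipeline on it. The remote name, branch name, and environment variables are hypothetical; the pipeline-start request follows the public Cloud Manager REST API, but you would have to plug in your own program/pipeline IDs and an IMS access token obtained beforehand:

```python
#!/usr/bin/env python3
"""Hypothetical CI step: mirror the company's master branch to Adobe's
Cloud Manager Git repository and start a Code Quality Pipeline on it."""
import os
import subprocess
import requests  # assumption: 'requests' is available on the CI runner

# All of these values are placeholders for your own configuration.
ADOBE_REMOTE = "adobe"        # git remote pointing at Cloud Manager's repo
COMPANY_BRANCH = "company-a"  # branch named after the project/company
PROGRAM_ID = os.environ["CM_PROGRAM_ID"]
PIPELINE_ID = os.environ["CM_CQ_PIPELINE_ID"]  # Code Quality Pipeline bound to COMPANY_BRANCH
ACCESS_TOKEN = os.environ["CM_ACCESS_TOKEN"]   # obtained via Adobe IMS beforehand
API_KEY = os.environ["CM_API_KEY"]
ORG_ID = os.environ["CM_ORG_ID"]

# 1. Push the already-validated master branch to Adobe's repository,
#    onto the branch the Code Quality Pipeline is configured to watch.
subprocess.run(
    ["git", "push", ADOBE_REMOTE, f"master:{COMPANY_BRANCH}"],
    check=True,
)

# 2. Start the pipeline via the Cloud Manager API
#    (PUT /api/program/{programId}/pipeline/{pipelineId}/execution).
response = requests.put(
    f"https://cloudmanager.adobe.io/api/program/{PROGRAM_ID}"
    f"/pipeline/{PIPELINE_ID}/execution",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "x-api-key": API_KEY,
        "x-gw-ims-org-id": ORG_ID,
    },
)
response.raise_for_status()
print("Code Quality Pipeline started:", response.status_code)
```

The branch-to-pipeline binding is assumed to be configured once in Cloud Manager; the script only mirrors the code and triggers the run.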
Two teams, two levels
The above scenario suggests that, apart from the operational teams, there should be one team strictly responsible for the platform (including merging the code into the single code repository). In my opinion, this is not strictly bound to the Cloud scenario: in most cases, such an approach makes sense to keep the platform consistent and stable.
Feedback
What do you think: does such an approach make sense? What are its downsides? I am also keen to hear about other solutions that have worked in real-life scenarios; I think that sharing your strategies would be very beneficial for others.