Channel: Dan Klco – Perficient Blogs

Exploring the Sling Feature Model: Part 2 – Composite NodeStore


In my previous post Exploring the Sling Feature Model, I described the process of migrating from a Sling Provisioning project setup to a Sling Feature Model project. 

Now that we have the Sling Provisioning Model project converted, we can move on to the fun stuff and create a Composite NodeStore. We’ll use Docker to build the Composite Node Store in the container image.

Creating the Composite NodeStore Seed

The Composite NodeStore works by combining one or more static “secondary” node stores with a mutable primary NodeStore. In the case of AEM as a Cloud Service, the /apps and /libs directories are mounted as a secondary SegmentStore, while the remainder of the repository is mounted as a MongoDB-backed DocumentStore.

For our simplified example, we will create a secondary static SegmentStore for /apps and /libs and combine that with a primary SegmentStore for the remainder of the repository. Since the secondary SegmentStore will be read-only, we must “seed” the repository to pre-create the static paths /apps and /libs.

To do this, we have a feature specifically for seeding the repository, with /apps and /libs temporarily mutable. We can then use the aggregate-features goal of the Sling Feature Maven Plugin to combine this with the primary Feature Model into a feature named slingcms-composite-seed. When we start a Sling instance using this feature, it creates the nodes under these paths based on the feature contents.
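As a quick illustration, starting a seed instance with the Feature Model Launcher looks something like this (a sketch; the launcher JAR name and file locations will vary by version and setup):

java -jar org.apache.sling.feature.launcher.jar \
  -f org.apache.sling.cms.feature-0.16.3-SNAPSHOT-slingcms-composite-seed.slingosgifeature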

As shown below, while seeding, the content for these paths is written to the libs SegmentStore. It’s also worth mentioning that with the Feature Model Launcher, by default, the OSGi Framework runs in a completely different directory from the repository and pulls the bundle JARs from the local Maven repository.

Seeding a Composite NodeStore

Our updated Dockerfile runs the following steps to build the container image:

  • Downloads the Feature Model Launcher JAR and Feature Model JSON files
  • Starts the Sling instance using the slingcms-composite-seed model in the background
  • Polls the Felix Health Checks until the tag “systemalive” returns 200
  • Once the 200 status is returned, the Sling instance is stopped and the build cleans up the launcher and symlinks the SegmentStore directory into the expected path (see the sketch below)
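A rough shell sketch of the start-poll-stop steps (the port and health-check URL are assumptions based on a default local setup, not the Dockerfile's exact contents):

# start the seed instance in the background, as in the launcher snippet above
java -jar org.apache.sling.feature.launcher.jar \
  -f org.apache.sling.cms.feature-0.16.3-SNAPSHOT-slingcms-composite-seed.slingosgifeature &
SLING_PID=$!
# poll the Felix Health Checks until the systemalive tag reports healthy (HTTP 200)
until [ "$(curl -s -o /dev/null -w '%{http_code}' http://localhost:8080/system/health/systemalive)" = "200" ]; do
  sleep 5
done
# stop the instance so the build can clean up and symlink the SegmentStore
kill $SLING_PID && wait $SLING_PID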

Naming Gaffes

Hopefully, you are more careful than me, but one thing to keep in mind is that the Sling Feature Launcher will happily start as long as it has a valid model. For example, you can easily spend a significant amount of time trying to understand why nothing responds with this model:

org.apache.sling.cms.feature-0.16.3-SNAPSHOT-composite-seed.slingosgifeature

Instead of the one I meant:

org.apache.sling.cms.feature-0.16.3-SNAPSHOT-slingcms-composite-seed.slingosgifeature

Since the non-aggregate model is a valid model, the Sling Feature Launcher will happily start, but it simply creates an OSGi container with only a couple of configurations, which naturally does… nothing. 

Starting and Running

Once the repository has been fully started and seeded, we’ll run a different Feature Model to run the instance. Similar to the Composite Seed Feature Model, the slingcms-composite-runtime Composite Model will use the composite repository; however, it runs the libs mount in read-only mode.

To use the runtime Feature Model, the CMD directive in the Dockerfile calls the Sling Feature Model Launcher with the slingcms-composite-runtime Feature Model. In addition, we’ll mount a volume in the docker-compose.yml to separate the mutable repository from the container disk, so that the repository persists across restarts and container deletion.
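To sketch the same idea outside of docker-compose (the image name, volume name, and mount path here are placeholders, not the project’s actual values), the equivalent docker run invocation would look like:

docker run -d \
  -p 8080:8080 \
  -v slingcms-repository:/opt/slingcms/repository \
  slingcms-composite:latest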

While in runtime mode, the Composite repository looks like the diagram below, leveraging a Docker volume for the global SegmentStore and the local seeded repository for the libs SegmentStore:

Composite NodeStore Runtime

End to End

Here’s a quick video showing the process of creating a containerized version of Sling CMS with a Composite NodeStore from end to end.

Details: Build Arguments & Dependencies

The current example implementation uses Apache Maven to pull down the Feature Models with a custom settings.xml and Build Arguments in the Dockerfile. By changing the settings.xml and the Build Arguments, you could override the Feature Model being produced to use a custom Feature Model, for example an aggregate of Sling CMS and your custom Sling CMS app.
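For example, a sketch of such an override (the ARG names here are illustrative, not necessarily the Dockerfile’s actual Build Arguments):

docker build \
  --build-arg FEATURE_GROUP_ID=com.myco \
  --build-arg FEATURE_ARTIFACT_ID=com.myco.site.feature \
  --build-arg FEATURE_VERSION=1.0.0 \
  -t my-sling-cms .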

We’ll cover the process of producing a custom aggregate in the next blog post in the Exploring the Sling Feature Model series. If you’d like to learn more about the Sling Feature Model, you should check out my previous post on Converting Provisioning Models to Feature Models.


The Missing Guide to Your Adobe Managed Services Servers


Working with Adobe Managed Services (AMS), I’ve often wished I had a guide to the common activities I need to log into a server to perform. Here is a quick starter to help you find your way around the servers provisioned by Adobe Managed Services. Since I can’t guarantee AMS environments are consistent (and checking with other teams, I’ve confirmed they’re not), you may find these commands or paths don’t exactly match yours, but they should be a good start.

SSH Access

Before you can SSH into the AMS hosts, you will need to reach out to your CSE to create a username and add your public key to the ~/.ssh/authorized_keys file on the host. I’d note that some AMS customers are still issued password-based users (in 2020!!), so ssh-copy-id may come in handy.

By default, the CSE will create a single username for all users to access. You can request they create named users as well, however, you will need to be vigilant in adding/removing users as these users are not managed in a centralized repository.

Lower vs. Upper Environment Access

By default, you will have more (if still limited) access to the lower environments as compared to the upper environments. Even in the upper environments, you can request a “jailed user” with read-only access to the logs.

In the lower environments, you will be able to do a number of “write” activities by invoking sudo. For a full list of the commands you can execute run:

sudo -l

Your allowed sudo commands will generally use the full file path. It’s important to note that, in that case, you cannot just execute the command from the relative directory; e.g. this won’t work:

cd /etc/httpd/conf.d
sudo vi dispatcher_vhost.conf

but this will:

sudo vi /etc/httpd/conf.d/dispatcher_vhost.conf

Dispatchers

For those not experienced with AEM, Dispatcher servers run Apache httpd with a special module called the Dispatcher and serve as a proxy, cache and quasi-security layer for the AEM Authors and Publishers.

Important Directories:

  • Logs: /mnt/var/log/httpd/
    Note – you will not be able to change to the log directory and must use sudo
  • Apache Docroot: /mnt/var/www/html/
  • Apache Server Configuration: /etc/httpd/
  • Dispatcher Configuration: /etc/httpd/conf.dispatcher.d/

Useful Commands:

  • Restart Apache: sudo service httpd restart
  • Diagnose httpd startup errors: sudo journalctl --system -u httpd
  • List the Apache environment variables: cat /etc/sysconfig/httpd
  • List log files: sudo ls /mnt/var/log/httpd/
  • Tail log file: sudo tail -f /mnt/var/log/httpd/[log-file-name]
    Note – you cannot tail /mnt/var/log/httpd/*; you need to explicitly mention the log files to tail, e.g.:
    sudo tail -f /mnt/var/log/httpd/access_log /mnt/var/log/httpd/error_log
  • Edit a dispatcher configuration file: sudo vi /etc/httpd/conf.dispatcher.d/[configuration-file]

AEM Instances

This applies to both AEM Author and Publish instances, as the setup of each from the AMS perspective is nearly identical aside from the runmode.

Directories:

  • AEM Installation: /mnt/crx/[author|publish]/crx-quickstart
    Note – You won’t be able to access the parent of the crx-quickstart folder and have to change directly into the crx-quickstart path
  • Logs: /mnt/crx/[author|publish]/crx-quickstart/logs

Useful Commands:

  • Restart AEM: sudo /etc/init.d/cq5 restart
  • List Logs: ls /mnt/crx/[author|publish]/crx-quickstart/logs
  • Tail logs: tail -f /mnt/crx/[author|publish]/crx-quickstart/logs/[log-file]

Hopefully, this helps you find your way around your AMS installation and remember, we’re always here to help. Having trouble getting to something or diagnosing an issue? Leave a comment!

Apache Sling JVM Performance Comparison


With the recent proliferation of Java Virtual Machine (JVM) implementations, it’s difficult to know which implementation is the best for your use case. While proprietary vendors generally prefer Oracle Java, there are several open source options with different approaches and capabilities. 

Given how the implementations vary in some underlying technical specifics, the “correct” JVM implementation will vary based on the use case. Apache Sling, in particular, has some specific needs given the complexity of the OSGi / JCR backend and the Sling Resource Resolution framework. 

Test Strategy

To help get some real data on which JVM implementation works best for running Apache Sling, I created a project to:

  1. Install a number of JVM implementations and monitoring tools
  2. For each JVM:
    1. Set up an instance of Apache Sling CMS, using no additional parameters
    2. Install a content package to the Sling CMS instance
    3. Run a performance test using siege
  3. Consolidate the data into a CSV

If you are curious, you can check out the Sling JVM Comparison project on GitHub.

The project installs and compares the following JVM implementations on version 11:

  • OpenJDK Hotspot
  • Amazon Corretto
  • Oracle JDK
  • GraalVM
  • Azul Zulu
  • Eclipse OpenJ9

To create a meaningful comparison, I set up and ran the tests on an Amazon EC2 m5.large instance running Ubuntu 18.04 LTS “Bionic Beaver” and captured the results.

Startup / Install Performance

An important performance comparison is the amount of time it takes to get an instance running. To measure this, I captured the time in milliseconds to start the Apache Sling CMS instance and the time required to upload and install the same content package. There is some potential variance in the startup measurement, as the test process determines startup time by polling the Sling instance until it responds successfully to a request.

OpenJDK Hotspot and Amazon Corretto are essentially tied as the leaders of the pack, with Oracle JDK and GraalVM following shortly behind. Azul Zulu and Eclipse OpenJ9 take 78% and 87% longer to start than OpenJDK Hotspot. Interestingly, most of the JVM implementations take approximately the same time to install the content package; however, Eclipse OpenJ9 takes 35% longer.

Performance Under Load

To check performance under load, I tested the instances using siege with a list of URLs over the course of an hour, in blocks of 15 minutes on and 15 minutes off. 
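For reference, a single 15-minute block of that test might look something like the following siege invocation (a sketch; the actual URL list, concurrency, and options in the test project may differ):

siege --file=urls.txt --concurrent=10 --time=15M --log=siege.log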

First, we can take a look at the throughput per second:

And next, we can look at the raw transaction count:

Both tell the same story: OpenJDK Hotspot, Amazon Corretto, and Oracle JDK take the top spots for performance, with GraalVM, Azul Zulu, and Eclipse OpenJ9 trailing behind.

Memory Usage

Finally, given how memory intensive Java applications can be, it’s important to consider memory usage and here the differences are quite stark:

Eclipse OpenJ9 is significantly less memory intensive, using only 55% of the average memory of the 4 middle-tier JVM implementations. GraalVM also sits outside the average, using 15% more memory than the same middle-tier JVM implementations.

Summary and Findings

From a raw performance perspective, OpenJDK Hotspot is the clear winner with Amazon Corretto close behind. If you are all-in on Amazon or want a long-term supported JVM option, Amazon Corretto would be worth considering.

For those running Apache Sling on memory-limited hosting options, Eclipse OpenJ9 is the best option. While there is a tradeoff in performance, when you only have a gigabyte or two of memory, reducing the load by 45% will make a tremendous difference.

Credit

Thanks to Paul Bjorkstrand for coming up with the idea for this post. 

Apache Sling JVM Performance Followup


In a comment on my previous post Apache Sling JVM Performance, Gil Tene made an insightful point about the possibility of performance impacts from variability in the underlying environment or from other tests.

To account for this possibility, I re-ran the tests inside a loop, randomizing the order of JVM execution for each of 25 iterations. As Gil predicted, this brought the OpenJDK implementations closer together, with GraalVM and OpenJ9 as outliers. 
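As a sketch of that methodology (the script name and JVM identifiers are placeholders, not the actual test harness):

for i in $(seq 1 25); do
  # shuffle the JVM execution order on every iteration
  for jvm in $(shuf -e hotspot corretto oraclejdk graalvm zulu openj9); do
    ./run-test.sh "$jvm" "$i"
  done
done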

Startup Performance

Interestingly, with the multiple iterations, OpenJ9 actually became the fastest starting JVM implementation, though practically tied with OpenJDK Hotspot and Azul Zulu. GraalVM was almost 6 seconds slower to start on average.

Package Installation Performance

Package installation performance was quite interesting, as every JVM besides OpenJ9 averaged out nearly identically. 

Transaction Performance

The transaction performance varies significantly from the initial results, with GraalVM taking the lead in the rate and quantity of transactions and OpenJ9 handling almost 5 fewer transactions per second than GraalVM.

This is honestly quite different from what I expected. My hypothesis was that the OpenJDK-based implementations would net out pretty similarly, but in actuality, there was a statistically significant difference between each implementation.

Memory Usage

The full run of 25 iterations showed roughly the same results in terms of memory usage. OpenJ9 used significantly less memory and GraalVM significantly more, with OpenJ9 using 60% of the average memory of the OpenJDK implementations.

Outliers

One of the interesting things to observe is that there were some extreme outliers; for example, package installation, which generally took ~30 seconds, occasionally took over 2.5 minutes. This seems to be related to the underlying hardware, as there’s no pattern in the iteration order, iteration number, or JVM implementation. To avoid skewing the data, I excluded these outliers from the other charts.

Revised Summary and Findings

With multiple runs, the differences between the OpenJDK codebase implementations (e.g. Oracle JDK, Amazon Corretto, Azul Zulu, and OpenJDK Hotspot) shrink significantly. The performance and startup differences are small enough that licensing would be the primary criterion I’d recommend when choosing a JVM implementation.

If raw performance is the primary concern, GraalVM demonstrated a consistently higher transaction rate over the iterations at the cost of a slower startup and higher memory usage.

For lower-end or container-based usages, OpenJ9 continues to be an excellent choice with its low memory usage, especially after it demonstrated the promised faster startup on average over the multiple iterations. 

Make Your Adobe Managed Services Migration a Success


With companies looking to reduce costs and increase agility, many are looking to move their CMS to the cloud. Adobe offers two cloud solutions for AEM: Adobe Managed Services (AMS) and AEM as a Cloud Service. For existing on-premise customers, AMS is a lighter lift as it is closer to on-premise architectures.

Is AMS a Fit?

The first question to ask is: is AMS a fit? AMS is not the best solution for all customers. AMS is good for organizations with a single development team, simple/limited integrations, and limited internal technical teams. Teams with complex integrations, extensive DevOps capabilities, or multiple teams will find AMS limiting.

Once you’ve validated that AMS makes sense for your organization, here’s how you can make sure your AMS migration is a success!

Environment and Stack

  • Consider how to integrate existing CI/CD pipelines with Adobe Cloud Manager. Cloud Manager’s limitations can have a ripple effect on code and configuration changes, e.g. possible consolidation of multiple AEM projects
  • This Jenkins plug-in for Adobe Cloud Manager can trigger deployments from Jenkins
  • The selection of your Adobe Managed Services CSE is very important as there tends to be a wide variance in knowledge and personalities.
  • Use Adobe IMS for federated authentication with Creative Cloud and Experience Cloud. This will make login scenarios simpler for your internal users as well as partners

Application / Development Practices

  • Evaluate integration patterns to ensure they use publicly available API endpoints or endpoints which can be whitelisted by IP
  • Set up a local SonarQube instance to identify issues and speed up development once you are in a steady state
  • Evaluate customizations to the AEM JAR execution or dispatcher configuration to ensure compatibility

Migration and Testing

  • Evaluate different tools for the physical migration based on the scope / size, e.g. VLT RCP vs. Oak backups vs. packages (which tend to be cumbersome when they get too large); see the sketch after this list
  • Leverage automated migration and testing to expedite the process
  • Crawl the site for backlinks and combine with historical analytics to identify key pages to perform manual testing
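
As a sketch of the first point (the hosts, credentials, and paths are placeholders), a VLT RCP copy between two instances looks roughly like:

vlt rcp -b 1000 -r -u \
  http://admin:admin@source-host:4502/crx/-/jcr:root/content/mysite \
  http://admin:admin@target-host:4502/crx/-/jcr:root/content/mysite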

 

Rollout and Go-Live

  • Be prepared to allow extra time in your project to meet requirements mandated by Adobe Managed Services. E.g. code coverage, dispatcher configurations, OSGi configurations
  • Plan in advance for performance testing. Adobe Cloud Manager does a very limited check, but it is not sufficient for go-live validation
  • Plan go-live well ahead of time as Adobe CSEs only work business hours in local time zones and have a large number of off days
  • If go-live is outside of CSE working hours, notify them at least a month ahead of time

Next – to the Cloud!

Adobe Managed Services may run in a cloud environment, but to get true Cloud Native capabilities, you want AEM as a Cloud Service. To help customers get started on their Cloud roadmap, Adobe is offering convertible contracts where you can start on AMS and then convert to AEM as a Cloud Service.

If you need some time to pay down technical debt or make architectural changes, moving to AMS is a great first step to uncouple your implementation before going all-in on cloud. Need help with your Adobe Cloud strategy? We’re here to help!

Exploring the Sling Feature Model: Part 3 – Custom Aggregates


In the first post of the Exploring the Sling Feature Model series, I discussed the process of converting the Sling CMS app from the Sling Provisioning Model to the Sling Feature Model. So how does this apply to your custom applications?

To illustrate, let’s convert my personal site, danklco.com, which is currently managed via Sling CMS, to the Feature Model.

It’s worth noting that I could keep running my site the way it is, by using a pre-built Sling CMS Runnable Jar, but that my goal is to run my site in Kubernetes for simplicity of upgrades, deployment, and management.

Step 1: Refactor Project Structure

Currently, my personal website code is a single OSGi bundle which I deploy with GitHub Actions. To support the Sling Feature Model, I’m going to convert the project into a multi-module project and add a new sub-project for my feature.

The new project structure will look like:

/mysite
/bundle
/feature
/images

Step 2: Generate Features

 

The custom feature is pretty simple, defining my custom code bundles and configurations. A number of parameters are supplied so values can be changed per environment and secrets stay out of code:

{
    "bundles": [
        {
            "id": "com.danklco:com.danklco.slingcms.plugins.disqus:1.1-SNAPSHOT",
            "start-order": "20"
        },
        {
            "id": "com.danklco:com.danklco.slingcms.plugins.twitter:1.0",
            "start-order": "20"
        },
        {
            "id": "com.danklco:com.danklco.site.cna.bundle:1.0.0-SNAPSHOT",
            "start-order": "20"
        }
    ],
    "configurations": {
        "org.apache.sling.cms.core.analytics.impl.GeoLocatorImpl": {
            "scheduler.expression": "0 0 0 ? * WED",
            "licenseKey": "${MAXMIND_LICENSE_KEY}"
        },
        "org.apache.sling.cms.reference.impl.SearchServiceImpl": {
            "searchServiceUsername": "dklco-com-search-user"
        },
        "org.apache.sling.commons.crypto.internal.FilePasswordProvider~default": {
            "names": [
                "default"
            ],
            "path": "/opt/slingcms/passwd"
        },
        "org.apache.sling.commons.crypto.jasypt.internal.JasyptRandomIvGeneratorRegistrar~default": {
            "algorithm": "SHA1PRNG"
        },
        "org.apache.sling.commons.crypto.jasypt.internal.JasyptRandomSaltGeneratorRegistrar~default": {
            "algorithm": "SHA1PRNG"
        },
        "org.apache.sling.commons.crypto.jasypt.internal.JasyptStandardPBEStringCryptoService~default": {
            "algorithm": "PBEWITHHMACSHA512ANDAES_256",
            "saltGenerator.target": "",
            "securityProviderName": "",
            "ivGenerator.target": "",
            "securityProvider.target": "",
            "keyObtentionIterations": 1000,
            "names": [
                "default"
            ],
            "stringOutputType": "base64"
        },
        "org.apache.sling.commons.messaging.mail.internal.SimpleMailService~default": {
            "connectionListeners.target": "",
            "transportListeners.target": "",
            "username": "${SMTP_USERNAME}",
            "mail.smtps.from": "${SMTP_USERNAME}",
            "messageIdProvider.target": "",
            "mail.smtps.host": "${SMTP_HOST}",
            "names": [
                "default"
            ],
            "password": "${SMTP_ENC_PASSWORD}",
            "mail.smtps.port": 465,
            "cryptoService.target": "",
            "threadpool.name": "default"
        },
        "org.apache.sling.commons.messaging.mail.internal.SimpleMessageIdProvider~default": {
            "host": "danklco.com",
            "names": [
                "default"
            ]
        }
    }
}

To create a usable model, I’ll need to combine the Sling CMS model and my custom model, which can be accomplished with the Sling Feature Maven Plugin. To support the Composite NodeStore, I’ll want to generate two separate aggregates: one for seeding and one for running the instance.

Since the Sling Feature Model JSON will resolve dependencies at runtime from Apache Maven, we’ll also want to generate Feature Archives, or FAR files, which bundle the models with their dependencies.

 

<plugin>
    <groupId>org.apache.sling</groupId>
    <artifactId>slingfeature-maven-plugin</artifactId>
    <version>1.3.0</version>
    <extensions>true</extensions>
    <configuration>
        <framework>
            <groupId>org.apache.felix</groupId>
            <artifactId>org.apache.felix.framework</artifactId>
            <version>6.0.3</version>
        </framework>
        <aggregates>
            <aggregate>
                <classifier>danklco-com-seed</classifier>
                <filesInclude>**/*.json</filesInclude>
                <includeArtifact>
                    <groupId>org.apache.sling</groupId>
                    <artifactId>org.apache.sling.cms.feature</artifactId>
                    <version>0.16.3-SNAPSHOT</version>
                    <classifier>slingcms-composite-seed</classifier>
                    <type>slingosgifeature</type>
                </includeArtifact>
                <includeArtifact>
                    <groupId>org.apache.sling</groupId>
                    <artifactId>org.apache.sling.cms.feature</artifactId>
                    <version>0.16.3-SNAPSHOT</version>
                    <classifier>standalone</classifier>
                    <type>slingosgifeature</type>
                </includeArtifact>
                <title>DanKlco.com</title>
            </aggregate>
            <aggregate>
                <classifier>danklco-com-runtime</classifier>
                <filesInclude>**/*.json</filesInclude>
                <includeArtifact>
                    <groupId>org.apache.sling</groupId>
                    <artifactId>org.apache.sling.cms.feature</artifactId>
                    <version>0.16.3-SNAPSHOT</version>
                    <classifier>slingcms-composite-runtime</classifier>
                    <type>slingosgifeature</type>
                </includeArtifact>
                <includeArtifact>
                    <groupId>org.apache.sling</groupId>
                    <artifactId>org.apache.sling.cms.feature</artifactId>
                    <version>0.16.3-SNAPSHOT</version>
                    <classifier>standalone</classifier>
                    <type>slingosgifeature</type>
                </includeArtifact>
                <title>DanKlco.com</title>
            </aggregate>
        </aggregates>
        <scans>
            <scan>
                <includeClassifier>danklco-com-seed</includeClassifier>
            </scan>
            <scan>
                <includeClassifier>danklco-com-runtime</includeClassifier>
            </scan>
        </scans>
        <archives>
            <archive>
                <classifier>danklco-com-seed-far</classifier>
                <includeClassifier>danklco-com-seed</includeClassifier>
            </archive>
            <archive>
                <classifier>danklco-com-runtime-far</classifier>
                <includeClassifier>danklco-com-runtime</includeClassifier>
            </archive>
        </archives>
    </configuration>
    <executions>
        <execution>
            <id>aggregate-features</id>
            <phase>prepare-package</phase>
            <goals>
                <goal>aggregate-features</goal>
                <goal>analyse-features</goal>
                <goal>attach-features</goal>
                <goal>attach-featurearchives</goal>
            </goals>
            <configuration>
                <replacePropertyVariables>MAXMIND_LICENSE_KEY,SMTP_HOST,SMTP_USERNAME,SMTP_ENC_PASSWORD</replacePropertyVariables>
            </configuration>
        </execution>
    </executions>
</plugin>

 

Step 3: Build Docker Images

 

Since the goal is to run this in Kubernetes, we’ll create Docker images for running Sling CMS and Apache web server. Since I’m running a lean server, I’ll want to run this as a standalone instance using the Composite Repository so the datastore persists between instances.

To populate variables into the images and coordinate the full build, we’ll use Apache Maven to process the Docker files and input files as Maven artifacts and kick off the Docker build. Unlike the Sling CMS build, we’re not leveraging Apache Maven to download the artifacts within the Docker build; instead, we’ll pre-fetch them during the Maven build and supply them to the Docker build.
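As a sketch of that pre-fetch step (the artifact coordinates are illustrative, not the site’s actual module), the maven-dependency-plugin can copy a Feature Archive into the Docker build context:

mvn org.apache.maven.plugins:maven-dependency-plugin:3.1.2:copy \
  -Dartifact=com.danklco:com.danklco.site.cna.feature:1.0.0-SNAPSHOT:far:danklco-com-runtime-far \
  -DoutputDirectory=target/docker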

Side Note – Variables

 

One challenge to note when attempting to reproduce an actual instance: there are quite a few variables required for the application to actually work. For my local testing, I have a bash script that provides all of the required properties to Maven, but since they include secrets like passwords, I’ve not put it in source control.
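A minimal sketch of such a script, using the property names from the replacePropertyVariables configuration above (all values are placeholders):

#!/bin/bash
mvn clean install \
  -DMAXMIND_LICENSE_KEY="$MAXMIND_LICENSE_KEY" \
  -DSMTP_HOST=smtp.example.com \
  -DSMTP_USERNAME=user@example.com \
  -DSMTP_ENC_PASSWORD="$SMTP_ENC_PASSWORD"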

See it in Action!

Seeing something work is worth a thousand words, so check out this GIF of the build process in action:

Building Cloud Native Apps with Apache Sling CMS

and check out the code on GitHub: https://github.com/klcodanr/danklco.com-site/tree/cloud-native-sling

What’s Next?

All of this is leading up to having a fully running Cloud Native Apache Sling CMS instance in Kubernetes, but before that my next post is going to talk about using Sling Content Distribution and Sling Discovery to support publishing content between Author and Renderer Apache Sling CMS instances. Check back soon!

Case Insensitive Queries with the AEM Query Builder


Recently, I needed to perform a query using the AEM Query Builder which was case insensitive. While I normally prefer using JCR SQL2 queries, in this case Query Builder was a better fit as I wanted consuming applications to be able to manipulate the query and doing so as a map is significantly easier than doing so as a string.

I was surprised to find that there was no native Query Builder Predicate for doing case insensitive queries so I ended up writing my own.

The predicate works by lower-casing the supplied value and then using the XPath fn:lower-case function to compare it against the lower-cased property value.
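For example, given equalsIgnoreCase.property=jcr:title and equalsIgnoreCase.value=Hello World, the predicate contributes an XPath constraint of:

fn:lower-case(@jcr:title)='hello world'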

/*
This is free and unencumbered software released into the public domain.

Anyone is free to copy, modify, publish, use, compile, sell, or
distribute this software, either in source code form or as a compiled
binary, for any purpose, commercial or non-commercial, and by any
means.

In jurisdictions that recognize copyright laws, the author or authors
of this software dedicate any and all copyright interest in the
software to the public domain. We make this dedication for the benefit
of the public at large and to the detriment of our heirs and
successors. We intend this dedication to be an overt act of
relinquishment in perpetuity of all present and future rights to this
software under copyright law.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.

For more information, please refer to <https://unlicense.org/>
*/
package com.perficient.adobe.predicates;

import java.util.Locale;
import java.util.Optional;

import com.day.cq.search.Predicate;
import com.day.cq.search.eval.AbstractPredicateEvaluator;
import com.day.cq.search.eval.EvaluationContext;

import org.osgi.service.component.annotations.Component;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@Component(factory = "com.day.cq.search.eval.PredicateEvaluator/equalsIgnoreCase")
public class CaseInsensitiveEquals extends AbstractPredicateEvaluator {

    private static final Logger log = LoggerFactory.getLogger(CaseInsensitiveEquals.class);

    static final String PREDICATE_PROPERTY = "property";
    static final String PREDICATE_VALUE = "value";
    static final String PREDICATE_LOCALE = "locale";

    @Override
    public String getXPathExpression(Predicate predicate, EvaluationContext context) {
        log.debug("Evaluating predicate: {}", predicate);
        String property = predicate.get(PREDICATE_PROPERTY);
        Locale locale = Optional.ofNullable(predicate.get(PREDICATE_LOCALE)).map(lt -> Locale.forLanguageTag(lt))
                .orElse(Locale.getDefault());
        String value = predicate.get(PREDICATE_VALUE).toLowerCase(locale).replace("'", "''");
        String query = String.format("fn:lower-case(@%s)='%s'", property, value);
        log.debug("Generated query: {}", query);
        return query;
    }
} 

Once the custom predicate is available in your application, it can be used in any Query Builder query as such:

path=/content/test
equalsIgnoreCase.property=test
equalsIgnoreCase.value=test
equalsIgnoreCase.locale=en_US 

The locale parameter is not required, but would generally be recommended unless the input will always be in the system default locale.

A Short, Semi-Accurate History of Web Content Management


Progress, far from consisting in change, depends on retentiveness. When change is absolute there remains no being to improve and no direction is set for possible improvement: and when experience is not retained […] infancy is perpetual. Those who cannot remember the past are condemned to repeat it. — George Santayana

To understand where we are going in Web Content Management, we must first understand how we got to where we are now. Let’s embark on a semi-accurate and mildly satirical journey through the history of Web Content Management.

In the Beginning…

In the beginning, the web developer created raw HTML without form or process and saw it was bad.

Incantations such as FrontPage and Dreamweaver could produce elegant pages from arcane HTML tags, but code and content were inextricably mixed; any changes to words or images required developer support.

Developers grew tired of marketing buzzing on about customer journeys, content velocity, and interactions and decided something must be done to provide separation between content and code.

The Expanse of XML

And the web developer said, “Let there be XML in the breadth of the codebase to distinguish between the content and the code and let the XML be called content and have a system for marketers to manage it so I don’t have to”. Along came enterprise software vendors with solutions to convert large piles of cash into maintainable websites.

The web developer wrote code in XSLT and marketers wrote content in XML and they saw it was bad.

Content and code were theoretically separate, but new features and changes still required coordination between marketing and web developers, and interactive web applications and marketing websites were divided by a chasm deeper than the darkest trenches of the seas.

Rise of the Web Content Frameworks

Lamenting the disconnect between web apps and websites, the web developer said, “if only I could have one system that would join a content management system and web application framework as one, then it would finally be good”. And along came Drupal, Adobe Experience Manager, and Sitecore, which provided all of the features the marketer and web developer wanted out of the box. 

Lamentably, the marketer had eaten from the tree of design and saw that the website was naked. The web developer extended the out of the box features and soon saw it was bad.

Upgrades were excruciating and due to the entanglement of custom and framework code, seemingly small changes required many hours and significant costs.

Single Page Everything

Away in the mystic land of Silicon Valley, the one true prophet of technology introduced the one true Single Page Application framework, React. Shortly thereafter, the other one true prophet of technology introduced Angular, which is also the one true framework for Single Page Applications. 

These Single Page Applications enabled the web developer to create websites that avoid the one thing that bothers users more than anything else, page reloads, and so the marketer and web developer decided to rebuild all of their sites so they would finally be good.

The web developer rebuilt the sites as Single Page Apps and the marketer saw it was bad. No longer self-enabled, every change once again had to go through a development release process. 

Let Them GET /cake.json

Storming the bastille of the now-traditional CMS paradigm, a new cadre of headless CMS solutions fomented a revolution to overthrow the class system of static websites and dynamic CMS systems and replace it with an egalitarian, universal Content API.

Swept up in revolutionary fever, the web developer and the marketer again re-implemented half of the website on a new platform before realizing it was also bad. 

No longer able to leverage a base framework, the web developer had to reinvent the wheel, while the marketer struggled to understand the context of the content, unable to visualize it on a page with form-based content authoring. Thus the cycle continued and the universe collapsed on itself in an explosion of budgets and technology debt when the web developer mentioned re-implementing with GraphQL.

Learning from the Past

Web Content Management is a discipline which takes a week to learn and a career to master. While at its simplest it is just putting markup and binary assets on the internet, the inherent contradictions in needs, goals, and capabilities introduce tremendous challenges.

Based on my experience in this industry and many lessons learned I’ve come to the following conclusions:

  • One-size-fits-all solutions are rarely completely right, most often the solution involves multiple approaches working together
  • If choosing a one-size-fits-all solution, you must be clear on what you are giving up and ensure those compromises are worth the architectural simplification
  • When defining the content structure, start with Authoring, not the experience being created. If you understand how content will go into the system and then how it will be exposed, you will understand the optimal approach to content
  • The interrelation of code and content in HTML requires balancing the desire to create rich content with the technical debt, complexity, and brand consistency challenges this introduces

What’s your takeaway? Leave a comment below and let’s discuss!


Upcoming Webinar – Sling RepoInit


Curious about using Sling RepoInit? Want to learn more in depth about how Sling RepoInit can enable your AEM DevOps team to manage the initial repository state in code?

I’ll be leading a virtual discussion on Sling RepoInit with the Detroit AEM Meetup on Thursday July 9th from 6:00 – 6:50 PM EST.

This talk will:

  • Introduce the benefits of Sling RepoInit as a provisioning method
  • Compare Sling RepoInit to other methods of initializing a repository
  • Show how to manage permissions, configurations, and initial content using RepoInit
  • Demonstrate using Sling RepoInit for both AEM as a Cloud Service and AEM 6
  • Discuss a method for making it easier to manage Sling RepoInit configurations and future options 
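
To give a flavor of what the talk will cover, here is a minimal RepoInit sketch (the path and principal names are illustrative) creating a path, a service user, and an ACL:

create path /content/my-site(sling:Folder)
create service user my-service-user
set ACL for my-service-user
    allow jcr:read on /content/my-site
end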

More info about RepoInit: https://sling.apache.org/documentation/bundles/repository-initialization.html 

This talk will be useful for AEM Technical Experts, Architects, and Developers, especially those interested in AEM as a Cloud Service.

Register Here


