
Maintaining code quality with SonarQube

When working in a large solution on a project that's been going on for years (Sitecore project or not), there's bound to be technical debt here and there: technical decisions taken under less-than-ideal conditions that led to shortcuts being made – not necessarily wrong decisions, but the best decisions given the situation at the time.

So how does one maintain and improve code quality in such a code base? Especially when people come and go on the project, and without fully understanding the code base as a whole they perform the bare minimum to complete the work – which usually ends up as more technical debt that never gets picked up, as it gets buried deep within the code base as time goes by.

There are several ways we can help reduce ongoing technical debt:

  • Increasing our test coverage (unit test, functional test, integration test)
  • Doing PR review
  • Implementing the boy scout rule for every PR (leave the campground cleaner than you found it)

I've been digging around for the last week looking for ways to extract the health state of a code base and get some sort of stats which we can then use as a baseline for improvement iterations. I had a quick look at the tools available and gathered my notes on SonarQube.

SonarQube Dashboard Report

I first heard about SonarQube at my old company four years back (it was called Sonar then). I never gave it much thought at the time, but I've now decided to have a quick look into it, and so far I like what I'm seeing and the capabilities it has.

One of the main reasons I'm so keen on this tool is that it can provide the stats I'm looking for from the current code base – the key thing, because if you're doing any sort of improvement you need concrete parameters to measure against. Another big reason is that, since the tool has been around for a while and matured, it has lots of plugins – including plugins that integrate nicely with our tooling and processes.

Here are some of my notes so far on this tool, which looks pretty cool.

Integration with the developer experience

  1. Integrate with Visual Studio
  2. Integrate with ReSharper


Integration with JIRA

https://docs.sonarqube.org/display/PLUG/JIRA+Plugin

This plugin can create JIRA tickets based on SonarQube reports. At the time of writing this plugin has been labeled as deprecated though.

Integration with TeamCity

https://confluence.jetbrains.com/display/TW/SonarQube+Integration

This plugin offers the following features:

  • Easy SonarQube Runner deployment and configuration
  • SonarQube Server connections management
  • Test results locator. Works for Surefire and MSTest reports at the moment
  • JaCoCo coverage result locator
  • Navigation to SonarQube Server from the TeamCity UI: the View in sonar link on the Build Results page takes you to the sonar dashboard of the analysed project once the build is finished.

Now, you don't really need this plugin if all you want is to run the SonarQube analysis runner and view the report in the SonarQube web application. All you need to do is execute the SonarQube analysis runner commands around your solution build, which will generate the report files and send them to the SonarQube server.
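For example, with the MSBuild-based scanner the whole analysis boils down to wrapping the build in a begin/end pair (a sketch; the project key, name, version, and server URL are placeholders):

MSBuild.SonarQube.Runner.exe begin /k:"my-project-key" /n:"My Project" /v:"1.0" /d:sonar.host.url="http://my-sonarqube-server:9000"
msbuild MySolution.sln /t:Rebuild
MSBuild.SonarQube.Runner.exe end

The begin step hooks the analyzers into the build; the end step collects the results and submits them to the server.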

Integration with Bitbucket

https://marketplace.atlassian.com/plugins/ch.mibex.bitbucket.sonar/cloud/overview

  • Extract the health state of your code base

  • Highlight failed quality checks

  • Instant feedback in your PR – really cool I’d say 🙂


So far it looks really promising; hopefully I can share more on this tool as I get some time to play around with it.

How TDS Git Delta Deploy makes my life easier

On a recent project I encountered, most of the team members raised concerns about the long production deployment time, which could span an entire working day – or even more if they hit issues.

One thing I noticed could be optimized immediately was the TDS package installation process: instead of installing the full TDS packages, which can contain thousands of items (in my case more than 80,000 items), we can use the option of deploying only the delta items – which for every release cycle (2 weeks) normally amounts to around 50 items or fewer.

TDS Git Delta Deploy itself is something that Sitecore MVP Sean Holmesby created to answer how to get a true "delta" of content items deployed to the target environment. Have a read of the full article here.

So, I know the solution to one of the pain points; I just need to install Sean's NuGet package and go on my merry way, right? Well... not exactly.

The problem

In the current solution the team has multiple TDS projects per feature (Helix, anyone?); those TDS projects are then bundled using TDS package bundling to create the final combined package.

To give a little bit of illustration, see below:

Given the following TDS projects:

  • TDS Foundation Z
  • TDS Feature A
  • TDS Feature B
  • TDS Feature C
  • TDS Project X  (will bundle all content items from the above TDS projects)

So what's the problem? The package bundling doesn't respect TDS Git Delta Deploy – it will always include all the items instead of just the delta.

The solution

I reached out to the nice folks in the #tds Slack channel at https://sitecorechat.slack.com/ for suggestions – by the way, if you haven't joined yet, I suggest you do that first, else you're missing out 🙂

Not long after I raised the question, John Rappel replied that he had a fix for it in the latest NuGet package – awesome! I upgraded the NuGet package and, true enough, it now works. Read more about John's bug fixes and feature updates here.

So now I've got TDS delta packages working locally; it's time to integrate them into our Continuous Build process.

The setup in TeamCity

TDS Git Delta Deploy works by using the "git diff" command at its core, so for it to work correctly you need to ensure that:

  • TC is configured to use agent-side checkout
  • TC is doing a full "git fetch" instead of just retrieving a single branch
    • If you're using TC version 10.0.4 or above, you're in luck, as apparently you can set this using the following approach
    • If you're using a TC version prior to 10.0.4, your other option is to add a custom build step that performs a git fetch – see the sketch below
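The custom build step itself can be as simple as a command-line runner along these lines (a sketch; adjust the remote name if yours isn't origin):

rem fetch all branches so "git diff" has the previous state to compare against
git fetch origin +refs/heads/*:refs/remotes/origin/*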

A quick POC confirmed that the approach is solid, and I could see those delta packages being generated!

TDS delta packages generated... what now?

With the TDS delta packages generated and ready to use, all we have to do now is include them as part of our Continuous Deployment process. We're using Octopus Deploy, so it's quite easy to set up.

You can use something like Sitecore Package Deployer to automate the package installation process and include it as part of an Octopus Deploy step. In our case, since we already had an existing custom web service that does the same job, we leveraged that instead.

Quick tip:

Enable TDS "publish package after deploy" to streamline the deployment process by automatically publishing only the deployed items.

Here's the reference on how to do just that: https://hedgehogdevelopment.github.io/tds/chapter4.html

What’s the outcome?

Having all this in place really helped streamline the way we do Sitecore deployments and cut massive amounts of time, which in turn means faster deployments => more time to work on something else => productivity++

Sitecore logging integration with Graylog

With a Sitecore infrastructure that expands beyond the basic 1 CM and 1 CD, having a centralized logging mechanism is crucial to understanding what happens on the servers in a consolidated view.

Some of the popular APMs out there offer automatic recording of exceptions/errors occurring in the application, which proves really useful when investigating a problem down to the exact line of code that raised the exception. Some types of issues, though, require you to dig through the Sitecore logs to understand what happened on the server itself – for example when investigating publishing issues, security audits, index crawling, or the EXM and FXM logs on the delivery servers.

For those types of issues we typically look at the different types of Sitecore logs on each server to find out what exactly happened; with multiple servers in the picture, we would need to go into each server, or pull down the log files from each one, and analyze them. This is where centralized logging for Sitecore logs comes in handy: we get a centralized view of the log files spanning multiple servers, and we can run search queries against the log information.

Some centralized log providers on the market are:

  • https://azure.microsoft.com/en-us/services/application-insights/
  • https://www.splunk.com/
  • https://www.elastic.co/products/elasticsearch
  • https://www.graylog.org/

I stumbled across Graylog on a recent project where we're currently pushing Windows event logs to it, but not the Sitecore log information yet. So I played around with it a bit to get a feel for it *rolling sleeves*.

There are a couple of steps we need to figure out in order to integrate the Sitecore logs with Graylog:

  1. Create a custom log appender
  2. Install Graylog server
  3. Configure Graylog server
  4. Send Sitecore log information to Graylog server

Create a custom log appender

Sitecore uses log4net as its logging framework, which is extensible; by default it uses the LogFileAppender class, which writes the log information to text files.

As I want to send the Sitecore log information to the Graylog server over HTTP, I need to create a custom log appender. And since the log information sent from Sitecore to Graylog needs to be in a certain format – GELF, to be precise – we need to format the log messages to match the GELF specification so that the Graylog server can understand and parse them.

I found the gelf4net library, mentioned in the Graylog documentation, which already does all the heavy lifting of formatting the data. However, when I installed the NuGet package and configured the log4net section in Sitecore according to the library's documentation, it didn't work.

One gotcha I found is that to create a custom log appender in Sitecore, we need to reference the AppenderSkeleton class in the Sitecore.Kernel assembly. I had previously added the log4net NuGet package (pulled in as a dependency of gelf4net) hoping that would work; instead it failed miserably 🙁

In the end I created my own appender class that replicates the gelf4net appender implementation: https://github.com/reyrahadian/sitecore-gelf-logappender/blob/master/ScGraylog/Appender/GelfHttpAppender.cs
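For reference, here's a stripped-down sketch of what such an appender looks like (the real implementation is in the repository above; the GELF payload here is hand-rolled and minimal, and it uses Sitecore's embedded log4net types rather than the public log4net package):

using System;
using System.Net;
using System.Text;
// These log4net types come from Sitecore's embedded log4net (referenced via
// the Sitecore.Kernel assembly as noted above), not the public log4net package.
using log4net.Appender;
using log4net.spi;

namespace ScGraylog.Appender
{
    public class GelfHttpAppender : AppenderSkeleton
    {
        // Set from the log4net config; points at the Graylog GELF HTTP input.
        public string Url { get; set; }

        protected override void Append(LoggingEvent loggingEvent)
        {
            try
            {
                // Hand-roll a minimal GELF-style JSON payload; gelf4net does
                // this far more thoroughly (timestamps, levels, custom fields).
                var payload = new StringBuilder();
                payload.Append("{");
                payload.Append("\"version\":\"1.1\",");
                payload.AppendFormat("\"host\":\"{0}\",", Environment.MachineName);
                payload.AppendFormat("\"short_message\":\"{0}\",", Escape(loggingEvent.RenderedMessage));
                payload.AppendFormat("\"_logger\":\"{0}\",", Escape(loggingEvent.LoggerName));
                payload.AppendFormat("\"_level\":\"{0}\"", loggingEvent.Level);
                payload.Append("}");

                using (var client = new WebClient())
                {
                    client.Headers[HttpRequestHeader.ContentType] = "application/json";
                    client.UploadString(Url, payload.ToString());
                }
            }
            catch (Exception ex)
            {
                // Never let a logging failure take the site down.
                ErrorHandler.Error("Failed to send log entry to Graylog: " + ex.Message);
            }
        }

        private static string Escape(string value)
        {
            return (value ?? string.Empty)
                .Replace("\\", "\\\\")
                .Replace("\"", "\\\"");
        }
    }
}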

Install Graylog server

Having read through the Graylog documentation, the easiest and quickest way to set up a Graylog server for my POC is to download the VM. It's all preconfigured; we just need to load the .ova file using VMware Player or VirtualBox to get it up and running.

Reference: http://docs.graylog.org/en/latest/pages/getting_started.html

Configure Graylog server

The next thing we need to do once our Graylog server is up and running is to configure the input source. Graylog provides multiple options: HTTP, TCP, UDP, or file dumps.

An HTTP input source fits what I need for my POC, so I created a new HTTP input source and got it running.

Send Sitecore log information to Graylog server

Here's where we put things together. With our custom log appender ready, we only need to point Sitecore's log4net configuration at our Graylog server.
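A minimal configuration sketch, assuming the appender type and assembly name from the repository above (the host and port are placeholders for the Graylog HTTP input created in the previous step):

<log4net>
  <appender name="GelfHttpAppender" type="ScGraylog.Appender.GelfHttpAppender, ScGraylog">
    <!-- the URL of the Graylog GELF HTTP input -->
    <url value="http://my-graylog-server:12201/gelf" />
  </appender>
  <root>
    <priority value="INFO" />
    <appender-ref ref="GelfHttpAppender" />
  </root>
</log4net>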

Check if things work as expected

If everything works as expected, you should see log information coming in from Sitecore.


Source code: https://github.com/reyrahadian/sitecore-gelf-logappender

Sitecore Server Role Checker Tool

When configuring Sitecore in a distributed environment, you typically have more than one server in the production environment, configured in different roles (CM, CD, Processing, Reporting Service, etc.).

More often than not, I've noticed that issues are raised due to misconfiguration rather than the implementation itself. You then go to the Sitecore documentation site and check the configuration for each of the servers you've set up to see if there's anything you've missed.

In the 8.0 documentation you need to read through a long list of tables describing which config files to enable/disable.

Since the 8.1 release these steps have been simplified, with Sitecore providing an Excel spreadsheet as a guide to enabling/disabling the config files depending on the role(s) you want to set up. The steps are still manual, though, and it's highly likely that we'll miss one or two config files, which could cause issues down the line.

With the number of projects you need to review, this task starts eating up your time and should really be automated. Some might have already done so and have a little tool tucked away somewhere.

I've decided to create my own version of the tool, called Sitecore Server Role Checker – can't be more obvious than that 🙂

How does it work?

This tool basically uses a CSV conversion of the official Sitecore spreadsheet guide and reads your configuration based on the selected roles. It currently supports the following Sitecore versions:

  • 8.1 update 3
  • 8.2 initial release
  • 8.2 update 1
  • 8.2 update 2

Only those versions are supported, as those are the only spreadsheets available for now.
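To give an idea of the mechanics (an illustrative sketch, not the tool's actual code – the types below are hypothetical): each CSV row derived from the spreadsheet states whether a given config file should be enabled or disabled for a role, and the checker compares that against what's on disk.

using System.Collections.Generic;
using System.IO;
using System.Linq;

// Hypothetical types for illustration only.
public class ConfigRule
{
    public string RelativePath { get; set; }  // e.g. App_Config\Include\Sitecore.Analytics.config
    public string Role { get; set; }          // e.g. "CD"
    public bool ShouldBeEnabled { get; set; } // per the spreadsheet, for that role
}

public static class RoleChecker
{
    // Returns a report line for every config file whose on-disk state
    // (enabled = plain .config, disabled = renamed to .disabled/.example)
    // doesn't match what the spreadsheet prescribes for the selected roles.
    public static IEnumerable<string> Analyze(
        string websiteRoot, IEnumerable<ConfigRule> rules, ISet<string> selectedRoles)
    {
        foreach (var rule in rules.Where(r => selectedRoles.Contains(r.Role)))
        {
            var path = Path.Combine(websiteRoot, rule.RelativePath);
            var isEnabled = File.Exists(path);
            if (isEnabled != rule.ShouldBeEnabled)
            {
                yield return string.Format("{0} should be {1}",
                    rule.RelativePath, rule.ShouldBeEnabled ? "enabled" : "disabled");
            }
        }
    }
}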

How do I use it?

Follow these simple steps:

  1. Choose your Sitecore version
  2. Choose your search engine provider
  3. Browse to your website folder
  4. Tick your intended role(s) for this particular Sitecore instance
  5. Click the Analyze button

It will then go through each of your configuration files and report any config files that should be enabled/disabled.

Through the tool you can also quickly disable/enable those config files.

What it doesn’t do

  • Check if your config files are updated accordingly, e.g.:
    • Changing robot detection handler in CM
    • Configuring remote reporting service url
    • etc
  • Check the configuration files for WFFM, EXM.. yet
  • Make you coffee

Where can I get it?

The code is available on GitHub

Note that this tool is not deeply tested; if you have any issues or suggestions, raise a ticket on GitHub or submit a PR.

Update: 22 February 2017

The tool is now available on the Sitecore Marketplace

Sitecore SOLR index update strategies in a scaled environment

Sitecore by default ships with two main search providers: Lucene and SOLR.

I will not delve too much into which one you should use, as that's been covered in https://doc.sitecore.net/sitecore_experience_platform/setting_up__maintaining/search_and_indexing/indexing/using_solr_or_lucene

I'd like to highlight some of the points mentioned in the article where using SOLR is mandatory:

  1. You have a dedicated processing server
  2. You have multiple CM servers

The main thing here is that if the sitecore_analytics_index exists on multiple servers, you need to use SOLR. But since all the indexes are now on a centralized server, what should the index update strategies for SOLR be?

Typically in this setup you would configure:

  • CM as the indexing server
    • Performs index rebuilds
    • Set the index update strategies as you see fit (except manual)
  • CD only reads from the index
    • Does not perform index rebuilds
    • Set the index update strategy to manual

If you have multiple CM servers, you can set one of them as the one that performs the index rebuild while the others only read from the index.
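As an illustration, the relevant part of an index definition on a CD server would look something like this (a sketch based on the standard Sitecore 8.x SOLR index configuration; sitecore_web_index is just the default index id):

<index id="sitecore_web_index" ...>
  ...
  <strategies hint="list:AddStrategy">
    <!-- manual: the index is never updated unless something explicitly asks it to -->
    <strategy ref="contentSearch/indexConfigurations/indexUpdateStrategies/manual" />
  </strategies>
  ...
</index>

On the CM (indexing) server you would reference one of the other strategies instead, such as onPublishEndAsync or syncMaster.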

https://doc.sitecore.net/sitecore_experience_platform/setting_up__maintaining/search_and_indexing/indexing/index_update_strategies

Rebuild sitecore_analytics_index contact data

There's been an interesting question around how to rebuild the sitecore_analytics_index, specifically only for contact data.

As you probably know, the sitecore_analytics_index is not available for manual indexing – go to the Indexing Manager in the Control Panel if you want to check. This is because the index is configured using observer crawlers.

However, there are ways to manually rebuild the sitecore_analytics_index. The example given there seems to be specifically about rebuilding the interaction documents, while what we want is to rebuild the contacts.

I gained some hints from the provided example though. The key part is the line that resolves the processing pool from its configuration path – aggregationProcessing/processingPools/live in the interaction example.

I then went to the handy showconfig.aspx page, which shows the processingPools configured for the aggregation processes. From there I could spot the contact pool definition, which seems to be the one I'm looking for, and started building the code for it.
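Here's a sketch of what I ended up with, modeled on the interaction-rebuild example. Note that the ContactKey class, the Contacts collection on the Mongo driver, and the exact pool path are assumptions you should verify against your Sitecore version:

using MongoDB.Bson;
using Sitecore.Analytics.Aggregation.Data.Model;
using Sitecore.Analytics.Data.DataAccess.MongoDb;
using Sitecore.Analytics.Processing;
using Sitecore.Configuration;

public static class ContactIndexRebuilder
{
    public static void RebuildAllContacts()
    {
        // Resolve the "contact" processing pool spotted in showconfig.aspx
        // (the interaction example resolves .../processingPools/live instead).
        var pool = Factory.CreateObject(
            "aggregationProcessing/processingPools/contact", true) as ProcessingPool;

        // Read the contact ids straight from the "analytics" MongoDB database.
        var driver = MongoDbDriver.FromConnectionString("analytics");
        foreach (var document in driver.Contacts.FindAllAs<BsonDocument>())
        {
            var contactId = document["_id"].AsGuid;

            // ContactKey is assumed to mirror the InteractionKey used when
            // re-indexing interactions; scheduling it makes the observer
            // crawler pick up and re-index the contact.
            var key = new ContactKey(contactId);
            pool.Add(new ProcessingPoolItem(key.ToByteArray()));
        }
    }
}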

Result: (screenshot of the re-indexed contact documents)

Depending on your needs, you might want to reindex only known contacts, for example. You can take the above example and adjust it yourself.


Sitecore 8 – Create a custom personalization rule

Sitecore OOTB comes with tons of personalization rules that we can use; however, there might be times when we need to create a custom rule condition that fits the client's business requirements.

Luckily, with the Sitecore framework being known for how easy it is to extend, we can achieve this without much trouble.

Here are the steps to create a custom personalization rule:

1. Register the tag

First of all we need to create a new tag item under /sitecore/system/Settings/Rules/Definitions/Tags

(screenshot: creating the tag)

2. Register a new rule element

Create a new element folder in /sitecore/system/Settings/Rules/Definitions/Elements

(screenshot: creating the element folder)

3. Create a new personalization condition rule

(screenshot: creating the personalization condition rule)

Here I'm creating a new rule and calling it "Specific Template Name", which will do exactly that: check whether the current context item is based on a specific template name. Not really useful in a real-world scenario, but this post is about how to set up a custom personalization rule 🙂

In the newly created rule we need to set the Text and Type fields.

The Text field contains the text that we're going to present to the content author; the full text I use is shown after the breakdown below.

The [TemplateName, StringOperator,,compares to] token represents the format of the input we want to get from the content author. Its comma-separated parts work as follows:

  • OperatorId – the public property of the class to which the value from the content author's input is assigned
  • StringOperator – the built-in macro we want to use to get the input from the user; in this case it's a string comparison operation
  • blank – this parameter depends on the type of macro used: it could be a default text value if we're using the default macro, or a default start path if we're using the Tree macro – think of it like setting a field source when building a template in Sitecore. A full list of macros can be found in /sitecore/system/Settings/Rules/Definitions/Macros
  • compares to – the last parameter is the text shown to the content author; it's clickable, and when clicked, Sitecore displays the appropriate input control based on the macro set in the second parameter
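Putting it together, the full condition text could read something like this (a sketch: here the string operator is wired to the base class's OperatorId property and the template name to its own text token, which is the wiring the class sketch further below expects):

where the current item template name [OperatorId,StringOperator,,compares to] [TemplateName,,,specific template name]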

The Type field is the full assembly-qualified name of the custom class that performs the logic behind this personalization rule.

4. Assign our custom element default tag definition to our custom tag

(screenshot: assigning the element's default tag definition to the custom tag)

Of course, we can assign our custom element tag definition to one of the existing tags under /sitecore/system/Settings/Rules/Definitions/Tags so that it automatically appears in the content author's personalization rule window. However, if you want to make it obvious which rules are your custom ones, I recommend creating your own custom tag and assigning the element to that.

5. Assign the tag to Conditional Rendering

The last step to make the rule visible for the content author to choose from is to assign our custom tag to one of the default Rules Context folders.

(screenshot: assigning the tag to the Conditional Rendering rules context)

In the picture we're setting the tag on the Conditional Rendering rules context, which appears when a user wants to personalize a certain component, but you can also use other Rules Context folders, such as the FXM ones.

After we assign the tag, we can verify that the custom personalization rule is available for the content author to choose from.

(screenshot: verifying the custom rule appears for the content author)

Here's the class responsible for evaluating the custom rule condition.
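A sketch of what that class can look like (the namespace and class name are assumptions; together with the assembly name, they form the value of the Type field, e.g. MyProject.Rules.SpecificTemplateNameCondition, MyProject):

using Sitecore.Rules;
using Sitecore.Rules.Conditions;

namespace MyProject.Rules
{
    public class SpecificTemplateNameCondition<T> : StringOperatorCondition<T>
        where T : RuleContext
    {
        // Populated by Sitecore from the [TemplateName,,,specific template name]
        // token in the condition text.
        public string TemplateName { get; set; }

        protected override bool Execute(T ruleContext)
        {
            var item = ruleContext.Item;
            if (item == null || string.IsNullOrEmpty(TemplateName))
            {
                return false;
            }

            // Compare() comes from StringOperatorCondition and applies whichever
            // operator the author picked via the
            // [OperatorId,StringOperator,,compares to] token.
            return Compare(item.TemplateName, TemplateName);
        }
    }
}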

The class logic is simple and doesn't do much, for the purposes of this tutorial. In a real-world scenario you could make a rule that reads from a custom database, reads a configuration file, or calls an external API (be careful with that last one, as it will increase page load time).

This may sound like a lot of work at first compared to just writing custom code in the rendering or a similar approach. But considering that we're enabling content authors/marketers to do it themselves, removing or minimizing the dependency on IT, it opens up more possibilities for what marketers can do with the platform and reduces time to market.