Please note that as of version 4.1 the System Configuration Wizard in the CMC automates the process of splitting the Adaptive Processing Servers. The following assumes you are manually splitting the APS services in SAP BusinessObjects 4.0. You can also use this knowledge to manually split the APS services in 4.1, but 4.1-specific details are not covered below.
With the release of SAP BusinessObjects 4.0, many of the new and existing services within the platform were grouped into a single process called the “Adaptive Processing Server” (APS). From the operating system's perspective, the APS runs as a single PID (Process ID). From SAP BusinessObjects' perspective, this single service is a container for 20+ different sub-services within the 4.0 platform. The APS is a pure Java process and is initialized with 1 GB of Java heap (-Xmx1g). For most environments this is not a sufficient amount of RAM, and placing such a high number of sub-processes in a single PID was unwise. Because the default SAP BusinessObjects 4.0 deployment essentially places all the eggs in a single basket with very little Java memory headroom, a considerable amount of re-configuration has to be implemented, post installation, to properly size and deploy these services. There have been multiple blog postings and SAP notes on this topic, most of which I will reference in this posting. The main goals of this article are to explain why splitting these services is needed, to aggregate the information that is currently available online, and to offer a few thoughts on how you can approach the split.
Splitting the Adaptive Processing Server is needed for the following reasons:
- There are multiple disparate sub-processes running in the default APS container. In some cases a single sub-process can exceed the default Java memory parameters and crash the entire stack of services. In short, it is very risky (from a high-availability perspective) to run everything in a single PID: if one sub-service fails, they all fail. To prevent a single task from crashing the entire stack, the sub-processes should be split into separate APS containers so that each container has its own OS-level PID (Process ID).
- Each sub-service has its own unique Java memory requirements, depending on the amount of data and the kind of data source. The only way to give a sub-service a custom Java memory parameter is to run it as its own service or PID.
- Depending on the hardware specifications, it might be necessary to run some of the sub-services on their own node. They will have to be split out in order for them to be horizontally deployed.
- If there is a need to stop a single sub-service (without stopping the entire stack) the sub-service must exist in its own PID. There are also other deployment scenarios where some sub-services must be stopped and disabled on one node and enabled and started on another. If they are not split out, this is not possible.
How to allocate?
How you choose to split and group sub-services into individual containers is largely subjective. The APS sizing guides from SAP will give you a general initial RAM amount for each service and recommendations for deployment. If each service were split into its own container and run on the same node, that node would need over 26 GB of RAM, plus several more GB if other BOBJ services run on the node as well. This might be a bit extreme if 26+ GB of RAM is hard to obtain for a single node. However, as the document states, you don’t always have to deploy everything individually. When I go to a customer’s site, I generally ask which products they plan to use. Based on that list, I try to anticipate the size and complexity of the reports to determine how the APS needs to be split. For some clients I end up with as many as ten distinct containers; for others, just eight. It all depends on whether they plan to utilize the individual tools associated with any given APS sub-service. In general, I group the processes according to the expected utilization of the platform and the similarities of the services. I have established a standard baseline that I deploy at each customer site, and then I make a few changes based on the expected utilization. I will also run each grouping on an appropriate SIA node if the environment is set up as a distributed cluster. The following link gives an example of my standard baseline: Link to Document. After reviewing the linked document, the list below covers the most likely candidates for change based on my experience.
- APS_DATAFEDERATOR: If the customer plans to utilize multi-source UNX universes, the Java heap (-Xmx) will need to be anywhere from 4 GB to 10+ GB, depending on the size of the data processed by DF and the complexity of cross-database joins.
- APS_MDAS: If Analysis for OLAP is utilized and the datasets are large, this might need to go up to the 4GB+ range as well. In addition, if I expect heavy utilization of the MDAS and the use of STS, I will generally move STS into its own container. It’s not that I expect STS to use a lot of RAM, but if MDAS crashes, I don’t want the STS to go down with it.
- APS_PLATFORMSEARCH: If you are running a continuous crawl and have a lot of reports and universes, you might need to increase this service's heap to as high as 4 GB.
- APS_WEBI: The most likely change in APS_WEBI is the isolation of the DSL Bridge Service and the Visualization Service. The DSL Bridge Service can take a big hit with direct binding to BEx reports; the Visualization Service can take a big hit if the Web Intelligence charts contain a lot of data points. The max heap setting for the DSL Bridge Service might need to go as high as 32+ GB, depending entirely on the amount of data extracted from BW via a BEx (BICS) query. I have also seen a lot of bug-induced crashes for both the DSL Bridge and Visualization services, which is yet another reason to isolate them.
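As a purely illustrative sketch (the container names and heap sizes below are hypothetical, not a sizing recommendation), a split along the lines above might look something like this in the CMC:

```
APS_CORE            -Xmx1g    # low-traffic sub-services left together
APS_DATAFEDERATOR   -Xmx4g    # multi-source UNX universes
APS_MDAS            -Xmx4g    # Analysis for OLAP
APS_STS             -Xmx1g    # Security Token Service, isolated from MDAS
APS_PLATFORMSEARCH  -Xmx4g    # continuous crawl
APS_WEBI            -Xmx4g    # remaining Web Intelligence sub-services
APS_DSLBRIDGE       -Xmx8g    # DSL Bridge Service, isolated
APS_VISUALIZATION   -Xmx2g    # Visualization Service, isolated
```

Your own grouping should follow the SAP sizing guides and the expected utilization of each tool, as described above.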
The memory settings and splitting up of the remaining services can be approached as needed. In general, I try not to over-engineer the breakout; doing so requires a lot of RAM on the server that might never get utilized. The more services I group together, the lower the overall RAM requirements for the system. In addition, just because the heap is set to a high number of GBs for each service does not mean that the service will utilize the full allocation. You can always take a chance and run 20 GB of services (max heap) on a system with only 16 GB of RAM. To be safe, you should always have sufficient RAM to support the sum of all running Java processes' max heaps, but risk takers might take a more pragmatic approach. In some cases I might leave a few sub-services in the original APS container with a 1 GB Java heap because I know they will never get utilized.
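To sanity-check the trade-off described above, you can total the configured max heaps and compare the sum to the node's physical RAM. This is a minimal sketch; the container names, heap sizes, and RAM figure are hypothetical placeholders, not recommendations:

```python
# Illustrative only: sum hypothetical -Xmx allocations per APS container
# and compare against the node's physical RAM to spot over-commitment.
heaps_gb = {
    "APS_CORE": 1,
    "APS_DATAFEDERATOR": 4,
    "APS_MDAS": 4,
    "APS_PLATFORMSEARCH": 4,
    "APS_WEBI": 8,
}

total_heap_gb = sum(heaps_gb.values())
physical_ram_gb = 16  # hypothetical node size

print(f"Total max heap: {total_heap_gb} GB on a {physical_ram_gb} GB node")
if total_heap_gb > physical_ram_gb:
    # Safe only as long as the services never grow to their max heaps at once.
    print(f"Over-committed by {total_heap_gb - physical_ram_gb} GB")
```

Running a total like this before committing to a layout makes the "risk taker" decision an explicit one rather than an accident.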
Should I clone or create new?
I generally create each server from scratch. I have noticed that cloning a server will carry over command line arguments from sub-processes that are unrelated to the actual sub-services that will eventually be running in the container. This does not necessarily create any issues, but I prefer the command line to be as clean as possible. After I create the new service from scratch, I update the following command line arguments:
- The Java Heap Max Size (-Xmx1g): This is the total allocation of RAM that the JVM is allowed to reserve from the OS. If the Java process needs more than the amount specified, it will likely crash. I generally copy the APS service command line into a text editor, find the argument, make the changes, copy the text and paste it back into the command line. Be careful. Any mistakes will prevent the service from starting. Use the guides linked in the document to determine the setting size.
- The Java work directory (-workdir): This is the path where the work directory will exist. It is very important that the directory is not utilized by another APS service. I generally add the service name to the end of the path. For example, I will add “APS_PUBLISH” to the end of the work directory path: -workdir …/java/pjs/container/work/APS_PUBLISH . This ensures that no other APS service uses the same path. BOBJ will try to mitigate a collision if you forget, but its method is simply to append “AdaptiveProcessingServer” plus a number (e.g. “AdaptiveProcessingServer1”) to the end of the work path, incrementing the number for each additional service. This does not always work if you are frequently adding and removing APS services on the node. In addition, it is much easier to find the work directory for a given process when the path contains the name of the server.
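Putting the two arguments together: the real APS command line in the CMC is much longer, but the relevant edits might look like the following fragment (the heap size, install path, and service name are purely illustrative):

```
-Xmx4g -workdir "/opt/bobj/sap_bobj/enterprise_xi40/java/pjs/container/work/APS_PUBLISH"
```

As noted above, edit the line in a text editor and paste it back carefully; any typo in these arguments will prevent the service from starting.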
Links to other blogs
Dallas Marks started the conversation on the APS several months ago. I added a few comments to his blog when BOBJ 4.0 was first released. This is a really good starting point.
Raphael Branger mentions an issue with the Visualization Service.
Links to SAP document:
Official SAP APS Sizing Guide: sdn.sap.com
APS Sizing SAP Note: https://service.sap.com/sap/support/notes/1694041