The most important settings are RAM, MaxJobs/FlowLimit, Engine.ThreadCount, the HTTP Response Thread Pool size (its type can be set to 'single' to save resources), and JMS MaxSessions (when using client acknowledge mode).
MaxSessions and MaxJobs should be equal per process, and the same applies to correlated MaxJobs and the HTTP thread pool size. Engine.ThreadCount should be 10-20% greater than the sum of the FlowLimits and HTTP threads. When components are stacked, each downstream component should have a FlowLimit at least 20% higher than the previous one (to allow for BW engine restarts and backlog). You can also reduce RAM usage by tidying up XML namespace handling: the prefixes defined in the process namespace registry should match the prefixes used in activities, and to get rid of redundant namespace declarations on every node you can 'exclude prefixes' from XML roots.
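As a sanity check, the sizing rules above are easy to encode. The sketch below is purely illustrative (the class and method names are mine, not part of any TIBCO API): it computes an Engine.ThreadCount suggestion from the per-starter FlowLimits plus the HTTP thread pool size, and the minimum FlowLimits for a stacked chain of components.

```java
import java.util.Arrays;

/** Illustrative helper for the sizing rules above -- not a TIBCO API. */
public class BwSizingRules {

    /** Engine.ThreadCount should exceed the sum of FlowLimits and HTTP threads by 10-20%. */
    static int recommendedThreadCount(int[] flowLimits, int httpThreads, double headroom) {
        int base = Arrays.stream(flowLimits).sum() + httpThreads;
        return (int) Math.ceil(base * (1.0 + headroom)); // e.g. headroom = 0.2
    }

    /** In a stacked chain, each downstream FlowLimit should exceed the previous one by >= 20%. */
    static int[] stackedFlowLimits(int firstFlowLimit, int depth) {
        int[] limits = new int[depth];
        limits[0] = firstFlowLimit;
        for (int i = 1; i < depth; i++) {
            limits[i] = (int) Math.ceil(limits[i - 1] * 1.2);
        }
        return limits;
    }

    public static void main(String[] args) {
        int[] flowLimits = {20, 30, 50};                                  // per-starter FlowLimits
        System.out.println(recommendedThreadCount(flowLimits, 25, 0.2)); // 150
        System.out.println(Arrays.toString(stackedFlowLimits(20, 3)));   // [20, 24, 29]
    }
}
```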
An interesting case involves a huge volume of large messages (~1000 per second, each over 100 KB) handled by different processes of varying duration: resource utilization is high and every process starter gets flow-controlled. When flow control kicks in it closes the JMS receiver, which usually has prefetched messages sitting in its session, so all the work done to fetch them is lost and has to be repeated by another receiver. On top of that, the default receive time unit is 1 second, which is not enough under heavy load. The result is the same messages floating back and forth, stuck between the EMS server and the BW process, and overall performance degrades badly. The solution is to disable prefetch and increase the JMS receiver timeout. This case can be traced with 'show consumers full' in the tibemsadmin console.
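The failure mode comes down to plain JMS mechanics. The sketch below uses the standard javax.jms API rather than the BW JMS palette, with a CLIENT_ACKNOWLEDGE session and a receive timeout well above the 1-second default; the server URL, credentials and queue name are placeholders, and the EMS connection factory class is used as I recall the EMS client API, so verify against your EMS client jars. Disabling prefetch itself is done on the EMS side, since prefetch is a destination property (something like `setprop queue orders.in prefetch=none` in tibemsadmin -- check the EMS documentation for your version).

```java
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

import com.tibco.tibjms.TibjmsConnectionFactory;

public class SlowConsumerSketch {

    public static void main(String[] args) throws Exception {
        // Placeholder URL, credentials and queue name -- adjust for your environment.
        TibjmsConnectionFactory factory = new TibjmsConnectionFactory("tcp://emshost:7222");
        Connection connection = factory.createConnection("user", "password");
        connection.start();

        // Client-ack session: a message stays "in flight" on the server until acknowledge()
        // is called, which is why MaxSessions and MaxJobs have to line up per process.
        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        Queue queue = session.createQueue("orders.in");
        MessageConsumer consumer = session.createConsumer(queue);

        // A receive timeout well above the 1 s default, so a loaded engine does not
        // give up on messages that have already been assigned to its session.
        long receiveTimeoutMs = 30_000L;
        Message message = consumer.receive(receiveTimeoutMs);
        if (message != null) {
            // ... process the message (the BW job's work) ...
            message.acknowledge();
        }

        consumer.close();
        session.close();
        connection.close();
    }
}
```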
Wednesday, November 06, 2013