Wednesday, May 29, 2013
Monday, May 27, 2013
How to make EMS-centric TIBCO BW highly available
Highly available means accessible and consistent. The idea is to synchronize EMS servers across data centers using global queue routing and to use a pair of them as service endpoints. For highly available transactions, the BW instances need a shared object store for the Arjuna XA Transaction Manager.
More complex solution:
Complementary routes from the local EMS instances are omitted for picture clarity. The TM should not be local to the App instance.
And simple solution:
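A minimal sketch of the global queue routing this relies on, assuming two EMS servers named EMS-DC1 and EMS-DC2 (the server names, URL, and queue name are illustrative):

```conf
# routes.conf on EMS-DC1 -- a route to the peer data center;
# the route name must match the remote server's name in its tibemsd.conf
[EMS-DC2]
  url = tcp://ems-dc2:7222

# queues.conf on both servers -- the 'global' property lets messages
# travel across the route to the server where the consumer is connected
orders.queue global
```

A mirror route ([EMS-DC1]) is defined on the other server; a routed queue must carry the global property on both sides.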
Friday, May 24, 2013
How to generate standalone GRAILS/GORM views
import org.codehaus.groovy.grails.web.servlet.DefaultGrailsApplicationAttributes
import org.codehaus.groovy.grails.commons.*
import grails.gsp.PageRenderer
import grails.gsp.ExtendedPageRenderer
class StandaloneContentGenerator {
    PageRenderer groovyPageRenderer
    def generateContent() {
        ...
        HashMap map = new HashMap()
        map.put(DefaultGrailsApplicationAttributes.CONTROLLER_NAME_ATTRIBUTE, 'logEntry')
        map.put(DefaultGrailsApplicationAttributes.APP_URI_ATTRIBUTE, 'http://host:port/App')
        def content = new ExtendedPageRenderer(groovyPageRenderer, map).render(view: "/logEntry/list",
            model: [logEntryInstanceList: results, logEntryInstanceTotal: results.size(),
                "flash.message": 'Content generated'])
        return content
    }
}
Wednesday, May 22, 2013
Initial phase of a JMS server implementation
A provider able to send data to a queue and receive it from a queue. Educational content.
QInitialContext ctx = new QInitialContext();
ctx.setUrl("direct://C://temp");
QConnectionFactory connFactory = (QConnectionFactory) ctx.lookup("QueueConnectionFactory");
QueueConnection conn = (QueueConnection) connFactory.createConnection();
conn.start(); // the connection must be started before messages can be received
QueueSession sess = conn.createQueueSession(false, QueueSession.AUTO_ACKNOWLEDGE);
Queue q = sess.createQueue("test123");
QueueSender sender = sess.createSender(q);
int cnt = 0;
for (int i = 0; i < 10; i++)
    sender.send(sess.createTextMessage(System.currentTimeMillis() + "X" + ++cnt));
QueueReceiver receiver = sess.createReceiver(q);
TextMessage m = null;
do {
    m = (TextMessage) receiver.receive(1000); // timed receive, so the loop ends once the queue is drained
    if (m != null) {
        System.out.println(m.getJMSMessageID() + ": " + m.getText());
    }
} while (m != null);
sess.close();
conn.close();
Wednesday, May 15, 2013
How to try to recover recently deleted files, provided some process still holds them open
export FOPEN_PID=`lsof | grep sync-msgs.db | awk '{print $2}'`
cd /proc/$FOPEN_PID/fd
ls -al
lrwx------ 1 tibco tibco 64 Apr 9 00:13 8 -> /storage/tibco/tibco/cfgmgmt/ems/data/datastore/async-msgs.db (deleted)
lrwx------ 1 tibco tibco 64 Apr 11 17:36 80 -> socket:[123929714]
lrwx------ 1 tibco tibco 64 Apr 11 17:36 81 -> socket:[122106504]
lrwx------ 1 tibco tibco 64 Apr 11 17:36 82 -> socket:[105165735]
lrwx------ 1 tibco tibco 64 Apr 11 17:36 83 -> socket:[104899617]
lrwx------ 1 tibco tibco 64 Apr 11 17:36 85 -> socket:[104899618]
lrwx------ 1 tibco tibco 64 Apr 11 17:36 86 -> socket:[122106507]
lrwx------ 1 tibco tibco 64 Apr 11 17:36 87 -> socket:[122106512]
lrwx------ 1 tibco tibco 64 Apr 11 17:36 88 -> socket:[115068666]
lrwx------ 1 tibco tibco 64 Apr 11 17:36 89 -> socket:[123882138]
lrwx------ 1 tibco tibco 64 Apr 9 00:13 9 -> /storage/tibco/tibco/cfgmgmt/ems/data/datastore/meta.db (deleted)
lrwx------ 1 tibco tibco 64 Apr 11 17:36 90 -> socket:[124029587]
lrwx------ 1 tibco tibco 64 Apr 11 17:36 91 -> socket:[117795240]
lrwx------ 1 tibco tibco 64 Apr 11 17:36 92 -> socket:[123973925]
lrwx------ 1 tibco tibco 64 Apr 11 17:36 93 -> socket:[124029595]
lrwx------ 1 tibco tibco 64 Apr 11 17:36 94 -> socket:[123973937]
[root@s12165 fd]# dd if=8 of=/storage/tibco/tibco/cfgmgmt/ems/data/datastore_async-msgs.db
9481645+0 records in
9481645+0 records out
4854602240 bytes (4.9 GB) copied, 44.7987 s, 108 MB/s
Shield pattern
The application is sensitive to input data. Certain combinations cause it to crash or to misbehave (for example: it creates a record and does not perform a rollback when processing fails). Create a proxy in front of the app and filter the input data. Do not allow bad data to flow to the app.
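A minimal sketch of the shield idea in Java, assuming the proxy can see each payload as a string; the validation rules (non-empty, length cap, no control characters) and the class name Shield are illustrative only:

```java
import java.util.function.Predicate;

// Shield sketch: validate input before it is forwarded to the protected app.
public class Shield {

    // Illustrative rules: non-null, non-empty, bounded size, no control characters.
    static final Predicate<String> VALID = s ->
            s != null && !s.isEmpty() && s.length() <= 4096
            && s.chars().noneMatch(Character::isISOControl);

    // True when the payload may be passed through to the app.
    public static boolean allow(String payload) {
        return VALID.test(payload);
    }

    public static void main(String[] args) {
        System.out.println(allow("order#123"));  // a well-formed payload
        System.out.println(allow("bad\ndata"));  // rejected: embedded control character
    }
}
```

In a real deployment the same predicate would sit in a reverse proxy (HTTP) or in a bridging consumer (JMS) placed in front of the application.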
Cloud link pattern
Your infrastructure is located at two sites (perhaps your company has divisions in different geographical locations). You would like to easily access services available in DC2 from the local site in DC1 and vice versa, without complex VPN and routing configuration. You can run two instances of Cloud Redirector (technology preview) on two machines, each in one DC.
java -Xms512m -Xmx512m -Dcrdi.config=C:/CRDI/crdi.client.properties -jar crdi.jar
This is the master configuration. The frontlink mapping maps transport TCP and UDP ports available in the master's local network to services defined in the slave's local network. The backlink mapping maps services exposed via CRDI to the master's local network. The shared secret is used for key agreement between the sites (all data transferred between the sites is also encrypted with a dynamic key). The web app credentials are needed to access the CRDI web interface.
Example of client config:
crdi.siteId=client
crdi.backlinkMapping=udp:7500=udp:127.0.0.255:7500,udp:7474=udp:127.0.0.255:7474,udp:7475=udp:127.0.0.255:7475
crdi.frontlinkMapping=40521=1521,udp:7500=udp:7500,udp:7474=udp:7474,udp:7475=udp:7475,7500=7500,7474=7474,7475=7475,7222=7222,8080=8080,40000=80
crdi.controlPort=32765
crdi.sibling=212.76.100.110:32768
crdi.transportPortRange=7000-9000,40000-41000
crdi.udpTransportPortRange=7000-9000,40000-41000
crdi.logLevel=TRACE
crdi.sharedSecret=passw0rd
crdi.webAppCredentials=admin:admin
java -Xms512m -Xmx512m -Dcrdi.config=C:/CRDI/crdi.client.properties -jar crdi.jar
Tuesday, May 14, 2013
Tibco BW HealthCheck pattern
Every BW component exposing services via HTTP or JMS can have a HealthCheck process. It can be started with an HTTP Receiver (no SOAPAction header) or a JMS Receiver (a Ping property used with a JMS selector). The process should ping underlying components or external systems and return a success or failure status. It is also possible to get the current CPU/GC/MEM usage from JMX and TIBCO BW process statistics from Hawk. You open one URL and get detailed statistics from which you can check the overall health of the flows.
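The JMX part of such a HealthCheck can be sketched with the standard java.lang.management API (the Hawk and BW-statistics parts are TIBCO-specific and omitted; the class name and output format are illustrative):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Sketch: collect the JVM-level stats a HealthCheck process could return.
public class HealthSnapshot {

    public static String snapshot() {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        // Total GC collections across all collectors (a collector may report -1 if unsupported).
        long gcCount = ManagementFactory.getGarbageCollectorMXBeans().stream()
                .mapToLong(GarbageCollectorMXBean::getCollectionCount).sum();
        // One-minute system load average; may be negative if the platform does not support it.
        double load = ManagementFactory.getOperatingSystemMXBean().getSystemLoadAverage();
        return String.format("heapUsed=%d heapMax=%d gcCount=%d sysLoad=%.2f",
                heap.getUsed(), heap.getMax(), gcCount, load);
    }

    public static void main(String[] args) {
        System.out.println(snapshot());
    }
}
```

The BW HealthCheck process could invoke something like this from a Java Code activity and merge the result with the ping results of its downstream systems.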
Monday, May 13, 2013
Why Red Hat abandoned Xen
Citrix bought the XenSource technology, and Red Hat badly wanted to be a leader actively developing its virtualization product: able to define its own features, decide on the release schedule, and fully manage the code within its own company. With Xen it would not have had that control. The decision to continue its virtualization solution with KVM was therefore the only right one. Customers with paid support were, and are, assisted with the migration.
Wednesday, May 8, 2013
Poczta Polska: the last bastion of socialism
UKE examined the timeliness of mail delivery. It is still bad, but slightly better. There are cases where an ordinary letter takes 6 days or longer to arrive (a priority one, 4 or more).