Friday, April 13, 2018

A well-designed filesystem with a poorly designed command-line utility

This is a story about btrfs and a Docker build server. Decommissioned hardware with an AMD Phenom X4 was brought back to life after the Intel Spectre/Meltdown bugs, but a problem with loosely seated RAM modules went unnoticed. The electrical connection between the pins was unreliable, and from time to time the headless server crashed due to corrupted memory. After a cold restart it worked again, until it was loaded with building Docker images. One day ssh failed with an I/O error from the filesystem, and after a warm reboot the server was dead. After attaching a display it was clear that the boot process was crashing early with btrfs "open_ctree failed".

OK, there is an initrd with btrfsck. Let's try btrfsck --repair. According to the output, it's a disaster.

Let's check what else this tool offers: -b, "use the first valid backup root copy". The command-line tool did something, but after rerunning --repair, still nothing. Assessment of the situation: btrfs seriously damaged, no backup, since it was a simple server for building Docker images. Nothing important was on the disks; everything important had been pushed to the cloud. Reinstall Linux, this time with ZFS? OK, booting the USB installer. Next, next, hmm: the installer sees the btrfs volumes. How is that possible? Running dmesg. The kernel sees the partition and tries to replay transactions, but fails on checksums.

OK, now: btrfsck --repair -b --init-csum-tree. Waiting...

Mounting. Now the kernel fails on extents. How about: btrfsck --repair -b --init-csum-tree --init-extent-tree?

After many minutes fsck finishes and I'm able to mount the filesystem read-write. Listing the contents: /root and /home are there. Reboot, and the server is fully operational!
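To summarize, this was the escalation that finally worked. The device name below is hypothetical; the flags (-b, --repair, --init-csum-tree, --init-extent-tree) are real btrfsck options, and the last two are destructive last resorts that rebuild whole metadata trees, not routine repair steps:

```shell
# plain repair: reported a disaster
btrfsck --repair /dev/sda2
# fall back to the first valid backup root copy
btrfsck --repair -b /dev/sda2
# also rebuild the checksum tree the kernel was failing on
btrfsck --repair -b --init-csum-tree /dev/sda2
# also rebuild the extent tree; after this, the fs mounted read-write
btrfsck --repair -b --init-csum-tree --init-extent-tree /dev/sda2
```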

The filesystem itself is probably rock solid, but fsck is written in a way that does very little to help people actually recover a filesystem.

Friday, April 6, 2018

Tibco BusinessWorks 5 in Docker is very simple
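The Dockerfile is not shown, but it can be read off the step output of the build below; this reconstruction contains nothing beyond what those steps state:

```dockerfile
FROM adoptopenjdk/openjdk9-openj9:jdk-9.181
MAINTAINER Tibco Developer tibco.developer@company.com
ENV PROJECT Project
RUN useradd -c 'Tibco user' -m -d /opt/tibco -s /bin/bash tibco
COPY tibco /opt/tibco
RUN chown -R tibco:tibco /opt/tibco
USER tibco
ENV HOME /opt/tibco
ENTRYPOINT cd /opt/tibco/bw/5.13/bin; ./bwengine /opt/tibco/$PROJECT
```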

[builder@box opt]# docker build -t "tibco/eaistack" .
Sending build context to Docker daemon  581.7MB
Step 1/9 : FROM adoptopenjdk/openjdk9-openj9:jdk-9.181
 ---> f5e644bfbf5e
Step 2/9 : MAINTAINER Tibco Developer tibco.developer@company.com
 ---> Using cache
 ---> f72e93d7d0bf
Step 3/9 : ENV PROJECT Project
 ---> Running in 1162b66a8ff7
Removing intermediate container 1162b66a8ff7
 ---> bc4dc5f91389
Step 4/9 : RUN useradd -c 'Tibco user' -m -d /opt/tibco -s /bin/bash tibco
 ---> Running in aadb73ea6fce
Removing intermediate container aadb73ea6fce
 ---> cfa3f370c3ea
Step 5/9 : COPY tibco /opt/tibco
 ---> f755316e4ab5
Step 6/9 : RUN chown -R tibco:tibco /opt/tibco
 ---> Running in ceeca78eff6e
Removing intermediate container ceeca78eff6e
 ---> 7df870209be9
Step 7/9 : USER tibco
 ---> Running in a87922428472
Removing intermediate container a87922428472
 ---> 45581051bfd7
Step 8/9 : ENV HOME /opt/tibco
 ---> Running in 9cc9a18712a1
Removing intermediate container 9cc9a18712a1
 ---> fe1eb6c10268
Step 9/9 : ENTRYPOINT cd /opt/tibco/bw/5.13/bin; ./bwengine /opt/tibco/$PROJECT
 ---> Running in 646514a92838
Removing intermediate container 646514a92838
 ---> b5eaf0052739
Successfully built b5eaf0052739
Successfully tagged tibco/eaistack:latest
[builder@box opt]#

[builder@box opt]# docker run -i -t tibco/eaistack
Using work space directory /opt/tibco/bw/5.13/bin/working/5b987d721310
Creating trace file /opt/tibco/bw/5.13/bin/logs/5b987d721310.log
Using XMLReader org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser
2018 Apr 05 21:28:16:375 GMT +0000 BW.5b987d721310 Info [BW-Core] BWENGINE-300001 Process Engine version 5.13.0, build V24, 2015-8-11
2018 Apr 05 21:28:16:423 GMT +0000 BW.5b987d721310 Info [BW-Core] BWENGINE-300009 BW Plugins: version 5.13.0, build V24, 2015-8-11
2018 Apr 05 21:28:16:433 GMT +0000 BW.5b987d721310 Info [BW-Core] BWENGINE-300010 XML Support: TIBCOXML Version 5.60.0.003
2018 Apr 05 21:28:16:433 GMT +0000 BW.5b987d721310 Info [BW-Core] BWENGINE-300011 Java version: Eclipse OpenJ9 VM 2.9
2018 Apr 05 21:28:16:434 GMT +0000 BW.5b987d721310 Info [BW-Core] BWENGINE-300012 OS version: amd64 Linux 4.15.14-300.fc27.x86_64
2018 Apr 05 21:28:18:770 GMT +0000 BW.5b987d721310 Info [BW-Core] BWENGINE-300013 Tibrv string encoding: UTF-8
creating file: /opt/tibco/bw/5.13/bin/working/5b987d721310/internal/nextJobidBlock
2018 Apr 05 21:28:19:514 GMT +0000 BW.5b987d721310 Info [BW-Core] BWENGINE-300002 Engine 5b987d721310 started
2018 Apr 05 21:28:19:777 GMT +0000 BW.5b987d721310 User [BW-User] - Job-1 [Entrypoint.process/Log]: BW Engine works fine
2018 Apr 05 21:28:19:784 GMT +0000 BW.5b987d721310 Info [BW-Core] BWENGINE-300014 Starting delayed shutdown, max-delay=[0], wait-for-checkpoints=[false]
2018 Apr 05 21:28:19:787 GMT +0000 BW.5b987d721310 Debug [BW-Core]  Shutdown max timeout exceeded, 0 jobs still running
job dispatcher with 8 threads, max queued = 1
2018 Apr 05 21:28:19:791 GMT +0000 BW.5b987d721310 Info [BW-Core] BWENGINE-300006 Engine 5b987d721310 terminating
[builder@box opt]#

[builder@box bw-time]# docker build -t tibco/bw-time .
Sending build context to Docker daemon  49.66kB
Step 1/5 : FROM tibco/eaistack:latest
 ---> b5eaf0052739
Step 2/5 : MAINTAINER Tibco Developer tibco.developer@company.com
 ---> Running in 249c3146f07c
Removing intermediate container 249c3146f07c
 ---> c275aa789efb
Step 3/5 : COPY BW-HTTP-Time /opt/tibco/projects/BW-HTTP-Time
 ---> a4aa2d95814e
Step 4/5 : ENV PROJECT projects/BW-HTTP-Time
 ---> Running in 7f65638aaab0
Removing intermediate container 7f65638aaab0
 ---> 4783982bc9ae
Step 5/5 : EXPOSE 8080
 ---> Running in 433e6a02617d
Removing intermediate container 433e6a02617d
 ---> a02dc0366ab0
Successfully built a02dc0366ab0
Successfully tagged tibco/bw-time:latest
[builder@box bw-time]#
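The derived image's Dockerfile can likewise be reconstructed from the steps above:

```dockerfile
FROM tibco/eaistack:latest
MAINTAINER Tibco Developer tibco.developer@company.com
COPY BW-HTTP-Time /opt/tibco/projects/BW-HTTP-Time
ENV PROJECT projects/BW-HTTP-Time
EXPOSE 8080
```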

[builder@box bw-time]# docker run -d -p 8090:8080 tibco/bw-time
[builder@box bw-time]# curl http://localhost:8090

Current dateTime is 2018-04-05T22:05:55.22Z

Monday, April 2, 2018

Spring Framework 5 and Docker

Very soon this will work with shared classes support on OpenJ9 - https://github.com/eclipse/openj9/issues/1244. For now, unpack the fat jar, delete the Google Collections jar, and run like this:
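A minimal sketch of those steps, assuming a Spring Boot 2 fat-jar layout (BOOT-INF/classes and BOOT-INF/lib); the jar name and main class are placeholders, not from the original post:

```shell
# unpack the fat jar into a plain directory
unzip -q app.jar -d app
# drop the obsolete Google Collections jar (superseded by Guava, causes class clashes)
rm -f app/BOOT-INF/lib/google-collect*.jar
# run from the exploded classpath instead of the fat jar
java -cp "app/BOOT-INF/classes:app/BOOT-INF/lib/*" com.example.Application
```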


Please notice that this very simple REST service consumes 32 MB of disk space. If we have a local Docker registry, that's not a problem. If we use the cloud, we can optimize the size by creating a master Docker image with the common unpacked libs; the Dockerfile for each single application is then derived from this master image. If we need orchestration of Spring applications, we can put them inside Docker Swarm stacks with a replica count providing the appropriate level of HA; the ingress network, with its own service name resolution, will automagically take care of HA/LB.
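A sketch of that layering, with all image and class names hypothetical: the master image carries the shared unpacked libs once, and each application image derives from it, adding only its own classes.

```dockerfile
# master image: JDK plus the shared, unpacked dependency jars
FROM adoptopenjdk/openjdk9-openj9:jdk-9.181
COPY BOOT-INF/lib /opt/app/lib

# per-application Dockerfile, derived from the master image:
#   FROM registry.local/spring-master:latest
#   COPY BOOT-INF/classes /opt/app/classes
#   ENTRYPOINT java -cp "/opt/app/classes:/opt/app/lib/*" com.example.Application
```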

Since hell has frozen over, you can create a Spring 5 application on Linux with Visual Studio Code and push it to Azure as a Docker image.


Java - write once, deploy everywhere.

A remark for Eclipse OpenJ9: to control memory settings via environment variables, use IBM_JAVA_OPTIONS instead of _JAVA_OPTIONS.
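For example, to cap the heap of a container like the one built above (the image name is from this post; the heap limit value is just an illustration):

```shell
# OpenJ9 reads IBM_JAVA_OPTIONS; _JAVA_OPTIONS would be ignored here
docker run -d -p 8090:8080 -e IBM_JAVA_OPTIONS="-Xmx128m" tibco/bw-time
```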