Tuesday, April 17, 2012

Installing WebSphere 6.1 on Ubuntu

Recently I upgraded my Ubuntu to Precise (12.04). I always prefer a clean install, which means reinstalling some proprietary applications like WebSphere.
It is always a pleasure to do that.

From time to time I wonder: why is it so hard to provide a .deb package...

Installation steps:

1. replace /bin/sh

/bin/sh points to dash, which does not meet the installer's expectations.
cd /bin
sudo ln -sf /bin/bash sh

2. install sun jdk 6

Another weak point is OpenJDK.
In order to install the Sun JDK one must enable (uncomment) the partner repository.
Note that Canonical no longer supports the Sun/Oracle JDK for legal and stability reasons.
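At the time, the step looked roughly like this. A sketch only: the package name sun-java6-jdk and the contents of the partner repository changed across Ubuntu releases (as noted above, Canonical dropped the package), so treat these commands as assumptions:

```shell
# enable the partner repository and install the Sun JDK 6
# (package name and repository availability are assumptions)
sudo add-apt-repository "deb http://archive.canonical.com/ubuntu precise partner"
sudo apt-get update
sudo apt-get install sun-java6-jdk
```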

3. install ia32-libs

In the 21st century it seems a bit old-fashioned to install a 32-bit application, but the WAS 6.1 installer is 32-bit, so we have to get the support for that:
sudo apt-get install ia32-libs

4. install WebSphere

run java -jar setup.jar

4.5 delete profiles created before installing the feature pack

./manageprofiles.sh -delete -profileName profile_name

5. install feature pack

This in turn installs some fixpacks first.
run java -jar setup.jar

6. install fixpacks

Fixpacks are handled through the IBM Update Installer. After downloading it, run the same old
java -jar setup.jar
The first thing to install must be the PK53084 fix; then come the two fixpacks of 39.

7. create the profile

Run pmt.sh from the ProfileManagement directory of the new WAS 6.1 installation.
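If the graphical tool is not an option, a profile can also be created from the command line. A sketch, where the install root, profile name and template path are all assumptions based on WAS defaults:

```shell
# create a default application server profile without the GUI
# (install root, profile name and template path are assumptions)
cd /opt/IBM/WebSphere/AppServer/bin
./manageprofiles.sh -create -profileName AppSrv01 \
    -templatePath ../profileTemplates/default
```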

8. start server

And now we come to the magical moment of firing up our brand new server.
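Something like this should do it; the profile path and the default server name server1 are assumptions:

```shell
# fire up the application server
# (profile location and server name are assumptions)
cd /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin
./startServer.sh server1
```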

9. after party

In order to work correctly some settings must be applied:

TADAA!!!
We are done!
Have a beer!
Or maximum two!

Friday, January 20, 2012

Feeling Groovy II - https

In my former post I wrote about making direct get and post requests to a webserver. Now I had to add support for the https protocol. Of course changing the URL was not enough, as the SSL layer needs to know whether the server certificate can be trusted. Well, it was fun to google around for the details, but I do not really want to do it again.
So here are the steps:

1. Get the certificate!
2. Create a keystore!
keytool -import -file cert.cer -alias server -keystore server.jks
3. Modify the connection to use the keystore and ignore some errors!
import java.security.KeyStore
import javax.net.ssl.HostnameVerifier
import javax.net.ssl.SSLContext
import javax.net.ssl.SSLSession
import javax.net.ssl.TrustManagerFactory

def decorateConnection(url, connection) {
    if (url.startsWith('https:')) {
        // load the keystore created with keytool; 'xxx' is the keystore password
        KeyStore keyStore = KeyStore.getInstance(KeyStore.defaultType)
        keyStore.load(getClass().getResourceAsStream('server.jks'), 'xxx'.toCharArray())
        // trust exactly the certificates in that keystore
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.defaultAlgorithm)
        tmf.init(keyStore)
        SSLContext ctx = SSLContext.getInstance('TLS')
        ctx.init(null, tmf.trustManagers, null)
        connection.SSLSocketFactory = ctx.socketFactory
        // accept any hostname - the certificate check above is what matters here
        connection.hostnameVerifier = new HostnameVerifier() {
            public boolean verify(String hostname, SSLSession session) {
                return true
            }
        }
    }
}
When you create the keystore a password must be specified; that is the 'xxx' passed to keyStore.load.
Special thanks to people on stackoverflow.com and coderanch.com!

Tuesday, April 5, 2011

Prezi

Again a topic that might be well known to others already, and I am a bit late. It is already 2 years since Prezi went into production. Its simplicity is powerful, the design is amazing. It changes the way people make presentations - actually they enjoy it!
But the thing that makes the whole thing outstanding is the focus.

As a programmer I always try to think abstractly: solve problems by decomposition, focusing on a context small enough to resolve. Mindmaps are handy in such a process and I always use them to structure information. And now I have found a tool where I can map the mindmap into a presentation! And we can do even more:
  • use orientation
  • use pictures (effectively!)
  • share presentations and cooperate on them
  • easily emphasize things in focus
I wonder if it could be used in schools. Maybe children could gain a more structured knowledge of the curriculum. Maybe they could learn easier ways to learn.

All in all, Prezi is worthy of praise.
And - as it was the 5th of April, 2009 when Prezi went public - we wish it a happy birthday!

Sunday, March 27, 2011

Manage Dependency

Most programmers must be way ahead of me on this issue. I have to admit that I do not have a solid knowledge of Maven. I have tried several times to understand it, but for me the xml is too hard to maintain, the one-artifact-per-project paradigm is too restrictive and the web of plugins to be learned is too complex. I prefer Ant, which is a bit old-fashioned but well documented and readable. (Should I ever replace it, Gradle seems to be a good choice.)
What I envy Maven for is the archetype and the dependency management. I really hate collecting all the dependent jars into a new project's lib directory. Fortunately there are other ways to get along, and that is the point of this post.

What is the advantage of using some kind of dependency management?
  • no need to store jars in VCS - lib directory becomes a first class citizen in .gitignore
  • no need to keep in mind transitive dependencies
  • in-house libraries can be stored in the same shared repository as the other libraries
  • easy to check upgrade possibilities
OK. So if it is so nice, how can I get there? As I considered Gradle as a replacement for Ant, I saw that it uses Ivy to manage libraries, so it was natural to take a look at it. Let us gather the main points that should be achieved:
  • test project that gets libs through declaration
  • shared repository for enterprise
  • publish in-house libs to shared repository
Declarative dependency
Setting up Ivy is easy: place ivy-[version].jar in Ant's lib directory. The dependency declaration goes into an ivy.xml file in the project root. It could be something like this:
<ivy-module version="2.0">
    <info organisation="hu.progmatx" module="test-ivy"/>
    <dependencies>
        <dependency org="org.apache.velocity" 
                       name="velocity" rev="1.5"/>
    </dependencies>
</ivy-module>
(A nice repository of dependency descriptions is available on the net.)
So now we have declared what to fetch. Let us put together a very simple build.xml:
<project xmlns:ivy="antlib:org.apache.ivy.ant" 
         name="test-ivy" 
         default="resolve">
    <target name="resolve" 
            description="--> retrieve dependencies with ivy">
        <ivy:retrieve sync="true" 
                      symlink="true" 
                      refresh="true"/>
    </target>

    <target name="report" 
            description="--> report dependencies with ivy" 
            depends="resolve">
        <ivy:report />
    </target>
</project>
The main points are
  1. the namespace declaration, which makes it easy to call the Ivy tasks,
  2. the resolve target, which calls the retrieve task,
  3. the report target, which can display the dependencies in html or graphml.
Some explanation of the retrieve task might come in handy. First it implicitly calls the resolve task to compute the dependency graph. Then it downloads the files into the local cache. Then it would copy the files into the lib directory, but I prefer using symlinks (as you can see from the attributes of the task). The sync attribute ensures that unused dependencies are removed.
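With the build file above in place, the whole thing is driven from the command line (assuming the Ivy jar is already in Ant's lib directory):

```shell
# download/refresh the declared dependencies into lib/
ant resolve
# generate an HTML report of the dependency graph
ant report
```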

Repository
Our next step is to establish a shared place of libraries. The structure of the repository can be customized and the way Maven does it can also be suitable. Take a look at the following structure:
shared-repository/
└── no-namespace
    └── ant
        └── ant
            ├── ivys
            │   ├── ivy-1.6.xml
            │   ├── ivy-1.6.xml.md5
            │   └── ivy-1.6.xml.sha1
            └── jars
                ├── ant-1.6.jar
                ├── ant-1.6.jar.md5
                └── ant-1.6.jar.sha1
This can be accessed through the configuration of the following properties in the build.xml:
...
<property name="ivy.shared.default.root" value="/media/shared-repository"/>
<property name="ivy.shared.default.ivy.pattern" value="no-namespace/[organisation]/[module]/ivys/ivy-[revision].xml"/>
<property name="ivy.shared.default.artifact.pattern" value="no-namespace/[organisation]/[module]/[type]s/[artifact]-[revision].[ext]"/>
...
Of course with a shared repository I have to say goodbye to symlinks.

Publish libraries
Maintaining a repository like that by hand would be hard. Fortunately we have the ivy:publish and ivy:install tasks to the rescue. The first can be used to upload our own homebrew jars, and the second is suitable for copying dependencies from the default Maven repo. Let us do it one by one!
To publish your jar just put the following in the build.xml:
<target name="publish" depends="resolve"
        description="--> publish module to shared repository">
    <ivy:publish resolver="shared" pubrevision="1.0">
         <artifacts pattern="build/jars/[artifact].[ext]" />
    </ivy:publish>
</target>
OK, that was too easy, so let's look at the other problem! It seems to be more complex, because we have to edit two files. Append this to the build.xml:
<property name="ivy.cache.dir" value="${basedir}/cache"/>
<property name="dest.repo.dir" value="/media/shared-repository"/>

<target name="maven2"
        description="--> install module from maven 2 repository">
    <ivy:settings id="copy.settings" file="${basedir}/ivysettings.xml"/>
    <ivy:install settingsRef="copy.settings" 
          organisation="org.apache.velocity" module="velocity" revision="1.5" 
          from="libraries" to="my-repository"
                    overwrite="true" transitive="true"/>
</target>
And let's create the ivysettings.xml!
<ivysettings>
    <settings defaultCache="${ivy.cache.dir}/no-namespace" 
                 defaultResolver="libraries"
                 defaultConflictManager="all" />  <!-- in order to get all revisions without any eviction -->
    <resolvers>
        <ibiblio name="libraries" m2compatible="true" />
        <filesystem name="my-repository">
            <ivy pattern="${dest.repo.dir}/no-namespace/[organisation]/[module]/ivys/ivy-[revision].xml"/>
            <artifact pattern="${dest.repo.dir}/no-namespace/[organisation]/[module]/[type]s/[artifact]-[revision].[ext]"/>
        </filesystem>
    </resolvers>
</ivysettings>
Here the point is that we create two so-called resolvers: one connected to the standard Maven repository and one to our shared repository. In the install task the dependent lib is named, and with the attributes set up this way we gather the transitive dependencies as well.

Conclusion
Wasn't it easy? I did not dare to think of it before - it seemed like so many things to do, and all of them so complex. Fortunately Ivy is gentle: it adapts to your pace and you do not have to customize more than what is necessary. I was only scratching its coat, but it was already completely working. It was worth investing some time to gather some Experience by Doing!

Friday, March 11, 2011

Grok Your Code

I have just realized how easy it is to put up a service for searching my own code.
During an architect meeting the code-reuse issue came up. Some of the colleagues wanted to create a storage for all the code we have written, which would have to be tagged somehow to be searchable. Others wanted to generate and share documentation in html format, and search that.
The problems with these solutions are obvious:

  • people under pressure do not care to put their code into a searchable archive
  • poorly documented code cannot be searched
  • overdocumented code breaks the DRY principle
  • programmers speak programming languages - so it is easier for them to search the code itself

One of my friends threw in that Google is quicker and probably smarter than any kind of archive we could put together. Maybe he was right, but I was eager to find a way to combine the two and put up an in-house code search site.
Actually I thought there were tons of apps like this out there in the Open Source world. But it turned out that the only relevant candidate is OpenGrok. I tried it, and I was completely satisfied. It not only does a full-text search on the code, it uses ctags to gain some insight into the structure. It can even browse the VCS history, which might come in handy when studying the meaning of the code.

Putting up the OpenGrok is easy:

  • unzip the archive
  • create a basedir for projects
  • checkout the projects from the VCS
  • setup the etc/configuration.xml (mainly just setting the path)
  • run bin/OpenGrok index
  • build the webapp
  • deploy under tomcat

Now what you might need to automate is updating the projects on a regular basis and calling the indexer afterwards. But that does not require a master's degree.
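A minimal cron-able sketch of that automation; the paths, the VCS command and the OpenGrok install location are assumptions:

```shell
#!/bin/sh
# update every checked-out project, then rebuild the index
SRC_ROOT=/var/opengrok/src        # assumed source root (basedir from the setup above)
OPENGROK_HOME=/opt/opengrok       # assumed unzip location

for project in "$SRC_ROOT"/*/; do
    (cd "$project" && svn update)  # or git pull, depending on the VCS
done
"$OPENGROK_HOME"/bin/OpenGrok index
```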

However, when I presented this to the architects it turned out that I was not the only one who had worked on this item. One of the fellows showed us a nice wiki page comparing documentation generators.
The point of citing this is that I want to emphasize the difference between reading some docs and doing something relevant.
I still believe in Experience By Doing...

Monday, December 27, 2010

Feeling Groovy

For some reason I had to test a service over http. The parameters had to be passed in as http header values. As I prefer small and simple tools, my first implementation was a bash script using curl to call the service. Although it worked perfectly, it was hard to use, as the company prefers the proprietary OS and users were forced to install cygwin.
Recently I was reading books (Programming Groovy and Groovy Recipes) about Groovy and recognized the potential in it. Using the JVM as a carrier makes the scripts available on all relevant platforms.
Actually it took some time to get the thing working, so I believe it might help others to check out my curl-like solution in Groovy:
def curl(url, headers, fileName, outputFileName = 'out.log') {
    def connection = new URL(url).openConnection()
    // POST when there is a file to upload, plain GET otherwise
    connection.setRequestMethod(fileName ? 'POST' : 'GET')
    headers.each { key, value ->
        connection.setRequestProperty(key, value)
    }
    if (fileName) {
        connection.doOutput = true
        if (isPdf(fileName)) { // ***
            // binary content must not go through a character writer
            connection.outputStream << new File(fileName).readBytes()
            connection.outputStream.flush()
            connection.outputStream.close()
        } else {
            connection.outputStream.withWriter { writer ->
                new File(fileName).eachByte { writer.write it }
            }
        }
    }
    connection.connect()

    // save the response body
    new File(outputFileName).withOutputStream { out ->
        connection.content.eachByte { out.write it }
    }

    connection.headerFields
}

// helper assumed by the script above: decide binary vs. text by extension
def isPdf(fileName) {
    fileName.toLowerCase().endsWith('.pdf')
}
The point is in the conditional statement marked with stars: depending on the type of the file (binary or text) I had to handle the posted data in different ways.

Sunday, December 19, 2010

Let's get started

OK, so what is the point in starting yet another blog?

Frankly I say: I do not know.

I just needed a place to gather some draft feelings on technologies, some guidance to tasks that I have to repeat, points of lectures and cornerstones of my profession.

Hope you will enjoy it!

And I wish the same for myself!