Tuesday, 18 December 2012

Karaf and CXF DOSGi updated

The Aniketos Runtime platform is based on Apache Karaf OSGi container. Another key technology for the platform is Apache CXF DOSGi. This is an implementation of OSGi Remote Services that allows services in different containers to be discovered and exchange information. In a simple scenario, CXF DOSGi can also be used to provide a SOAP WSDL service from inside an OSGi bundle. This service can be consumed from either an OSGi client in another container or a common WSDL client.

As 2012 draws to a close, there is good news for both of these technologies. Apache Karaf has been updated to version 2.3.0. Although Karaf received regular updates throughout the year, this release is a milestone for the project, as it marks the shift from the OSGi 4.2 to the OSGi 4.3 specification. The new version of Karaf supports Felix Framework 4.0 and Eclipse Equinox 3.8. This was a long-awaited feature, as Karaf had fallen behind the latest advances in core OSGi containers. In practical terms, this change will make both development and deployment easier. Since most developers use recent versions of OSGi containers in their development environment, they were previously often caught by surprise when trying to deploy a bundle. Especially in situations where more than 100 bundles need to be installed, the differences between OSGi implementations could have a significant impact. It is always good to develop in the same environment you are going to deploy to. I have also found that recent versions of the OSGi containers produce more meaningful messages when a bundle fails to install.

Apache CXF DOSGi is also going to be updated to version 1.4.0 soon. This is the first update since the 1.3.1 release in April 2012. The highlights of this release are very interesting:
  • 30 issues resolved (see jira)
  • Karaf feature for easy installation in Apache Karaf
  • Zookeeper discovery now supports automatic reconnects and Cluster configuration
  • DOSGi is now independent of spring dm
  • Custom intents are now created by publishing e.g. CXF Features as services
  • Big refactorings make the code much easier to understand
We have experienced problems running CXF DOSGi together with some other bundles, and we hope that the new version will solve most of them. The provision of a Karaf feature is of course very welcome. We still haven't set up a Zookeeper server, but we definitely plan to do so in the future; the new version will make this task easier. In any case, it is very good to see this project evolving, as before version 1.3 it had been stalled for over a year.




Wednesday, 7 November 2012

Declarative Services in Apache Karaf

In this post I would like to demonstrate how easy it is to use Declarative Services in Karaf. The source code is based on the Converter service and client that were used in a previous example about remote services. This time, however, both the service and the client reside in the same OSGi container, and Declarative Services are used instead of Activators. The source code of this example is available on GitHub. The following projects are implemented:
  • converter (interface of a conversion service - same as in previous example)
  • converter-impl (implementation of conversion service using DS)
  • converter-client (client of conversion service using DS)
  • converter-ds-feature (maven host project for converter ds feature)


Converter Service

The converter interface is exactly the same as in the previous example, which did not use Declarative Services; it simply declares the service interface. The service implementation, however, is modified: we no longer need the Activator, and the ConverterImpl class is the only thing needed. In addition to the implementations of the interface methods, two more methods are added:
protected void activate(ComponentContext context) {
    System.out.println("*** Activating Service");
} 

protected void deactivate(ComponentContext context) {
    System.out.println("*** Deactivating Service");
}

These methods are called by the Declarative Services runtime when the service is activated and deactivated. In this simple scenario they do nothing but print a message. In order for the DS runtime to work, we need to add a service component file, which is declared in the MANIFEST.MF. In our case we modify the pom.xml so that the appropriate line appears in the manifest file:
<configuration>
    <instructions>
        <Bundle-SymbolicName>${project.artifactId}</Bundle-SymbolicName>
        <Bundle-Name>${project.artifactId}</Bundle-Name>
        <Bundle-Description>Implementation of a service that converts Celsius temperatures to Fahrenheit and vice-versa</Bundle-Description>
        <Export-Package>${bundle.export.package}</Export-Package>
        <Import-Package>${bundle.import.package}</Import-Package>
        <Service-Component>OSGI-INF/component.xml</Service-Component>
    </instructions>
</configuration>

Notice that a Service-Component entry was added and that the Bundle-Activator was removed.

The last thing we need to do is to actually create the service component file in resources/OSGI-INF/component.xml:
<?xml version="1.0" encoding="UTF-8"?>
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.1.0" 
               name="gr.atc.aniketos.demos.converter.impl">
   <implementation class="gr.atc.aniketos.demos.converter.impl.ConverterImpl"/>
   <service>
      <provide interface="gr.atc.aniketos.demos.converter.Converter"/>
   </service>
</scr:component>

How this file works should be self-explanatory: we define which service we provide and which class implements it.


Converter Client

The service client follows a similar approach to the service: the Activator is removed and the pom.xml is modified to add the Service-Component entry. The component.xml looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.1.0" 
               name="gr.atc.aniketos.demos.converter.client">
   <implementation class="gr.atc.aniketos.demos.converter.client.ConverterClient"/>
   <reference name="Converter" policy="static" cardinality="1..1" 
              interface="gr.atc.aniketos.demos.converter.Converter" 
              bind="setService" unbind="unsetService"/>
</scr:component>

In this file we define which service we want to look up. We search for services by interface, not by concrete implementation. We also define which methods to call when the service is set and unset by the DS runtime. These methods are implemented by our class:
public class ConverterClient {

    private ConverterDialog converterDialog;

    private Converter service;

    protected void activate(ComponentContext context) {
        System.out.println("*** Activating Client");
    }

    protected void deactivate(ComponentContext context) {
        System.out.println("*** Deactivating Client");

        service = null;
        if (converterDialog != null) {
            java.awt.EventQueue.invokeLater(new Runnable() {
                public void run() {
                    converterDialog.setVisible(false);
                    converterDialog.dispose();
                    converterDialog = null;
                }
            });
        }
    }

    public synchronized void setService(Converter _service) {
        System.out.println("Converter Service was set. !");
        this.service = _service;

        converterDialog = new ConverterDialog(new ConverterDialog.ConverterDialogListener() {
            @Override
            public double onCelciusToFahrenheit(double celcius) {
                double fahrenheit = service.toFahrenheit(celcius);
                System.out.println("Celcius " + celcius + " => Fahrenheit " + fahrenheit);
                return fahrenheit;
            }

            @Override
            public double onFahrenheitToCelcius(double fahrenheit) {
                double celcius = service.toCelcius(fahrenheit);
                System.out.println("Fahrenheit " + fahrenheit + " => Celcius " + celcius);
                return celcius;
            }
        });

        java.awt.EventQueue.invokeLater(new Runnable() {
            public void run() {
                converterDialog.setVisible(true);
            }
        });
    }

    public synchronized void unsetService(Converter service) {
        System.out.println("Converter Service was unset.");
        if (this.service == service) {
            this.service = null;
        }
    }
}

Deploying in Karaf

We are now ready to deploy in Karaf. Although a feature is provided, I would like to install the bundles one by one to demonstrate a subtle point:
karaf@root> install -s mvn:org.apache.felix/org.apache.felix.scr/1.6.0
Bundle ID: 50
karaf@root> install -s mvn:gr.atc.aniketos.demos.converterservice/converter/1.0.0
Bundle ID: 51
karaf@root> install mvn:gr.atc.aniketos.demos.converterservice/converter-impl/1.0.0
Bundle ID: 52
karaf@root> install mvn:gr.atc.aniketos.demos.converterservice/converter-client/1.0.0
Bundle ID: 53

The org.apache.felix.scr dependency is the Felix implementation of the Service Component Runtime. If you are testing this in Equinox, you need the following dependencies:

org.eclipse.osgi.services
org.eclipse.equinox.util
org.eclipse.equinox.ds

I provide links to these bundles from the Spring Enterprise Bundle Repository. There is also a commented out section with them at the features.xml file.

Now let's start the service and the client:
karaf@root> start 52
karaf@root> start 53
*** Activating Service
Converter Service was set. !
*** Activating Client
karaf@root> stop 53
*** Deactivating Client
Converter Service was unset.
*** Deactivating Service

Notice that when we start the service bundle nothing happens; the activate method isn't called. This method is not a replacement for the Activator's start method and is not called when the bundle is started. It is called by the DS runtime only when it is needed, which happens when a client holding a reference to this service is started. So everything happens when the client starts. Notice the sequence of the printed messages: first the service's activate method is called, then the client's set and activate methods follow. When we stop the client, both the client and the service are deactivated. Of course, the service would not be deactivated if another bundle still held a reference to it.


Friday, 26 October 2012

OSGi Remote Services

Remote Services is a feature introduced in version 4.2 of the OSGi specification. It provides a way to access OSGi services from remote OSGi containers. CXF DOSGi is an open source implementation of Remote Services.

An example of running an OSGi service in one container and accessing it from another is included in the DOSGi samples. I would like to present another example here, based on Apache Karaf features. The source code of this example is available on GitHub.

The source code consists of the following projects:
  • converter (interface of a conversion service)
  • converter-impl (implementation of conversion service)
  • converter-client (client of conversion service)
  • converter-server-feature (maven host project for converter server feature)
  • converter-client-feature (maven host project for converter client feature)


Temperature Conversion Service

The example implements a temperature conversion feature. The interface of the service is as follows:
public interface Converter {
    double toCelcius(double fahrenheit);
    double toFahrenheit(double celcius);
}

And the implementation:
public class ConverterImpl implements Converter {
    
    @Override
    public double toCelcius(double fahrenheit) {
        double celcius = ((fahrenheit - 32) * 5) / 9;
        System.out.println("Fahrenheit " + fahrenheit + 
                           " => Celcius " + celcius);
        return celcius;
    }
    
    @Override
    public double toFahrenheit(double celcius) {
        double fahrenheit = 32 + ((celcius*9)/5);
        System.out.println("Celcius " + celcius + 
                           " => Fahrenheit " + fahrenheit);
        return fahrenheit;
    }
}
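As a quick sanity check of the two formulas above, here is a small standalone snippet (it simply mirrors the implementation, outside of OSGi):

```java
public class ConverterCheck {

    // Same formulas as in ConverterImpl above
    static double toCelcius(double fahrenheit) {
        return ((fahrenheit - 32) * 5) / 9;
    }

    static double toFahrenheit(double celcius) {
        return 32 + ((celcius * 9) / 5);
    }

    public static void main(String[] args) {
        System.out.println(toCelcius(212));  // prints 100.0 (boiling point)
        System.out.println(toFahrenheit(0)); // prints 32.0 (freezing point)
    }
}
```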

The implementation bundle requires an Activator that registers the service:
public class Activator implements BundleActivator {
    private ServiceRegistration registration;

    public void start(BundleContext bc) throws Exception {
        Dictionary props = new Hashtable();

        props.put("service.exported.interfaces", "*");
        props.put("service.exported.configs", "org.apache.cxf.ws");
        props.put("org.apache.cxf.ws.address", 
                  "http://localhost:9090/converter");
        
        registration = bc.registerService(Converter.class.getName(), 
                                          new ConverterImpl(), props);
    }

    public void stop(BundleContext bc) throws Exception {
        registration.unregister();
    }
}

This is how a simple OSGi service is registered; the only extra thing is the CXF DOSGi properties. If the CXF DOSGi bundle (or bundles, since both a single-bundle and a multi-bundle distribution are available for download) is not present, these properties are simply ignored and the service can still be discovered by bundles in the same container. If, however, the CXF Distributed OSGi component is installed, the above properties have the effect of exporting a SOAP Web Service at the http://localhost:9090/converter endpoint. The generated WSDL file is available at http://localhost:9090/converter?wsdl.

Note: It is very easy to export the OSGi service as a RESTful JAX-RS service as well. The only thing needed is to replace 'ws' with 'rs' in the above property values!
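For instance, the registration properties from the Activator above would then look like this (a sketch; only the configuration name and the address key change):

```java
import java.util.Dictionary;
import java.util.Hashtable;

public class RestExportProps {

    // Sketch: the same export properties with 'ws' replaced by 'rs',
    // which tells CXF DOSGi to expose the service as JAX-RS instead of SOAP.
    static Dictionary<String, Object> restProps() {
        Dictionary<String, Object> props = new Hashtable<String, Object>();
        props.put("service.exported.interfaces", "*");
        props.put("service.exported.configs", "org.apache.cxf.rs");
        props.put("org.apache.cxf.rs.address", "http://localhost:9090/converter");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(restProps().get("service.exported.configs"));
    }
}
```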

If you have downloaded the source code from GitHub, it will be very easy now to test the service implementation. Build the code with maven, start Karaf and type the following in Karaf's command prompt:
features:addUrl mvn:gr.atc.aniketos.demos.converterservice/converter-server-feature/1.0.0/xml
features:install converter-server-feature

The above commands install the interface and implementation bundles, as well as the single-bundle distribution of CXF DOSGi. (The features.xml points to a file-system location for the CXF DOSGi bundle; you need to download the file and modify features.xml to point to the downloaded jar in your file system, before building the projects and running the above commands in Karaf.)

After running the commands, allow a minute or so for the bundles to install, then point your browser to http://localhost:9090/converter?wsdl. The WSDL file of the conversion service should appear. You can treat this service as a regular SOAP Web Service and consume it through any SOAP client. What is more interesting, however, is to access it as a remote OSGi service from a bundle running in a different container.


Service Client

The service client project uses an Activator that tries to discover a Converter service implementation using a ServiceTracker:
public void start(final BundleContext bundleContext) throws Exception {
    tracker = new ServiceTracker(bundleContext, Converter.class.getName(), null) {
        @Override
        public Object addingService(ServiceReference reference) {
            Object result = super.addingService(reference);

            // Do something with the service
            useService(bundleContext, reference);

            return result;
        }
    };
    tracker.open();
}

Inside the useService method a simple JFrame dialog is started, which provides a basic interface for entering values for conversion.



There isn't a single line of Java code in the client bundle that relates to DOSGi. You can treat the client bundle as a regular OSGi bundle. You will, of course, need to install it in a container where a conversion service implementation is also installed. As soon as a service is discovered, the JFrame dialog appears and you can start doing conversions. Type a temperature in one of the two text fields and click "Invoke". The result will appear in the command prompt.

Note: This is a very basic example. Things like the service going away aren't handled.

In order to enable the discovery of a remote service, a remote-services.xml file is needed. This file must be present at OSGI-INF/remote-service/remote-services.xml inside the client bundle. Its contents are as follows:
<?xml version="1.0" encoding="UTF-8"?>
<service-descriptions xmlns="http://www.osgi.org/xmlns/sd/v1.0.0">
  <service-description>
    <provide interface="gr.atc.aniketos.demos.converter.Converter"></provide>
    <property name="service.exported.interfaces">*</property>
    <property name="service.exported.configs">org.apache.cxf.ws</property>
    <property name="org.apache.cxf.ws.address">http://localhost:9090/converter</property>
  </service-description>
</service-descriptions>

What we define here is the interface of the service we want to discover and the endpoint at which to look it up. (We also define whether it is a SOAP or a RESTful service.)

We are now ready to test the client bundle in a different container. Build the projects (changing the location of the CXF DOSGi single-bundle distribution jar in converter-client-feature\src\main\resources\features.xml), start a second instance of Karaf and type the following:
features:addUrl mvn:gr.atc.aniketos.demos.converterservice/converter-client-feature/1.0.0/xml
features:install converter-client-feature

Note: In order to run two instances of Karaf on the same machine, you need to make some changes in the etc/org.apache.karaf.management.cfg file. Two port numbers for RMI connections are defined in this file; make sure that the two Karaf instances use different port numbers.
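For reference, the relevant entries look roughly like this (property names as found in a stock Karaf 2.x etc/org.apache.karaf.management.cfg; the values shown here are just an example for the second instance):

```properties
# etc/org.apache.karaf.management.cfg (second instance: change both ports)
rmiRegistryPort = 1100
rmiServerPort = 44445
```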

After running the above commands, wait for some seconds for the bundles to install and the remote service to be discovered. Then the conversion dialog will appear. Type a temperature and click "Invoke". The result of the conversion should appear in the command prompt of both instances of Karaf.

You can of course use a container other than Karaf for either the service or the client. In that case, however, the convenient features mechanism cannot be used, and you will need to find another way to install the bundles. If you install the service on a different machine than the client, you need to replace localhost in the org.apache.cxf.ws.address property of remote-services.xml with the correct address.

Friday, 12 October 2012

Using WRAP protocol to install bundles in Apache Karaf


The WRAP protocol provides a convenient way to install bundles in Apache Karaf in many situations. I will try to describe it here with some real examples.

First let us try to install ecj-3.5.1.jar. This is the Eclipse Compiler for Java bundle and can be downloaded from a Maven repository.

Start Karaf and type:


karaf@root> install http://repo1.maven.org/maven2/org/eclipse/jdt/core/compiler/ecj/3.5.1/ecj-3.5.1.jar
Bundle ID: 162
karaf@root> start 162
karaf@root> list
START LEVEL 100 , List Threshold: 50
   ID   State         Blueprint      Level  Name
[ 162] [Active     ] [            ] [   80] Eclipse Compiler for Java (3.3.0)

The bundle started OK. Now let us uninstall it and try installing it again by copying it into the deploy folder. After doing so, we check whether the new jar was properly installed:


karaf@root> list
START LEVEL 100 , List Threshold: 50
   ID   State         Blueprint      Level  Name

It seems that the bundle didn't install! No messages appeared either. To see what happened, we need to look at the Karaf log located at data/log/karaf.log:

2012-10-10 13:10:04,423 | ERROR | af-server/deploy | fileinstall                      | 6 - org.apache.felix.fileinstall - 3.2.4 | Failed to install artifact: C:\Tools\Aniketos\apache-karaf-server\deploy\ecj-3.5.1.jar
org.osgi.framework.BundleException: The bundle file:/C:/Tools/Aniketos/apache-karaf-server/deploy/ecj-3.5.1.jar does not have a META-INF/MANIFEST.MF! Make sure, META-INF and MANIFEST.MF are the first 2 entries in your JAR!

Now we can understand what happened: the jar file is malformed. A jar file isn't simply a zip file renamed with a .jar extension; it is also specified that both a META-INF and a META-INF/MANIFEST.MF entry must be present and must in fact be the first two entries in the JAR. This requirement is seldom mentioned, because it is (almost) never enforced. Apache Karaf, however, does respect it and doesn't allow the installation of jar files that break this rule. More precisely, it respects the rule when either a features.xml or the deploy folder is used, and ignores it when the bundle is installed from the command prompt. In the case of a features.xml, an error message in red, similar to the message in the log, appears when you try to install such a file. In the case of installation through the deploy folder, the failure is silent.

Note: The Apache Karaf log file (available at data/log/karaf.log) should be checked for exceptions. Not all errors make it to the command prompt.

One can verify that there is indeed a problem with the specific jar, by using the jar tool that is available in the JDK:

jar -tf ecj-3.5.1.jar > entries.txt

The first lines in the entries.txt file are the following:

META-INF/
org/
org/eclipse/
org/eclipse/jdt/
org/eclipse/jdt/core/
org/eclipse/jdt/core/compiler/
org/eclipse/jdt/core/compiler/batch/
org/eclipse/jdt/internal/
org/eclipse/jdt/internal/antadapter/
org/eclipse/jdt/internal/compiler/
org/eclipse/jdt/internal/compiler/apt/
org/eclipse/jdt/internal/compiler/apt/dispatch/
org/eclipse/jdt/internal/compiler/apt/model/
org/eclipse/jdt/internal/compiler/apt/util/
org/eclipse/jdt/internal/compiler/ast/
org/eclipse/jdt/internal/compiler/batch/
org/eclipse/jdt/internal/compiler/classfmt/
org/eclipse/jdt/internal/compiler/codegen/
org/eclipse/jdt/internal/compiler/env/
org/eclipse/jdt/internal/compiler/flow/
org/eclipse/jdt/internal/compiler/impl/
org/eclipse/jdt/internal/compiler/lookup/
org/eclipse/jdt/internal/compiler/parser/
org/eclipse/jdt/internal/compiler/parser/diagnose/
org/eclipse/jdt/internal/compiler/problem/
org/eclipse/jdt/internal/compiler/util/
META-INF/MANIFEST.MF

Indeed, META-INF/MANIFEST.MF isn't among the first two entries. I don't know how this happened in a jar file uploaded to an official Maven repository. Perhaps the jar file was "hand-edited", i.e. someone used a zip utility to alter its contents in some way. This is another reason why this practice should be avoided.
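The check can also be scripted in a few lines of Java. The sketch below builds a tiny jar in memory with the JDK's own JarOutputStream, which always writes META-INF/MANIFEST.MF as the very first entry, and then reads back the raw entry order:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.jar.Attributes;
import java.util.jar.JarOutputStream;
import java.util.jar.Manifest;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

public class JarOrderCheck {

    // Builds a tiny jar in memory and returns its raw entry order.
    static List<String> entryOrder() throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        Manifest manifest = new Manifest();
        manifest.getMainAttributes().put(Attributes.Name.MANIFEST_VERSION, "1.0");

        // JarOutputStream writes META-INF/MANIFEST.MF before any other entry
        try (JarOutputStream jar = new JarOutputStream(buf, manifest)) {
            jar.putNextEntry(new ZipEntry("org/example/Foo.class"));
            jar.closeEntry();
        }

        List<String> names = new ArrayList<String>();
        // ZipInputStream (unlike JarInputStream) doesn't hide the manifest entry
        try (ZipInputStream zin = new ZipInputStream(new ByteArrayInputStream(buf.toByteArray()))) {
            ZipEntry entry;
            while ((entry = zin.getNextEntry()) != null) {
                names.add(entry.getName());
            }
        }
        return names;
    }

    public static void main(String[] args) throws IOException {
        for (String name : entryOrder()) {
            System.out.println(name);
        }
        // prints:
        // META-INF/MANIFEST.MF
        // org/example/Foo.class
    }
}
```

Running the same reading loop against a file such as ecj-3.5.1.jar would show the problematic order reported above.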

Note: If you use a zip utility, like 7-Zip, to modify the contents of a JAR file, the result won't install in Karaf.

What can we do in such cases? One solution is to re-jar the file so that it is properly formed; this can be achieved with the jar tool. But let's see what Karaf has to offer. Create the following features.xml file:

<features>
  <feature name="wrap_test" version="1.0">
    <bundle>http://repo1.maven.org/maven2/org/eclipse/jdt/core/compiler/ecj/3.5.1/ecj-3.5.1.jar</bundle>
   </feature>
</features>

Try to install the feature; an error appears. Now let's use the wrap protocol:


<features>
  <feature name="wrap_test" version="1.0">
    <bundle>wrap:http://repo1.maven.org/maven2/org/eclipse/jdt/core/compiler/ecj/3.5.1/ecj-3.5.1.jar</bundle>
   </feature>
</features>

We need to refresh the URL and install again. This situation is depicted in the figure below:



The wrapping did the job for us: Karaf downloaded the file and wrapped it so that it is a proper OSGi jar file. This means, of course, that it is a well-formed jar and that an appropriate OSGi manifest has been added.

To demonstrate the addition of an OSGi manifest, let us try to install activemq-protobuf-1.1.jar. This is the ActiveMQ Protocol Buffers Implementation and Compiler jar file, and it is also available in a standard Maven repository. We add it to the features.xml and refresh the URL. (After refreshing a URL, it is always good to issue a features:info command to verify that the changes have taken effect.) Now the installation of the feature fails with the following message:


Error executing command: Jar is not a bundle, no Bundle-SymbolicName http://repo1.maven.org/maven2/org/apache/activemq/protobuf/activemq-protobuf/1.1/activemq-protobuf-1.1.jar

Again, if we use the wrap protocol, refresh the URL and install, the jar is successfully deployed.



Note: Apache Karaf 2.2.9 was used for the above examples. Both of the above cases were discovered in a real-life situation (working on the Aniketos project), while trying to migrate an application from Equinox to Karaf. Equinox didn't complain about the above two jar files; Karaf, on the other hand, required them to be specially handled.

Wednesday, 10 October 2012

Easy installation of bundles in Apache Karaf

Apache Karaf provides many convenient ways to install bundles:
  • Use of the install command in Karaf's command prompt
  • Copying of bundles to the deploy directory
  • Use of Apache Karaf features
Of these three alternatives, Karaf features seems to be the most advantageous. It allows the installation and uninstallation of many bundles in one go; no bundles are installed if there is a problem with even a single one, which helps you keep a clean container and spot errors more easily. It supports the mvn, http and file protocols, making it easier to install bundles from different locations. Last, but not least, it can also install non-OSGi jars (jars without an OSGi manifest) with the aid of the wrap protocol.

In a previous post about Spring JDBC in an OSGi environment, the usage of Karaf features with the Maven protocol was presented. A features.xml file pointing to bundles in Maven repositories was created; the features.xml file was hosted by a Maven project, making it possible to add a features Maven URL in Karaf.

However, bundles are not always available in Maven repositories. Sometimes all you have is a zip file with all the necessary jars, or some HTTP URLs. This is especially true in the Aniketos project, where there are contributions from many partners using both Maven and Eclipse RCP for development, and some legacy code is also involved. In these cases Apache Karaf features are still useful.

This is an example of a features.xml file that points to jar files in the local file system:

<features>
  <feature name='greeter_server' version='1.0'>
    <bundle>file:///c:/Tools/Aniketos/bundles/common/org.osgi.compendium-4.2.0.jar</bundle>
    <bundle>file:///c:/Tools/Aniketos/bundles/greeter_sample/cxf-dosgi-ri-singlebundle-distribution-1.3.1.jar</bundle>
    <bundle>file:///c:/Tools/Aniketos/bundles/greeter_sample/cxf-dosgi-ri-samples-greeter-interface-1.3.1.jar</bundle>
    <bundle>file:///c:/Tools/Aniketos/bundles/greeter_sample/cxf-dosgi-ri-samples-greeter-client-1.3.1.jar</bundle>
  </feature>
</features>

These can easily be installed in one go:


karaf@root> features:addUrl file:///path/to/features.xml
karaf@root> features:install greeter_server

If you modify one of the jar files and want to re-install it, you can use:


karaf@root> features:uninstall greeter_server
karaf@root> features:install greeter_server
   
If you want to remove or add a jar file in the features.xml file, you can refresh the URL:


karaf@root> features:refreshUrl file:///path/to/features.xml
karaf@root> features:install greeter_server


In another post I am going to explain how the wrap protocol supported by Karaf can help in certain situations.


 

Tuesday, 9 October 2012

Spring JDBC in OSGi

Spring JDBC provides an excellent alternative to the dreadful Java JDBC API for accessing a database. However, using it in an OSGi container isn't so easy, as there are some pitfalls to avoid. I will try to describe a solution here.


Test Database

First let's create a test database. We are going to use MySQL. Open a MySQL command prompt and type the following:

CREATE DATABASE `repository`;

USE `repository`;

CREATE TABLE `items` (
  `ItemId` int(11) NOT NULL AUTO_INCREMENT,
  `Name` varchar(255) NOT NULL,
  PRIMARY KEY (`ItemId`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8;


insert  into `items`(`ItemId`,`Name`) values (1,'Name 1'),(2,'Name 2'),(3,'Name 3');

Spring-JDBC bundle

Now we are going to create a test bundle that accesses the data stored in the above table. The following pom.xml contains the appropriate configuration and the required dependencies. Notice that we need the spring-core, spring-jdbc and spring-tx (transactions) bundles; these are the necessary compile-time dependencies. There are more runtime dependencies that we need to install in the OSGi container, and the exact list will be presented later.


<project xmlns="http://maven.apache.org/POM/4.0.0" 
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  
    <modelVersion>4.0.0</modelVersion>
    <groupId>aniketos</groupId>
    <artifactId>springjdbc-test-impl</artifactId>
    <packaging>bundle</packaging>
    <version>1.0.0</version>
    <name>springjdbc-test</name>
    <url>http://maven.apache.org</url>
  
    <properties>
        <spring.version>3.1.1.RELEASE</spring.version>

        <bundle.require.bundle>org.springframework.jdbc</bundle.require.bundle>
        <bundle.import.package>
            javax.sql,
            org.springframework.core;version="[2.5.6,3.1.2)",
            org.springframework.jdbc;version="[2.5.6,3.1.2)",
            org.springframework.transaction;version="[2.5.6,3.1.2)"
        </bundle.import.package>
        <bundle.export.package></bundle.export.package>
        <bundle.dynamicimport.package>*</bundle.dynamicimport.package>
    </properties>
  
    <build>    
        <plugins>
            <plugin>
                <groupId>org.apache.felix</groupId>
                <artifactId>maven-bundle-plugin</artifactId>
                <version>2.1.0</version>
                <extensions>true</extensions>
                <configuration>
                    <instructions>
                        <Bundle-SymbolicName>${project.artifactId}</Bundle-SymbolicName>
                        <Bundle-Activator>gr.atc.aniketos.springjdbc.MainActivator</Bundle-Activator>
                        <Bundle-Name>${project.artifactId}</Bundle-Name>
                        <Bundle-Description>A bundle that demonstrates the use of Spring JDBC in an OSGi environment</Bundle-Description>
                        <Require-Bundle>${bundle.require.bundle}</Require-Bundle>
                        <Import-Package>${bundle.import.package}</Import-Package>
                        <Export-Package>${bundle.export.package}</Export-Package>
                        <DynamicImport-Package>${bundle.dynamicimport.package}</DynamicImport-Package>
                    </instructions>
                </configuration>
            </plugin>
            
            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.3.2</version>
                <configuration>
                    <source>1.6</source>
                    <target>1.6</target>
                </configuration>
            </plugin>            
        </plugins>
    </build>
  
    <dependencies>
        <dependency>
            <groupId>org.osgi</groupId>
            <artifactId>org.osgi.core</artifactId>
            <version>4.3.0</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-tx</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-jdbc</artifactId>
            <version>${spring.version}</version>
        </dependency>
    </dependencies>
  
</project>

The Java code is simple, as we only want to demonstrate that access to the database is successful.


import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Collection;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.RowMapper;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

public class MainActivator implements BundleActivator {

    public void start(BundleContext context) throws Exception {
        System.out.println("Started...");

        String databaseUrl = "jdbc:mysql://localhost:3306/repository";
        String databaseUser = "root";
        String databasePassword = "";

        // Load the MySQL driver explicitly, so that a missing driver
        // fails fast with a meaningful message.
        try {
            Class.forName("com.mysql.jdbc.Driver").newInstance();
        } catch (Exception ex) {
            System.out.println(ex.getMessage());
            throw new IllegalStateException(
                    "Could not load JDBC driver class", ex);
        }

        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setUrl(databaseUrl);
        dataSource.setUsername(databaseUser);
        dataSource.setPassword(databasePassword);

        JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);

        Collection<String> names = jdbcTemplate.query(
                "SELECT `items`.`Name` FROM `items`",
                new Object[0], new ServiceNameMapper());

        for (String name : names) {
            System.out.println(name);
        }
    }

    public void stop(BundleContext context) throws Exception {
    }

    private final class ServiceNameMapper implements RowMapper<String> {
        public String mapRow(ResultSet rs, int rowNum) throws SQLException {
            return rs.getString("Name");
        }
    }
}


mysql-springjdbc-fragment bundle

In order to test the above bundle you of course need to install a MySQL driver in your OSGi container. However, this isn't enough: Spring-JDBC and the driver must be loaded by the same class loader, but since they live in different bundles, each has its own. The solution is a fragment bundle. This can be an almost empty bundle whose only job is to attach the MySQL driver packages to the Spring-JDBC bundle, so that the two share a class loader. This pom.xml creates the fragment bundle:


<build>  
  <plugins>
    <plugin>
      <groupId>org.apache.felix</groupId>
      <artifactId>maven-bundle-plugin</artifactId>
      <version>2.1.0</version>
      <extensions>true</extensions>
      <configuration>
        <instructions>
          <Bundle-SymbolicName>${project.artifactId}</Bundle-SymbolicName>
          <Fragment-Host>org.springframework.jdbc</Fragment-Host>
          <Import-Package>com.mysql.jdbc</Import-Package>
        </instructions>
      </configuration>        
    </plugin>
  </plugins>
</build>
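The effect of these instructions is that the generated MANIFEST.MF of the fragment contains roughly the following headers (version ranges added by the plugin are omitted here):

```
Bundle-SymbolicName: mysql-springjdbc-fragment
Fragment-Host: org.springframework.jdbc
Import-Package: com.mysql.jdbc
```

The Fragment-Host header is what tells the framework to attach this bundle to the spring-jdbc bundle's class loader.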

Warning: Maven (the bnd tool internally) won't build a completely empty jar. Just place a text file in the META-INF folder to work around this.
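For example, a placeholder resource can be created like this (the file name and its contents are arbitrary, they only need to give bnd something to package):

```shell
# Create a dummy resource so the fragment jar is not completely empty
mkdir -p src/main/resources/META-INF
echo "placeholder" > src/main/resources/META-INF/placeholder.txt
```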


Running everything in Apache Karaf

Now let's see how we can easily install everything in Apache Karaf. We are going to make use of Apache Karaf features. For this to work, Maven needs to be installed and the M2_HOME variable properly set.

Karaf features allow you to group a bundle and its dependencies and install everything in one go. All bundles are listed in a features.xml file. Since Karaf supports the mvn protocol, we can use mvn URLs, which resolve against both remote and local repositories.


<features>
  <feature name="springjdbc_test" version="1.0">    
    <bundle>mvn:org.apache.commons/com.springsource.org.apache.commons.logging/1.1.1</bundle>
    <bundle>mvn:org.springframework/spring-core/3.1.1.RELEASE</bundle>
    <bundle>mvn:org.springframework/spring-asm/3.1.1.RELEASE</bundle>
    <bundle>mvn:org.springframework/spring-beans/3.1.1.RELEASE</bundle>
    <bundle>mvn:org.springframework/spring-context/3.1.1.RELEASE</bundle>
    <bundle>mvn:org.springframework/spring-tx/3.1.1.RELEASE</bundle>
    <bundle>mvn:org.springframework/spring-jdbc/3.1.1.RELEASE</bundle>
    
    <bundle>mvn:aniketos/mysql-springjdbc-fragment/1.0.0</bundle>
    <bundle>mvn:com.mysql.jdbc/com.springsource.com.mysql.jdbc/5.1.6</bundle>    
    <bundle>mvn:aniketos/springjdbc-test-impl/1.0.0</bundle>    
  </feature>
</features>
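For reference, the mvn URLs used above follow the Pax URL syntax, where type and classifier are optional:

```
mvn:groupId/artifactId/version[/type[/classifier]]
```

This is why the feature artifact later in this post is addressed with a trailing /xml, which selects the attached XML artifact instead of the default jar.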

You can easily install the springjdbc_test feature by copying the above features.xml file in your file system and then typing in Apache Karaf:


karaf@root> features:addUrl file:///path/to/features.xml
karaf@root> features:install springjdbc_test

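To check that everything went well, you can list the installed features and bundles from the Karaf console (the exact output format depends on your Karaf version):

```
karaf@root> features:list | grep springjdbc_test
karaf@root> osgi:list
```

All bundles of the feature should show up in the Active state.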
However we can also create a Maven project to hold the features.xml file. The features.xml needs to go under src/main/resources. The pom.xml for building the project is the following:


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" 
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
     xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

  <modelVersion>4.0.0</modelVersion>

  <groupId>aniketos</groupId>
  <artifactId>springjdbc-feature</artifactId>
  <version>1.0.0</version>
  <packaging>pom</packaging>
  <name>springjdbc-feature</name>
  
  <build>
    <resources>
      <resource>
        <directory>src/main/resources</directory>
        <filtering>true</filtering>
      </resource>
    </resources>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-resources-plugin</artifactId>
        <executions>
          <execution>
            <id>filter</id>
            <phase>generate-resources</phase>
            <goals>
              <goal>resources</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>build-helper-maven-plugin</artifactId>
        <executions>
          <execution>
            <id>attach-artifacts</id>
            <phase>package</phase>
            <goals>
              <goal>attach-artifact</goal>
            </goals>
            <configuration>
              <artifacts>
                <artifact>
                  <file>target/classes/features.xml</file>
                  <type>xml</type>
                </artifact>
              </artifacts>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>

</project>

Now the feature can be installed by adding a URL with the mvn protocol (after running mvn install on the feature project, so that the XML artifact is available in the local repository):


karaf@root> features:addUrl mvn:aniketos/springjdbc-feature/1.0.0/xml
karaf@root> features:install springjdbc_test

Summary

In this post we presented how Spring-JDBC can be used in an OSGi environment. A fragment bundle is needed to attach the SQL driver to the Spring-JDBC bundle. The client bundle only needs to import the spring-jdbc and javax.sql packages; however, a dynamic import is also necessary.

We also showed how the Apache Karaf features can be used to easily install a bundle and its dependencies.

The code of this example is available in GitHub.