Using Hazelcast as an OSGi blueprint service

Posted: May 4th, 2014

During the last few years I have been involved in a very innovative product to manage security policies in networks. The premises for the product were that it should be scalable and have high-availability capabilities. Besides, the product should be modular, enabling extensions through plugins. For the second requirement we decided to use OSGi with Blueprint (specifically, Gemini Blueprint with Spring) and Eclipse Virgo with Jetty as the server. For the HA requirement, we finally decided to use Hazelcast to share data between nodes. Hazelcast is a powerful technology that provides an easy API, and with almost zero configuration you can distribute data structures, such as user sessions, across a cluster.

On the other hand, the benefits of OSGi are well-known, and its programming model improves the decoupling of components thanks to the separation between interface and implementation in different bundles. This split allows technology and framework abstraction through a service-oriented paradigm. In our case, Hazelcast is used as a custom implementation of a CacheManagerService that provides easy access to backing maps.

The CacheManagerService looks like:

public interface CacheManagerService {

        <K, V> Map<K, V> getCache(String name);
}

This simple interface provides a method to get a named map, which will be a distributed map thanks to Hazelcast. The implementation (in another bundle) looks like:

public class HazelcastCacheManagerService implements CacheManagerService {

        private HazelcastInstance instance;

        public void setInstance(HazelcastInstance instance) {
                this.instance = instance;
        }

        public HazelcastInstance getInstance() {
                return instance;
        }

        @Override
        public <K, V> Map<K, V> getCache(String name) {
                return getInstance().getMap(name);
        }
}

As the snippet shows, using Hazelcast is very easy, delegating to this framework the way the map is retrieved. You can create the Hazelcast instance in another Spring context, configuring the name, IPs, etc., and set this instance when you create this bean.
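
For reference, a minimal sketch of creating such an instance programmatically (assuming the Hazelcast 3.x API; the factory class, instance name and member IP are illustrative, not taken from the real project):

import com.hazelcast.config.Config;
import com.hazelcast.config.JoinConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class HazelcastInstanceFactory {

        public static HazelcastInstance createInstance() {
                Config config = new Config();
                config.setInstanceName("policy-cluster");

                // Join the cluster over TCP/IP instead of multicast.
                JoinConfig join = config.getNetworkConfig().getJoin();
                join.getMulticastConfig().setEnabled(false);
                join.getTcpIpConfig().setEnabled(true).addMember("192.168.1.10");

                return Hazelcast.newHazelcastInstance(config);
        }
}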

So far, the idea is quite simple: to use this service, the first step is to create the bean, expose it as an OSGi Blueprint service and consume it in another bundle by importing just the CacheManagerService API bundle. With this approach the Hazelcast implementation remains hidden, because Blueprint and Spring use Java proxies to inject the service implementation into the consumer beans.

What’s wrong with this approach?

Ok, if you have some experience with OSGi, you already know the classloading problem. Sometimes it's hard to deal with the Import-Package directive in MANIFEST.MF, especially when the classes you have to import aren't used by you directly. With Hazelcast (or any other library that relies on reflection) the problem is a bit more complicated: when Hazelcast needs to serialize objects, it needs the class definition in its classloader, so the bundle with the Hazelcast implementation should have as many Import-Package entries as the classes it needs to serialize. This strategy doesn't look good, because we lose extensibility and modularity: the Hazelcast bundle needs to import each class we want to put in a distributed map, and the MANIFEST.MF of this bundle has to change every time we add a new bundle that consumes the CacheManagerService. This sounds really bad :(

Solution

Googling a little bit, it seems this is a common problem when dealing with this kind of library. A solution could be to use the DynamicImport-Package directive, but with it you lose control over the versions of the packages you import. So, taking advantage of the fact that Hazelcast allows us to control its classloader through its instance, we can extend this classloader at will. To do that, we inject a custom classloader into the Hazelcast instance, and then we inject this instance into HazelcastCacheManagerService. The custom classloader is a CompositeClassLoader that aggregates the classloaders of each bundle that consumes the cache service.
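
A minimal sketch of such a loader (assuming a Set as the backing collection, which also prevents the duplicate entries mentioned at the end of this post) could be:

import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: delegates class lookups to a set of registered classloaders in turn.
public class CompositeClassLoader extends ClassLoader {

        // A concurrent Set avoids duplicates when the same bundle
        // classloader is added more than once.
        private final Set<ClassLoader> loaders =
                        Collections.newSetFromMap(new ConcurrentHashMap<ClassLoader, Boolean>());

        public void add(ClassLoader loader) {
                if (loader != null) {
                        loaders.add(loader);
                }
        }

        @Override
        protected Class<?> findClass(String name) throws ClassNotFoundException {
                for (ClassLoader loader : loaders) {
                        try {
                                return loader.loadClass(name);
                        } catch (ClassNotFoundException e) {
                                // Not visible from this delegate, try the next one.
                        }
                }
                throw new ClassNotFoundException(name);
        }
}

The Hazelcast Config can be pointed at this loader with config.setClassLoader(new CompositeClassLoader()) before the instance is created, which is why the service below can retrieve it again through getConfig().getClassLoader().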

Finally, we need to control how the different classloaders are added to this CompositeClassLoader. A trick that Gemini Blueprint offers is to access the bundle context of the invoking bundle and get its classloader:

LocalBundleContext.getInvokerBundleContext().getBundle().adapt(BundleWiring.class).getClassLoader();

Once this has been done, Hazelcast can serialize every class the consumer bundle can access. The code of HazelcastCacheManagerService looks like:

public class HazelcastCacheManagerService implements CacheManagerService {

        // ... previous code

        public void init() {
                addInvokerClassLoader(getClass().getClassLoader());
        }

        @Override
        public <K, V> Map<K, V> getCache(String name) {
                addInvokerClassLoader(getInvokerClassLoader());
                return getInstance().getMap(name);
        }

        protected ClassLoader getInvokerClassLoader() {
                return LocalBundleContext.getInvokerBundleContext().getBundle().adapt(BundleWiring.class).getClassLoader();
        }

        protected void addInvokerClassLoader(ClassLoader cl) {
                ((CompositeClassLoader) getInstance().getConfig().getClassLoader()).add(cl);
        }

        // ...
}

The init method adds the bundle's own classloader to the CompositeClassLoader and should be declared as the init-method in the Spring configuration (or annotated with @PostConstruct). Then, on each getCache invocation, the classloader of the client bundle is added to the CompositeClassLoader. To prevent duplicated classloader entries, we use a Set instead of a List as the backing collection.



How to avoid if/else statements

Posted: July 24th, 2011

When I’m programming I always try to follow good practices and dedicate some time to reviewing my own code. I do this for two reasons: to improve as a programmer, and to make life easier for the person who will deal with my code in the future. Techniques like code review help an organization (and its developers) improve the quality of its applications; I once heard that if somebody knows their code will be read by someone else, their motivation to improve it grows. I really agree with this assertion.

Conditional statements

One way to improve our code is to clean up the parts that are hard to cope with when requirements change. Nested conditional statements are usually one of those parts.

Conditional statements are essential in any programming language and there is no problem with using them, but code with many nested conditionals can end up very dirty and very long.

if ("value1".equals(value)) {
  doSomething();
} else if ("value2".equals(value)) {
  doSomethingButDifferent();
} else if ("value3".equals(value)) }
  doWhatYouWant();
} else {
  doNothing();
}

The code above is typical: performing an action depending on the value of a variable. At this point you may think of avoiding this if/else with a switch/case (Java 7 already supports it with strings), but that would just move the problem around. Even using an enum with the possible values is not enough, although the code would be cleaner.

Next, I will propose some techniques to refactor your conditional statements:

Avoid instanceof clause

instanceof is a Java keyword that should be used only occasionally. Code like this shows the abuse of instanceof:

if (obj instanceof Technical) {
  doSomethingWithTechnical();
} else if (obj instanceof Chief) {
  doSomethingWithChief();
} else if (obj instanceof Secretary) {
  doSomethingWithSecretary();
}

// ...

void doSomethingWithTechnical() { ... }
void doSomethingWithChief() { ... }
void doSomethingWithSecretary() { ... }

As you can see, this code will grow whenever we need more methods for more kinds of employees (each new employee type needs another if/else branch and its concrete doSomething() implementation). This code can be fixed using OOP: in this case, inheritance and polymorphism help us avoid the if/else statements. We just need an abstract superclass with the signature of the method we want to call for each subtype.

abstract class Employee {
  abstract void doSomething();
}

class Technical extends Employee {
  void doSomething() {
    // put here the specific code for technical employee
  }
}

class Chief extends Employee {
  void doSomething() {
    // put here the specific code for chief employee
  }
}

class Secretary extends Employee {
  void doSomething() {
    // put here the specific code for secretary employee
  }
}

// Use of superclass type instead of if/else and instanceof
Employee obj;
// ...
obj.doSomething();

Moreover, there is a design pattern named Visitor which, correctly applied, can be useful when you have a different way of doing something for each type.
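
As a rough sketch (the visitor interface below is illustrative, not part of the original example), it could look like this for the employee hierarchy:

interface EmployeeVisitor {
  void visit(Technical technical);
  void visit(Chief chief);
  void visit(Secretary secretary);
}

abstract class Employee {
  abstract void accept(EmployeeVisitor visitor);
}

class Technical extends Employee {
  void accept(EmployeeVisitor visitor) {
    visitor.visit(this); // overload resolution picks visit(Technical)
  }
}

// Chief and Secretary implement accept(...) the same way; each concrete
// EmployeeVisitor then groups one operation per employee type.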

Take advantage of the enums

Sometimes the conditional statements are involved in scenarios more complex than the instanceof example. One example was shown in the introduction of this post: performing an action depending on the value of a string. As I said, replacing these statements with a switch/case, or with an enum used as a mere container of constants, doesn’t mitigate the effect we want to correct. The solution is to use an enum that joins the execution of an action with the implementation of the criterion that says whether the action must be executed or not. This technique is really useful when the condition is not a simple “equals”.

    public enum AlgorithmExecutor {

        FIRST {
            boolean match(String condition) {
                // Put here the condition under which this first algorithm is triggered
            }

            Result execute() {
                // Put here the implementation of this first algorithm
            }
        },
        SECOND {
            boolean match(String condition) {
                // Put here the condition under which this second algorithm is triggered
            }

            Result execute() {
                // Put here the implementation of this second algorithm
            }
        },
        DEFAULT {
            boolean match(String condition) {
                return true;
            }
            Result execute() {
                // Put here the implementation of this default algorithm
            }
        };

        public static AlgorithmExecutor getAlgorithmFromCondition(String condition) {
            for (AlgorithmExecutor c : values()) {
                if(c.match(condition)) {
                    return c;
                }
            }

            return DEFAULT;
        }

        abstract boolean match(String condition);

        abstract Result execute();
    }

In this code, each literal inside the enum is forced to implement the methods match and execute. The first one returns a boolean that indicates whether the string given as parameter matches a certain condition; note that in this method you can use non-trivial comparisons. In addition, the parameter could be encapsulated in an interface to allow other kinds of objects, not only strings. The second method holds the algorithm implementation.

Finally, the code for calling the component would be:

Result res = AlgorithmExecutor.getAlgorithmFromCondition(condition).execute();

Use reflection

A particular case of refactoring to clean if/else statements is when each conditional body returns a different implementation of an interface. It’s a common idiom in factory classes. An example of this kind could be:

public static DocumentType createDocument(String contentType)
  throws UnsupportedDocumentTypeException {

  if (contentType.equals("application/pdf")) {
    return new PDFDocument();
  } else if(contentType.equals("application/msword")) {
    return new WordDocument();
  } else if(contentType.equals("image/jpeg")) {
    return new JPEGDocument();
  }
  throw new UnsupportedDocumentTypeException();
}

In this case, you can use enums one more time, linking the value of each type with the class you have to return. Then, with reflection, it’s easy to create a new instance of this class and return it. An example is described below:

    enum ContentType {

        PDF("application/pdf", PDFDocument.class),
        WORD("application/msword", WordDocument.class),
        JPEG("image/jpeg", JPEGDocument.class);

        private String value;

        private Class<? extends DocumentType> clazz;

        ContentType(String value, Class<? extends DocumentType> clazz) {
            this.value = value;
            this.clazz = clazz;
        }

        public static DocumentType getDocumentTypeFromMimetype(String value)
                throws UnsupportedDocumentTypeException, InstantiationException, IllegalAccessException {
            for(ContentType c : values()) {
                if(c.value.equals(value)) {
                    return c.clazz.newInstance();
                }
            }
            throw new UnsupportedDocumentTypeException();
        }
    }

This example looks pretty good (at least, to me :) ) because the association between the mimetype and its DocumentType implementation is encapsulated in an enum without using conditional blocks. The client of this component just has to make the following call:

DocumentType type = ContentType.getDocumentTypeFromMimetype(contentType);

Your own criteria over the design patterns

Although I think the patterns explained in this post help to clean up your code, you should be aware of whether these techniques actually fit your design. When you have a conditional block with only two or three if/else statements, the refactoring effort may be excessive in relation to the final design gain.



Effective pattern for data access with JPA

Posted: May 5th, 2011

Since Java EE 6 is out, many of the patterns and blueprints that were described for J2EE are deprecated. The main reason is that the new architecture offers components that simplify development. One of these components is the EntityManager, which can be considered an implementation of the DAO pattern. This object provides us with common functions to operate on our model in a relational database, such as saving, retrieving, updating and deleting entities; that is, the typical CRUD operations.

Where is the problem?

It all depends on how you layer your application. Much of the code you can see about data access with JPA injects the EntityManager directly into business objects or web controllers. In my opinion, this approach couples the business logic too tightly to the technology used in the persistence layer. Imagine for a moment that an entire application developed this way has serious performance problems and the responsible architect decides to move the persistence layer to a NoSQL approach: this decision would mean changing that reference in every class. As you can see, it would be an unbelievable effort.

Creating the indirection

First of all, using a facade to access JPA looks mandatory if we want to decouple the data access from the technology. Along this line, the following interface is proposed:

public interface PersistenceService<K, E> {

	E save(E entity);

	E update(E entity);

	void remove(E entity);

	E findById(K id);

	List<E> findAll();

	Long getTotalResult();
}

By using Java generics this interface provides the common operations on entities. Each class that implements this interface has to indicate the type of entity it manages, E, and the type of its identifier, K. Obviously, this class will have to implement the methods too :D

Now, we have to define abstract classes both for the entities and for the persistence services. These abstract classes help us to put all the common code in the same location, leaving only the specific code to the concrete classes that implement them.

To model entities in JPA we need classes with the @Entity annotation, but there is some code that is mandatory in each entity, for example the identifier. In addition, we use named queries for those queries that are typical in any scenario, so the entities need constants for these named queries. The abstract entity is as follows:

@MappedSuperclass
@Inheritance(strategy = InheritanceType.JOINED)
public abstract class AbstractEntity implements Serializable {

    public static final String FIND_ALL = "Entity.findAll";

    public static final String TOTAL_RESULT = "Entity.totalResult";

    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    protected Long id;

    // Getters and Setters
}

With @MappedSuperclass and @Inheritance we indicate that this class doesn’t have to be mapped to a table; the constants will be used in the named queries. Because JPA 2.0 doesn’t allow inheritance of named queries, these two queries will be defined in each specific entity.

Now, the code for the abstract persistence service. This class performs the CRUD operations through the EntityManager. The code is:

public abstract class AbstractJPAPersistenceService<K, E extends AbstractEntity> implements PersistenceService<K, E> {

    protected Class<E> entityClass;

    @PersistenceContext
    protected EntityManager em;

    @PostConstruct
    public void init() {
        ParameterizedType genericSuperclass = (ParameterizedType) getClass().getGenericSuperclass();
        this.entityClass = (Class<E>) genericSuperclass.getActualTypeArguments()[1];
    }

    @Override
    public E save(final E entity) {
        em.persist(entity);
        return entity;
    }

    @Override
    public E update(final E entity) {
        return em.merge(entity);
    }

    @Override
    public void remove(final E entity) {
        em.remove(em.merge(entity));
    }

    @Override
    public E findById(final K id) {
        return em.find(entityClass, id);
    }

    @Override
    public List<E> findAll() {
        return em.createNamedQuery(AbstractEntity.FIND_ALL).getResultList();
    }

    @Override
    public Long getTotalResult() {
        return (Long) em.createNamedQuery(AbstractEntity.TOTAL_RESULT).getSingleResult();
    }

}

With this class, each subclass just implements its specific query methods. Perhaps the most important part is the init() method, where the entity class is obtained by reflection through the generic type arguments. Since the idea is to expose the persistence services as EJBs, this method is annotated with @PostConstruct, so it is called after each subclass instance is created and its dependencies are injected. Furthermore, the common queries are referenced through the constants declared in AbstractEntity, while each query itself is defined in the specific entity.

Implementing an example

Ok, the skeleton for persistence services is ready; we just need to use it in a real scenario. To show how easy it is to apply this “pattern”, the example uses a Book entity.

@Entity
@NamedQueries({
    @NamedQuery(name = Book.FIND_ALL, query = "select b from Book b"),
    @NamedQuery(name = Book.TOTAL_RESULT, query = "select count(b) from Book b"),
    @NamedQuery(name = Book.FIND_BY_TITLE, query = "select b from Book b where b.title = :title"),
    @NamedQuery(name = Book.FIND_BY_ISBN, query = "select b from Book b where b.isbn = :isbn")
})
public class Book extends AbstractEntity {

    public static final String FIND_BY_TITLE = "Book.findByTitle";

    public static final String FIND_BY_ISBN = "Book.findByISBN";

    private String title;

    private String description;

    private String isbn;

    // Getters and Setters
}

The code is quite basic: declaration of attributes (columns in the table), constants for the specific queries, and the named queries declared with @NamedQueries.

Finally, the implementation of PersistenceService to manage books is as follows:

@Stateless
public class BookPersistenceService extends AbstractJPAPersistenceService<Long, Book> {

    public List<Book> findByTitle(String title) {
        return em.createNamedQuery(Book.FIND_BY_TITLE).setParameter("title", title).getResultList();
    }

    public List<Book> findByISBN(String isbn) {
        return em.createNamedQuery(Book.FIND_BY_ISBN).setParameter("isbn", isbn).getResultList();
    }
}

As you can see, the code is very light because the complexity lives in AbstractJPAPersistenceService. To use this service, just inject it into your business logic.
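
For example, a hypothetical consumer (LibraryService and its method are illustrative names, not part of the real project) might look like:

import javax.ejb.EJB;
import javax.ejb.Stateless;

// Business-logic EJB that consumes the persistence service.
@Stateless
public class LibraryService {

    @EJB
    private BookPersistenceService bookService;

    public Book registerBook(String title, String isbn) {
        Book book = new Book();
        book.setTitle(title);
        book.setIsbn(isbn);
        return bookService.save(book);
    }
}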



Java EE 7 – Promises and wishes

Posted: March 7th, 2011

Oracle has approved JSR-342, the specification where the new features of the platform will be described. This spec will focus on cloud scenarios and therefore, I think, many of its improvements will be related to scalability, high performance and massive data processing.

Some changes that will be included in this spec are:

  • HTML5 support – improvements to the Servlet spec, including a WebSocket API.
  • Updates to technologies already included – such as CDI, EJB, JPA, JSF, JMS, JAX-RS, JAX-WS, etc.
  • Inclusion of JCache – it was about time to have a caching standard in Java EE.
  • Pruning of some deprecated technologies – such as EJB Entity Beans or JAX-RPC 1.1.

And in my wishlist for this specification I propose the following:

  • NoSQL support – currently JPA only covers relational databases, and with the rise of this kind of storage (non-relational and distributed systems) I think Java EE 7 should play in this league by providing a mapping for systems such as Cassandra, MongoDB, CouchDB, etc.
  • Business processes – the usage of process modeling with PDL, BPMN or BPEL is well known in software development, and Java EE doesn’t provide any solution to tackle these languages. So far, tools like jBPM ease the integration between processes or workflows and enterprise software; Java EE 7 should have something to say about this.
  • Modular development – WAB support enhancements (allowing OSGi bundles with entities that are automatically mapped to database tables), something like JBoss Modules.
  • Web layer – a page flow manager like the one in Seam, cool URIs like PrettyFaces, and improvements such as Comet in JSF or HTML5 WebSQL support.


Custom constraints with Bean Validation

Posted: February 28th, 2011

One of the most interesting components that Java EE 6 brings us is Bean Validation (JSR-303), which helps you define constraints on your model, making it possible to separate model validation from business logic. Besides, this component can be used in whichever layer you are working: for instance, JSF uses Bean Validation to validate the data in web forms, and JPA uses it to validate entities before storing them in the database.

The specification comes with some validations that you can use directly through their annotations: @NotNull, @Min, @Max, @Pattern, etc. Although these annotations are very useful, they don’t cover all your model constraints, and sometimes it’s necessary to define custom constraints that match the real world.

Creating custom constraints

Imagine you are developing a typical product-ordering scenario where you have two customer types (normal and premium) and two product types (normal and exclusive), and your model has the following constraint:

“A normal customer can buy at most two exclusive products in his order. On the other hand, a premium customer doesn’t have any limitation on this.”

First of all, you have these entities of your model:

public class Product {

    public Product () { }

    public Product(String name, Integer cost, ProductType type) {
        this.name = name;
        this.cost = cost;
        this.type = type;
    }

    private String name;

    private Integer cost;

    private ProductType type;

    public enum ProductType { NORMAL, EXCLUSIVE };

    // Getters and Setters

}

public class Order {

    public Order() { }

    public Order(Customer customer) {
        this.customer = customer;
    }

    private List<OrderLine> lines = new LinkedList<OrderLine>();

    private Customer customer;

    // Getters and Setters

}

public class OrderLine {

    public OrderLine() { }

    public OrderLine (Product product, Integer quantity){
        this.product = product;
        this.quantity = quantity;
    }

    private Product product;

    private Integer quantity;

    // Getters and Setters

}

The Customer class will be defined soon. The next step is to create the annotation and put it in the correct place. The annotation has to have this form:

@Retention(RUNTIME)
@Target({TYPE, FIELD, METHOD})
@Constraint(validatedBy = ExclusiveProductsValidator.class)
public @interface ExclusiveProducts {

    String message() default "Exclusive products constraint has been violated";

    Class<?>[] groups() default {};

    Class<? extends Payload>[] payload() default {};

    int max() default 0;
}

The meanings of the attributes are:

  • message – indicates the message returned if the bean doesn’t satisfy the constraint. You can also use a JSF message bundle here.
  • groups – an array of groups used to filter which validations apply, according to some criterion.
  • payload – indicates the severity of the validation.
  • max – indicates the maximum quantity of exclusive products a customer can order; in this example, the value will be two.

Now we just need to know where to put this annotation. As you might guess, the constraint is associated with the list of OrderLine in the Order class, so this class would be as follows:

public class Order {

    // ...

    @ExclusiveProducts(max = 2)
    private List<OrderLine> lines = new LinkedList<OrderLine>();

    // ...
}

At this moment you may be thinking “ok, I’ve created the annotation and I’ve put it on the bean, but where is the validation code?”. As you can see in the definition of the annotation, it uses another annotation to declare which class should validate the bean; in this case that class is ExclusiveProductsValidator. Its code is:

public class ExclusiveProductsValidator implements ConstraintValidator<ExclusiveProducts, List<OrderLine>> {

    int max;

    @Override
    public void initialize(ExclusiveProducts constraintAnnotation) {
        max = constraintAnnotation.max();
    }

    @Override
    public boolean isValid(List<OrderLine> value, ConstraintValidatorContext context) {
        int count = 0;
        for (OrderLine ol : value) {
            if (ol.getProduct().getType().equals(ProductType.EXCLUSIVE)) {
                count += ol.getQuantity();
            }
        }

        return count <= max;
    }
}

This code is easy: the method initialize simply stores the max value in an internal attribute, and then isValid validates the OrderLine list, returning false if the quantity of exclusive products is greater than that value and true otherwise.

Validation groups

So far the validation doesn’t distinguish between kinds of customer, and our requirement says this restriction only applies to normal customers. To this end, we need to create validation groups. Groups allow us to categorize beans in order to apply one kind of validation or another, or to apply a validation with different values depending on the group. To create a group, you declare an interface that extends Default, plus the different subtypes.

public interface CustomerType extends Default { }

public interface NormalCustomer extends CustomerType { }

public interface PremiumCustomer extends CustomerType { }

At this point the definition of Customer is very clear: this entity has its own attributes (username, email, etc.) and it has to indicate what kind of customer it is. An example of Customer is:

public class Customer {

    public Customer () { }

    public Customer(String username, Class<? extends CustomerType> type) {
        this.username = username;
        this.type = type;
    }

    private String username;

    private Class<? extends CustomerType> type;

    // Getters and Setters

}

And we have to rewrite the validation to use this feature:

public class Order {

    // ...

    @ExclusiveProducts(max = 2, groups = NormalCustomer.class)
    private List<OrderLine> lines = new LinkedList<OrderLine>();

    // ...
}

With this clause we are telling Bean Validation that this constraint only applies to orders of normal customers.

Testing the constraint

The custom constraint is created; now you need to know how to test this behaviour and how to validate beans. I’m going to use a simple JUnit test case like this:

public class ExclusiveProductValidationTest {

    private static Validator validator;
    private static Customer normalCustomer;
    private static Customer premiumCustomer;
    private static Product normalProduct;
    private static Product exclusiveProduct;

    @BeforeClass
    public static void setUpClass() throws Exception {
        ValidatorFactory factory = Validation.buildDefaultValidatorFactory();
        validator = factory.getValidator();

        premiumCustomer = new Customer("usertest", PremiumCustomer.class);
        normalCustomer = new Customer("usertest", NormalCustomer.class);

        normalProduct = new Product("Normal product", 12, ProductType.NORMAL);
        exclusiveProduct = new Product("Exclusive product", 25, ProductType.EXCLUSIVE);
    }

    /**
     * Creating some order lines with 2 normal products and 2 exclusive products for
     * a normal customer. Validator shouldn't indicate any constraint violation.
     */
    @Test
    public void testWithTwoExclusiveProductsAndNormalCustomer() {

        OrderLine ol1 = new OrderLine(normalProduct, 2);
        OrderLine ol2 = new OrderLine(exclusiveProduct, 2);

        Order order = new Order(normalCustomer);
        order.getLines().add(ol1);
        order.getLines().add(ol2);

        Set<ConstraintViolation<Order>> violations = validator.validate(order, order.getCustomer().getType());
        assertEquals("It shouldn't have any violation in this case", 0, violations.size());

    }

    /**
     * Creating some order lines with 2 normal products and 3 exclusive products for
     * a normal customer. Validator should indicate the constraint violation.
     */
    @Test
    public void testWithThreeExclusiveProductsAndNormalCustomer() {

        OrderLine ol1 = new OrderLine(normalProduct, 2);
        OrderLine ol2 = new OrderLine(exclusiveProduct, 3);

        Order order = new Order(normalCustomer);
        order.getLines().add(ol1);
        order.getLines().add(ol2);

        Set<ConstraintViolation<Order>> violations = validator.validate(order, order.getCustomer().getType());
        assertEquals("It should have one violation in this case", 1, violations.size());
    }

    /**
     * Creating an order lines with 5 exclusive products for a premium customer.
     * Validator shouldn't indicate any constraint violation.
     */
    @Test
    public void testWithFewExclusiveProductsAndPremiumCustomer() {

        OrderLine ol = new OrderLine(exclusiveProduct, 5);

        Order order = new Order(premiumCustomer);
        order.getLines().add(ol);

        Set<ConstraintViolation<Order>> violations = validator.validate(order, order.getCustomer().getType());
        assertEquals("It shouldn't have any violation in this case", 0, violations.size());

    }
}

In setUpClass we create a validator through the ValidatorFactory, and we use this validator in each test. Remember that if you use Bean Validation in a JSF or JPA environment, the validation of your beans will be automatic and you won’t need an explicit validator (although you can deactivate it in order to have more control).



Dynamic http responses with OSGi

Posted: February 15th, 2011

The benefits of OSGi are well-known and there are a lot of articles explaining how this technology can improve a system: modular development, pluggable architecture, dynamic services, etc.

I’m going to try to explain the basic concepts of OSGi using a simple scenario. The idea of this example is very easy: to take an http request and print the result of applying a transformer to a text message. The dynamic behaviour appears because the different transformers are not defined at the same time, nor in the same component, as the core system. If you know how an OSGi container works, you’ve probably guessed that the example has at least two bundles (or plugins): one to accept the request and return the response, and another one to carry out the transformation of the message.

The example

As I said before, the example is very easy, but I think it represents the common uses of this technology. The system consists of two distinct components:

  1. A main bundle that captures the http request (for example http://localhost/transform?to=xml), extracts the transformation type (xml, json, text, etc.) from the request, looks in the container for the appropriate transformer service that works with this type and, if it exists, returns the transformed response. For the http conversation this component uses a servlet through an extension provided by Equinox. Thanks to this feature it is very easy to write a servlet that listens for requests, using Jetty as the servlet container. In addition, this component provides the interface that all transformers have to implement.
  2. A specific implementation of the transformer. When this bundle is activated it tells the container what kind of transformation it can do; it’s as if the bundle said something like “hey! I can transform text to XML, so if you want this functionality, please call me :)”.

Creating the project

The first step of this tutorial is to create the project using Eclipse. In this example I use a plain installation of Eclipse Helios with Equinox as the OSGi container. First of all, we’re going to create the plugin that will have the responsibility of dealing with http requests. The steps are easy: File > New > Project … > Plug-in Project

As we can see in the image, the OSGi framework chosen is Equinox. We could easily choose another container such as Felix or Knopflerfish, but we need the servlet extension point, and this feature is only provided by Equinox.

Accept the wizard and the plugin project will be created in our workspace.

Setting dependencies

When the wizard is finished, we can explore the structure of this project. All OSGi bundles have a similar structure. The most important file in an OSGi bundle is MANIFEST.MF, because this file stores the bundle name, the dependency information and the packages exported for use by other bundles. If you click on this file, a special page shows how to configure the different options. At the moment we just need to set which dependencies this bundle needs: click the Dependencies tab and add the following ones:
org.eclipse.equinox.http.jetty, org.eclipse.equinox.http.servlet, org.eclipse.equinox.http.registry, org.eclipse.equinox.registry
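
For reference, this roughly corresponds to a Require-Bundle header like the following in MANIFEST.MF (version constraints omitted):

Require-Bundle: org.eclipse.equinox.http.jetty,
 org.eclipse.equinox.http.servlet,
 org.eclipse.equinox.http.registry,
 org.eclipse.equinox.registry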

Defining and exporting TransformerService interface

To allow different bundles with transformation services to integrate with this system, it’s necessary to define an interface with the methods that the custom transformers have to implement. The interface code is:

public interface TransformerService {

	/**
	 * Performs the transformation over message passed by parameter and returns the result
	 * @param message
	 * @return
	 */
	public String transform(String message);

	/**
	 * Indicates the type of transformation
	 * @return
	 */
	public String getType();

}

The methods are self-explanatory :) . The next step is to export this interface so that the other bundles can see it. This is configured in the MANIFEST.MF file, in the Runtime part. In this tab, in the Exported Packages section, you can specify which packages the other bundles are allowed to access. A good practice is to isolate the internal functionality of the bundle (in our case, the servlet class) and to export only the classes and interfaces that are part of the public API. On this subject, the book OSGi and Equinox proposes dividing the package into subpackages named “api” and “internal” to separate exported packages from internal ones.

Activator and servlet

Eclipse generates a default class named Activator that is configured in the MANIFEST.MF file. This class has two important methods (start and stop) that are called when the container activates (or deactivates) the bundle. Basically these methods receive the context and register (or unregister) the bundle. In this example, this bundle doesn’t provide any service to others, so in principle you could remove this class and unset it in the Overview tab of MANIFEST.MF; however, the servlet below obtains the BundleContext through Activator.getContext(), so here we keep it.

The next step is to implement the servlet. At this point, Equinox gives us a powerful way to declare dynamic services. This feature is named Extension and, in this case, it allows us to avoid a web.xml file for declaring our servlet classes. To configure it, in the Extensions tab we need to specify that our bundle implements a servlet that requires a servlet container; to perform this configuration we add the extension named org.eclipse.equinox.http.registry.servlets and fill in the options:

Alias indicates the part of the url the servlet will be activated with, and class indicates the servlet class mapped to this alias.
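
The resulting plugin.xml should contain something along these lines (the servlet’s fully qualified class name is an illustrative assumption):

<extension point="org.eclipse.equinox.http.registry.servlets">
   <servlet alias="/transform"
            class="com.example.transformer.internal.Servlet"/>
</extension>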

The next step is to create the servlet; the code is as follows:

public class Servlet extends HttpServlet {

	/**
	 * Serial version UID
	 */
	private static final long serialVersionUID = 1L;

	/**
	 * Retrieves the transformer service that matches with the filter
	 * passed by parameter
	 * @param type
	 * @return TransformerService
	 */
	private TransformerService lookupService(String type) {
		BundleContext context = Activator.getContext();
		ServiceReference[] references;
		String filter = "(type=" + type + ")";
		try {
			references = context.getAllServiceReferences(TransformerService.class.getName(), filter);
		} catch (InvalidSyntaxException e) {
			return null;
		}

		if(references != null){
			return (TransformerService) context.getService(references[0]);
		} else {
			return null;
		}
	}

	/*
	 * (non-Javadoc)
	 * @see javax.servlet.http.HttpServlet#doGet(javax.servlet.http.HttpServletRequest, javax.servlet.http.HttpServletResponse)
	 */
	@Override
	protected void doGet(HttpServletRequest req, HttpServletResponse resp)
			throws ServletException, IOException {

		PrintWriter out = resp.getWriter();
		String type = req.getParameter("to");

		TransformerService service = lookupService(type);
		if(service != null){
			out.append(service.transform("Testing message"));
		} else {
			out.append("Sorry, but there isn't any services to transform to " + type);
		}
	}

}

The doGet method captures the value of the parameter “to” (for example “xml” for http://localhost/transform?to=xml) and calls the lookupService method with this value. If there isn’t any instance of such a service, the servlet prints an error message; otherwise it prints the result of calling the transform method with a test message.
The lookupService method obtains the context through the Activator class and returns the TransformerService instance in the container that matches the condition “type=xml”. Pay attention to how an instance is filtered according to its properties: the getAllServiceReferences method takes the interface name and a string with the filter over its properties. The filter syntax is based on LDAP search; in this link you can get more information about it.

First transformer bundle

So far we have been working on the same project, which takes the request and looks for the right transformer, but where is the transformer code? This is precisely the power of OSGi: the service definition can be separated from its implementation using different bundles; the bundles can even be on different machines (distributed OSGi). Following this development model, we’re going to create a new plugin project for this first transformer, following the steps defined above. In this case, the transformer will take the message and transform it into simple XML.

Once the project is created, the next step is to configure the MANIFEST.MF, setting the previous project (which contains the servlet) as a required plug-in:

Thanks to this dependency, we can access the TransformerService interface from this bundle. So, we have to implement our version of the transformer for xml:

public class XMLTransformerService implements TransformerService {

	@Override
	public String transform(String message) {
		return "<message>" + message + "</message>";
	}

	@Override
	public String getType() {
		return "xml";
	}

}

This transformation is very light: it simply wraps the message in a tag. Now, the last step is to define how this bundle registers its transformation service. To this end, we write the registration code in the start method of the Activator class and unregister the service in the stop method. The Activator code is something like this:

public class Activator implements BundleActivator {

	private ServiceRegistration registration;

	public void start(BundleContext bundleContext) throws Exception {
		TransformerService service = new XMLTransformerService();
		Dictionary<String, String> props = new Hashtable<String, String>();
		props.put("type", service.getType());

		registration = bundleContext.registerService(TransformerService.class.getName(), service, props);
	}

	public void stop(BundleContext bundleContext) throws Exception {
		registration.unregister();
	}

}

The start method creates an instance of XMLTransformerService and registers it with a property. This property is created using a dictionary where the type of transformation is specified, and it is useful when the other bundle wants to obtain an implementation that matches the type it is looking for.

Running the example

Ok, that’s all; you have created a simple dynamic scenario where OSGi can show you its potential. The last step is to run the example in the Equinox container. This step is very easy: just click Run As > Run Configurations… in the contextual menu of any project. Then create a new OSGi Framework configuration and select which bundles you want to start up. Finally, you have to specify which bundles the container needs, so click Add Required Bundles and a few bundles in the target area are checked automatically.

If you are working on Linux you should add a VM parameter, because by default Jetty starts on port 80, and on this operating system that port is protected.
The parameter is -Dorg.eclipse.equinox.http.jetty.http.port=8080 and you have to put it in the VM Arguments field, in the Arguments tab.

When you click the Run button, Equinox will start and the bundles will be loaded into the container, invoking each start method in its Activator class (the loading order of the bundles is also configurable). Once Equinox is running, you can interact with the container shell through the console view. This console allows you to start bundles, see which services are available, extract some information from the container, etc. If you type “help” in the console, the list of basic commands will be shown.

Now, you can perform an http request with your favourite browser to http://localhost:8080/transform?to=xml and see the response. You can also try another request, for example http://localhost:8080/transform?to=json, and see how the system can’t find a transformer service for this type.

At this point, it’s very interesting to create a new bundle with a new transformer service implementation and to play around with the different commands to start and stop bundles, in order to understand a truly dynamic scenario. :)
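
For instance, a hypothetical second bundle could contribute a JSON transformer along these lines, registering it from its own Activator exactly like the XML one:

public class JSONTransformerService implements TransformerService {

	@Override
	public String transform(String message) {
		// Naive wrapping; a real implementation should escape the message.
		return "{\"message\": \"" + message + "\"}";
	}

	@Override
	public String getType() {
		return "json";
	}

}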

