Wednesday, 30 October 2019

Moving to Medium

For various reasons I have started writing blog posts on Medium. Please have a look at my profile for the articles I have published recently.

Thursday, 26 September 2019

Developing Microservices? Points to Ponder. Part - 1

Developing Microservices?

Yes (either by you or by your organization) might be the answer from most of you reading this blog. As you can guess from the title, I'm going to share my experience in developing microservices-based architectures in a series of posts, each concentrating on one particular area. As I have been building services with Spring Boot for a while, do bear with solutions that revolve around frameworks from the Spring ecosystem.

Why is there a buzz around Microservices?

A microservices architecture structures an application as a collection of loosely coupled services (Fig-1 is a simple depiction). Services are fine-grained and the protocols are lightweight and modular, making the application easier to understand, develop and test, and more resilient to architecture erosion. It also allows teams to parallelize development, deploy and scale services independently, refactor individual service architectures, and ultimately practise continuous delivery and deployment. There are 'N' articles out on the web that back up the above points, but very few of them give us the points to ponder while developing microservices.


Fig -1 Microservices architecture

The following are a few areas (in no particular order) in which I would like to share my experience of developing applications in a microservices architecture, because I firmly believe that having all of them in place really helps us reap the full benefits of the architecture.

  • Configuration Management
  • Logging
  • API gateway
  • Service Discovery
  • Circuit Breaker
  • Authentication
  • Database Communication & Migration
  • Inter Service Communication
  • Integration testing
  • Deployment 
  • Monitoring
In this part 1 of the series, I'll walk through how our handling of service configuration management evolved from the monolithic ages to the current era.

Configuration Management

Software configuration management plays a significant role in developing any service. A small mistake in a configuration-dependent system can lead to business loss (monetary or to the organization's reputation). I have seen, and am still seeing, various approaches to configuration management evolve; here is a glimpse of a few approaches adopted in the projects I have worked on:
  • Shell Scripts
  • Environment Variables
  • Environment Profiles
  • Configuration Management tools (Puppet, Vagrant,....)
  • Configuration Server (Spring Cloud Config Server, Consul,...)

Shell Scripts?

Yes, shell scripts for configuration management. Gone are the days when the operations team kept shell scripts to update or modify configuration files while deploying each and every build to the production environment. The problems with this approach are dangling configuration files and dangling shell scripts, basically due to the lack of synchronization between the dev and operations teams. I won't get into the details of how those scripts looked in this post.

Environment Variables

As most application configuration does not change from environment to environment, some teams chose to leverage environment variables for the attributes that do change quite often, leaving it to the operations team to make sure the variables are exposed to the application at deployment time. Still, a few problems remained unsolved because of the same synchronization issues.
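For illustration only (the property and variable names below are made up), a Spring Boot service can pick such a value up either through a placeholder or straight from the process environment:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class DatabaseSettings {

    // Spring resolves ${DB_PASSWORD} against system properties and environment variables
    @Value("${DB_PASSWORD}")
    private String dbPassword;

    // plain JDK alternative when no framework is involved
    public static String passwordFromEnvironment() {
        return System.getenv("DB_PASSWORD");
    }
}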

Environment Profiles

As a workaround, developers started to ship configuration files for the different environments along with the deliverables, to avoid issues caused by the lack of synchronization. But this exposed credentials to everyone with access to the code repository, effectively locking the security door while leaving the keys in the lock.
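With Spring Boot this usually boils down to per-environment property files (application-dev.properties, application-prod.properties, ...) and profile-specific beans. A minimal sketch, with entirely made-up bean and endpoint names:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
public class PaymentGatewayConfig {

    // hypothetical client wrapper, only here to make the sketch self-contained
    public static class PaymentClient {
        private final String baseUrl;
        public PaymentClient(String baseUrl) { this.baseUrl = baseUrl; }
        public String getBaseUrl() { return baseUrl; }
    }

    // used when the service starts with spring.profiles.active=dev
    @Bean
    @Profile("dev")
    public PaymentClient sandboxPaymentClient() {
        return new PaymentClient("https://sandbox.payments.example.com");
    }

    // used when the service starts with spring.profiles.active=prod
    @Bean
    @Profile("prod")
    public PaymentClient livePaymentClient() {
        return new PaymentClient("https://payments.example.com");
    }
}

The catch described above remains, though: if production credentials live in application-prod.properties inside the repository, anyone with repository access can read them.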

Configuration Management Tools

Then, with the rise of configuration management tools like Puppet, Vagrant and similar tools, IT infrastructure management was made easier, addressing cross-cutting concerns like provisioning, patching, configuration, and management of operating system and application components. These tools helped the community significantly in the era of virtual machines, but they did not gain the same significance in the era of microservices.

Configuration Servers

As the era of microservices picked up, things changed drastically with respect to configuration management and we started moving towards dedicated servers for persisting configuration per profile. I have used the following services to persist my configurations:
  1. Spring Cloud Config Server
  2. Hashicorp Consul
  3. Hashicorp Vault
While Consul has numerous out-of-the-box features that can be used in a microservices architecture, its adoption is slightly higher than that of Spring Cloud Config Server. Also, integrating Consul with microservices is made easy through the Spring Cloud Consul library.

Like Consul, Vault is also a distributed external configuration system, with additional features to manage secrets and protect sensitive data. Some applications have to keep seed data confidential, and if it were exposed through an ordinary distributed configuration system it would be out in the open, so HashiCorp built Vault to cater to the needs of applications that maintain sensitive data. As with Consul, integrating Vault with Spring applications is a cakewalk through Spring Cloud Vault.
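Whichever store holds the values (Config Server, Consul KV or Vault), the consuming side of a Spring Boot service looks much the same; a minimal sketch with a made-up property name, assuming the matching Spring Cloud starter is on the classpath and the store connection is set up in bootstrap configuration:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// @RefreshScope lets the bean pick up new values when the external configuration changes
@RefreshScope
@RestController
public class GreetingController {

    // resolved from the external configuration store instead of a file shipped with the build
    @Value("${greeting.message:hello}")
    private String greetingMessage;

    @GetMapping("/greeting")
    public String greeting() {
        return greetingMessage;
    }
}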

Please feel free to leave your comments and suggestions to improve my post. Also, let me know if you're looking for more details.

Monday, 19 August 2019

Programming a team


I have been recording a few of my technical experiences in this blog whenever I'm not lazy 😂. This time I want to convey my perspective on a non-technical subject: yes, from the title you might have guessed it, team building. The core aspect of team building is turning a group of individual contributors into a cohesive team. Have a look at the following picture depicting teamwork (I grabbed it online just to portray teamwork, not to cut down a tree 😉).

Most organizations, irrespective of size and nature, don't fail to allocate a portion of their budget towards team-building activities.

But why do companies allocate a budget for team building?

Most management gurus believe that team building enables better communication and better relationships, which ultimately increase the productivity of the group (do note that management believes it increases the productivity of the group, which may or may not increase the productivity of the individuals in the team).

How is the budget most often used?

Unlike other budgets, most organizations set an upper limit on team-building expenditure, and these budgets are allocated on a quarterly or annual basis. Depending on the budget and on the group/team lead/head/manager, it is usually spent on one of the following:

  • Hire a team-building coach (typically a day of formalities revolving around a few boring activities)
  • Team Lunch/Dinner
  • Outings
Personally, I feel none of these options adds the right essence required for team building (despite meticulous planning), for the following reasons:
  • Poor turnout
  • Lack of interest 
  • Not once again
  • Non-collocated members
Also, I would like to highlight the trend that prevails these days in the name of team building; you've probably guessed it: team lunch/dinner, where everyone in the team gets into the action by concentrating on what's on the menu and their choice for the day, and walks out exactly as they walked in.

So, how can we effectively achieve the goal of team building?

The following are purely based on my experience and may vary with culture, circumstances and other external or internal factors.
  • Try to get consensus on the event day (make sure everyone, including remote workers, takes part)
  • Play a motivator role so that everyone is involved in the activities
  • Share your past or related experiences
  • Don't stick to same venue
  • Have a mixed bag of activities (physical & mental activities)
  • During team lunch/dinner try to share with others as much as possible
  • While traveling to/from the venue, don't talk about anything technical or work-related
  • Try not to force anything on anyone
  • Keep yourself, and others, engaged as much as possible

Monday, 8 July 2019

Reducing your Node Application Docker Image Size

Recently I happened to encounter memory/space issues quite often with a server that hosts Nexus (a repository manager with near-universal support for all artifact formats). On digging into the issue, the prima facie evidence we got was that the Docker image size of our Node applications was alarmingly high (~2.5 GB).

REPOSITORY        TAG    IMAGE ID        CREATED       SIZE
app-static/ts     1      248f0e845f53    3 weeks ago   2.47GB
node              11     4051f768340f    3 weeks ago   904MB


Though we're certainly aware that "while architecting Docker applications, keeping your images as lightweight as possible has a lot of practical benefits. It makes things faster, more portable, and less prone to breaks. They’re less likely to present complex problems that are hard to troubleshoot, and it takes less time to share them between builds.", we had missed this point when it came to the microservices running on Node.

We wanted to dig further to understand which areas contributed the major chunk of our bloated Docker image. Our first step was to check the size of the folders inside the image, and we found the following:

local        -  631 MB
application  -  704 MB  (contains node modules)
lib          -  481 MB
share        -  241 MB
...
Initially we never suspected the size of node_modules, as one of our primary developers (like much of the Node fraternity out there) felt that such sizes are quite normal for node modules.


Now we had fewer options available, and we wanted to focus on why we ended up with bloated images. I started to concentrate fully on Docker this time and built the image locally. First and foremost I wanted to analyze the Docker image layers, as the base layer was only around 900MB. With the "docker history" command I could get a preview of the layers built in the process of building the application:
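The command itself is the standard one below, with the image name and tag taken from the listing above (adjust them to your own image):

# show each layer of the image, the instruction that created it, and its size
docker history app-static/ts:1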


On seeing the history of layers (8 layers had been added on top of the base node image), I opened the application Dockerfile for the first time and found we had duplicated a few lines (no one to be blamed, as they had been sitting there for a while). I fixed the RUN section of my Dockerfile as follows, to eliminate the additional layers created while building the image:

RUN npm install yarn -g && yarn install

Though the number of layers reduced, the outcome did not turn out as positive as expected. This time again, I stuck with docker history for analysis, and once again it gave me some clues:

  • COPY was still leaving a significant impact (~500MB)
  • RUN was the major contributor to the impact (~800MB)
COPY - ~500MB? Our application code in source control was nowhere near a few hundred MB, so "what's wrong here?" was my first thought. The answer: I had a copy of node_modules on the host, and it was being copied into the container along with the source code.

RUN - ~800MB? Now that I had identified that node_modules on the host takes only ~400MB, why would the Docker layer require 800MB? From the senior developer I understood that we were handling node modules from source rather than from pre-built distributions.

Ignoring either of these should help reduce the size, and circumventing the second would give the best deal among the options we had, but that has its own side effects. To fix the issues we did the following:
  • run `yarn install` on the host
  • copy the source & node modules to the image
  • rebuild node modules to avoid target-environment mismatch (in my case OSX was my host and the node base image was a Linux flavor; thanks to the senior developer who foresaw this issue and cautioned me)
Finally, my Dockerfile looked as follows:

FROM node:11
WORKDIR /usr/application
COPY . /usr/application
RUN npm rebuild node-sass
HEALTHCHECK --timeout=1s --interval=1s --retries=3 \
  CMD curl -s --fail http://localhost:3000/ || exit 1
CMD ["yarn", "deploy"]
Now my Docker image leaves a much smaller footprint (~1.4 GB) compared to what I had when I started on this problem.

Monday, 10 September 2018

Ways to Analyse/Understand Quality Assurance Practices

For the first time in my learning dump I would like to post how I did a case study in understanding Quality Assurance (QA) practices in my current organization. Unlike tech giants, where software engineers do both development and testing, my current org still follows a DEV & QA team hierarchy with a developer-to-QA ratio of 5:1 on average (there might be N arguments for or against any developer-to-QA ratio; let me not get into that). All I wanted, or all that was expected of me, was to understand the current practices followed in the organization. As a fairly new member of the org, it was challenging to figure out who all the QA folks were and get some time from them (not all, though), as I could sense a slender fear factor and lots of questions when I reached out. Back to the topic.

To understand the QA practices, I first needed to know the list of areas to cover before having a discussion with any of the QA engineers. The following are the two areas to which I wanted to scope my discussions:

  • Testing Methods
  • Testing Strategies 
Testing Methods

In testing methods I concentrated on how we are doing the testing. Initially I was in a tight corner over whether I really needed to understand or even speak about testing methods, but then (after speaking with a couple of folks) I became firmly convinced that I should discuss the testing methodologies followed in their respective projects. The areas I covered under testing methods can be grouped into two, as follows:

  • Box Approach
  • Static & Dynamic Approach

Box Approach

One might wonder why anyone would be so interested in whether testing is black-, white- or grey-box. But I wanted to understand whether the QA folks grasp the real essence of what they have been testing. During my discussions, all those who are into automation said up front that they do white-box testing; on further questions like "do we really go through the code delivered by developers?", they changed their answer from white box to grey box. The functional test engineers, on the other hand, said it's black box and "we are moving towards white box, i.e. automation". What I understood from all of this is that most QA engineers assume that doing automation means doing white-box testing.

Static & Dynamic Approach

There are more than a few hundred tools that help us ensure whether we are delivering a quality product, and each of these tools has its own approach to identifying bugs. Some perform static analysis and most perform dynamic analysis. Just as there is a clear imbalance in the number of tools available in the market, I could see the same kind of imbalance in the mindset of QA engineers as well: most of them are focused purely on dynamic testing. Static analysis is one of the areas we could stress more, so that we avoid a significant number of hidden bugs in the code.


Testing Strategies

After a brief discussion on testing methods, my focus shifted to digging further into all the strategies being followed. I had some areas in mind that seemed essential to cover in order to understand the testing; the following are the areas I wished to cover in my discussions:

  • Test Levels
  • Test Coverage
  • Testing Tools
  • Security Testing
  • Load / Stress Testing
  • Risks & Mitigation
  • Test Schedule
  • Regression Approach
  • Test Status Collections & Reporting
  • Test Records
  • Requirements Traceability Matrix
  • Test Summary
Test Levels

Under test levels I wanted to see how we are actually engaging with our testing at the following levels:

  • Unit Testing
  • Functional Testing
  • Integration Testing
Unit Testing

The reason I wished to touch upon unit testing is that most of us practise grey-box or black-box testing. Unsurprisingly, all the QA folks concentrated on other levels of testing and firmly believed that UTs are to be handled by developers and QA need not engage in them, whereas my intention was to see if QA are really aware of the code coverage the application gets through UTs (more about code coverage in a little while).

Functional Testing

Functional testing is the level of testing on which all the QA folks across the org concentrate. The way in which this testing is carried out has been covered under Testing Methods, i.e. in the first few paragraphs of this post.

Integration Testing

Integration testing can be viewed from two aspects: the first deals with upstream systems and the second with downstream systems. Most of the time, upstream integration issues or bugs are uncovered in functional testing, whereas in the case of downstream systems we might not be aware of the consequences of updating existing flows. There is a common view among most of the QA folks that it's the responsibility of the downstream system's team to take care of integration issues. Ideally, it's the responsibility of the upstream system to make sure there are no issues while upgrading the flows, or to inform the downstream teams about expected outages or breakage when there is a change.

Test Coverage

Test coverage is a metric that QA folks perceive as something to be covered by developers with unit tests. In reality, the metric has to be maintained by the QA team as well and should include dynamic coverage; not many are really aware of how to gather code coverage dynamically, and this is an area that needs strong awareness.

Testing Tools

A new testing tool might well be built or released by the time you are reading this post but, however many tools are rolled out, we should do a legitimate study or analysis before adding any of them to the stack of tools and technologies already in use.

Security Testing

Not many QA folks are aware of the tools that help test for security loopholes or of the kinds of security issues that might pop up in their projects. I firmly believe that everyone in the organization should have a certain knowledge of security testing practices.

Load Testing / Stress Testing

A testing area that is also seen as a specialist's job. And a few so-called specialists presume that the load generated is directly proportional to the number of users configured in the settings.

Requirement Traceability Matrix
Test Schedule
Test Records
Test Summary
Risk & Mitigations

All the above areas are becoming endangered as we have moved to the so-called agile life cycle.



Wednesday, 29 August 2018

Quick overview on Schema Change Management / Migration Tool

I have gathered and collated some information on the schema change management/migration tool Liquibase; please go through it.

Wednesday, 1 August 2018

Swagger 2 ASCII DOC Markup Converter

Recently I was given a task to document my application's APIs in AsciiDoc format. Thanks to Swagger, generating the API documentation was easy, as my application was built with Spring Boot. But it doesn't stop there: I needed to convert the Swagger-generated API doc to AsciiDoc.

By the time I wanted to generate the API doc, my application already had 30+ paths, each with its own CRUD REST operations, adding up to a little over 100 REST endpoints. Converting all of that JSON-format API documentation to AsciiDoc by hand would have been a nightmare.

I was wondering whether I should quickly write some code to convert the JSON to AsciiDoc or handle it by some other means. Before starting anything, as with other problems, I thought for a while: is this problem specific to me, or has someone else faced it before?

As usual, this problem is generic and most of us have faced it. So I looked for the best way others had solved it, and after a while I came to know about swagger2markup, whose main objective is to convert a Swagger doc into AsciiDoc, and it's very simple to use.

The following is the little chisel of a snippet that broke the iceberg standing in front of me:

import java.nio.file.Path;
import java.nio.file.Paths;

import io.github.swagger2markup.Swagger2MarkupConverter;

//local file where I stored my swagger API doc
Path localSwaggerFile = Paths.get("swagger.json");
//the dir in which I need to store the ASCII DOC
Path outputDirectory = Paths.get("build/asciidoc");

//Magic wand that did all the tricks in no time :)
Swagger2MarkupConverter.from(localSwaggerFile)
        .build()
        .toFolder(outputDirectory);


Tuesday, 24 April 2018

Gathering MetaData of A Table through the JDBC

When we are dealing with an ORM we don't even turn our eyes towards table metadata. But one fine day you may wake up having to handle tables through their metadata alone: tables that can be accessed only through the JDBC interfaces, with no other layer crafted for you to do CRUD operations. Yes, I faced this scenario, and the following are a few snippets that really helped me gather metadata from the DB schema.

The following are a few aspects of the metadata I'm interested in:
  • Table organization
    • Column Name
    • Data type of a column
  • Constraints 
    • Non Null-able constraints 
    • Check Constraints 
  • Primary Key
  • Child Tables Meta-data
I had the following POJOs to hold the data I required:

public class ColumnMetaData {

    private String columnName;
    private String dataType;
    private boolean nullable;
    private boolean autoIncrement;
}





public class TableMetaData {

    private String tableName;
    private Map<String, ColumnMetaData> columns;
    private String primaryKey;
    private boolean nonIDPrimaryKey;
    private Set<String> nonNullableColumns;
    private Map<String, ChildTableMetaData> childTables;
}

And the following are the classes I have used from the java.sql package:

private Connection connection;
private DatabaseMetaData metadata;

The above objects are set up as follows:

connection = jdbcTemplate.getDataSource().getConnection();
metadata = connection.getMetaData();

I'll run through the code snippets that helped me collect the data I was interested in.

Table Organization & Nullable Constraints: 

ResultSet columnsMetaData = metadata.getColumns(null, "VIVEK", "DEMO", null); **
 
while (columnsMetaData.next()) {

    ColumnMetaData metaData = new ColumnMetaData();
    String columnName = columnsMetaData.getString("COLUMN_NAME");
    metaData.setColumnName(columnName);
    metaData.setDataType(columnsMetaData.getString("DATA_TYPE"));
    metaData.setNullable(columnsMetaData.getBoolean("NULLABLE"));
    //non-nullable columns are collected here and later used in TableMetaData
    if (!metaData.isNullable()) {
        nonNullableColumns.add(metaData.getColumnName());
    }
}

Since I'm aware of what data I want to read from the ResultSet, I read it directly with getString/getBoolean. You would probably need to inspect the ResultSet's own metadata if you are interested in something else.

Primary Key :

ResultSet tablePrimaryKey = metadata.getPrimaryKeys(null, "VIVEK", "DEMO"); **

while (tablePrimaryKey.next()) {

    primaryKey = tablePrimaryKey.getString("COLUMN_NAME");
    log.debug("{} is primary key for the table {}", primaryKey, table);

    //as we don't support composite primary keys, take the first column only
    break;
}


Child Table MetaData:
 
ResultSet exportedKeys = metadata.getExportedKeys(null, "VIVEK", "DEMO"); **

Map<String, ChildTableMetaData> childTablesMetaData = new HashMap<>();
while (exportedKeys.next()) {

    ChildTableMetaData childTableMetaData = new ChildTableMetaData();
    String childTableName = exportedKeys.getString("FKTABLE_NAME");
    childTableMetaData.setTableName(childTableName);
    childTableMetaData.setFkColumnName(exportedKeys.getString("FKCOLUMN_NAME"));
    childTableMetaData.setPkColumnName(exportedKeys.getString("PKCOLUMN_NAME"));
    childTablesMetaData.put(childTableName, childTableMetaData);
} 
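The ChildTableMetaData holder isn't shown in the post; a minimal sketch consistent with the setters used above (accessors omitted, as with the other POJOs) would be:

public class ChildTableMetaData {

    //name of the child table referencing the parent
    private String tableName;
    //foreign key column in the child table
    private String fkColumnName;
    //primary key column in the parent table that the FK points to
    private String pkColumnName;
}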


** In these snippets "VIVEK" is the schema I'm connecting to and "DEMO" is the table for which I'm collecting the data.

Friday, 20 April 2018

JDBC ResultSet to JSON transformation.


With a bunch of ORM frameworks out there (especially for JVM languages), and each of us sticking to our favourite ORM in the applications we develop, when there is a need to handle data at the JDBC level even small stuff like converting a ResultSet to JSON seems complex. Here I'll give a gist of how to convert a ResultSet to a JSON object.

While querying data through JDBC we look for either one tuple or a list of tuples (i.e. one or N rows). In other words, we query either for a Map (key representing the column name and value the actual value in the table) or for a List of Maps. In technical terms, with Spring's JdbcTemplate we would invoke queryForList or queryForMap. The result can then be treated as a Map (or a List of Maps), which we can easily transform into a JSON object.
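For context, a minimal sketch of what produces the list consumed below (the table name and query are made up; jdbcTemplate is Spring's org.springframework.jdbc.core.JdbcTemplate):

import java.util.List;
import java.util.Map;

import org.springframework.jdbc.core.JdbcTemplate;

public class DemoRepository {

    private final JdbcTemplate jdbcTemplate;

    public DemoRepository(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    //each Map in the returned list is one row: column name -> column value
    public List<Map<String, Object>> findAll() {
        return jdbcTemplate.queryForList("SELECT * FROM DEMO");
    }
}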

The following is the code I implemented to convert the result list to JSON:

List<Map<String, Object>> mapperList = new ArrayList<Map<String, Object>>();
// resultSet here is the List<Map<String, Object>> returned by queryForList
List<Map<String, Object>> transformObject = (List<Map<String, Object>>) resultSet;

// Result Set might be empty so validate it before processing 
if (transformObject.size() < 1) {
    log.warn("No Results found");
    throw new NoEntityFoundException("Data not found");
}

//Iterate through each row in resultSet (Basically a Map) 
transformObject.forEach(result -> {

    Map<String, Object> transformMap = new HashMap<String, Object>();
    transformData(result, transformMap);
    mapperList.add(transformMap);
});
//print the transformed data 
System.out.println(objectMapper.writeValueAsString(mapperList));
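The transformData helper isn't shown in the post; assuming it only carries each column/value pair of a row into the target map (with room for renaming or filtering), a minimal version could be:

//hypothetical helper: copy one row's column/value pairs into the target map
private void transformData(Map<String, Object> source, Map<String, Object> target) {
    //any key renaming or value filtering would go here
    source.forEach(target::put);
}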


The following is the Object Mapper configuration:

ObjectMapper objectMapper = new ObjectMapper();
//as we don't need to send NULL values in the JSON response
objectMapper.setSerializationInclusion(JsonInclude.Include.NON_NULL);
objectMapper.configure(SerializationFeature.WRITE_NULL_MAP_VALUES, false);

Monday, 8 May 2017

Java Reflection, Synthetic Members and Unit Testing.

Recently I was working on developing a custom serializer with a custom annotation using Java reflection; you can find more about it here. Everything seemed fine in the end. But while extending the project with unit test cases, things started going badly wrong for no obvious reason. While debugging the UTs I found that the issue seemed to be caused by synthetic members defined in the class. I had no clue about them, so I dug further to understand the world of synthetic members in Java.

What are Synthetic Members?
Any constructs introduced by the compiler that do not have a corresponding construct in the source code must be marked as synthetic, except for default constructors and the class initialization method.

Why do Synthetic Members issues occur in JUnit?
To collect execution data, JaCoCo instruments the classes under test, which adds two members to each class: a private static field $jacocoData and a private static method $jacocoInit(). Both members are marked as synthetic. Since these members are added to the class, the serializer I developed included those two fields as part of serialization, and the test cases failed in the end.

How to fix Synthetic Members issues?
As a good general practice we should ignore synthetic members, particularly while dealing with reflection. As per the Javadoc, we can identify whether a member is synthetic through the isSynthetic() method. By adding this check to the serialize method the issue is solved, and the following is the code after the change.
 
public void serialize(Object o, JsonGenerator jsonGenerator, 
SerializerProvider serializerProvider) throws IOException {

    jsonGenerator.writeStartObject();
    jsonGenerator.writeStringField("name", o.getClass()
                .getSimpleName());
    jsonGenerator.writeArrayFieldStart("definition");

    Field[] fields = o.getClass().getDeclaredFields();
    for (Field field : fields) {
        if (!field.isSynthetic()) {
            jsonGenerator.writeStartObject();
            jsonGenerator.writeStringField("name",
            StringUtils.join(StringUtils
             .splitByCharacterTypeCamelCase(field.getName()), '-')
             .toString().toLowerCase());
            addAnnotationInJson(field, jsonGenerator);
            try {
                addTypeInJson(field, jsonGenerator);
            }
            catch (Exception e) {
                //log.info(e.getMessage());
            }
            jsonGenerator.writeEndObject();
        }
    }
    jsonGenerator.writeEndArray();
    jsonGenerator.writeEndObject();
} 

 

Friday, 5 May 2017

Generate Java Client Library For NodeJS using Swagger Codegen


This post gives an overview of generating a Java client library for a NodeJS project using Swagger Codegen.

Step-by-step guide

The steps involved are as follows:
  1. Install Swagger Codegen on your machine as detailed on its installation page
  2. Make sure the NodeJS project has an endpoint that exposes the Swagger JSON definition (/api-docs is exposed with the Swagger definition)
  3. Validate the Swagger definition with the help of Swagger Editor
  4. Invoke code generation using the following command:
    • swagger-codegen generate -i http://localhost:18138/api-docs -l java -o flex/executionconfig -c config.json --api-package com.ooyala.flex --artifact-id executionConfiguration --artifact-version 0.0.1
  5.  The Swagger Codegen options used are described as follows:
    • -i --> Spec file (JSON in our case)
    • -l --> language for which the client library has to be generated
    • -o --> output directory
    • -c --> custom configuration file location
    • --api-package --> respective package name
    • --artifact-id --> respective artifact ID
    • --artifact-version --> version for the client      
  6.  The following custom config.json file is used:
    {
      "library": "feign"
    }
  7. Codegen will create the source for the client library in the output directory
  8. The generated code will have a README.md which will help with the next steps
  9. To build the Java client library and deploy it to a repository, use either of the following based on the need
    • mvn install
    • mvn deploy
  10. Once the package/library is published we can use it in our Java project by adding the dependency

Custom Java Serializer for a POJO with Custom Annotation

The main goal of this post is to detail writing a custom serializer for a POJO that has custom-annotated attributes. As a prerequisite, I'd ask readers to brush up quickly on how to implement a custom annotation and how to write a custom serializer.
The main requirement given to me was to develop a custom serializer for a class with the following kinds of attributes:

  • String attribute with & without custom annotation
  • Primitive attribute with & without custom annotation
  • Primitive Wrapper attribute with & without custom annotation
  • POJO attribute with & without custom annotation 
The following is the class against which the serializer was implemented:

public class CustomAnnotation {

    //String attribute with Annotation 
    @ConfigField(displayName = "AnnotationForString"
    description = "config field anotation Test", required = true,
    multiplicity = Multiplicity.ZERO_TO_MANY, expressionEnabled = false)
    private String testField = "something";

    //primitive attribute with Annotation     
    @ConfigField(displayName = "Long-field-test"
    description = "config field long", required = true,
    multiplicity = Multiplicity.ONE_TO_MANY,expressionEnabled = true)
    private long longField;

    //String attribute without Annotation 
    private String stringNoAnnotation;

    //no Annotation for primitive attribute     
    private long longFieldNoAnnotation;

    //Primitive Wrapper with Annotation     
    @ConfigField(displayName = "AnnotationForPrimitiveWrapper"
    description = "config field with primitive Wrapper"
    required = true, multiplicity = Multiplicity.ONE_TO_MANY,expressionEnabled = true)
    private Integer intField;

    //PrimitveWrapper without Annotation     
    private Integer primitiveWrapperNoAnnotation;

    //POJO with Annotation 
    @ConfigField(displayName = "pojo-annotation"
    description = "config field POJO", required = true,
    multiplicity = Multiplicity.SINGLE,expressionEnabled = false)
    private CustomChild customChild;

    //POJO with out Annotation     
    private CustomChild customChildNoAnnotation;

}

In the above code, @ConfigField is the custom annotation developed for this purpose, and the following is its implementation:

@Target({ ElementType.FIELD })
@Retention(RetentionPolicy.RUNTIME)
public @interface ConfigField {
    String displayName() default "";

    String description() default "";

    boolean required() default false;

    Multiplicity multiplicity() default SINGLE;

    boolean expressionEnabled() default false;
}

In the above code, Multiplicity is an enum with the following definition:

public enum Multiplicity {
    SINGLE("1"),
    ZERO_TO_ONE("0..1"),
    ZERO_TO_MANY("0..*"),
    ONE_TO_MANY("1..*");

    private String value;

    public String getValue() {
        return value;
    }

    private Multiplicity(String value) {
        this.value = value;
    }
}
 
The following is the POJO (CustomChild) used in the CustomAnnotation class:
 
public class CustomChild {

    private Integer childName;

    @ConfigField(displayName = "Custom child Annotation int"
    description = "Custom Child attribute with Annotation", required = false,
    multiplicity = Multiplicity.ZERO_TO_MANY,expressionEnabled = true)
    private int childInt;
}
 
Now we move on to the core part: how the custom serializer is implemented.

I have used Jackson's StdSerializer to write the custom serializer. I have leveraged reflection to get the declared fields of the class, and further used reflection to get the annotation and its fields.
 
The following is the code snippet for the same:
 
public void serialize(Object o, JsonGenerator jsonGenerator, 
SerializerProvider serializerProvider) throws IOException {

    jsonGenerator.writeStartObject();
    jsonGenerator.writeStringField("name", o.getClass().getSimpleName());
    jsonGenerator.writeArrayFieldStart("definition");

    Field[] fields = o.getClass().getDeclaredFields();
    for(Field field : fields) {
        jsonGenerator.writeStartObject();
        jsonGenerator.writeStringField("name"
        StringUtils.join(StringUtils.splitByCharacterTypeCamelCase(field.getName()), '-')
             .toString().toLowerCase());
        addAnnotationInJson(field, jsonGenerator);
        try {
            addTypeInJson(field, jsonGenerator);
        } catch (Exception e) {

        }
        jsonGenerator.writeEndObject();
    }
    jsonGenerator.writeEndArray();
    jsonGenerator.writeEndObject();

} 

private void addTypeInJson(Field field, JsonGenerator jsonGenerator) throws IOException, ClassNotFoundException {
    if(!ClassUtils.isPrimitiveOrWrapper(field.getType()) && !field.getType().getSimpleName().equalsIgnoreCase
            ("string")) {
        jsonGenerator.writeStringField("type", "complex");
        jsonGenerator.writeFieldName("children");
        jsonGenerator.writeStartArray();
        addChildInJson(field, jsonGenerator);
        jsonGenerator.writeEndArray();
    } else {
        jsonGenerator.writeStringField("type", field.getType().getSimpleName());
    }
}
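The post doesn't show how the serializer gets wired into Jackson. One common way, assuming the serialize method above lives in a class named CustomAnnotationSerializer that extends StdSerializer<Object>, is to register it through a SimpleModule:

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.module.SimpleModule;

//hypothetical wiring: CustomAnnotationSerializer is the class holding the serialize method above
ObjectMapper objectMapper = new ObjectMapper();
SimpleModule module = new SimpleModule();
module.addSerializer(CustomAnnotation.class, new CustomAnnotationSerializer());
objectMapper.registerModule(module);

System.out.println(objectMapper.writeValueAsString(new CustomAnnotation()));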
 
 
With this code in place, the following is the serialized output for the class we have seen:
 
{
  "name" : "CustomAnnotation",
  "definition" : [ {
    "name" : "test-field",
    "displayName" : "AnnotationForString",
    "description" : "config field anotation Test",
    "required" : true,
    "multiplicity" : "0..*",
    "expressionEnabled" : false,
    "type" : "String"
  }, {
    "name" : "long-field",
    "displayName" : "Long-field-test",
    "description" : "config field long",
    "required" : true,
    "multiplicity" : "1..*",
    "expressionEnabled" : true,
    "type" : "long"
  }, {
    "name" : "string-no-annotation",
    "displayName" : "string no annotation",
    "type" : "String"
  }, {
    "name" : "long-field-no-annotation",
    "displayName" : "long field no annotation",
    "type" : "long"
  }, {
    "name" : "int-field",
    "displayName" : "AnnotationForPrimitiveWrapper",
    "description" : "config field with primitive Wrapper",
    "required" : true,
    "multiplicity" : "1..*",
    "expressionEnabled" : true,
    "type" : "Integer"
  }, {
    "name" : "primitive-wrapper-no-annotation",
    "displayName" : "primitive wrapper no annotation",
    "type" : "Integer"
  }, {
    "name" : "custom-child",
    "displayName" : "pojo-annotation",
    "description" : "config field POJO",
    "required" : true,
    "multiplicity" : "1",
    "expressionEnabled" : false,
    "type" : "complex",
    "children" : [ {
      "name" : "child-name",
      "displayName" : "child name",
      "type" : "Integer"
    }, {
      "name" : "child-int",
      "displayName" : "Custom child Annotation int",
      "description" : "Custom Child attribute with Annotation",
      "required" : false,
      "multiplicity" : "0..*",
      "expressionEnabled" : true,
      "type" : "int"
    } ]
  }, {
    "name" : "custom-child-no-annotation",
    "displayName" : "custom child no annotation",
    "type" : "complex",
    "children" : [ {
      "name" : "child-name",
      "displayName" : "child name",
      "type" : "Integer"
    }, {
      "name" : "child-int",
      "displayName" : "Custom child Annotation int",
      "description" : "Custom Child attribute with Annotation",
      "required" : false,
      "multiplicity" : "0..*",
      "expressionEnabled" : true,
      "type" : "int"
    } ]
  } ]
} 
 
Working code can be found at https://github.com/vivek-dhayalan/customSerializer/