Thursday, 26 September 2019
Developing Microservices? Points to Ponder. Part - 1
Developing Microservices?
Yes (either by you or in your organization) might be the answer from most of you reading this blog. As you can guess from the title, I'm going to share my experience in developing microservices-based architecture in a series of posts, each concentrating on one particular area. As I have been building services for a while using Spring Boot, do bear with solutions that are concentrated in and around the frameworks provided by the Spring ecosystem.
Why is there a buzz around Microservices?
A microservices architecture structures an application as a collection of loosely coupled services (Fig-1 is a simple depiction). The services are fine-grained, the protocols are lightweight, and the modularity makes the application easier to understand, develop and test, and more resilient to architecture erosion. It also facilitates parallel development, lets you deploy and scale services independently, refactor individual service architectures, and ultimately enables continuous delivery and deployment. There are 'N' articles out on the web that give proof for the above points, but very few of them give us the points to ponder while developing microservices.
Fig -1 Microservices architecture
The following are a few areas (in no particular order) in which I would like to share my experience of developing applications in a microservices architecture, because I firmly believe that when we have all of the following in place it really helps us reap the real benefits of the architecture.
- Configuration Management
- Logging
- API gateway
- Service Discovery
- Circuit Breaker
- Authentication
- Database Communication & Migration
- Inter Service Communication
- Integration testing
- Deployment
- Monitoring
Configuration Management
Software configuration management plays a significant role while developing any service. A small mistake in configuring a dependent system might lead to potential business loss (monetary, or to the reputation of the organization). I have seen, and am still seeing, the evolution of various approaches to handling configuration management, and here is a glimpse of a few approaches adopted in the projects I have worked on:
- Shell Scripts
- Environment Variables
- Environment Profiles
- Configuration Management tools (Puppet, Vagrant,....)
- Configuration Server (Spring Cloud Config Server, Consul,...)
Shell Scripts?
Yes, shell scripts for configuration management. Gone are those days when the operations team used to maintain shell scripts to update/modify the configuration files while deploying each and every build to the production environment. The problems with this approach are dangling configuration files and dangling shell scripts, basically due to lack of synchronization between the dev and operations teams. I'm not getting into the details of how the scripts looked in this post.
Environment Variables
As the configuration related to the application does not change much from environment to environment, a few people thought to leverage environment variables for the attributes that change quite often, leaving it up to the operations team to make sure the variables are exposed to the application at deployment time. Still, a few issues remained unsolved due to the same synchronization problems.
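As a minimal sketch of the idea (assuming a Spring Boot service and hypothetical DB_URL / DB_PASSWORD variables exported by the operations team at deployment time), the application.properties simply references the environment:
spring.datasource.url=${DB_URL}
spring.datasource.password=${DB_PASSWORD}
Spring resolves these placeholders against OS environment variables at startup, so the same artifact can be promoted across environments without repackaging.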
Environment Profiles
As a workaround, developers started to ship configuration files for the different environments along with the deliverables, to avoid issues due to lack of synchronization. But this exposed a few credentials to everyone with access to the code repository, effectively closing the security endpoints while leaving the keys in the lock.
Configuration Management Tools
Then, with the rise of configuration management tools like Puppet, Vagrant and similar tools, IT infrastructure management was made easier, addressing cross-cutting concerns like provisioning, patching, configuration, and management of operating system and application components. Though these tools helped the community significantly in the era of virtual machines, they did not gain the same significance in the era of micro-services.
Configuration Servers
As the era of micro-services picked up, things changed drastically with respect to configuration management, and we started moving towards servers for persisting configurations per profile. I have used the following services to persist my configurations:
- Spring Cloud Config Server
- Hashicorp Consul
- Hashicorp Vault
Like Consul, Vault is also a distributed external configuration system, with additional features to manage secrets and protect sensitive data. A few applications might have to keep seed data confidential, and when it's exposed through a distributed configuration system it might be left open, so Hashicorp built Vault to cater to the needs of applications that maintain sensitive data. Also, like Consul, integrating Vault with Spring applications is a cakewalk through Spring Cloud Vault.
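As a minimal sketch (assuming the spring-cloud-starter-vault-config dependency on the classpath and a Vault server reachable at the URI below; the application name and token are placeholders), the bootstrap configuration boils down to a handful of properties:
spring.application.name=order-service
spring.cloud.vault.uri=http://localhost:8200
spring.cloud.vault.token=<vault-token>
spring.cloud.vault.kv.enabled=true
With this in place, secrets stored under the application's path in the KV backend are pulled into the Spring Environment at startup and can be injected with @Value like any other property.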
Please feel free to leave your comments and suggestions to improve my post. Also, let me know if you're looking for more details.
Monday, 19 August 2019
Programming a team
I have been recording a few of my technical experiences on this blog whenever I'm not lazy 🙂. This time I wanted to convey my perspective on a non-technical subject; yes, from the title you might have guessed it is team building. The core aspect of team building is to turn a group of individual contributors into a cohesive team. Have a look at the following picture depicting teamwork (I grabbed it from the internet just to portray teamwork, not to encourage cutting down a tree 🙂).
Most organizations, irrespective of size and nature, don't fail to allocate a portion of their budget towards team building activities.
But, why do companies allocate budget for team building?
Most management gurus believe that team building enables better communication and better relationships, which ultimately increase the productivity of the group (do note that management believes it increases the productivity of the group, which may or may not increase the productivity of the individuals in the team).
How is the budget most often used?
Unlike other budgets, most organizations set an upper limit on the expenditure involved for team building, and these budgets are allocated on a quarterly or annual basis. Based on the budget, and on the group/team lead/head/manager, it is usually spent on one of the following:
- Hire a team building coach (typically a day of formalities revolving around a few boring activities)
- Team Lunch/Dinner
- Outings
Personally, I feel none of these options adds the right essence required for team building (even after meticulous planning), for the following reasons:
- Poor turnout
- Lack of interest
- Not once again
- Non-collocated members
Also, I would like to highlight the trend that has prevailed these days in the name of team building; I guess you have got it, yes, it's the team lunch/dinner, where everyone in the team gets into the action by concentrating on what's on the menu and their own choice for the day, and walks out just the same as they walked in.
So, how can we effectively achieve the goal of team building?
The following are purely based on my experience, and they may vary based on culture, circumstances and other external or internal factors.
- Try to get consensus on the event day (make sure everyone, including remote workers, takes part)
- Play a motivator role so that everyone is involved in the activities
- Share your past or related experiences
- Don't stick to the same venue
- Have a mixed bag of activities (physical & mental activities)
- During team lunch/dinner try to share with others as much as possible
- While traveling to/from the venue, try not to talk about work or technical topics
- Try not to force anything on anyone
- Keep yourself and others as busy as possible
Monday, 8 July 2019
Reducing your Node Application Docker Image Size
Recently I happened to encounter memory/disk-space issues quite often with a server that hosts Nexus (a repository manager with almost universal support for all formats). On digging into the issue, the prima facie evidence we got was that the Docker image size of our Node applications was alarmingly high (~2.5 GB):
REPOSITORY | TAG | IMAGE ID | CREATED | SIZE |
app-static/ts | 1 | 248f0e845f53 | 3 weeks ago | 2.47GB |
node | 11 | 4051f768340f | 3 weeks ago | 904MB |
Though we're certainly aware that "while architecting Docker applications, keeping your images as lightweight as possible has a lot of practical benefits. It makes things faster, more portable, and less prone to breaks. They're less likely to present complex problems that are hard to troubleshoot, and it takes less time to share them between builds", we had missed this point when it came to the micro-services running on Node.
We wanted to dig further to understand which areas constituted the major chunk of our bloated Docker image. Our first step was to check the size of the folders inside the image. In our check we found the following:
- local - 631 MB
- application - 704 MB (contains node modules)
- lib - 481 MB
- share - 241 MB
...
Initially, we had never suspected the size of the node modules, as one of our primary developers (like much of the Node fraternity out there) felt it was quite normal for node modules.
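For anyone who wants to reproduce this kind of check, one simple way (assuming the image tag from the table above) is to run du inside a throwaway container:
docker run --rm app-static/ts:1 sh -c "du -sh /usr/*"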
That left us with fewer options, and we wanted to focus on why we had ended up with bloated images. I started to concentrate fully on Docker this time and built the image locally. First and foremost, I wanted to analyze the Docker image layers, as the base layer alone was around 900 MB. With the "docker history" command I could get a preview of the layers built in the process of building the application:
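The command itself takes nothing more than the image tag (here the one from the table above); each row it prints corresponds to a layer, the Dockerfile instruction that created it, and its size:
docker history app-static/ts:1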
On seeing the history of layers (8 layers had been added on top of the base node image) for the first time, I opened the application's Dockerfile and found that we had duplicated a few lines (no one to be blamed, as it had been sitting there for a while). I fixed the RUN section of my Dockerfile as follows to eliminate the additional layers created while building the image:
RUN npm install yarn -g && yarn install
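For context, before the change the same work was spread across separate, duplicated RUN instructions, roughly like this (a reconstruction for illustration, not the exact file):
RUN npm install yarn -g
RUN yarn install
RUN npm install yarn -g
RUN yarn install
Each RUN instruction produces its own layer, so chaining everything into a single RUN with && collapses them into one.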
Though the number of layers reduced, the result did not turn as positive as expected. This time again I stuck with docker history for analysis, and once again it gave me some clues:
- COPY is still leaving a significant impact (~500 MB)
- RUN is the major contributor to the impact (~800 MB)
RUN at ~800 MB? Now that I had identified that the node modules on the host take just ~400 MB, why would the Docker layer require 800 MB? From the senior developer I understood that we were handling node modules as source rather than as distributables.
Addressing either of the two would help reduce the size, and circumventing the second (the RUN layer) would give the best deal among the options we had, but it comes with its own side effects. To fix all the issues we did the following:
- run `yarn install` on the host
- copy the source & node modules to the image
- rebuild native node modules to avoid a target-environment mismatch (in my case OSX was my host and the node base image was a Linux flavor; thanks to the senior developer who foresaw this issue)
With these changes, my Docker image now makes a much smaller impression (~1.4 GB) compared to what I had when starting on this problem. Here is the final Dockerfile:
FROM node:11
WORKDIR /usr/application
COPY . /usr/application
RUN npm rebuild node-sass
HEALTHCHECK --timeout=1s --interval=1s --retries=3 \
CMD curl -s --fail http://localhost:3000/ || exit 1
CMD ["yarn", "deploy"]