Multiple GitHub Accounts with One Computer

Steps for using multiple GitHub accounts on one computer.

1. Create your SSH key (make sure you register the public key with the matching GitHub account):

ssh-keygen -t rsa -C "" -f ~/.ssh/kyel_valarisolutions

2. Assign the proper permissions to the key file so that it can be added via ssh-add later:

chmod 600 ~/.ssh/kyel_valarisolutions

3. Open (or create) the config file located under ~/.ssh and add a record for each account identity:

# Default account / personal account
Host github.com
     User git
     IdentityFile ~/.ssh/id_rsa

# Account 2 (work or personal) - the config we are adding
Host github-valariskyeljmd
     User git
     IdentityFile ~/.ssh/kyel_valarisolutions

4. Add the key to your ssh-agent:

ssh-add ~/.ssh/kyel_valarisolutions

You can then clone using the host alias from the config, e.g. git clone git@github-valariskyeljmd:<username>/<repo>.git, and SSH will pick the matching identity.

Event Sourcing for the Impatient


Event Sourcing is a design pattern that ensures all changes to application state are recorded as a series of events. Instead of storing only the current state of an entity or application, we store all of the past events, along with their data, that led to its present value.


  1. With Event Sourcing we have a strong audit trail that allows us to replay the events to reconstruct the state of the application or entity at any given point in time.
  2. It also helps with the object-relational impedance mismatch.




  • For every state change, we store that event along with its data in an Event Store.
  • The event store is usually an RDBMS, but it can also be NoSQL storage; for example, you can leverage AWS DynamoDB's streams/event triggers to publish events to different microservices.
  • The event store then has the opportunity to send events to another microservice or system (see CQRS).







| type          | version | data  | published |
| OrderPlaced   | 1       | { … } | False     |
| OrderAccepted | 1       | { … } | False     |


Once the events have been stored in the DB, they can then be published to different systems via message brokers like Apache Kafka, and the subscribers can store the data in whatever form they want. One example is a materialized view optimized for reading.
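To make the replay idea concrete, here is a minimal in-memory sketch. The Event/AccountEventStore classes and the deposit/withdraw events are my own illustration (the article's domain uses order events instead): state is never stored directly; it is rebuilt by folding over the stored events.

```java
import java.util.ArrayList;
import java.util.List;

// A stored event: its type plus a (here, simplified) data payload.
class Event {
    final String type;
    final int amount;
    Event(String type, int amount) { this.type = type; this.amount = amount; }
}

class AccountEventStore {
    private final List<Event> events = new ArrayList<>();

    // Every state change is appended as an event; current state is never stored.
    void append(Event e) { events.add(e); }

    // Replaying the full event history rebuilds the present balance.
    int replayBalance() {
        int balance = 0;
        for (Event e : events) {
            if (e.type.equals("Deposited")) balance += e.amount;
            else if (e.type.equals("Withdrawn")) balance -= e.amount;
        }
        return balance;
    }
}
```

Replaying a prefix of the list instead of the whole history gives you the state at any past point in time, which is exactly the audit-trail benefit described above.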

Event Sourcing vs Traditional



Fig 1 shows an example of how we store data without event sourcing: we store its state, not the events that led to it.

Screenshots are taken from

Event Sourcing


Fig 1. Order Service where each event is saved in an Event Store (can be any DBMS). Changes are published as events, and other services subscribe to them for their own processing. Subscribing to events can be done via Kafka, or via the underlying event store if it supports publishing/streaming events.

Screenshots are taken from

Event Sourcing Frameworks and Libraries for Each Language

1.) Java –

2.) Python –

More Details:


Software Versioning: Semantic Versioning and Calendar Versioning in a nutshell



A.) Semantic Versioning or SemVer

Semantic versioning follows the format MAJOR.MINOR.PATCH:

  • MAJOR version when you make incompatible API changes, or add a microservice that introduces a new business domain and logic.
  • MINOR version when you add functionality in a backwards-compatible manner, or update existing infrastructure components.
  • PATCH version when you make backwards-compatible bug fixes.

The definitions above have been slightly altered to match the existing architecture of software built on microservices.

Example Schemes.


1.2.4 represents MAJOR.MINOR.PATCH, where 1 is the major release with breaking API changes, 2 is the minor version where we added functionality and updated existing APIs, and 4 means we have addressed bugs found in that minor version.
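As a rough sketch, precedence between two such versions can be compared segment by segment. This is illustrative only; the real SemVer specification also defines pre-release and build-metadata rules not handled here.

```java
// Minimal SemVer-style version with segment-by-segment precedence.
class SemVer implements Comparable<SemVer> {
    final int major, minor, patch;

    SemVer(String version) {
        // Expects the plain MAJOR.MINOR.PATCH form, e.g. "1.2.4".
        String[] parts = version.split("\\.");
        major = Integer.parseInt(parts[0]);
        minor = Integer.parseInt(parts[1]);
        patch = Integer.parseInt(parts[2]);
    }

    @Override
    public int compareTo(SemVer other) {
        if (major != other.major) return Integer.compare(major, other.major);
        if (minor != other.minor) return Integer.compare(minor, other.minor);
        return Integer.compare(patch, other.patch);
    }
}
```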

B.) Calendar Versioning or CalVer

Calendar Versioning has a similar format to Semantic Versioning. The only difference lies in the definitions and construction of what makes up MAJOR, MINOR, and MICRO. It has the following format:

  • MAJOR – the major segment is the most common calendar-based component, often the year.
  • MINOR – the second number in the version.
  • MICRO – as in Semantic Versioning, this is often referred to as the patch.

Example Schemes.


4.10.0 represents a three-segment CalVer scheme with a short year and a zero-padded month: YY.0M.MICRO. It indicates that the version was released in October 2004, with 0 as the micro number, hence 4.10.0.


2005.05.05 represents a three-segment CalVer scheme with a full year, zero-padded month, and zero-padded day: YYYY.0M.0D, where M is the month and D is the day. This means the release was made on May 5, 2005.
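The two schemes above can be generated with java.time. This is a sketch with helper names of my own; the tag formats follow the YY.0M.MICRO and YYYY.0M.0D conventions described above.

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

class CalVer {
    // YYYY.0M.0D: full year, zero-padded month, zero-padded day.
    static String fullDateTag(LocalDate date) {
        return date.format(DateTimeFormatter.ofPattern("yyyy.MM.dd"));
    }

    // YY.0M.MICRO: short year without padding (2004 -> 4),
    // zero-padded month, plus a micro counter.
    static String shortYearTag(LocalDate date, int micro) {
        return (date.getYear() % 100) + "."
                + String.format("%02d", date.getMonthValue()) + "." + micro;
    }
}
```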

Locally Debugging AWS Lambdas Written in Node.js

I have recently joined a project that utilizes a serverless architecture leveraging the whole AWS ecosystem (all the bells and whistles).

Your development workflow in a serverless architecture is not the same as in a microservice-based architecture, or a plain monolith.

The typical workflow is as follows

  • Write your fix
  • Compile it
  • Add a debugger
  • Replicate the error

However, that is not the case when it comes to serverless. With serverless you do the following:

  • Write your fix
  • Commit your fix
  • Wait for your fix to be uploaded to AWS Lambda
  • Replicate the error
  • Check the logs/Cloudwatch for error messages

(By design you should always add essential logging to your application)

The workflow may not seem bad at first, but imagine going through that lengthy debugging process just to check what the code is actually doing; not to mention it can take a minute for your Lambda to update. This shouldn't be an issue if you have well-defined abstractions around your Lambda and/or well-written unit tests, since you can easily mock AWS-specific calls. But really, we do not live in such a world.

Hence, my search for ways to speed up my development workflow with Lambdas. You will need:


  • Node.js 8.9.1
  • Node Package Manager (npm) 5.5.1
  • IDE (for this example, we will use Visual Studio Code)
  • Lambda-local

We start off by installing lambda-local

npm install -g lambda-local

Then we write our small lambda


'use strict';

// A simple hello world Lambda function
exports.handler = (event, context, callback) => {
    console.log('LOG: Name is ' + event.name);
    callback(null, "Hello " + event.name);
};



This will be the request (event.json) that we will pass to the lambda.
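The original event.json is not shown here; judging from the log output below ("Name is Kyel"), it presumably carries a name field, something like:

```json
{
    "name": "Kyel"
}
```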

That’s basically it.

Thanks to lambda-local, we can easily run our AWS Lambda without actually running it inside the AWS ecosystem, by executing the following inside the directory where our index.js, package.json, and event.json are located:

lambda-local -l index.js -h handler -e event.json

Upon executing the command, we should see something similar to this:

info: START RequestId: 99fc1844-880d-84c2-1cbd-7ba34e8e1cad
LOG: Name is Kyel
info: End - Message
info: ------
info: Hello Kyel
info: ------
info: Lambda successfully executed in 41ms.



Debugging is the same as running; the only difference is that this time we execute the following:

node --inspect "<path to where the lambda-local is installed>" -l index.js -h handler -e event.json

If you are like me on a Windows machine, you will run it as follows:

node --inspect-brk %USERPROFILE%\AppData\Roaming\npm\node_modules\lambda-local\bin\lambda-local -l index.js -h handler -e event.json

Upon running expect to see something similar to this


Debugger listening on ws://
For help see

The log will only progress once you successfully attach your IDE's debugger. Once attached, you can add breakpoints and step through your code.

Bonus if you are on Visual studio code

If you are running VS Code, you can also change the launch.json file to an attach configuration like this (a minimal example; adjust the port to match the debugger output):

{
    "type": "node",
    "request": "attach",
    "name": "Launch Program",
    "port": 9229
}

Upon running, you will now be able to hit your breakpoints.

Alternatives to this approach:
AWS SAM – it is still in beta, but it is worth looking into. As of writing, I am experiencing problems with AWS SAM on a Windows 10 machine, hence I went with the approach above.

Deploying Spring Boot Applications on RedHat’s OpenShift

I’ve been deploying the majority of my Spring Boot powered applications on RedHat’s OpenShift for its simplicity and flexibility.

Based on the Spring Boot documentation, deploying to OpenShift should be as easy as 1, 2, 3.

**Taken From The Documentation**

Ensure Java and your build tool are installed remotely, e.g. using a pre_build hook (Java and Maven are installed by default, Gradle is not)

Use a build hook to build your jar (using Maven or Gradle), e.g.

mvn package -s $OPENSHIFT_DATA_DIR/settings.xml -DskipTests=true

Add a start hook that calls java -jar …​

nohup java -jar target/*.jar --server.port=${OPENSHIFT_DIY_PORT} --server.address=${OPENSHIFT_DIY_IP} &

Use a stop hook (since the start is supposed to return cleanly), e.g.

PID=$(ps -ef | grep java.*\.jar | grep -v grep | awk '{ print $2 }')
if [ -z "$PID" ]
then
    client_result "Application is already stopped"
else
    kill $PID
fi
However, before we configure the hooks above, we must first point our M2 repository to a writable directory:

echo -e  "<settings>\n  <localRepository>$OPENSHIFT_DATA_DIR</localRepository>\n</settings>\n" > settings.xml
 mvn install -s $OPENSHIFT_DATA_DIR/settings.xml

And voila, you’re done. You can now easily deploy your Spring Boot powered applications on OpenShift’s DIY cartridge.

Microservices with Spring

The purpose of this article is to provide examples and demonstrate building microservice applications using common patterns with Spring Boot, Spring Cloud Netflix OSS (Zuul, Eureka, and Feign), Hibernate, and JJWT.

The project has been taken from one of my previous projects, originally built as a monolith. I will not be including the whole application, only some components of it.

Source code can be found here






All services will have their own database (Identity Management Service, Ticketing Service, and Customer Service).

Building Our Services — Functional Services

Functional services are services that provide the core business logic of our application.

Identity Management Service (authentication-service)

The Identity Management Service handles token issuance and persists user information such as roles, usernames, and passwords. We can roll our own or use an existing identity management API like Auth0.

We start off by placing annotations on our main class:

@SpringBootApplication
@EnableEurekaClient
public class FriflowAdminApplication {
   public static void main(String[] args) {
      SpringApplication.run(FriflowAdminApplication.class, args);
   }
}

The annotation @EnableEurekaClient specifies that our application is a client of an existing Eureka server, where it will automatically register itself as a service.

We can configure its settings in our application.yml:

eureka:
  client:
    serviceUrl:
      defaultZone: ${}/eureka/

Side Note:

As long as Spring Cloud Netflix and Eureka Core are on the classpath, any Spring Boot application with @EnableEurekaClient will try to contact a Eureka server.

The eureka.client.serviceUrl.defaultZone property is the address of our service registry, where our Eureka client (the Identity Management Service) will automatically register itself.

To name our service, we specify it in our bootstrap.yml:

spring:
  application:
    name: authentication-service

Core Business Logic

The core logic of this service lies in this package:


Once we’ve validated the user who’s requesting access to our API, we then issue an authentication token using JWT, which can be found inside the JwtTokenIssuerService:

public String issueToken(String userName) {

    final long nowMillis = System.currentTimeMillis();
    final long expMillis = nowMillis + (ONE_MINUTE_IN_MILLIS * TOKEN_DURATION_IN_MIN);

    byte[] apiKeySecretBytes = DatatypeConverter.parseBase64Binary(key);
    Key signingKey = new SecretKeySpec(apiKeySecretBytes, signatureAlgorithm.getJcaName());

    return Jwts.builder()
            .setSubject(userName)
            .setIssuedAt(new Date(nowMillis))
            .setExpiration(new Date(expMillis))
            .signWith(signatureAlgorithm, signingKey)
            .compact();
}

You can learn more about JWT here, and JJWT here.

We also have base REST controllers that perform our CRUD operations and some light information processing, within these packages:


Workflow Management Service (ticketing-service)

We can think of the ticketing service as a small workflow management system that issues various types of tickets, such as quotation tickets containing product inquiries, pricing, and materials used. This is where most of our business processing occurs.

We start off again with our basic configuration

@SpringBootApplication
@EnableJpaRepositories(basePackages = {"org.brightworks.friflow.repo"})
@EntityScan(basePackages = { /* entity packages */ })
public class Application {

    public static void main(String[] args) {
        new SpringApplicationBuilder(Application.class).run(args);
    }
}
Ticketing Service/Workflow Management Service Domain ERD


Some of our API endpoints/controllers are specified under this package:


Building our Services — Infrastructure Services

There are several patterns in distributed systems that can aid us in making our functional/core services work together. Spring Cloud provides tools to implement some of those patterns.

Service Registry and Service Discovery

We will be using Eureka as our service registry, where all of our services will register themselves. Another way to think about a service registry is as a phone book of our existing services.

It’s now easier to set up our Service Discovery Code thanks to Spring Cloud Eureka.

@SpringBootApplication
@EnableEurekaServer
public class EurekaServerApplication {
   public static void main(String[] args) {
      SpringApplication.run(EurekaServerApplication.class, args);
   }
}

We configure it as follows.

server:
  port: ${PORT:8761}

eureka:
  client:
    registerWithEureka: false
    fetchRegistry: false
  server:
    waitTimeInMsWhenSyncEmpty: 0

We’re simply saying that we will run this Eureka server on port 8761, and eureka.client.registerWithEureka: false means the Eureka server will not register itself as a client.

All of our services register themselves, so there is no need for more configuration than what we did above. Upon running, you’ll see all of the services registered with Eureka.



Building our Services — Infrastructure Services. Putting it all together

Edge Service/Api Gateway

The edge service will act as the main entry point for clients. Its primary purpose is to aggregate results from different services, act as a proxy, and perform authorization using JWT. We will be using Feign, Zuul, and Ribbon for this purpose.

It is suggested to implement OAuth2 with JWT, but for simplicity we will only use JJWT and a filter component to implement authorization.

We start by specifying our application config:

@SpringBootApplication
@EnableFeignClients
@EnableDiscoveryClient
@EnableZuulProxy
public class FriflowApiGatewayApplication {

   private String jwtKey;

   public static void main(String[] args) {
      SpringApplication.run(FriflowApiGatewayApplication.class, args);
   }

   @Bean
   public FilterRegistrationBean filterApiBean() {
      FilterRegistrationBean registrationBean = new FilterRegistrationBean();
      ApiAccessFilter securityFilter = new ApiAccessFilter(jwtKey);
      registrationBean.setFilter(securityFilter);
      return registrationBean;
   }
}
@EnableFeignClients — Annotation responsible for scanning interface if they are annotated with @FeignClient

@EnableDiscoveryClient — Annotation responsible for activating whichever Discovery client available in our classpath(In this case, Netflix Eureka Discovery Client)

@EnableZuulProxy — Turns this application into a reverse proxy that forwards requests to other services.

The filterApiBean is the bean we use to filter unauthorized requests. It basically checks whether the request carries a JWT token and whether that token is still valid.

Forwarding the requests to appropriate services — Identity Management Api

Our API gateway will now be the main entry point for our clients (e.g. mobile devices, another web app, etc.).

In order to forward requests to the ticketing-service, we first need to retrieve an access token from our identity management service.
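The route configuration itself is not shown in the original post; a typical Zuul route for this setup (property names from Spring Cloud Netflix, values assumed) might look like:

```yaml
zuul:
  routes:
    authentication-service:
      path: /authentication-service/**
      serviceId: authentication-service
```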


In the configuration above, we proxy all requests coming to /authentication-service/ on to the authentication-service. Notice how we did not specify the URL of our authentication-service (identity management API): thanks to Eureka and Ribbon, requests are automatically forwarded to an existing/available instance.

To retrieve an access token, we can send a POST request to http://localhost:8082/authentication-service/login



Once we have entered a valid username and password, we will receive a token.

Forwarding the requests to appropriate services — Ticket Management API

For this example, although we could use the Zuul proxy as we did above, we will use a Feign client instead. This is useful when we want to aggregate results from different services.

@FeignClient("ticketing-service")
public interface QuotationClient {

    @RequestMapping(method = RequestMethod.GET, value = "/quotations/dummy")
    QuotationDTO getDummy();

    @RequestMapping(method = RequestMethod.GET, value = "/quotations/{ticketNo}")
    QuotationDTO getByTicket(@PathVariable("ticketNo") String ticketNo);

    @RequestMapping(method = RequestMethod.POST, value = "/quotations")
    QuotationDTO save(@RequestBody QuotationDTO quotation);

    @RequestMapping(method = RequestMethod.PUT, value = "/quotations")
    QuotationDTO update(@RequestBody QuotationDTO quotation);
}

The instructions for running each individual service, along with the code, are available on GitHub.

There’s still a lot to improve in this sample project (e.g. security, OAuth2, etc.), but hopefully this article gave you a ground-up view of migrating/building your applications using the microservice design with Spring Boot.

Side note: the code was taken from one of my old projects that was built as a monolith. Some coding conventions/approaches might be outdated and carry technical debt; I will try to update and clean up the code as soon as I can.

Improvements and suggestions are welcome 🙂

References and further reading:

Reducing Java Boilerplate code with Lombok (With Eclipse Installation)

I was looking for ways to reduce my classes’ LOC. Most of the time, setters and getters lengthen my code and make it hard to see the important parts, so I decided to go on a journey to reduce my boilerplate code.

During this quest, I found Project Lombok.

In a nutshell, Project Lombok replaces your common Java boilerplate code with simple annotations.

One of the things I like about Project Lombok is their @Setter and @Getter annotations.

It shortens your class LOC; say goodbye to those getBar()/setBar(String bar) pairs.

Installation in Eclipse

Adding the jar alone is not enough for Lombok to work. What we need to do is to ‘install’ Lombok:

java -jar lombok.jar

We run this command in our terminal; make sure that when you execute lombok.jar you are in the directory where it is located. Upon running the command, a screen will show up.


Click ‘Install/update’.

Right after that, check your IDE’s .ini or configuration file; the following parameters should have been added (below -vmargs for Eclipse):
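The screenshot of the added parameters is missing here. On a typical install, the Lombok installer appends something like the following below -vmargs (the exact path depends on where lombok.jar lives):

```
-javaagent:lombok.jar
```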


Once that is done, we can add lombok.jar to our projects.

A Simple Example Using Lombok.

Here’s a simple example using Lombok (taken from my personal project):
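The original screenshot of the class is missing, so as a stand-in, here is a sketch of what @Getter/@Setter buy you. The Person class is my own illustration, not the class from the post; the annotated version in the comment requires lombok.jar on the classpath.

```java
// What Lombok generates for us, written out by hand. With Lombok,
// the whole body below collapses to:
//
//   @Getter @Setter
//   public class Person {
//       private String name;
//       private int age;
//   }
class Person {
    private String name;
    private int age;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }
}
```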


As you can see in the Outline view, the @Getter and @Setter annotations created the setters and getters for our class’s instance variables.

Other Concerns And references:


I just found out that they have support for IntelliJ IDEA. Check out the plugin here.

Reversed Binary (The Spotify Tech Puzzle Reggae Quiz)

I was lurking around Reddit yesterday and I found this. To quote, the task was:

Your task will be to write a program for reversing numbers in binary. For instance, the binary representation of 13 is 1101, and reversing it gives 1011, which corresponds to number 11.

I would have to admit, I did a little refreshing of my binary skills (it has been a year and a half since I last fiddled with binary).

So here is my solution. I know it is not that elegant, but it did solve the problem (yes, Spotify’s automated system checked it and considered it valid).

public class ReverseBinary {

	public String reversedIntToBinary(int val) {
		int value = val;
		StringBuilder bldr = new StringBuilder();

		while (value != 0) {
			int remainder = value % 2;
			value = value / 2;
			bldr.append(remainder);
		}
		return bldr.toString();
	}

	public int toDecimal(String bin) {
		char[] binString = bin.toCharArray();

		int starting = 0;

		for (int i = 0; i < binString.length; i++) {
			int tempoVal = starting * 2
					+ Character.getNumericValue(binString[i]);
			starting = tempoVal;
		}
		return starting;
	}

	public int reversedBinary(int val) {
		String bin = reversedIntToBinary(val);
		return toDecimal(bin);
	}
}
Now, why am I posting this? I am encouraging each and every one of you to try out these kinds of challenges. One thing I liked about this one is the automated testing, which will automatically detect your errors and give you a clue about what went wrong.

Side Note:
It is amazing how much information the internet holds; it paves the way for us to learn, develop, and discover new things all on our own.

Ugghhh Palindrome!

In my local programming group (consisting mostly of freshman and sophomore computer science students), the palindrome has been quite an issue. Yes, the palindrome is an old programming exercise, but for them it is new (I have no idea why; they were probably daydreaming when it was discussed in class!). To keep a long story short, they asked me how I would create a simple palindrome checker, and since I do not really want to entertain a lot of questions on my Facebook and Garena accounts, I’ve decided to post it here!

Posting it here also hits two birds with one stone. Why? I am preparing for my internship interview, so it is beneficial to post it here as a reviewer for myself in case I forget or need a refresher.

Okay, I’ve whipped up a ten-minute solution for a palindrome checker (five minutes per approach; I’ve created two approaches).

First Approach: Reverse the Word and compare it to the original word!

Just checking this code out should already make it understandable:

	private static String reverseWord(String word){
		char[] original = word.toCharArray();
		StringBuilder reverse = new StringBuilder();
		for(int i = original.length - 1; i >= 0; i--){
			reverse.append(original[i]);
		}
		return reverse.toString();
	}

	public static boolean isWordAPalindrome(String word){
		return (word.equals(reverseWord(word)));
	}

So what are we doing? We simply pass the argument word from isWordAPalindrome to the reverseWord method, and then we check whether the reversed word (returned by reverseWord(word)) is equal to the original word.

Second Approach: Traversing the word forward and backward!

	public static boolean isPalindome(char[] word){
		int forward = 0;
		int backward = word.length - 1;

		while (forward < backward) {
			if(word[forward] != word[backward]){
				return false;
			}
			forward++;
			backward--;
		}
		return true;
	}

Now, please put in some effort to understand this one 😉

I hope I’ve shed a light on your problem. Cheers!

Alternatives for Hibernate buildSessionFactory()

With Hibernate 4, the buildSessionFactory() method is deprecated. It still works, but if you are one of those people who try to avoid deprecated APIs, here is an alternative way to create your SessionFactory.

In Hibernate 4, buildSessionFactory(ServiceRegistry serviceRegistry) is the replacement for the deprecated buildSessionFactory(). I tried googling and searching the Hibernate 4 documentation for an example of buildSessionFactory(ServiceRegistry serviceRegistry), but sadly I could not find any; the getting started guide still uses the old buildSessionFactory() method.

Here is an alternative to the deprecated buildSessionFactory():

public static SessionFactory configureSessionFactory() {

    SessionFactory sessionFactory = null;
    try {
        Configuration configuration = new Configuration();
        configuration.configure();

        ServiceRegistry serviceRegistry = new ServiceRegistryBuilder().applySettings(
                configuration.getProperties()).buildServiceRegistry();

        sessionFactory = configuration.buildSessionFactory(serviceRegistry);
    } catch (HibernateException hbe) {
        hbe.printStackTrace();
    }

    return sessionFactory;
}
This code will work provided your hibernate.cfg.xml is in the default location on the classpath. However, if you placed your hibernate.cfg.xml in another directory, all you have to do is replace the no-argument configure() call with configure("path/to/hibernate.cfg.xml").