Multiple GitHub Accounts with One Computer

Steps for using multiple GitHub accounts on one computer.

1. Create your SSH key (make sure you register this key with your GitHub account). When prompted for a file name, save it under a distinct name such as ~/.ssh/kyel_valarisolutions, or pass it directly with -f:

ssh-keygen -t rsa -C "kyel@valarisolutions.com" -f ~/.ssh/kyel_valarisolutions

2. Restrict the key file's permissions so that it can be added to ssh-add later on:

chmod 600 ~/.ssh/kyel_valarisolutions

3. Once that is done, open (or create) the config file located under ~/.ssh and add a record for each account identity, one Host entry per record.

# Default/personal account
Host github.com
     HostName github.com
     User git
     IdentityFile ~/.ssh/id_rsa

# Account 2 (work or personal) - the config we are adding
Host github-valariskyeljmd
     HostName github.com
     User git
     IdentityFile ~/.ssh/kyel_valarisolutions

4. Add the key to your ssh-agent:

ssh-add ~/.ssh/kyel_valarisolutions
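
With the config in place, you talk to the second account through the host alias instead of github.com. A quick sanity check and an example clone (the repository path below is a placeholder, not from the original post):

# Confirm the alias authenticates as the second account
ssh -T git@github-valariskyeljmd

# Clone using the alias; replace your-org/your-repo with a real repository
git clone git@github-valariskyeljmd:your-org/your-repo.git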

Event Sourcing for the Impatient

What

Event Sourcing is a design pattern in which all changes to application state are recorded as a series of events. Instead of storing only the current state of an entity or application, we store all of the past events, along with their data, that led to its present value.

Why

  1. With Event Sourcing we get a strong audit trail that allows us to replay the events and reconstruct the state of the application or entity at any given point in time.
  2. It also sidesteps the object-relational impedance mismatch, since we persist events rather than mapped object state.

 

How

 

  • For every state change, we store that event along with its data in an Event Store (event storage).
  • The Event Store is usually an RDBMS, but it can also be NoSQL storage; for example, you can leverage AWS DynamoDB Streams to publish events to different microservices.
  • The Event Store then has the opportunity to forward events to another microservice or system (see CQRS).


+---------------+---------+--------+-----------+
| type          | version | Data   | published |
+---------------+---------+--------+-----------+
| OrderPlaced   | 1       | { … }  | False     |
| OrderAccepted | 1       | { … }  | False     |
+---------------+---------+--------+-----------+

 

Once the events have been stored in the database, they can be published to different systems via message brokers like Apache Kafka, and the subscribers can store that data in whatever form they want. One example is a materialized view optimized for reading.
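
To make the append-then-replay flow concrete, here is a minimal sketch in Java. The names (EventRecord, InMemoryEventStore) are illustrative, not from any particular framework:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative event record mirroring the table above
class EventRecord {
    final String type;
    final int version;
    final String data;
    final boolean published;

    EventRecord(String type, int version, String data, boolean published) {
        this.type = type;
        this.version = version;
        this.data = data;
        this.published = published;
    }
}

// Append-only store: state is never updated in place, only new events are added
class InMemoryEventStore {
    private final List<EventRecord> log = new ArrayList<>();

    void append(String type, int version, String data) {
        log.add(new EventRecord(type, version, data, false));
    }

    // Replaying the log is how current state (or state at any point in time) is rebuilt
    List<EventRecord> replay() {
        return Collections.unmodifiableList(log);
    }
}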

Event Sourcing vs Traditional

Traditional


Fig. 1 shows an example of how we store data without event sourcing: we store the entity's state, not the events that led to it.

Screenshots are taken from Eventuate.io

Event Sourcing


Fig. 2. An Order Service where each event is saved in an Event Store (which can be any DBMS). Changes are published as events, and other services subscribe to them for their own processing. Subscribing services can consume events via Kafka, or directly from the underlying event store if it supports publishing/streaming events.

Screenshots are taken from Eventuate.io

Event Sourcing Frameworks and Libraries by Language

1.) Java – https://axoniq.io/

2.) Python – https://eventsourcing.readthedocs.io/en/stable/topics/introduction.html

More Details:

https://docs.microsoft.com/en-us/azure/architecture/patterns/event-sourcing

 

Software Versioning: Semantic Versioning and Calendar Versioning in a nutshell

Semantic Versioning and Calendar Versioning

 

A.) Semantic Versioning or SemVer

Semantic versioning uses the following format:

MAJOR.MINOR.PATCH
  • MAJOR version when you make incompatible API changes, or add a microservice that introduces a new business domain and logic,
  • MINOR version when you add functionality in a backwards-compatible manner, or update existing infrastructure components,
  • PATCH version when you make backwards-compatible bug fixes.

The definitions above have been slightly altered to match the architecture of software built on microservices.

Example Schemes.

1.2.4

This represents MAJOR.MINOR.PATCH, where 1 is the major release with breaking API changes, 2 is the minor version in which we added and updated existing APIs, and 4 is the patch version that addressed bugs found in that minor version.
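
As a quick illustration of the bump rules, here is a small sketch in Java; the bump helper and its change labels are mine, not part of the SemVer spec:

// Hypothetical helper illustrating SemVer bump rules
static String bump(String version, String change) {
    String[] p = version.split("\\.");
    int major = Integer.parseInt(p[0]);
    int minor = Integer.parseInt(p[1]);
    int patch = Integer.parseInt(p[2]);
    switch (change) {
        case "breaking": return (major + 1) + ".0.0";                    // incompatible API change
        case "feature":  return major + "." + (minor + 1) + ".0";        // backwards-compatible feature
        default:         return major + "." + minor + "." + (patch + 1); // backwards-compatible bug fix
    }
}

// bump("1.2.4", "breaking") -> "2.0.0"
// bump("1.2.4", "feature")  -> "1.3.0"
// bump("1.2.4", "bugfix")   -> "1.2.5"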

B.) Calendar Versioning or CalVer

Calendar Versioning has a format similar to semantic versioning's. The difference lies in the definitions of what makes up MAJOR, MINOR, and MICRO. It has the following format:

MAJOR.MINOR.MICRO
  • MAJOR – the first number in the version; in CalVer this is the segment most commonly derived from the calendar (e.g. the year).
  • MINOR – the second number in the version; often the month.
  • MICRO – the third number; like semantic versioning's PATCH, it is also referred to as the patch segment.

Example Schemes.

4.10.0

This represents a three-segment CalVer scheme with a short year and zero-padded month: YY.0M.MICRO. It indicates a release made in October 2004 (year 4, month 10), with 0 as the micro number.

5.05.25

This represents a three-segment CalVer scheme with a short year, zero-padded month, and zero-padded day: YY.0M.0D, where 0M is the month and 0D is the day. It indicates a release made on May 25, 2005.
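
A small sketch in Java of deriving a YY.0M.0D CalVer string from a date (the helper name is mine):

import java.time.LocalDate;

// Hypothetical helper: formats a date using the YY.0M.0D scheme above
static String calver(LocalDate date) {
    int shortYear = date.getYear() % 100; // 2005 -> 5
    return String.format("%d.%02d.%02d", shortYear, date.getMonthValue(), date.getDayOfMonth());
}

// calver(LocalDate.of(2005, 5, 25)) -> "5.05.25"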

Locally Debugging AWS Lambdas Written in Node.js

I recently joined a project that utilizes a serverless architecture, leveraging the whole AWS ecosystem (all the bells and whistles).

Your development workflow on a serverless architecture is not the same as on a microservice-based architecture, or a plain monolith.

The typical workflow is as follows:

  • Write your fix
  • Compile it
  • Attach a debugger
  • Replicate the error

However, that is not the case when it comes to serverless. With serverless, you do the following:

  • Write your fix
  • Commit your fix
  • Wait for your fix to be uploaded to AWS Lambda
  • Replicate the error
  • Check the logs/CloudWatch for error messages

(By design you should always add essential logging to your application)

The workflow may not seem bad at first, but imagine going through that lengthy debugging process just to check what the code is actually doing, not to mention it can sometimes take a minute for your Lambda to update. This wouldn't be an issue if you had well-defined abstractions around your Lambda and/or well-written unit tests, since you could easily mock AWS-specific calls. But really, we do not live in such a world.

Hence my search for ways to speed up my development workflow with Lambdas.

Pre-requisites

  • Node.js 8.9.1
  • Node Package Manager(npm) 5.5.1
  • IDE (for this example, we will use Visual Studio Code)
  • Lambda-local

We start off by installing lambda-local

npm install -g lambda-local

Then we write our small lambda

index.js

'use strict';

// A simple hello world Lambda function
exports.handler = (event, context, callback) => {
    console.log('LOG: Name is '+event.name);
    callback(null, "Hello "+event.name);
}

event.json

{
  "name":"Kyel"
}

This is the request payload that we will pass to the Lambda.

That’s basically it.

Thanks to lambda-local, we can easily run our AWS Lambda without actually running it inside the AWS ecosystem, by executing the following inside the directory where our index.js, package.json, and event.json are located:

lambda-local -l index.js -h handler -e event.json

Upon executing the command, we should see something similar to this:

info: START RequestId: 99fc1844-880d-84c2-1cbd-7ba34e8e1cad
LOG: Name is Kyel
info: End - Message
info: ------
info: Hello Kyel
info: ------
info: Lambda successfully executed in 41ms.

 

Debugging

Debugging is the same as running; the only difference is that this time we execute the following:

node --inspect "<path to where the lambda-local is installed>" -l index.js -h handler -e event.json

If, like me, you are on a Windows machine, you will run it as follows:

node --inspect-brk %USERPROFILE%\AppData\Roaming\npm\node_modules\lambda-local\bin\lambda-local -l index.js -h handler -e event.json

Upon running it, expect to see something similar to this:

 

Debugger listening on ws://127.0.0.1:9229/39da7fef-b5fb-4c88-a393-311677c6aa98
For help see https://nodejs.org/en/docs/inspector

The log will only progress once you are successfully able to attach your ide’s debugger. Once you are able to, you can now add breakpoints and slowly navigate your code

Bonus: if you are on Visual Studio Code

If you are running VS Code, what you can also do is change the launch.json file to this:


{
   "version": "0.2.0",
   "configurations": [
      {
         "type": "node",
         "request": "launch",
         "name": "Launch Program",
         "program": "C:/Users/Kyel/AppData/Roaming/npm/node_modules/lambda-local/bin/lambda-local",
         "cwd": "${workspaceFolder}",
         "args": [
            "-l",
            "${workspaceFolder}\\index.js",
            "-e",
            "${workspaceFolder}\\event.json"
         ]
      }
   ]
}

Upon running, you will now be able to see your breakpoints being hit.

Alternatives to this approach:
AWS SAM – it is still in beta, but it is worth looking into. As of writing, I am experiencing problems running AWS SAM on a Windows 10 machine, hence I went with the approach mentioned above.

Deploying Spring Boot Applications on Red Hat's OpenShift

I've been deploying the majority of my Spring Boot-powered applications on Red Hat's OpenShift for its simplicity and flexibility.

Based on the Spring Boot documentation, deploying to OpenShift should be as easy as 1, 2, 3: http://docs.spring.io/spring-boot/docs/current/reference/html/cloud-deployment-openshift.html

Taken from the documentation:

Ensure Java and your build tool are installed remotely, e.g. using a pre_build hook (Java and Maven are installed by default, Gradle is not)

Use a build hook to build your jar (using Maven or Gradle), e.g.

#!/bin/bash
cd $OPENSHIFT_REPO_DIR
mvn package -s $OPENSHIFT_DATA_DIR/settings.xml -DskipTests=true

Add a start hook that calls java -jar …

#!/bin/bash
cd $OPENSHIFT_REPO_DIR
nohup java -jar target/*.jar --server.port=${OPENSHIFT_DIY_PORT} --server.address=${OPENSHIFT_DIY_IP} &

Use a stop hook (since the start is supposed to return cleanly), e.g.

#!/bin/bash
source $OPENSHIFT_CARTRIDGE_SDK_BASH
PID=$(ps -ef | grep java.*\.jar | grep -v grep | awk '{ print $2 }')
if [ -z "$PID" ]
then
    client_result "Application is already stopped"
else
    kill $PID
fi
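
For reference, in a DIY cartridge these hooks are plain executable scripts committed with your application, conventionally under .openshift/action_hooks/ (a sketch of the layout, assuming the standard OpenShift v2 cartridge structure):

.openshift/
  action_hooks/
    pre_build   # optional: install extra tooling
    build       # builds the jar with Maven
    start       # starts the app with nohup java -jar
    stop        # kills the running java process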

However, before we configure the hooks above, we must first reconfigure our Maven repository (m2 repository) to point to a writable directory:

cd $OPENSHIFT_DATA_DIR
echo -e "<settings>\n  <localRepository>$OPENSHIFT_DATA_DIR</localRepository>\n</settings>\n" > settings.xml
mvn install -s $OPENSHIFT_DATA_DIR/settings.xml

And voilà, you're done. You can now easily deploy your Spring Boot-powered applications on OpenShift's DIY cartridge.

Microservices with Spring

The purpose of this article is to provide examples and demonstrate building a microservice application using common patterns, with Spring Boot, Spring Cloud Netflix OSS (Zuul, Eureka, and Feign), Hibernate, and JJWT.

The project is taken from one of my previous projects, which I originally built as a monolith. I will not be including the whole application, only some of its components.

Source code can be found here

Architecture

 


 

Notes:

Each service will have its own database (Identity Management Service, Ticketing Service, and Customer Service).

Building Our Services — Functional Services

Functional services are services that provide the core business logic of our application.

Identity Management Service (authentication-service)

The Identity Management Service handles token issuance and persists user information such as roles, usernames, and passwords. We can roll our own or use an existing identity management API like Auth0.

We start off by placing annotations on our main class:

@SpringBootApplication
@EnableEurekaClient
public class FriflowAdminApplication {
   public static void main(String[] args) {
      SpringApplication.run(FriflowAdminApplication.class, args);
   }
}

The @EnableEurekaClient annotation specifies that our application is a client of an existing Eureka server, where it will automatically register itself as a service.

We can configure its settings in our application.yml:

eureka:
  client:
    serviceUrl:
      defaultZone: ${vcap.services.eureka-service.credentials.uri:http://127.0.0.1:8761}/eureka/

Side Note:

As long as Spring Cloud Netflix and Eureka Core are on the classpath, any Spring Boot application annotated with @EnableEurekaClient will try to contact a Eureka server.

The eureka.client.serviceUrl.defaultZone property is the address of our service registry, where our Eureka client (the Identity Management Service) will automatically register itself.

To name our service, we specify it in our bootstrap.yml:

spring:
  application:
    name: authentication-service

Core Business Logic

The core business logic of this service lies in a single package.

Once we've validated the user who is requesting access to our API, we issue an authentication token using JWT; the logic can be found inside JwtTokenIssuerService:

@Override
public String issueToken(String userName) {

    final long nowMillis = System.currentTimeMillis();
    final long expMillis = nowMillis + (ONE_MINUTE_IN_MILLIS * TOKEN_DURATION_IN_MIN);

    byte[] apiKeySecretBytes = DatatypeConverter.parseBase64Binary(key);
    Key signingKey = new SecretKeySpec(apiKeySecretBytes, signatureAlgorithm.getJcaName());

    return Jwts
            .builder()
            .setIssuedAt(new Date(nowMillis))
            .setExpiration(new Date(expMillis))
            .setSubject(userName)
            .setIssuer(issuer)
            .signWith(signatureAlgorithm, signingKey).compact();

}
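
For completeness, the validating counterpart with JJWT looks roughly like this (a sketch; key and token are the same Base64 secret and compact JWT assumed above):

// Parse and validate the token; parseClaimsJws throws a JwtException
// if the signature is invalid or the token has expired
String subject = Jwts.parser()
        .setSigningKey(DatatypeConverter.parseBase64Binary(key))
        .parseClaimsJws(token)
        .getBody()
        .getSubject();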

You can learn more about JWT here, and JJWT here.

We also have base REST controllers that perform our CRUD operations and some light information processing, organized within their own packages.

Workflow Management Service (ticketing-service)

We can think of the ticketing service as a ticketing system, or a small workflow management system, that issues various types of tickets, such as quotation tickets containing product inquiries, pricing, and materials used. This is where most of our business processing executes.

We start off again with our basic configuration

@SpringBootApplication
@EnableJpaRepositories(basePackages = {"org.brightworks.friflow.repo"})
@EntityScan(basePackages =
        {"org.brightworks.friflow.domain",
         "org.brightworks.friflow.domain.process"
        })
@ConditionalOnClass({SpringSecurityDialect.class})
@EnableEurekaClient
@EnableDiscoveryClient
public class Application {

    public static void main(String[] args) {
        new SpringApplicationBuilder(Application.class).run();
    }
}

Ticketing Service/Workflow Management Service Domain ERD


Some of our API endpoints/controllers are specified under their own package.

Building our Services — Infrastructure Services

There are several patterns in distributed systems that can aid us in making our functional/core services work together, and Spring Cloud provides the tools to implement some of those patterns.

Service Registry and Service Discovery

We will be using Eureka as our service registry, where all of our services will register themselves. Another way to think about a service registry is as a phone book of our existing services.

Setting up our service discovery code is now easy thanks to Spring Cloud Eureka:

@SpringBootApplication
@EnableEurekaServer
public class EurekaServerApplication {
   public static void main(String[] args) {
      SpringApplication.run(EurekaServerApplication.class, args);
   }
}

We configure it as follows.

server:
  port: ${PORT:8761}

eureka:
  client:
    registerWithEureka: false
    fetchRegistry: false
  server:
    waitTimeInMsWhenSyncEmpty: 0

We're simply saying that we will run this Eureka server on port 8761, and eureka.client.registerWithEureka: false means the Eureka server will not register itself as a client.

All of our services register themselves; there is no need to add configuration beyond what we did above. Upon running, you'll see all of the services registered with Eureka.

 


Building our Services — Infrastructure Services: Putting It All Together

Edge Service/API Gateway

The edge service will act as the main entry point for clients. Its primary purpose is to aggregate results from different services, act as a proxy, and perform authorization using JJWT. We will be using Feign, Zuul, and Ribbon for this purpose.

It is usually suggested to implement OAuth2 alongside JWT, but for simplicity we will only use JJWT and a filter component to implement authorization.

We start by specifying our application config:

@SpringBootApplication
@EnableFeignClients
@EnableDiscoveryClient
@EnableZuulProxy
public class FriflowApiGatewayApplication {

   @Value("${jwt.security.key}")
   private String jwtKey;

   public static void main(String[] args) {
      SpringApplication.run(FriflowApiGatewayApplication.class, args);
   }

   @Bean
   public FilterRegistrationBean filterApiBean() {
      FilterRegistrationBean registrationBean = new FilterRegistrationBean();
      ApiAccessFilter securityFilter = new ApiAccessFilter(jwtKey);
      registrationBean.setFilter(securityFilter);
      registrationBean.addUrlPatterns("/api/*");
      return registrationBean;
   }
}

@EnableFeignClients — the annotation responsible for scanning interfaces annotated with @FeignClient.

@EnableDiscoveryClient — the annotation responsible for activating whichever discovery client is available on our classpath (in this case, the Netflix Eureka discovery client).

@EnableZuulProxy — turns this application into a reverse proxy that forwards requests to other services.

The filterApiBean is the bean we use to filter unauthorized requests. It basically checks whether the request carries a JWT token, and whether that token is still valid.
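
The filter itself is not shown in this post; here is a minimal sketch of what ApiAccessFilter could look like with JJWT. Only the class name and constructor argument come from the code above; the body is my illustration:

import io.jsonwebtoken.Jwts;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.xml.bind.DatatypeConverter;
import java.io.IOException;

public class ApiAccessFilter implements Filter {

    private final String jwtKey;

    public ApiAccessFilter(String jwtKey) {
        this.jwtKey = jwtKey;
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        String header = ((HttpServletRequest) req).getHeader("Authorization");
        try {
            // parseClaimsJws throws if the token is missing, malformed, or expired
            String token = header.replace("Bearer ", "");
            Jwts.parser()
                .setSigningKey(DatatypeConverter.parseBase64Binary(jwtKey))
                .parseClaimsJws(token);
            chain.doFilter(req, res); // token valid: let the request through
        } catch (Exception e) {
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_UNAUTHORIZED);
        }
    }

    @Override public void init(FilterConfig filterConfig) {}
    @Override public void destroy() {}
}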

Forwarding the requests to appropriate services — Identity Management API

Our API gateway will now be the main entry point for our clients (e.g. mobile devices, another web app, etc.).

In order for us to forward requests to the ticketing-service, we first need to retrieve an access token from our identity management service.

zuul.routes.authentication.path=/authentication-service/**
zuul.routes.authentication.serviceId=authentication-service
ribbon.eureka.enabled=true

In the configuration above, we proxy all requests coming to /authentication-service/ to the authentication-service. Notice how we did not specify the URL of our authentication-service (the identity management API); thanks to Eureka and Ribbon, requests are automatically forwarded to an existing/available instance.

To retrieve an access token, we can send a POST request to http://localhost:8082/authentication-service/login.

 


Once we have entered a valid username and password, we will receive a token.

Forwarding the requests to appropriate services — Ticket Management API

For this example, although we could use the Zuul proxy as we did above, we will use a Feign client instead. This can be useful when we want to aggregate results from different services.

@FeignClient("ticketing-service")
public interface QuotationClient {

    @RequestMapping(method = RequestMethod.GET,value = "/quotations/dummy")
    QuotationDTO getDummy();

    @RequestMapping(method = RequestMethod.GET,value = "/quotations/{ticketNo}")
    QuotationDTO getByTicket(@PathVariable("ticketNo") String ticketNo);

    @RequestMapping(method = RequestMethod.POST,value = "/quotations")
    QuotationDTO save(@RequestBody QuotationDTO quotation);

    @RequestMapping(method = RequestMethod.PUT,value = "/quotations")
    QuotationDTO update(@RequestBody QuotationDTO quotation);
}
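
A Feign client like this is consumed through plain dependency injection. A minimal sketch of a gateway controller using it (the controller is my illustration; QuotationClient and QuotationDTO come from the code above):

import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class QuotationGatewayController {

    private final QuotationClient quotationClient;

    public QuotationGatewayController(QuotationClient quotationClient) {
        this.quotationClient = quotationClient;
    }

    @RequestMapping(method = RequestMethod.GET, value = "/api/quotations/{ticketNo}")
    public QuotationDTO quotation(@PathVariable("ticketNo") String ticketNo) {
        // Feign resolves "ticketing-service" through Eureka and load-balances via Ribbon
        return quotationClient.getByTicket(ticketNo);
    }
}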

 

The instructions for running each individual service, along with the code, are available on GitHub.

There's still a lot to improve in this sample project (e.g. security and OAuth2), but hopefully this article gave you a ground-up view of migrating or building your applications using a microservice design with Spring Boot.

Side Note: The code was taken from one of my old projects, which was built as a monolith. Some coding conventions and approaches might be outdated and carry technical debt; I will try to update and clean up the code as soon as I can.

Improvements and suggestions are welcome 🙂

References and further readings:

http://microservices.io/
https://player.oreilly.com/videos/9781491944615
http://www.oreilly.com/programming/free/microservices-vs-service-oriented-architecture.csp
http://shop.oreilly.com/product/0636920033158.do
https://herbertograca.com/2017/01/26/microservices-architecture/

Reducing Java Boilerplate code with Lombok (With Eclipse Installation)

I was looking for ways to reduce my classes' LOC. Most of the time, setters and getters lengthen my code and make it hard to focus on the important parts, so I decided to go on a journey to reduce my boilerplate code.

During this quest, I found Project Lombok.

In a nutshell, Project Lombok replaces your common Java boilerplate code with simple annotations.

One of the things I like about Project Lombok is their @Setter and @Getter annotations.

They shorten your class LOC; say goodbye to those getBar()/setBar(String bar) pairs.

Installation in Eclipse

Adding the jar alone is not enough for Lombok to work. What we need to do is 'install' Lombok:

java -jar lombok.jar

We run this command in our terminal; make sure that when you execute lombok.jar, you are in the directory where it is located. Upon running the command, an installer screen will show up.

Click ‘Install/update’.

Right after that, check your IDE's .ini or configuration file; the following parameters should have been added (below -vmargs for Eclipse):


-javaagent:lombok.jar
-Xbootclasspath/a:lombok.jar

Once that is done, we can add lombok.jar to our projects.

A Simple Example Using Lombok

Here's a simple example of Lombok (the original screenshot was taken from my personal project).
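
Since the screenshot isn't reproduced here, a minimal example of the same idea (the field names are illustrative):

import lombok.Getter;
import lombok.Setter;

@Getter
@Setter
public class Person {
    private String firstName;
    private String lastName;
    private int age;
}

// Lombok generates the accessors at compile time:
// Person p = new Person();
// p.setFirstName("Kyel");
// System.out.println(p.getFirstName());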

As you can see in the IDE's Outline view, the @Getter and @Setter annotations created the setters and getters for our class's instance variables.

Other Concerns and References:

http://stackoverflow.com/questions/2866084/is-project-lombok-suitable-for-large-java-projects

http://stackoverflow.com/questions/3852091/is-it-safe-to-use-project-lombok

UPDATE

I just found out that they have support for IntelliJ IDEA. Check out the plugin here.

Reversed Binary (The Spotify Tech Puzzle Reggae Quiz)

I was lurking around Reddit yesterday and I found this. To quote, the task was:

Your task will be to write a program for reversing numbers in binary. For instance, the binary representation of 13 is 1101, and reversing it gives 1011, which corresponds to number 11.

I have to admit, I did a little refreshing of my binary skills (it has been a year and a half since I last fiddled with binary).

So here is my solution. I know it is not that elegant, but it did solve the problem (yes, Spotify's automated system checked it and considered it valid):

public class ReverseBinary {

	// Builds the binary representation of val in reverse order:
	// appending remainders least-significant-bit first yields the reversed string.
	public String reversedIntToBinary(int val) {
		int value = val;
		StringBuilder bldr = new StringBuilder();

		while (value != 0) {
			int remainder = value % 2;
			value = value / 2;
			bldr.append(remainder);
		}
		return bldr.toString();
	}

	// Converts a binary string back to decimal, reading the most significant bit first.
	public int toDecimal(String bin) {
		char[] binString = bin.toCharArray();

		int starting = 0;

		for (int i = 0; i < binString.length; i++) {
			starting = starting * 2 + Character.getNumericValue(binString[i]);
		}
		return starting;
	}

	// Reverses the binary representation of val and returns it as a decimal.
	public int reversedBinary(int val) {
		String bin = reversedIntToBinary(val);
		return toDecimal(bin);
	}
}
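
A quick usage example matching the problem statement (13 reversed in binary is 11):

public class Main {
	public static void main(String[] args) {
		ReverseBinary rb = new ReverseBinary();
		System.out.println(rb.reversedBinary(13)); // prints 11
	}
}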

Now, why am I posting this? I am encouraging each and every one of you to try out this kind of challenge. One thing I liked about it is the automated testing, which detects your errors and gives you a clue about what went wrong.

Side Note:
It is amazing how much information the internet holds; it paves the way for us to learn, develop, and discover new things all on our own.

Ugghhh Palindrome!

In my local programming group (consisting mostly of freshman and sophomore computer science students), the palindrome has been quite an issue. Yes, the palindrome is an old programming exercise, but for them it is new (I have no idea why! They were probably daydreaming when it was discussed in class!). To keep a long story short, they asked me how I would create a simple palindrome checker, and since I do not really want to entertain a lot of questions on my Facebook and Garena accounts, I've decided to post it here!

Posting it here also hits two birds with one stone. Why? I am preparing for my internship interview, so it is beneficial to post it here as a reviewer for me in case I forget or need a refresher.

Okay, I've whipped up a 10-minute solution for a palindrome checker (5 minutes per approach; I've created two approaches).

First Approach: Reverse the Word and compare it to the original word!

Just reading this code should make it clear:

	private static String reverseWord(String word){
		char[] original = word.toCharArray();
		StringBuilder reverse = new StringBuilder();
		for(int i = original.length - 1; i >= 0; i--){
			reverse.append(original[i]);
		}	
		return reverse.toString();
	}

	public static boolean isWordAPalindrome(String word){
		return (word.equals(reverseWord(word)));
	}

So what are we doing? We simply pass the argument word from the isWordAPalindrome method to the reverseWord method, then check whether the reversed word (returned by reverseWord(word)) is equal to the original word.
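
A quick check, called from a main method in the same class (the words are arbitrary examples):

System.out.println(isWordAPalindrome("racecar")); // true
System.out.println(isWordAPalindrome("hello"));   // false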

Second Approach: Traversing the word forward and backward!

	public static boolean isPalindrome(char[] word){
		int forward = 0;
		int backward = word.length - 1;

		while(forward < backward){
			if(word[forward] != word[backward]){
				// note: use new String(word); printing a char[] directly
				// would output the array reference, not the word
				System.out.println(new String(word));
				return false;
			}

			forward++;
			backward--;
		}
		return true;
	}

Now, please put some effort into understanding this one 😉

I hope I've shed some light on your problem. Cheers!

Alternatives for Hibernate buildSessionFactory()

With Hibernate 4, the buildSessionFactory() method is already deprecated. It still works, but if you are one of those people who tries to avoid using deprecated or old APIs, I found an alternative way for you to create your SessionFactory.

In Hibernate 4, buildSessionFactory(ServiceRegistry serviceRegistry) is the replacement for the deprecated buildSessionFactory(). I tried googling and searching the Hibernate 4 documentation for an example implementation of buildSessionFactory(ServiceRegistry serviceRegistry), but sadly I could not find any; the getting started guide still uses the old buildSessionFactory() method.

Here is an alternative to the deprecated buildSessionFactory():

import org.hibernate.HibernateException;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;
import org.hibernate.service.ServiceRegistry;
import org.hibernate.service.ServiceRegistryBuilder;

private static ServiceRegistry serviceRegistry;
private static SessionFactory sessionFactory;

public static SessionFactory configureSessionFactory() {
    try {
        Configuration configuration = new Configuration();
        configuration.configure(); // reads hibernate.cfg.xml from the classpath

        serviceRegistry = new ServiceRegistryBuilder()
                .applySettings(configuration.getProperties())
                .buildServiceRegistry();

        sessionFactory = configuration.buildSessionFactory(serviceRegistry);
    } catch (HibernateException hbe) {
        hbe.printStackTrace();
    }

    return sessionFactory;
}

This code will work provided your hibernate.cfg.xml is on the classpath (by default, configuration.configure() looks for it at the classpath root). However, if you placed your hibernate.cfg.xml somewhere else, all you have to do is replace

configuration.configure();

with

configuration.configure(configFilePath);
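
Usage is then the same as with the old method (a sketch; HibernateUtil is a hypothetical class holding the method above):

SessionFactory factory = HibernateUtil.configureSessionFactory();
Session session = factory.openSession();
// ... work with the session ...
session.close();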