Certificate: JHipster – Full-stack web application generator for Java Spring Boot and Angular/React

JHipster is a very powerful application generation framework for creating full-stack applications with Java + JavaScript. I have completed a course that builds a Java Spring back end with an Angular 4 / React front end, without Docker.

[Certificate image: jHipster course certificate UC-D6XTMEXH]

 

The certificate above verifies that Aseem Jain successfully completed the course Angular 4 Java Developers on 09/24/2018, as taught by Dan Vega and John Thompson on Udemy. The certificate indicates the entire course was completed, as validated by the student.


Git: Pushing a new project to your GitHub repo

It's good practice to push your pet project to a GitHub repository. The commands below will help you.

First, create a project in GitHub and copy its repository URL, which will act as your remote origin. Then initialize your local repository and link the two, so that you can push once everything is set.

git init
git add -A
git commit -m 'Added my project'
git remote add origin git@github.com:premaseem/my-new-project.git
git push -u -f origin master

With this, there are a few things to note. The -f flag stands for force. This will automatically overwrite everything in the remote directory. We’re only using it here to overwrite the README that GitHub automatically initialized. If you skipped that, the -f flag isn’t really necessary.

The -u flag sets the remote origin as the default. This lets you later do just git push and git pull without specifying a remote, since we always want GitHub in this case.
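For example, once the upstream is set with -u, subsequent syncs need no extra arguments:

git pull
git push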

 

Docker: Building Docker Images with Dockerfiles

In this guide we'll learn about the Dockerfile: what it is, how to create one, and how to configure the basics to bring up your own Dockerized app.

What is a Dockerfile?

  • A Dockerfile is a plain-text configuration file written in a simple, human-readable instruction syntax of its own (it is not YAML).
  • It is a step-by-step script of all the commands you need to run to assemble a Docker Image.
  • The docker build command processes this file, generating a Docker Image in your Local Image Cache, which you can then start up using the docker run command, or push to a permanent Image Repository; a minimal sketch of that lifecycle follows this list.
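As a minimal sketch of that lifecycle (the tag premaseem/dockerimage matches the example used later in this guide, and docker push assumes you are logged in to a registry):

$ docker build . -t premaseem/dockerimage
$ docker run -d premaseem/dockerimage
$ docker push premaseem/dockerimage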

Create a Dockerfile

Creating a Dockerfile is as easy as creating a new file named “Dockerfile” with your text editor of choice and defining some instructions.

#
# Each instruction in this file generates a new layer that gets pushed to your local image cache
#

#
# Lines preceded by # are regarded as comments and ignored
#

#
# The line below states we will base our new image on the latest official Ubuntu image
FROM ubuntu:latest

#
# Identify the maintainer of an image
MAINTAINER My Name "myname@somecompany.com"

#
# Update the image to the latest packages
RUN apt-get update && apt-get upgrade -y

#
# Install NGINX to test.
RUN apt-get install nginx -y

#
# Expose port 80
EXPOSE 80

#
# Last is the actual command to start up NGINX within our Container
CMD ["nginx", "-g", "daemon off;"]

Run the command below to build an image from the Dockerfile:

$ docker build . -t <image tag name>

e.g. $ docker build . -t premaseem/dockerimage
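Once the build succeeds, you can test the NGINX image from this guide by mapping a host port to the exposed port 80 (the host port 8080 is arbitrary):

$ docker run -d -p 8080:80 premaseem/dockerimage
$ curl http://localhost:8080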

 

Dockerfile Commands

  • ADD – Defines files to copy from the Host file system onto the Container
    • ADD ./local/config.file /etc/service/config.file
  • CMD – This is the command that will run when the Container starts
    • CMD ["nginx", "-g", "daemon off;"]
  • ENTRYPOINT – Sets the default application used every time a Container is created from the Image. If used in conjunction with CMD, you can remove the application and just define the arguments there; note that CMD arguments are only appended when ENTRYPOINT uses the exec form (see the sketch after this list)
    • ENTRYPOINT ["echo"]
    • CMD ["Hello World!"]
  • ENV – Set/modify the environment variables within Containers created from the Image.
    • ENV VERSION 1.0
  • EXPOSE – Define which Container ports to expose
    • EXPOSE 80
  • FROM – Select the base image to build the new image on top of
    • FROM ubuntu:latest
  • MAINTAINER – Optional field to let you identify yourself as the maintainer of this image
    • MAINTAINER Some One "someone@xyz.xyz"
  • RUN – Specify commands to make changes to your Image and subsequently the Containers started from this Image. This includes updating packages, installing software, adding users, creating an initial database, setting up certificates, etc. These are the commands you would run at the command line to install and configure your application
    • RUN apt-get update && apt-get upgrade -y && apt-get install -y nginx && rm -rf /var/lib/apt/lists/*
  • USER – Define the default User all commands will be run as within any Container created from your Image. It can be either a UID or username
    • USER docker
  • VOLUME – Creates a mount point within the Container linking it back to file systems accessible by the Docker Host. New Volumes get populated with the pre-existing contents of the specified location in the image. It is worth mentioning that defining Volumes in a Dockerfile can lead to issues; Volumes should be managed with docker-compose or docker run commands.
    • VOLUME /var/log
  • WORKDIR – Define the default working directory for the command defined in the "ENTRYPOINT" or "CMD" instructions
    • WORKDIR /home
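As referenced in the ENTRYPOINT entry above, here is a minimal sketch of how ENTRYPOINT and CMD combine, using the exec form, where CMD supplies default arguments that docker run can override:

FROM ubuntu:latest
ENTRYPOINT ["echo"]
CMD ["Hello World!"]

Running the resulting container with no arguments prints "Hello World!"; running docker run <image> Goodbye prints "Goodbye" instead.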

REST: What is HTTP 409 (conflict) status code?

It is returned when an attempt to update a resource fails because the payload contains unexpected or conflicting values.

For example, suppose I am updating a user: I do a GET on it, modify a few values, and send the result as a PUT payload. If by mistake I mistype the userid, username, or email, the server cannot accept the payload, because it finds an inconsistency or conflict in read-only fields such as email or userid. It will therefore send a 409 Conflict HTTP code.
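As an illustration (the endpoint, field names, and values here are hypothetical), a PUT whose read-only fields disagree with the stored resource would be rejected:

$ curl -i -X PUT https://api.example.com/users/42 \
    -H "Content-Type: application/json" \
    -d '{"userId": 99, "email": "wrong@example.com", "name": "New Name"}'
HTTP/1.1 409 Conflict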

409 is also thrown in case of version conflict: 

For example, you may get a 409 error if you try to upload a file to the Web server which is older than the one already there – resulting in a version control conflict.

The Robustness Principle states: "Be conservative in what you do [send], be liberal in what you accept." If you agree philosophically with this, then the solution is obvious: ignore any invalid data in PUT requests. That applies to both immutable data, as in your example, and actual nonsense, e.g. unknown fields.

Let's take an example: I have a payload with a few fields, like id and username, which are non-writable. If someone sends them in a PUT payload, I have 3 options:

  1. Validate against read-only or non-writable fields (or any extra fields) and return a 400 validation error. (Breaks backward compatibility.)
  2. Follow the robustness principle and accept the payload with any fields, including non-writable ones; however, if a value does not match (the id or username in the payload does not match the DB data / a GET on the resource), return HTTP 409 Conflict.
  3. Silently ignore any other fields and their values, and update only the modifiable fields. (Supports any payload and keeps it backward compatible.)

I really dislike the equality check with 409 because it invariably requires the client to do a GET in order to retrieve the current data before being able to do a PUT. That’s just not nice and is probably going to lead to poor performance, for somebody, somewhere. I also really don’t like 403 (Forbidden) for this as it implies that the entire resource is protected, not just a part of it. So my opinion is, if you absolutely must validate instead of following the robustness principle, validate all of your requests and return a 400 for any that have extra or non-writable fields.

REST: How should a REST API handle PUT requests to partially-modifiable resources?

Let's say you want to update a document, but a few of its fields are non-updatable. How should you handle this in a PUT request to a partially-modifiable resource?

  • Should it be required to be missing from the PUT request?
  • Should it be silently discarded?
  • Should it be checked, and if it differs from the old value of that attribute, return a HTTP error code in the response?
  • Or should we use RFC 6902 JSON patches instead of sending the whole JSON? (A sketch follows this list.)
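For the RFC 6902 option, the client sends only the operations to apply, with the application/json-patch+json media type, instead of the whole document (the endpoint here is hypothetical):

$ curl -X PATCH https://api.example.com/users/42 \
    -H "Content-Type: application/json-patch+json" \
    -d '[{"op": "replace", "path": "/name", "value": "New Name"}]'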

There is no rule, either in the W3C spec or the unofficial rules of REST, that says that a PUT must use the same schema/model as its corresponding GET.

It’s nice if they’re similar, but it’s not unusual for PUT to do things slightly differently. For example, I’ve seen a lot of APIs that include some kind of ID in the content returned by a GET, for convenience. But with a PUT, that ID is determined exclusively by the URI and has no meaning in the content. Any ID found in the body will be silently ignored.

REST and the web in general are heavily tied to the Robustness Principle: "Be conservative in what you do [send], be liberal in what you accept." If you agree philosophically with this, then the solution is obvious: ignore any invalid data in PUT requests. That applies to both immutable data, as in your example, and actual nonsense, e.g. unknown fields.

PATCH is potentially another option, but you shouldn’t implement PATCH unless you’re actually going to support partial updates. PATCH means only update the specific attributes I include in the content; it does not mean replace the entire entity but exclude some specific fields. What you’re actually talking about is not really a partial update, it’s a full update, idempotent and all, it’s just that part of the resource is read-only.

A nice thing to do if you choose this option would be to send back a 200 (OK) with the actual updated entity in the response, so that clients can clearly see that the read-only fields were not updated.

There are certainly some people who think the other way – that it should be an error to attempt to update a read-only portion of a resource. There is some justification for this, primarily on the basis that you would definitely return an error if the entire resource was read-only and the user tried to update it. It definitely goes against the robustness principle, but you might consider it to be more “self-documenting” for users of your API.

There are two conventions for this, both of which correspond to your original ideas, but I’ll expand on them. The first is to prohibit the read-only fields from appearing in the content, and return an HTTP 400 (Bad Request) if they do. APIs of this sort should also return an HTTP 400 if there are any other unrecognized/unusable fields. The second is to require the read-only fields to be identical to the current content, and return a 409 (Conflict) if the values do not match.

I really dislike the equality check with 409 because it invariably requires the client to do a GET in order to retrieve the current data before being able to do a PUT. That’s just not nice and is probably going to lead to poor performance, for somebody, somewhere. I also really don’t like 403 (Forbidden) for this as it implies that the entire resource is protected, not just a part of it. So my opinion is, if you absolutely must validate instead of following the robustness principle, validate all of your requests and return a 400 for any that have extra or non-writable fields.

Make sure your 400/409/whatever includes information about what the specific problem is and how to fix it.

Both of these approaches are valid, but I prefer ignoring the fields, in keeping with the robustness principle. If you've ever worked with a large REST API, you'll appreciate the value of backward compatibility. If you ever decide to remove an existing field or make it read-only, the change is backward compatible if the server just ignores those fields, and old clients will still work. However, if you do strict validation on the content, it is no longer backward compatible, and old clients will cease to work. The former generally means less work for both the maintainer of an API and its clients.

Git: Pushing to other fork

Objective:

You cloned someone else's repository and want to push to your own fork.

I created a fork (let’s call it myrepo) of another repository (let’s call it orirepo) on GitHub. Later, I cloned orirepo.

git clone https://github.com/original/orirepo.git

I modified about 20 files, then I staged my changes and made a commit:

git add -A
git commit

However, when I tried to push

git push

I got this error:

remote: Permission to original/orirepo.git denied to mylogin.
fatal: unable to access 'https://github.com/original/orirepo.git/': The requested URL returned error: 403

 

Solution:

By default, when you clone a repository:

  • the local config of the resulting clone lists only one remote called origin, which is associated with the URL of the repository you cloned;
  • the local master branch in your clone is set to track origin/master.

Therefore, if you don’t modify the config of your clone, Git interprets

git push

as

git push origin master:master

In other words, git push attempts to push your local master branch to the master branch that resides on the remote repository (known by your clone as origin). However, you’re not allowed to do that, because you don’t have write access to that remote repository.
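You can see where each remote points (and therefore where git push will try to write) with git remote -v; for the clone above it shows:

git remote -v
origin  https://github.com/original/orirepo.git (fetch)
origin  https://github.com/original/orirepo.git (push)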

You need to

  1. either redefine the origin remote to be associated with your fork, by running
    git remote set-url origin https://github.com/RemiB/myrepo.git
    
  2. or, if you want to preserve the original definition of the origin remote, define a new remote (called myrepo here) that is associated with your fork:
    git remote add myrepo https://github.com/RemiB/myrepo.git
    

    Then you should be able to push your local master branch to your fork by running

    git push myrepo master
    

    And if you want to tell Git that git push should push to myrepo instead of origin from now on, you should run the following instead:

    git push -u myrepo master
    


 

Git: Combining the commits

To squash the last 3 commits into one:

git reset --soft HEAD~3
git commit -m "New message for the combined commit"
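To double-check the result, inspect the recent history; the squashed commits should now appear as a single commit at the top:

git log --oneline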

Pushing the squashed commit

If the commits have been pushed to the remote:

git push origin +name-of-branch

The plus sign forces the remote branch to accept your rewritten history; otherwise you will end up with divergent branches.
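A safer variant is --force-with-lease, which still replaces the remote history but refuses the push if someone else has added commits you haven't fetched yet:

git push --force-with-lease origin name-of-branch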

If the commits have NOT yet been pushed to the remote:

git push origin name-of-branch