Git: Testing Someone Else’s Pull Request on your local dev machine

You do not have to clone every developer’s fork or repository and then check out their respective branch in order to test a pull request locally on your dev box 😉

This tutorial will show you how to checkout a pull request on your own computer. This can be very helpful when you want to find bugs and test out new features before they get merged into the main project.

This trick, originally described in a GitHub gist, only requires a small change to the .git/config file of the project.

Basically, you need to add this line to the config:

fetch = +refs/pull/*/head:refs/remotes/origin/pr/*

which will allow you to check out pull requests locally.

Obviously, change the GitHub URL to match your project’s URL. It ends up looking like this:

[remote "origin"]
    fetch = +refs/heads/*:refs/remotes/origin/*
    url = git@github.com:joyent/node.git
    fetch = +refs/pull/*/head:refs/remotes/origin/pr/*

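If you prefer not to edit .git/config by hand, the same refspec can be appended from the command line (assuming your remote is named origin):

$ git config --add remote.origin.fetch '+refs/pull/*/head:refs/remotes/origin/pr/*'
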
Now fetch all the pull requests:

$ git fetch origin
From github.com:joyent/node
 * [new ref]         refs/pull/1000/head -> origin/pr/1000
 * [new ref]         refs/pull/1002/head -> origin/pr/1002
 * [new ref]         refs/pull/1004/head -> origin/pr/1004
 * [new ref]         refs/pull/1009/head -> origin/pr/1009
...

To check out a particular pull request:

$ git checkout pr/<pr-id>

For example:

$ git checkout pr/999
Branch pr/999 set up to track remote branch pr/999 from origin.
Switched to a new branch 'pr/999'
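
Alternatively, if you only want to test a single pull request, you can fetch it into a local branch without touching .git/config at all; pull request 999 here is just an example ID:

$ git fetch origin pull/999/head:pr-999
$ git checkout pr-999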

My book on design patterns is now available on the Udemy platform

Friends,

It was my deep-down wish to be an instructor on the Udemy platform, and with Guru’s grace it was fulfilled today.

My video book on “Learn software design patterns with Java” is now available on Udemy as an online course. Students will also get a certificate of completion once they finish the entire course.

https://www.udemy.com/learn-design-patterns-with-java/

Certificate: JHipster – Full stack web application generator for Java Spring Boot and Angular/React

JHipster is a very powerful application generation framework for creating full stack applications using Java + JavaScript. I have completed a course which covers a Java Spring back end and an Angular 4 / React front end, without Docker.

[Certificate image: jHipster course certificate, UC-D6XTMEXH]

The certificate above verifies that Aseem Jain successfully completed the course “Angular 4 Java Developers”, taught by Dan Vega and John Thompson on Udemy, on 09/24/2018. It indicates the entire course was completed, as validated by the student.

Git: Pushing a new project to your GitHub repo

It’s a good practice to push your pet project to a GitHub repository. The commands below will help you.

First, you need to create a project on GitHub and copy its repository URL, which will act as your remote origin. Then you need to init your local repository and link the two, so that you can push once it’s all set.

git init
git add -A
git commit -m 'Added my project'
git remote add origin git@github.com:premaseem/my-new-project.git
git push -u -f origin master

With this, there are a few things to note. The -f flag stands for force. This will automatically overwrite everything in the remote repository. We’re only using it here to overwrite the README that GitHub automatically initialized. If you skipped that, the -f flag isn’t really necessary.

The -u flag sets the remote origin as the default. This lets you later just do git push and git pull without having to specify an origin, since we always want GitHub in this case.
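
You can verify that the remote is wired up correctly with git remote -v; assuming the example URL above, the output should look roughly like this:

$ git remote -v
origin	git@github.com:premaseem/my-new-project.git (fetch)
origin	git@github.com:premaseem/my-new-project.git (push)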

Docker: Building Docker Images with Dockerfiles

In this guide we’ll learn about the Dockerfile: what it is, how to create one, and how to configure the basics to bring up your own Dockerized app.

What is a Dockerfile?

  • A Dockerfile is a plain-text configuration file with a simple, human-readable syntax of its own.
  • It is a step-by-step script of all the commands you need to run to assemble a Docker Image.
  • The docker build command processes this file, generating a Docker Image in your local image cache, which you can then start up using the docker run command, or push to a permanent Image Repository.

Create a Dockerfile

Creating a Dockerfile is as easy as creating a new file named “Dockerfile” with your text editor of choice and defining some instructions.

#
# Each instruction in this file generates a new layer that gets pushed to your local image cache
#

#
# Lines beginning with # are treated as comments and ignored
#

#
# The line below states we will base our new image on the latest official Ubuntu image
FROM ubuntu:latest

#
# Identify the maintainer of an image
MAINTAINER My Name "myname@somecompany.com"

#
# Update the image to the latest packages
RUN apt-get update && apt-get upgrade -y

#
# Install NGINX to test.
RUN apt-get install nginx -y

#
# Expose port 80
EXPOSE 80

#
# Last is the actual command to start up NGINX within our Container
CMD ["nginx", "-g", "daemon off;"]

Run the command below to build an image from the Dockerfile:

$ docker build -t <image-tag> .

e.g. $ docker build -t premaseem/dockerimage .
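
Once the build succeeds, a quick way to smoke-test the image is to run it and map a host port to the container’s port 80 (the tag and host port 8080 below are just examples):

$ docker run -d -p 8080:80 premaseem/dockerimage
$ curl http://localhost:8080    # should return the default NGINX welcome page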

Dockerfile Commands

  • ADD – Defines files to copy from the host file system onto the Container
    • ADD ./local/config.file /etc/service/config.file
  • CMD – This is the command that will run when the Container starts
    • CMD ["nginx", "-g", "daemon off;"]
  • ENTRYPOINT – Sets the default application used every time a Container is created from the Image. If used in conjunction with CMD, you can remove the application and just define the arguments there (see the sketch after this list)
    • CMD ["Hello World!"]
    • ENTRYPOINT ["echo"]
  • ENV – Set/modify the environment variables within Containers created from the Image.
    • ENV VERSION 1.0
  • EXPOSE – Define which Container ports to expose
    • EXPOSE 80
  • FROM – Select the base image to build the new image on top of
    • FROM ubuntu:latest
  • MAINTAINER – Optional field to let you identify yourself as the maintainer of this image
    • MAINTAINER Some One "someone@xyz.xyz"
  • RUN – Specify commands to make changes to your Image and subsequently the Containers started from this Image. This includes updating packages, installing software, adding users, creating an initial database, setting up certificates, etc. These are the commands you would run at the command line to install and configure your application
    • RUN apt-get update && apt-get upgrade -y && apt-get install -y nginx && rm -rf /var/lib/apt/lists/*
  • USER – Define the default user all commands will be run as within any Container created from your Image. It can be either a UID or a username
    • USER docker
  • VOLUME – Creates a mount point within the Container linking it back to file systems accessible by the Docker Host. New Volumes get populated with the pre-existing contents of the specified location in the image. It is worth mentioning that defining Volumes in a Dockerfile can lead to issues; Volumes are better managed with docker-compose or “docker run” commands.
    • VOLUME /var/log
  • WORKDIR – Define the default working directory for the command defined in the “ENTRYPOINT” or “CMD” instructions
    • WORKDIR /home
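
To make the ENTRYPOINT/CMD relationship concrete, here is a minimal sketch; the image tag echo-demo is just an example name, and note that the exec (JSON-array) form is what allows CMD to supply default arguments to ENTRYPOINT:

FROM ubuntu:latest
ENTRYPOINT ["echo"]
CMD ["Hello World!"]

$ docker build -t echo-demo .
$ docker run --rm echo-demo              # prints: Hello World!
$ docker run --rm echo-demo Goodbye      # prints: Goodbye

Arguments passed to docker run replace the CMD defaults but leave the ENTRYPOINT in place.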

REST: What is the HTTP 409 (Conflict) status code?

It is returned when an attempt to update a resource fails because the payload contains unexpected or conflicting values.

For example, say I am trying to update a user: I do a GET on it, modify a few values, and pass the result as the PUT payload. Now suppose that by mistake I mistype the userid, username, or email along with the other fields. The server cannot accept the payload, since it finds an inconsistency or conflict in read-only fields like email or userid, so it will send back an HTTP 409 Conflict code.
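
As a sketch, the exchange might look like this (the endpoint and field names here are hypothetical):

$ curl -i -X PUT https://api.example.com/users/42 \
       -H "Content-Type: application/json" \
       -d '{"userid": 43, "email": "someone@example.com", "name": "New Name"}'
HTTP/1.1 409 Conflict

The userid in the payload (43) conflicts with the resource being updated (42), so the server rejects the request.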

A 409 is also thrown in case of a version conflict:

For example, you may get a 409 error if you try to upload a file to the web server which is older than the one already there, resulting in a version control conflict.

The Robustness Principle states: “Be conservative in what you do [send], be liberal in what you accept.” If you agree philosophically with this, then the solution is obvious: ignore any invalid data in PUT requests. That applies to both immutable data, as in the example above, and actual nonsense, e.g. unknown fields.

Let’s take an example: I have a payload with a few fields like id and username which are non-writable. If someone sends them in a PUT payload, I have 3 options:

  1. Do validation against read-only or non-writable fields (or any extra fields) and return a 400 validation error. (Breaks backward compatibility.)
  2. Follow the robustness principle and accept the payload with any fields, including non-writable ones; however, if a value does not match (the id or username in the payload does not match the DB data / a GET on the resource), return HTTP 409 Conflict.
  3. Silently ignore any other fields and their values, and update only the modifiable fields. (Supports any payload and keeps the API backward compatible.)

I really dislike the equality check with 409 because it invariably requires the client to do a GET in order to retrieve the current data before being able to do a PUT. That’s just not nice and is probably going to lead to poor performance, for somebody, somewhere. I also really don’t like 403 (Forbidden) for this as it implies that the entire resource is protected, not just a part of it. So my opinion is, if you absolutely must validate instead of following the robustness principle, validate all of your requests and return a 400 for any that have extra or non-writable fields.

REST: How should a REST API handle PUT requests to partially-modifiable resources?

Let’s say you want to update a document, but a few of its fields are non-updatable. How should you handle this in a PUT request to a partially modifiable resource?

  • Should it be required to be missing from the PUT request?
  • Should it be silently discarded?
  • Should it be checked, and if it differs from the old value of that attribute, return an HTTP error code in the response?
  • Or should we use RFC 6902 JSON patches instead of sending the whole JSON?

There is no rule, either in the W3C spec or the unofficial rules of REST, that says that a PUT must use the same schema/model as its corresponding GET.

It’s nice if they’re similar, but it’s not unusual for PUT to do things slightly differently. For example, I’ve seen a lot of APIs that include some kind of ID in the content returned by a GET, for convenience. But with a PUT, that ID is determined exclusively by the URI and has no meaning in the content. Any ID found in the body will be silently ignored.

REST and the web in general is heavily tied to the Robustness Principle: “Be conservative in what you do [send], be liberal in what you accept.” If you agree philosophically with this, then the solution is obvious: ignore any invalid data in PUT requests. That applies to both immutable data, as in your example, and actual nonsense, e.g. unknown fields.

PATCH is potentially another option, but you shouldn’t implement PATCH unless you’re actually going to support partial updates. PATCH means only update the specific attributes I include in the content; it does not mean replace the entire entity but exclude some specific fields. What you’re actually talking about is not really a partial update, it’s a full update, idempotent and all, it’s just that part of the resource is read-only.

A nice thing to do if you choose this option would be to send back a 200 (OK) with the actual updated entity in the response, so that clients can clearly see that the read-only fields were not updated.
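
For instance, with a hypothetical endpoint where id is read-only and silently ignored:

$ curl -i -X PUT https://api.example.com/users/42 \
       -H "Content-Type: application/json" \
       -d '{"id": 99, "name": "New Name"}'
HTTP/1.1 200 OK
Content-Type: application/json

{"id": 42, "name": "New Name"}

Because the response carries the canonical entity, the client can see that the read-only id was not changed.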

There are certainly some people who think the other way – that it should be an error to attempt to update a read-only portion of a resource. There is some justification for this, primarily on the basis that you would definitely return an error if the entire resource was read-only and the user tried to update it. It definitely goes against the robustness principle, but you might consider it to be more “self-documenting” for users of your API.

There are two conventions for this, both of which correspond to your original ideas, but I’ll expand on them. The first is to prohibit the read-only fields from appearing in the content, and return an HTTP 400 (Bad Request) if they do. APIs of this sort should also return an HTTP 400 if there are any other unrecognized/unusable fields. The second is to require the read-only fields to be identical to the current content, and return a 409 (Conflict) if the values do not match.

I really dislike the equality check with 409 because it invariably requires the client to do a GET in order to retrieve the current data before being able to do a PUT. That’s just not nice and is probably going to lead to poor performance, for somebody, somewhere. I also really don’t like 403 (Forbidden) for this as it implies that the entire resource is protected, not just a part of it. So my opinion is, if you absolutely must validate instead of following the robustness principle, validate all of your requests and return a 400 for any that have extra or non-writable fields.

Make sure your 400/409/whatever includes information about what the specific problem is and how to fix it.
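
For example, an error body along these lines (the exact shape is up to you; RFC 7807 “problem details” is one standardized option):

HTTP/1.1 400 Bad Request
Content-Type: application/json

{
  "error": "read_only_field",
  "message": "The field 'id' is read-only and must not appear in a PUT payload.",
  "field": "id"
}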

Both of these approaches are valid, but I prefer simply ignoring the read-only fields, in keeping with the robustness principle. If you’ve ever experienced working with a large REST API, you’ll appreciate the value of backward compatibility. If you ever decide to remove an existing field or make it read-only, it is a backward compatible change if the server just ignores those fields, and old clients will still work. However, if you do strict validation on the content, it is not backward compatible anymore, and old clients will cease to work. Ignoring the fields generally means less work for both the maintainer of an API and its clients.