Follow the steps below to run DynamoDB locally.

Pull the image from Docker Hub, provided by AWS – https://hub.docker.com/r/amazon/dynamodb-local/ – and run it:

```
# pull the image
docker pull amazon/dynamodb-local

# run the image
docker run -p 8000:8000 amazon/dynamodb-local
```

Then open http://localhost:8000 in your browser to check that DynamoDB is working.
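If you would rather verify from a script than a browser, a quick reachability check can be sketched like this. It uses only the Python standard library; the endpoint URL is an assumption based on the `-p 8000:8000` port mapping above, and DynamoDB Local answers even a bare GET with an HTTP error document, which is enough to prove it is up:

```python
# Sketch: check whether DynamoDB Local is reachable on the mapped port.
import urllib.request
import urllib.error

def dynamodb_local_is_up(url: str = "http://localhost:8000", timeout: float = 2.0) -> bool:
    """Return True if something answers HTTP on the DynamoDB Local port.

    DynamoDB Local replies with an HTTP error status to a bare GET,
    which still confirms the server is running.
    """
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # got an HTTP response -> server is up
    except (urllib.error.URLError, OSError):
        return False  # connection refused / timeout -> server is down

print(dynamodb_local_is_up())
```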
You do not have to clone every developer’s fork or repository and then checkout their respective branch in order to test the pull request locally on your dev box 😉
This tutorial will show you how to checkout a pull request on your own computer. This can be very helpful when you want to find bugs and test out new features before they get merged into the main project.
The only change required is in the project's .git/config. Basically, you need to add this line to the [remote "origin"] section:
fetch = +refs/pull/*/head:refs/remotes/origin/pr/*
which will allow you to check out pull requests locally.
Obviously, change the GitHub URL to match your project's URL. It ends up looking like this:
```
[remote "origin"]
    url = git@github.com:joyent/node.git
    fetch = +refs/heads/*:refs/remotes/origin/*
    fetch = +refs/pull/*/head:refs/remotes/origin/pr/*
```
Now fetch all the pull requests:
```
$ git fetch origin
From github.com:joyent/node
 * [new ref]    refs/pull/1000/head -> origin/pr/1000
 * [new ref]    refs/pull/1002/head -> origin/pr/1002
 * [new ref]    refs/pull/1004/head -> origin/pr/1004
 * [new ref]    refs/pull/1009/head -> origin/pr/1009
...
```
To check out a particular pull request:
```
$ git checkout pr/<pr-id>

$ git checkout pr/999
Branch pr/999 set up to track remote branch pr/999 from origin.
Switched to a new branch 'pr/999'
```
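The whole workflow can be sketched end to end with a local bare repository standing in for GitHub (this assumes git is on your PATH; the repository paths and pull-request number 999 are made up for the demo). GitHub exposes each pull request as refs/pull/&lt;id&gt;/head, and the extra fetch refspec maps those onto origin/pr/&lt;id&gt;:

```python
# Sketch: simulate the refs/pull checkout flow against a local "GitHub".
import os
import subprocess
import tempfile

def run(*args, cwd):
    """Run a git command and return its trimmed stdout."""
    return subprocess.run(args, cwd=cwd, check=True,
                          capture_output=True, text=True).stdout.strip()

def demo_pr_checkout():
    top = tempfile.mkdtemp()
    remote = os.path.join(top, "remote.git")   # stands in for github.com
    work = os.path.join(top, "work")           # your local clone
    ident = ("-c", "user.email=dev@example.com", "-c", "user.name=dev")

    run("git", "init", "--bare", remote, cwd=top)
    run("git", "clone", remote, work, cwd=top)
    run("git", *ident, "commit", "--allow-empty", "-m", "init", cwd=work)
    branch = run("git", "symbolic-ref", "--short", "HEAD", cwd=work)
    run("git", "push", "origin", branch, cwd=work)

    # Fake pull request #999 on the "remote", the way GitHub exposes it.
    run("git", *ident, "commit", "--allow-empty", "-m", "pr change", cwd=work)
    run("git", "push", "origin", "HEAD:refs/pull/999/head", cwd=work)
    run("git", "reset", "--hard", "origin/" + branch, cwd=work)

    # The config line from the article, added via `git config`
    # instead of editing .git/config by hand.
    run("git", "config", "--add", "remote.origin.fetch",
        "+refs/pull/*/head:refs/remotes/origin/pr/*", cwd=work)
    run("git", "fetch", "origin", cwd=work)
    run("git", "checkout", "pr/999", cwd=work)  # tracks origin/pr/999
    return run("git", "symbolic-ref", "--short", "HEAD", cwd=work)

print(demo_pr_checkout())  # pr/999
```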
It was my deep-down wish to be an instructor on the Udemy platform, and with Guru's grace it got fulfilled today.

My video book on "Learn software design patterns with Java" is now available on Udemy as an online course. Students also get a certificate of completion once they finish the entire course.
This certificate above verifies that Aseem Jain successfully completed the course Angular 4 Java Developers on 09/24/2018 as taught by Dan Vega, John Thompson on Udemy. The certificate indicates the entire course was completed as validated by the student.
It's good practice to push your pet project to a GitHub repository. The commands below will help you.

First, create a project on GitHub and copy its repository URL, which will act as your remote origin. Then initialize your local repository and link the two, so that you can push once everything is set up.
```
git init
git add -A
git commit -m 'Added my project'
git remote add origin git@github.com:premaseem/my-new-project.git
git push -u -f origin master
```
With this, there are a few things to note:

- The -f flag stands for force. It will overwrite everything in the remote repository. We're only using it here to overwrite the README that GitHub automatically initialized; if you skipped that, the -f flag isn't really necessary.
- The -u flag sets the remote origin as the default upstream. This lets you later just run git push and git pull without having to specify an origin, since we always want GitHub in this case.
In this guide we’ll learn about the Dockerfile. What it is, how to create one, and how to configure the basics to bring up your own Dockerized app.
What is a Dockerfile?
- A Dockerfile is a text configuration file written in a simple, human-readable syntax of its own (it is not YAML or any general markup language).
- It is a step-by-step script of all the commands you need to run to assemble a Docker Image.
The docker build command processes this file, generating a Docker Image in your Local Image Cache, which you can then start up using the docker run command, or push to a permanent Image Repository.
Create a Dockerfile
Creating a Dockerfile is as easy as creating a new file named “Dockerfile” with your text editor of choice and defining some instructions.
```
# Each instruction in this file generates a new layer that gets pushed to your local image cache

# Lines preceded by # are regarded as comments and ignored

# The line below states we will base our new image on the latest official Ubuntu
FROM ubuntu:latest

# Identify the maintainer of an image
MAINTAINER My Name "email@example.com"

# Update the image to the latest packages
RUN apt-get update && apt-get upgrade -y

# Install NGINX to test
RUN apt-get install nginx -y

# Expose port 80
EXPOSE 80

# Last is the actual command to start up NGINX within our Container
CMD ["nginx", "-g", "daemon off;"]
```
Run the command below to build an image from the Dockerfile:

```
$ docker build . -t <image tag name>
```

e.g.

```
$ docker build . -t premaseem/dockerimage
```
- ADD – Defines files to copy from the Host file system onto the Container
- ADD ./local/config.file /etc/service/config.file
- CMD – This is the command that will run when the Container starts
- CMD [“nginx”, “-g”, “daemon off;”]
- ENTRYPOINT – Sets the default application used every time a Container is created from the Image. If used in conjunction with CMD, you can remove the application and just define the arguments there
- CMD Hello World!
- ENTRYPOINT echo
- ENV – Set/modify the environment variables within Containers created from the Image.
- ENV VERSION 1.0
- EXPOSE – Define which Container ports to expose
- EXPOSE 80
- FROM – Select the base image to build the new image on top of
- FROM ubuntu:latest
- MAINTAINER – Optional field to let you identify yourself as the maintainer of this image
- MAINTAINER Some One “firstname.lastname@example.org”
- RUN – Specify commands to make changes to your Image and subsequently the Containers started from this Image. This includes updating packages, installing software, adding users, creating an initial database, setting up certificates, etc. These are the commands you would run at the command line to install and configure your application
- RUN apt-get update && apt-get upgrade -y && apt-get install -y nginx && rm -rf /var/lib/apt/lists/*
- USER – Define the default User all commands will be run as within any Container created from your Image. It can be either a UID or username
- USER docker
- VOLUME – Creates a mount point within the Container linking it back to file systems accessible by the Docker Host. New Volumes get populated with the pre-existing contents of the specified location in the image. It is worth mentioning that defining Volumes in a Dockerfile can lead to issues; Volumes are better managed with docker-compose or "docker run" commands.
- VOLUME /var/log
- WORKDIR – Define the default working directory for the command defined in the “ENTRYPOINT” or “CMD” instructions
- WORKDIR /home
A 409 is returned when an update attempt carries unexpected or conflicting values in the payload.

For example, say I am updating a user: I do a GET on it, modify a few values, and send the result as the PUT payload. Now, by mistake, I mistype the user id, username, or email along with the other fields. The server cannot accept the payload because it finds an inconsistency or conflict in read-only fields like email or user id, so it sends back a 409 Conflict HTTP code.
A 409 is also thrown in case of a version conflict:
For example, you may get a 409 error if you try to upload a file to the Web server which is older than the one already there – resulting in a version control conflict.
The Robustness Principle states: "Be conservative in what you do [send], be liberal in what you accept." If you agree philosophically with this, then the solution is obvious: ignore any invalid data in PUT requests. That applies to both immutable data, as in your example, and actual nonsense, e.g. unknown fields.
Let's take an example: I have a payload with a few fields like id, username, etc. that are non-writable. If someone sends them in a PUT payload, I have 3 options:

- Validate against read-only or non-writable fields (or any extra fields) and return a 400 validation error. (breaks backward compatibility)
- Follow the robustness principle and accept the payload with any fields, including non-writable ones; however, if a value does not match (the id or username in the payload does not match the DB data / a GET on the resource), return HTTP 409 Conflict.
- Silently ignore any other fields and their values and update only the modifiable fields. (accepts any payload and keeps it backward compatible)
I really dislike the equality check with 409 because it invariably requires the client to do a GET in order to retrieve the current data before being able to do a PUT. That's just not nice and is probably going to lead to poor performance, for somebody, somewhere. I also really don't like 403 (Forbidden) for this, as it implies that the entire resource is protected, not just a part of it. So my opinion is: if you absolutely must validate instead of following the robustness principle, validate all of your requests and return a 400 for any that have extra or non-writable fields.
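The three options above can be sketched as plain functions. The field names, the stored record, and the (status, body) return shape are all hypothetical, just to illustrate how each strategy behaves:

```python
# Sketch: three ways to handle read-only fields in a PUT payload.
READ_ONLY = {"id", "username", "email"}   # non-writable fields
WRITABLE = {"bio"}                        # the only modifiable field here

stored = {"id": 42, "username": "premaseem", "email": "user@example.com", "bio": "old"}

def put_strict(payload):
    # Option 1: return 400 for any extra or non-writable field in the payload.
    if payload.keys() - WRITABLE:
        return 400, {"error": "payload contains non-writable or unknown fields"}
    stored.update(payload)
    return 200, stored

def put_conflict_check(payload):
    # Option 2: accept read-only fields, but 409 if they disagree with stored data.
    for field in READ_ONLY & payload.keys():
        if payload[field] != stored[field]:
            return 409, {"error": "conflict on read-only field '%s'" % field}
    stored.update({k: v for k, v in payload.items() if k in WRITABLE})
    return 200, stored

def put_ignore(payload):
    # Option 3 (robustness principle): silently drop anything not writable.
    stored.update({k: v for k, v in payload.items() if k in WRITABLE})
    return 200, stored

print(put_strict({"id": 42, "bio": "new"})[0])         # 400
print(put_conflict_check({"id": 7, "bio": "new"})[0])  # 409
print(put_ignore({"id": 7, "bio": "new"})[0])          # 200
```

Note how option 2 forces the client to know the current stored values (hence the GET-before-PUT complaint above), while option 3 accepts anything and simply updates the writable subset.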