What is a Web Mashup (web application hybrid)?

A mashup, in web development, is a web page, or web application, that uses content from more than one source to create a single new service displayed in a single graphical interface. For example, a user could combine the addresses and photographs of their library branches with a Google map to create a map mashup.[1] The term implies easy, fast integration, frequently using open application programming interfaces (open API) and data sources to produce enriched results that were not necessarily the original reason for producing the raw source data.


The main characteristics of a mashup are combination, visualization, and aggregation. Its purpose is to make existing data more useful, for personal and professional use. To be able to permanently access the data of other services, mashups are generally client applications or hosted online.

In recent years, more and more web applications have published APIs that enable software developers to easily integrate data and functions the SOA way, instead of building them themselves. Mashups can be considered to have an active role in the evolution of social software and Web 2.0.

The architecture of a mashup is divided into three layers:

  • Presentation / user interaction: the user interface of the mashup, built with technologies such as HTML/XHTML, CSS, JavaScript and Ajax.
  • Web services: the product's functionality, accessed through API services such as XML-RPC, SOAP or REST.
  • Data: handling the data (sending, storing and receiving), typically as XML or JSON.

Architecturally, there are two styles of mashups: Web-based and server-based. Whereas Web-based mashups typically use the user’s web browser to combine and reformat the data, server-based mashups analyze and reformat the data on a remote server and transmit the data to the user’s browser in its final form.[9]
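As a sketch of the Web-based style, the combination step can run entirely in the browser: fetch two independent feeds and join them before rendering. The feed shapes and the `mergeFeeds` helper below are hypothetical, not any real API.

```javascript
// Hypothetical client-side mashup sketch: join library-branch records with a
// separate photo feed, keyed by branch id, before handing them to a map widget.
function mergeFeeds(branches, photosById) {
  return branches.map(branch => ({
    ...branch,
    photo: photosById[branch.id] || null, // null when the second source has no match
  }));
}

// In a browser the two sources would be fetched first, for example:
//   const [branches, photos] = await Promise.all([
//     fetch('/api/branches').then(r => r.json()),
//     fetch('/api/photos').then(r => r.json()),
//   ]);
//   render(mergeFeeds(branches, photos));
```

The join itself is kept as a pure function so the "combine and reformat" step is easy to test separately from the fetching.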

Mashups appear to be a variation of a façade pattern.[10] That is: a software engineering design pattern that provides a simplified interface to a larger body of code (in this case the code to aggregate the different feeds with different APIs).

Mashups can be used with software provided as a service (SaaS).

After several years of standards development, mainstream businesses are starting to adopt service-oriented architectures (SOA) to integrate disparate data by making them available as discrete Web services. Web services provide open, standardized protocols to provide a unified means of accessing information from a diverse set of platforms (operating systems, programming languages, applications). These Web services can be reused to provide completely new services and applications within and across organizations, providing business flexibility.

web : What is cURL and how to use it

cURL is an incredibly powerful tool when working on the web. It can be thought of as a command-line alternative to Postman for validating API endpoints. Imagine you are on a Linux box and want to test an API's response: the quick solution is just to run curl <api endpoint> and you are done 🙂

-o => saves the response to a file
-i => displays the response headers along with the actual response
-I => displays only the response headers and no content
-X => specifies the verb, like POST or DELETE
-H => sends a header with the request
-d => sends data with a POST request

Example: most often used to validate a REST API by checking that the response code is 200.

curl -X GET http://www.google.com -I

HTTP/1.1 200 OK
Date: Sun, 20 Mar 2016 17:30:05 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1

At its most basic, cURL makes requests to URLs. Normally you want to interact with those URLs in some way.

curl http://www.google.com

By default, cURL makes an HTTP GET request, so that will fetch the HTML from http://www.google.com. Awesome!

Spitting it out in the terminal isn’t so useful, so let’s save it to a file:

curl -o google.html http://www.google.com

Looking at google.html will show you the same contents as before. This is especially useful when you want to save a JSON response.

Great! Well, interacting with HTML is pretty boring; most of what I do is interacting with APIs. cURL is a great way to quickly test your PHP script, your JSON endpoint, or what an API actually returns.

I work at CloudMine and we have a decent JSON API, so I’ll use that just as an example. Just keep in mind that the URL isn’t important here, but rather the curl options.

By default, curl makes GET requests, so let’s GET some data!

curl https://api.cloudmine.me/v1/app/928a78ffd73e4ff78383d1d4c06dd5a7/text?keys=all  

Whoa! What?

{"errors":["API Key invalid"]}

Oh, CloudMine expects an API Key to be sent as well. As an additional header? Sure, no problem:

curl https://api.cloudmine.me/v1/app/928a78ffd73e4ff78383d1d4c06dd5a7/text \  
-H X-CloudMine-ApiKey:e90ef1aeaadd48de93b45038ed592a06

Response:
{"success":{},"errors":{}}

Excellent! Sending the correct header worked. But the response is rather vague; I'm not sure if it worked and found no data, or if it didn't work and didn't return an error. Let's examine the headers on the reply:

curl https://api.cloudmine.me/v1/app/928a78ffd73e4ff78383d1d4c06dd5a7/text \  
-H X-CloudMine-ApiKey:e90ef1aeaadd48de93b45038ed592a06 -i

Response:

HTTP/1.1 200 OK  
Date: Sat, 20 Dec 2014 17:31:10 GMT  
Content-Type: application/json; charset=utf-8  
Transfer-Encoding: chunked  
Status: 200 OK  
X-Request-Id: a9fbdaec-3b3c-4b11-92d8-5af2e9f01e8e  
Cache-Control: max-age=0, private, must-revalidate  
X-Runtime: 0.020247  
X-Rack-Cache: miss  
Access-Control-Allow-Origin: *  
Access-Control-Expose-Headers: X-Request-Id

{"success":{},"errors":{}}

Adding ‘-i’ will return all the headers on the response. If you only want the headers, you can use ‘-I’.

Excellent. But the URL I want to hit is actually an endpoint for consuming data. I want to send information. Let’s make a POST request!

curl -X POST https://api.cloudmine.me/v1/app/928a78ffd73e4ff78383d1d4c06dd5a7/text \  
-H X-CloudMine-ApiKey:e90ef1aeaadd48de93b45038ed592a06 -i
{"errors":[{"code":400,"message":"Invalid payload"}]}

Oops, we forgot to send data. To send information, we use -d.

curl -X POST https://api.cloudmine.me/v1/app/928a78ffd73e4ff78383d1d4c06dd5a7/text \  
-d '{"myrandomkey":{"name":"ethan"}}' \
-H X-CloudMine-ApiKey:e90ef1aeaadd48de93b45038ed592a06 \
-H "content-type:application/json"

Response:

{"success":{"myrandomkey":"created"},"errors":{}}

CloudMine expects the Content-Type to be explicitly stated, so we add that as a header too. Other common values are application/xml and application/x-www-form-urlencoded.

Cool. Well, I don’t like that object, so let’s delete it.

curl -X DELETE "https://api.cloudmine.me/v1/app/928a78ffd73e4ff78383d1d4c06dd5a7/data?keys=myrandomkey" \  
-H X-CloudMine-ApiKey:e90ef1aeaadd48de93b45038ed592a06

Response:

{"success":{"myrandomkey":"deleted"},"errors":{}}

IDEs on the web, or online playgrounds for JS

A variety of code playgrounds have appeared during the past couple of years. The majority offer a quick and dirty way to experiment with client-side code and share with others. Typical features include:

  • color-coded HTML, CSS and JavaScript editors
  • a preview window — many update on the fly without a refresh
  • HTML pre-processors such as HAML
  • LESS, SASS and Stylus CSS pre-processing
  • inclusion of popular JavaScript libraries
  • developer consoles and code validation tools
  • sharing via a short URL
  • embedding demonstrations in other pages
  • code forking
  • zero cost (or payment for premium services only)
  • showing off your coding skills to the world!

The best feature: they allow you to test and keep experimental code snippets without the rigmarole of creating files, firing up your IDE or setting up a local server.

My favorite is Plunker.
Plunker is an online community for creating, collaborating on and sharing your web development ideas. It has very good support for AngularJS.

It is just like an IDE on the web. You can make changes in a file, see the preview online, get error notifications, etc. The best thing is that you can share the final POC with others to experiment with. You can create multiple files in the same project, which means you can test more abstractly and easily swap functionality in and out. Your HTML head is in your code window, making it easy to see what's getting loaded. Being able to create your own files also means being able to create external data sources, which is fantastic for playing with data-loading functionality.


However, there are others, and they are nice as well. Have a look at them and pick what suits your taste 🙂

JSFiddle

JSFiddle was one of the earliest code playgrounds and a major influence for all which followed. Despite the name, it can be used for any combination of HTML, CSS and JavaScript testing. It’s looking a little basic today, but still offers advanced functionality such as Ajax simulation.

CodePen

The prize for the best-looking feature-packed playground goes to CodePen. The service highlights popular demonstrations (“Pens”) and offers advanced functionality such as sharing and embedding. The PRO service provides cross-browser testing, pair-programming and teaching options for just $9 per month.

CSS Deck

This may be named CSS Deck, but it's a fully-fledged HTML, CSS and JavaScript playground with social and collaboration features. It's similar to CodePen (I don't know who influenced whom!) but you might prefer it.

JS Bin

JS Bin was started by JS guru Remy Sharp. It concentrates on the basics and handles them exceedingly well. As far as I’m aware, it’s also the only option which offers a JavaScript console. Recommended.

Dabblet

Another early playground, Dabblet started life as an HTML5/CSS3 demonstration system by Lea Verou but it’s recently received JavaScript facilities. It looks gorgeous and has one killer feature — browser CSS prefixes are added automatically. There’s no need to enter that -webkit, -moz and -ms nonsense yourself.

Tinkerbin

Tinkerbin is an alpha release and one of the simpler options here. It may not offer features above and beyond the alternatives but it’s attractive and functional.

Liveweave

Liveweave is slightly unusual in that it places your HTML, CSS and JavaScript into a single file. It’s not possible to share your creation, but you can download the result and store or open it locally. It’s ideal for quick and dirty private experimentation.

What is HTML5 Web Storage?

When web developers think of storing anything about the user, they immediately think of uploading to the server. HTML5 changes that, as there are now several technologies allowing the app to save data on the client device. It might also be sync’d back to the server, or it might only ever stay on the client: that’s down to you, the developer.

Earlier, this was done with cookies. However, Web Storage is more secure and faster. The data is not included with every server request, but used ONLY when asked for. It is also possible to store large amounts of data, without affecting the website’s performance. The data is stored in key/value pairs, and a web page can only access data stored by itself.

There are several reasons to use client-side storage. First, you can make your app work when the user is offline, possibly sync’ing data back once the network is connected again. Second, it’s a performance booster; you can show a large corpus of data as soon as the user clicks on to your site, instead of waiting for it to download again. Third, it’s an easier programming model, with no server infrastructure required. Of course, the data is more vulnerable and the user can’t access it from multiple clients, so you should only use it for non-critical data, in particular cached versions of data that’s also “in the cloud”.
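The "performance booster" idea above can be sketched as a pair of cache helpers. Here `storage` is any object with the Web Storage getItem/setItem API; in a real browser you would pass `localStorage`, and the key name is just an example.

```javascript
// Cache helpers over the Web Storage API. Values are JSON-encoded
// because Web Storage only stores strings.
function readCache(storage, key) {
  const raw = storage.getItem(key);
  return raw === null ? null : JSON.parse(raw);
}

function writeCache(storage, key, value) {
  storage.setItem(key, JSON.stringify(value));
}

// In a browser: render readCache(localStorage, 'articles') immediately,
// then fetch fresh data and writeCache(localStorage, 'articles', fresh).
```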

Session Storage and Local Storage

It is important to know that there are two types of Web Storage objects: sessionStorage and localStorage.

sessionStorage is only available within the browser tab or window session. It’s designed to store data in a single web page session.

localStorage is kept even between browser sessions. This means the data is still available when the browser is closed and reopened, and it is also shared instantly between tabs and windows.

Web Storage data is, in both cases, not available between different browsers. For example, storage objects created in Firefox cannot be accessed in Internet Explorer, exactly like cookies.

Cookies Vs Local Storage

HTML5 introduces two mechanisms, similar to HTTP session cookies, for storing structured data on the client side and to overcome the following drawbacks:

  • Cookies are included with every HTTP request, thereby slowing down your web application by transmitting the same data repeatedly.

  • Cookies are sent unencrypted over the internet (unless the whole website is served over HTTPS).

  • Cookies are limited to about 4 KB of data, which is often not enough to store the required data.

Sample code to access localStorage:
<!DOCTYPE HTML>
<html>
<body>

  <script type="text/javascript">
    if( localStorage.hits ){
       localStorage.hits = Number(localStorage.hits) + 1;
    }else{
       localStorage.hits = 1;
    }
    document.write("Total Hits: " + localStorage.hits);
  </script>
  <p>Refresh the page to increase the number of hits.</p>
  <p>Close the window and open it again and check the result.</p>

</body>
</html>

Delete Web Storage:

Storing sensitive data on the local machine could be dangerous and could leave a security hole.

The Session Storage data is deleted by the browser immediately after the session is terminated.

To clear a single local storage entry you need to call localStorage.removeItem('key'), where 'key' is the key of the value you want to remove. If you want to clear all settings, call the localStorage.clear() method.
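The Web Storage API's two deletion calls, removeItem and clear, can be sketched as follows; `storage` stands in for `localStorage` (or `sessionStorage`), and the key name is just an example.

```javascript
// Remove one entry, or wipe everything this origin has stored.
function forgetHits(storage) {
  storage.removeItem('hits'); // deletes a single key; a no-op if it is absent
}

function forgetEverything(storage) {
  storage.clear(); // deletes every key/value pair for this origin
}
```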

web : Difference between HTTP and HTTPS

Hypertext Transfer Protocol Secure (HTTPS) is a widely used communications protocol for secure communication over a computer network, with especially wide deployment on the Internet. Technically, it is not a protocol in itself; rather, it is the result of simply layering the Hypertext Transfer Protocol (HTTP) on top of the SSL/TLS protocol, thus adding the security capabilities of SSL/TLS to standard HTTP communications. HTTPS provides authentication of the website and associated web server that one is communicating with, which protects against man-in-the-middle attacks. Additionally, it provides bidirectional encryption of communications between a client and server, which protects against eavesdropping and tampering with and/or forging the contents of the communication.

There are some primary differences between HTTP and HTTPS, however, beginning with the default port, which is 80 for HTTP and 443 for HTTPS. HTTPS works by transmitting normal HTTP interactions through an encrypted system, so that in theory the information cannot be accessed by any party other than the client and the end server. There are two common types of encryption layers: Transport Layer Security (TLS) and Secure Sockets Layer (SSL), both of which encode the data records being exchanged.
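The two default ports can be checked with the WHATWG URL parser (available as a global `URL` in modern browsers and Node). The parser normalizes a default port away, so the sketch maps the scheme to its well-known port instead.

```javascript
// Map a URL's scheme to its default port: 80 for http, 443 for https.
const DEFAULT_PORTS = { 'http:': 80, 'https:': 443 };

function effectivePort(urlString) {
  const url = new URL(urlString);
  // url.port is '' when the URL uses the scheme's default port.
  return url.port !== '' ? Number(url.port) : DEFAULT_PORTS[url.protocol];
}
```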

When using an https connection, the server responds to the initial connection by offering a list of encryption methods it supports. In response, the client selects a connection method, and the client and server exchange certificates to authenticate their identities. After this is done, both parties exchange the encrypted information after ensuring that both are using the same key, and the connection is closed. In order to host https connections, a server must have a public key certificate, which embeds key information with a verification of the key owner’s identity. Most certificates are verified by a third party so that clients are assured that the key is secure.

How Google Analytics Works


Google Analytics works by the inclusion of a block of JavaScript code on pages in your website. When visitors to your website view a page, this JavaScript code references a JavaScript file which then executes the tracking operation for Analytics. The tracking operation retrieves data about the page request through various means and sends this information to the Analytics server via a list of parameters attached to a single-pixel image request.
Google Analytics is enabled by including a tracking code in the template of your website. This way, every page view on your site is reported back to Google, which can then tell you all kinds of information about the traffic you're receiving on those pages.
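The single-pixel trick described above can be sketched as follows; the parameter names here are illustrative only, not the real Analytics wire protocol.

```javascript
// Serialize tracking data into query parameters for a 1x1 image request.
function buildBeaconUrl(base, params) {
  const query = Object.entries(params)
    .map(([key, value]) => `${encodeURIComponent(key)}=${encodeURIComponent(value)}`)
    .join('&');
  return `${base}?${query}`;
}

// In a browser the request is fired simply by assigning the URL to an image:
//   new Image(1, 1).src = buildBeaconUrl('https://example.com/collect',
//     { page: location.pathname, title: document.title });
```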

Inclusion of JavaScript with a tracking token

This is fundamentally different from how Urchin works: Urchin is installed on the same server as your website, so it looks at your website from the "inside" and tracks traffic in terms of hits to the server. Since a hit is registered every time a file is requested (every image, CSS or JavaScript file), consider just how many hits might be logged just from loading your homepage! Also, Urchin lumps these files in along with your pages in its reports, so don't be surprised if "robots.txt" is the most popular file on your site. What Google understands is that you are interested in how people use your website, so they've built their analytics tool around that principle.
By the way, Google allows you to add IP addresses to a filter in your analytics account so that your numbers are not skewed by your own traffic on the website. It would be a real bummer to go on thinking your website is crazy popular because you spend hours clicking through it every day.


What is Google Analytics?

Google provides the GA (Google Analytics) service, with which a webmaster can analyze the web traffic on their website: who visited, from where, for how long, at what time, etc. Once you enable it, you can see the data graphically on the Google Analytics server in real time. It is very simple to consume this service: just create an account at http://www.google.com/analytics/ and take the tracking token for that account. Put the tracking token in the base web page of the website you want tracked or analyzed and you are done.

Google Analytics dashboard view

"Know your visitors" is the key idea behind the creation of the GA service. Once a website is up and running, it is really important to get a sense of the users by tracking their activities. It helps in making important decisions based on users' locale, likings, timings, etc.