To understand the material in this lesson, you need a basic understanding of HTTP and HTTP APIs.
Using the DOM tree does help make our sites more lively, but it's still not enough to create standalone widgets or a full-fledged Single Page Application (SPA) with a backend.
Let's take a specific example. Many services make it possible to use different widgets, such as weather or currency rates. It works like this: you insert some code provided by the service into your HTML. This code then loads the widget itself and periodically requests the necessary data from the service's servers. This can happen whenever the widget user clicks a button that requires new data, for example, to show the weather for the next week.
A similar widget is also used on Hexlet; you can see it in the lower-right corner on every page. It lets you search through our guide and has a form so you can message support. The widget works using a special service and doesn't interact with the Hexlet backend in any way.
The key technology here is a mechanism for executing HTTP requests directly from the browser. It is called AJAX, which stands for “Asynchronous JavaScript and XML”. Despite the name, this technology works with more than just XML.
Before the advent of HTML5, browsers provided (and still provide) a special XMLHttpRequest object:
// example of a typical request using XMLHttpRequest
// just for reference
const request = new XMLHttpRequest();
request.onreadystatechange = () => {
  if (request.readyState === 4 && request.status === 200) {
    document.getElementById('demo').innerHTML = request.responseText;
  }
};
request.open('GET', '/api/v1/articles/152.json', true);
request.setRequestHeader('X-Requested-With', 'XMLHttpRequest');
request.send();
It's extremely inconvenient to work with, so in practice everyone used a wrapper from the jQuery library. There'll be more about this in our lesson dedicated to jQuery.
With the advent of the HTML5 standard, a new mechanism for making HTTP requests appeared:
// example of a typical request using fetch
// const promise = fetch(url[, options]);
fetch('/api/v1/articles/152.json')
  .then((response) => {
    console.log(response.status); // => 200
    console.log(response.headers.get('Content-Type'));
    return response.json();
  })
  .then((article) => {
    console.log(article.title); // => 'How do I use fetch?'
  })
  .catch(console.error);
As you can see, fetch is a function that returns a promise, which means it's convenient and pleasant to work with. And thanks to the existence of polyfills, you don't have to worry about a browser not supporting this mechanism.
Note that response.json() also returns a promise. In addition to json(), the data can be obtained using the blob(), text(), formData(), and arrayBuffer() methods.
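Each of these readers returns a promise. Here's a minimal sketch that constructs Response objects by hand (so no network is needed) to show text() and json() in action; in real code these objects come from fetch():

```javascript
// Response objects are built locally purely for demonstration;
// normally they are what fetch() resolves with.
const textResponse = new Response('Hello, fetch!');
textResponse.text().then((text) => {
  console.log(text); // => 'Hello, fetch!'
});

const jsonResponse = new Response('{"title":"How do I use fetch?"}');
jsonResponse.json().then((article) => {
  console.log(article.title); // => 'How do I use fetch?'
});
```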
Sending a form with a POST request:
const form = document.querySelector('form');

fetch('/users', {
  method: 'POST',
  body: new FormData(form),
});
Sending the form as JSON:
fetch('/users', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    name: 'Hubot',
    login: 'hubot',
  }),
});
For all its advantages, fetch is a fairly low-level mechanism. For example, when working with JSON (a very common case), you'll have to set headers yourself and do various things with the data that could otherwise be automated.
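One way to automate this boilerplate yourself is a small wrapper around fetch. The helper name postJson and the /users endpoint below are illustrative, not part of any library:

```javascript
// A sketch of a helper that sets the JSON header, serializes the body,
// checks the status, and parses the response — all in one place.
const postJson = (url, data) => fetch(url, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(data),
}).then((response) => {
  if (!response.ok) {
    throw new Error(`HTTP error: ${response.status}`);
  }
  return response.json();
});

// usage (hypothetical endpoint):
// postJson('/users', { name: 'Hubot', login: 'hubot' }).then(console.log);
```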
In practice, this has led to the creation of various libraries that work similarly but provide many more features. Moreover, many of these libraries are isomorphic, that is, they work the same way both in the browser and on the server. One of the most popular libraries at the time this course was created is axios.
As we know from previous courses, gluing strings together to build paths or URLs is a bad idea. It's easy to make mistakes, and you'd essentially be doing work that a machine can do for you. You can use one of the myriad third-party libraries, or you can use the browsers' built-in mechanism for this (polyfills are usually added for older browsers):
const url = new URL('../cats', 'http://www.example.com/dogs');
console.log(url.hostname); // => www.example.com
console.log(url.pathname); // => /cats
url.hash = 'tabby';
console.log(url.href); // => http://www.example.com/cats#tabby
url.pathname = 'démonstration.html';
console.log(url.href); // => http://www.example.com/d%C3%A9monstration.html
What's really nice is that fetch is able to work with the URL object directly:
const response = await fetch(new URL('http://www.example.com/démonstration.html'));
And here's how you can work with query parameters:
// https://some.site/?id=123
const parsedUrl = new URL(window.location.href);
console.log(parsedUrl.searchParams.get('id')); // => 123
parsedUrl.searchParams.append('key', 'value');
console.log(parsedUrl.href); // => https://some.site/?id=123&key=value
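The same API saves you from encoding values by hand when building a request URL. The endpoint below is made up for the example:

```javascript
// searchParams encodes values automatically: spaces become '+',
// and special characters like '&' are percent-encoded.
const apiUrl = new URL('https://some.site/search');
apiUrl.searchParams.set('q', 'cats & dogs');
apiUrl.searchParams.set('page', '2');
console.log(apiUrl.href); // => https://some.site/search?q=cats+%26+dogs&page=2
```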
Unlike requests made from the backend, client-side HTTP requests can be exploited by attackers to steal data. Therefore, browsers control where and how requests are made, using the same-origin policy and CORS (Cross-Origin Resource Sharing).