APIs can run over HTTP and yet not be RESTful. This video covers REST's architectural constraints and the properties that make an API RESTful.
- [Instructor] In the previous video, we learned about how RESTful APIs use resources. In this video, we are going to take a step back and look at what REST is. Then, we will learn about the six properties or constraints that make APIs RESTful. These properties help us make our APIs scalable, that is, they can easily be expanded to handle more users. Now that we know what APIs are, we're going to take a look at what REST is. REST, which stands for Representational State Transfer, is an architecture for designing network-based applications.
In other words, REST is a way of structuring a system so that it's really scalable. We will be able to see this better once we look at the architectural constraints. Many people have the misconception that REST is a protocol, framework, or standard. REST is not a protocol. HTTP is a protocol that REST uses. REST is not a framework. There is no such thing as a REST SDK that we can download and plug into our application. Finally, REST is not a standard. Every API can implement REST differently.
Now, let us move to the architectural constraints. The first constraint is the client-server constraint. REST uses clients to handle the UI concerns and servers to handle the data storage logic. By separating the UI from the data storage, we improve the portability of the user interface across multiple platforms, and improve scalability by simplifying the server components. The second REST constraint is that servers must be stateless. In other words, servers should know nothing about a client beyond what is included in a client's request.
Therefore, each request from a client to a server must contain all the information necessary to understand the request. Session state is therefore kept entirely on the client. Stateless servers have development benefits as well as usage benefits. Stateless servers increase the visibility of an application's execution. Since servers are stateless, developers only need to examine a single failing request, not the history of requests before it, to debug the application. Another benefit of stateless servers is reliability. Since no state is stored on the server, if the system ever fails, all that is needed to get the system back as it was is to restart a server.
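The stateless idea can be sketched in Node-style JavaScript. This is a minimal illustration, not code from the course: the request shape and the `parseRequest`/`handleListNotes` names are invented. The point is that everything the handler needs arrives with the request itself, so no session survives between calls and any server in a pool could answer.

```javascript
// Sketch of a stateless handler. All hypothetical names.

function parseRequest(req) {
  // Every request carries its own credentials and state.
  return {
    user: req.headers['authorization'], // who is asking
    page: Number(req.query.page || 1),  // where they are in the list
  };
}

function handleListNotes(req) {
  const { user, page } = parseRequest(req);
  if (!user) {
    // Without credentials in the request itself, we cannot proceed --
    // there is no server-side session to fall back on.
    return { status: 401, body: { error: 'Missing Authorization header' } };
  }
  // Nothing here depends on any previous request.
  return { status: 200, body: { user, page, notes: [] } };
}
```

Because the handler holds no per-client memory, restarting the server or routing the next request to a different machine changes nothing for the client.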
Stateless servers improve scalability. Since servers do not have sessions, they can free up memory as soon as they close a connection. This enables them to handle more requests. As with any architectural choice, stateless servers have their own drawbacks. Since clients need to send all their state information with each request, more network bandwidth will be used. If an application has clients on multiple platforms, each client must handle the logic for storing and sending its state. This adds more complexity to the client's code.
The third constraint is caching. REST requires that all responses be labeled as cacheable or not. Caching is very important in network-based applications. It reduces a user's wait time. It also reduces a server's load. Since REST also enforces stateless servers, caching can be done on the client, on the server, or in any intermediaries, such as ISP proxies. Even if a client does not cache a response, one of the machines along the route to the server may have it in its cache, thereby reducing the request's completion time.
Since many client requests are satisfied by caches, servers get fewer requests. This increases the scalability of the infrastructure. The drawback of caching is that clients may use stale data. For instance, in some places on Facebook, we see our old profile picture rather than the one we just uploaded. This usually happens in the few minutes after your profile picture is updated, because the old picture is still fresh in some caches. The key point in using caches is to find the right balance that keeps our system running and our users happy.
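In HTTP, labeling a response as cacheable or not usually comes down to the `Cache-Control` header. The sketch below is a hypothetical helper (the `cacheHeadersFor` name and the `/avatars/` path are invented for illustration) showing the balance described above: let slow-changing images go stale for a few minutes, but never cache private data.

```javascript
// Sketch: labeling responses as cacheable or not. Hypothetical paths.

function cacheHeadersFor(path) {
  if (path.startsWith('/avatars/')) {
    // Profile pictures may be served stale for up to 5 minutes,
    // by this server, intermediaries, or the client's own cache.
    return { 'Cache-Control': 'public, max-age=300' };
  }
  // A user's private notes must never be stored by any cache.
  return { 'Cache-Control': 'no-store' };
}
```

A response labeled `public, max-age=300` is exactly the kind that an ISP proxy along the route can answer without the request ever reaching our server.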
REST APIs should have a uniform interface. Clients should not need to know whether an object is stored in a graph database or a relational database. Clients should only know URL endpoints and data representations, and the server should handle the necessary internal communication. There are four facets to a uniform interface. First is the identification of resources. All resources, whether they represent documents, collections, or procedures, should be identified using similar identifiers, such as HTTP URLs.
Second is manipulation of resources through representations. Clients should be able to manipulate any server-side data using uniform representations, such as JSON or XML. Third is self-descriptive messages. Servers should include information that helps clients parse the response. The fourth facet is hypermedia as the engine of application state, also called HATEOAS. HATEOAS says that clients should not hardcode a server's endpoints. Clients should only know one fixed endpoint, which gives them the URLs to all other endpoints.
In this case, clients will never try to access endpoints that are down. Personally, most of the APIs that I use do not use HATEOAS. It is an interesting idea though, and can help make errors show up faster on the client. The drawback to a uniform interface is degraded efficiency. For instance, a mobile client may not use all the data that a desktop client uses. Since the API has a uniform interface, all clients will receive the same data, which means that a mobile client will receive data that it will not use.
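HATEOAS is easiest to see in a response body. The sketch below invents a tiny link scheme (the `links` field, the note shape, and the URLs are all hypothetical): the client starts from one known root URL and discovers every other endpoint from the responses themselves.

```javascript
// Sketch of HATEOAS-style responses. The shapes and URLs are invented.

function rootResponse(baseUrl) {
  // The one fixed endpoint the client knows; everything else is discovered.
  return {
    links: {
      self:  `${baseUrl}/`,
      notes: `${baseUrl}/notes`,
      users: `${baseUrl}/users`,
    },
  };
}

function noteResponse(baseUrl, note) {
  // Each resource points the client to related resources.
  return {
    id: note.id,
    text: note.text,
    links: {
      self:   `${baseUrl}/notes/${note.id}`,
      author: `${baseUrl}/users/${note.authorId}`,
    },
  };
}
```

If the server moves `/notes` somewhere else, only the root response changes; a client that follows links instead of hardcoding URLs keeps working.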
The fifth constraint is a layered system. This means that a layer added in front of an existing layer should be able to add features by transforming the content of messages. Layering has many benefits in APIs. Layering allows for encapsulation. For example, if existing users depend on a legacy server with an outdated protocol, we can easily add a layer that acts as a middleman between our RESTful API and the legacy server. Layering also enables load balancing. Load balancing is the process of distributing traffic across multiple servers.
Load balancing increases an API's scalability. Layering is important for security. Layers can be added to check user permissions and only allow authorized users to access secure resources. This is exactly what firewalls do to messages on a network. With each layer comes a processing delay and a transmission delay. This increases a client's overall wait time. For network-based systems that support caching, this can be offset by the benefit of shared caches at intermediaries.
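A permission-checking layer like the one just described can be sketched as a wrapper around an existing handler. The `withAuth` name and the handlers are invented for illustration; the point is that the inner layer is unchanged and unaware of the layer in front of it.

```javascript
// Sketch: an authorization layer in front of a handler. Hypothetical names.

function withAuth(allowedUsers, handler) {
  // Returns a new layer that filters messages before the inner handler sees them.
  return function (req) {
    const user = req.headers['authorization'];
    if (!allowedUsers.has(user)) {
      return { status: 403, body: { error: 'Forbidden' } };
    }
    return handler(req); // the inner layer is untouched
  };
}

// An existing handler, oblivious to security concerns.
const getSecret = (req) => ({ status: 200, body: { secret: 42 } });

// The layered version only lets authorized users through.
const guarded = withAuth(new Set(['alice']), getSecret);
```

Because each layer only sees messages, we could stack more of them, such as a cache or a load balancer, in front of `guarded` without changing any existing code.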
The last architectural constraint of REST is code on demand. Code on demand allows clients to be extended by downloading code. For example, a client should be able to download a module that can handle new URL endpoints. Unlike the other constraints, code on demand is not a required constraint. In this video we looked at the architectural constraints, or properties that make RESTful APIs scalable. In the next video we will set up our development environment so that we can start using Twitter's REST API.
This Node.js course gives you an overview of a RESTful API and the logical steps of creating one. It explores three different APIs, focusing on their similarities and differences and how to effectively implement one. Instructor Saleh Hamadeh starts off by defining APIs, showing how they can be built on top of HTTP and listing the properties that make an API RESTful. Learn how to develop Twitter Notes, a sample web application that lets users leave notes for their Twitter friends. Use Twitter's API to implement a login flow and then design a web API. Additionally, get a closer look at several other real-world APIs, and learn some best practices to keep APIs secure, maintainable, and efficient.
- Identifying REST resources
- Setting up the development environment
- Consuming a RESTful API
- Creating an OAuth login request
- Getting an access token
- Saving data in MongoDB
- Building a RESTful API
- Testing user-perceived performance
- Looking at APIs in the real world
- Best practices for building RESTful APIs