We’re embarking on an effort to research and provide overviews of the many ways different caching technologies and approaches can be used with GraphQL. This is the first of several posts we’ll be writing on the topic here at Hasura. To give an idea of where we’re starting, here is a list of some of the approaches we’ll be taking a more in-depth look at in subsequent articles.
- Web Cache — browser caching and mixed local and shared caching approaches.
- Database Caching — how queries are cached at the database tier, and how we can most effectively use those caches in the application tiers above, such as GraphQL, and beyond.
- Client Cache — how does a range of client technologies provide caching? For example:
  - iPhone or Android app cache.
  - Android or iPad tablet caching.
  - Operating-system-level cache on PCs and other systems.
  - Application client cache.
- Server Cache
  - CDN (Content Delivery Network)
  - Reverse proxies
  - Web accelerators
  - Key/value stores
  - HTTP / network-level cache
  - Cache servers — Memcached, ElastiCache, Redis, etc.
These are just a few of the options we’ll be covering in the coming weeks. But before we get into any of them, let’s discuss what exactly a cache is and the purpose of using a cache in the systems we build.
Definition: What is a cache?
At a fundamental level, the term cache means to hide things in a place that conceals and preserves them. The definition, as it reads in the Merriam-Webster Dictionary:
1a: a hiding place, especially for concealing and preserving provisions or implements; b: a secure place of storage — “discovered a cache of weapons”
2: something hidden or stored in a cache — “The cache consisted of documents and private letters.”
3: a computer memory with very short access time used for storage of frequently or recently used instructions or data — called also cache memory
And then there is the definition of cache used as a verb: to cache something is to store it in a cache.
The purpose of a cache is to take load off of something else, making data or results available to another element of a process or system. This is important to note, even if it seems obvious at first. Sometimes a cache is added to an architecture without serving a functional purpose around redistributing data or compute. In those scenarios it’s important to keep in mind the intent of using a cache in the first place, and to determine whether it should be there at all.
One of the most common reasons for implementing a cache is to improve the performance of an application: making displayed data load faster, speeding up reads or writes through various interfaces, or meeting other demands. At the root it’s always about performance and load, and the application is usually the first place those demands appear.
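As a minimal sketch of application-level caching, here is in-process memoization using Python’s `functools.lru_cache`. The `expensive_report` function and its latency are hypothetical stand-ins for any slow computation or remote call.

```python
from functools import lru_cache
import time

@lru_cache(maxsize=128)
def expensive_report(customer_id):
    # Hypothetical stand-in for a slow computation or remote call.
    time.sleep(0.01)  # simulate latency
    return {"customer_id": customer_id, "total": customer_id * 100}

# The first call runs the function body; repeated calls with the same
# argument are answered from the in-process cache instead.
first = expensive_report(42)
second = expensive_report(42)
assert first == second
print(expensive_report.cache_info().hits)  # 1 hit after the second call
```

The trade-off, as with any cache, is staleness: cached results are only valid as long as the underlying data doesn’t change.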
The database routinely becomes a point of focus for cost, performance, and other reasons. Putting a cache in place to take some of the load off the database can provide huge benefits. A database cache can take several forms: eliminating a hotspot within the database itself, caching an entire copy of the data in a delivery network, or some other means.
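One common shape for a database cache is the cache-aside (lazy loading) pattern: check the cache first, fall back to the database on a miss, then populate the cache for subsequent reads. A minimal sketch, where the `db` dict and keys stand in for a real database:

```python
# Cache-aside (lazy loading) sketch; `db` stands in for a real database.
cache = {}
db = {"user:1": "Ada", "user:2": "Grace"}

def get_user(key):
    if key in cache:          # cache hit: no database work needed
        return cache[key]
    value = db.get(key)       # cache miss: read from the database
    if value is not None:
        cache[key] = value    # populate the cache for next time
    return value

get_user("user:1")   # miss: reads the database, fills the cache
get_user("user:1")   # hit: served from the cache
```

In a production system the `cache` dict would typically be an external store such as Redis or Memcached, with an expiry policy so stale entries are eventually evicted.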
Sometimes the reason to add a cache to a system architecture is to put the most performant storage, measured in IOPS (Input/Output Operations Per Second), where the system demands it most. This can be done by placing the database on hardware that provides extremely high IOPS, such as SSDs rather than spinning disks. In other scenarios write demand may exceed the IOPS available, so a write cache is put in place to buffer writes and flush them to storage as IOPS become available later. The possible combinations of caches in this space are numerous and, as with other areas of caching, can be tailored to the needs of the system.
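The write-buffering idea above is often called write-behind (or write-back) caching. A toy sketch, where `slow_storage` and `write_buffer` are illustrative stand-ins for a disk-backed store and an in-memory buffer:

```python
# Write-behind (write-back) sketch: writes land in a fast in-memory
# buffer immediately and are flushed to slower storage later, when
# IOPS are available. All names here are illustrative.
from collections import deque

slow_storage = {}
write_buffer = deque()

def write(key, value):
    # Acknowledge the write as soon as it is buffered in memory.
    write_buffer.append((key, value))

def flush(batch_size=10):
    # Drain buffered writes to the backing store in batches, e.g.
    # from a background task when the disk has spare capacity.
    for _ in range(min(batch_size, len(write_buffer))):
        key, value = write_buffer.popleft()
        slow_storage[key] = value

write("a", "1")
write("b", "2")
flush()
```

The catch with write-behind caching is durability: buffered writes that haven’t been flushed yet can be lost on a crash, which is why real implementations pair the buffer with replication or a persistent log.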
Sometimes caching isn’t purely about performance; sometimes the emphasis is on data integrity across disparate systems. In the case of session management, many back-end systems are stateless by design, yet the need to maintain session data for clients remains. A session management cache can be put in place to do exactly this. It often takes the form of an in-memory database, such as Redis.
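A minimal sketch of such a session store, with expiry. In production this would typically be Redis (or similar) shared by all the stateless back ends; here a plain dict with expiry timestamps stands in for it, and the TTL value is an arbitrary example.

```python
# Session store sketch; a dict stands in for a shared store like Redis.
import time
import uuid

SESSION_TTL_SECONDS = 1800  # arbitrary example TTL
sessions = {}

def create_session(user_data):
    session_id = uuid.uuid4().hex
    sessions[session_id] = (time.time() + SESSION_TTL_SECONDS, user_data)
    return session_id

def get_session(session_id):
    entry = sessions.get(session_id)
    if entry is None:
        return None
    expires_at, user_data = entry
    if time.time() > expires_at:   # expired: evict and report a miss
        del sessions[session_id]
        return None
    return user_data

sid = create_session({"user": "ada", "cart": ["book"]})
assert get_session(sid) == {"user": "ada", "cart": ["book"]}
```

Because any back-end instance can look up the session by its ID, the individual servers stay stateless while clients keep a consistent session.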
A favorite of ours here at Hasura is API caching. With this we can get blazing-fast query times and easily cache responses to maintain an extremely high rate of response. Any data that isn’t changing frequently is a prime candidate for this type of caching. It can be turned on with query response caching.
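In Hasura Cloud, query response caching is opted into per query with the `@cached` directive. A sketch of what such a request payload looks like; the operation name, fields, and TTL below are illustrative, not from a real schema:

```python
import json

# The @cached directive asks Hasura Cloud to serve this query from its
# query response cache; ttl is the cache lifetime in seconds. The
# operation and field names are illustrative.
query = """
query ProductList @cached(ttl: 120) {
  products {
    id
    name
    price
  }
}
"""

# This payload would be POSTed to your project's /v1/graphql endpoint
# with any HTTP client; no network call is made in this sketch.
payload = json.dumps({"query": query})
```

Repeated identical queries within the TTL are then answered from the cache instead of hitting the database.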
This article has provided a short overview of the types of cache, the purposes they serve, and, at the root of design, what a cache is. Now we can step into specific cache types, services, products, and implementations to see what we can gain with each. In our next post we’ll take a look at the Hasura approach to caching, specifically via Hasura Cloud.
As a note of promotion, Hasura Cloud provides Query Response Caching when you choose the Standard tier. If you’re interested in testing it out, the Free Tier gives you the experience of a fully managed, production-ready GraphQL API as a service to help you build modern apps faster. Get started in 30 seconds at https://cloud.hasura.io/
Originally published at https://hasura.io on February 17, 2021.