Creating a Source Plugin
In this tutorial, you’ll create your own source plugin that will gather data from an API. The plugin will source data, optimize remote images, and create foreign key relationships between data sourced by your plugin.
Source plugins “source” data from remote or local locations into what Gatsby calls nodes. This tutorial uses a demo API so that you can see how the data works on both the frontend and backend, but the same principles apply if you would like to source data from another API.
At a high level, a source plugin:
- Ensures local data is synced with its source and is 100% accurate.
- Creates nodes with accurate media types, human-readable types, and accurate contentDigests.
- Links nodes & creates relationships between them.
- Lets Gatsby know when nodes are finished sourcing so it can move on to processing them.
A source plugin is a regular npm package. It has a
package.json file, with optional dependencies, as well as a
gatsby-node.js file where you implement Gatsby’s Node APIs. Read more about files Gatsby looks for in a plugin or creating a generic plugin.
Source plugins convert data from any source into a format that Gatsby can process. Your Gatsby site can use several source plugins to combine data in interesting ways.
There may not be an existing plugin for your data source, so you can create your own.
Please Note: If your data is local, i.e. on your file system and part of your site’s repo, then you generally don’t want to create a new source plugin. Instead you want to use gatsby-source-filesystem, which handles reading and watching files for you. You can then use transformer plugins like gatsby-transformer-yaml to make queryable data from files.
The plugin in this tutorial will source blog posts and authors from the demo API, link the posts and authors, and take image URLs from the posts and optimize them automatically. You’ll be able to configure your plugin in your site’s
gatsby-config.js file and write GraphQL queries to access your plugin’s data.
This tutorial builds off of an existing Gatsby site and some data. If you want to follow along with this tutorial, you can find the codebase inside the examples folder of the Gatsby repository. Once you clone this code, make sure to delete the
source-plugin and example-site folders. Otherwise, the tutorial steps will already be completed.
To see the API in action, you can run it locally by navigating into the
api folder, installing dependencies with
npm install, and starting the server with
npm start. You will then be able to navigate to a GraphQL playground running at
http://localhost:4000. This is a GraphQL server running in Node.js and is separate from Gatsby; this server could be replaced with a different backend or data source, and the patterns in this tutorial would remain the same. Other possible examples could be a REST API, local files, or even a database; as long as you can access the data, it can be sourced.
If you paste the following query into the left side of the window and press the play button, you should see data for posts with their IDs and descriptions returned:
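Based on the surrounding text (posts with their IDs and descriptions), the query would look something like this; the field names are assumptions about the demo API:

```graphql
query {
  posts {
    id
    description
  }
}
```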
This data is an example of the data you will source with your plugin.
Your plugin will have the following behavior:
- Make an API request to the demo API.
- Convert the data in the API response to Gatsby’s node system.
- Link the nodes together so you can query for an author on each post.
- Accept plugin options to customize how your plugin works.
- Optimize images from Unsplash URLs so they can be used with gatsby-plugin-image.
You’ll need to set up an example site and create a plugin inside it to begin building.
Create a new Gatsby site with the
gatsby new command, based on the hello world starter.
This site generated by the
new command is where the plugin will be installed, giving you a place to test the code for your plugin.
Create a new Gatsby plugin with the
gatsby new command, this time based on the plugin starter.
This will create your plugin in a separate project from your example site, but you could also include it in your site’s plugins folder.
Your plugin starts with a few files from the starter, which can be seen in the snippet below:
The biggest changes will be in
gatsby-node.js. This file is where Gatsby expects to find any usage of the Gatsby Node APIs. These allow customization/extension of default Gatsby settings affecting pieces of the site build process. All the logic for sourcing data will live in this file.
You need to install your plugin in the site to be able to test that your code is running. Gatsby only knows to run plugins that are included in its
gatsby-config.js file. Open up the
gatsby-config.js file in the
example-site and add your plugin using
require.resolve. If you decide to publish your plugin, it can be installed with
npm install <plugin-name>, and you would include the name of the plugin in the config instead of require.resolve.
You can also include the plugin by its name if you are using npm link or yarn workspaces, or you can place your source-plugin folder in
example-site/plugins instead of keeping it in a folder a step above and using require.resolve.
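A sketch of that configuration, assuming the source-plugin folder sits next to example-site:

```javascript
// example-site/gatsby-config.js
module.exports = {
  plugins: [
    // Point Gatsby at the local plugin folder one level up.
    require.resolve(`../source-plugin`),
  ],
}
```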
You can now navigate into the
example-site folder and run
gatsby develop. You should see a line in the output in the terminal that shows your plugin loaded:
If you open the
gatsby-node.js file in your
source-plugin folder, you will see the
console.log that produces that output in the terminal.
Data is sourced in the
gatsby-node.js file of source plugins or Gatsby sites. Specifically, it’s done by calling a Gatsby function called
createNode inside of the
sourceNodes API in the gatsby-node.js file.
Open up the
gatsby-node.js file in the
source-plugin project and add the following code to create nodes from a hardcoded array of data:
This code creates Gatsby nodes that are queryable in a site. The following bullets break down what is happening in the code:
- You implemented Gatsby’s
sourceNodes API, which Gatsby will run as part of its bootstrap process, and pulled out some Gatsby helpers (like
createNodeId) to facilitate creating nodes.
- You provided the required fields for the node like creating a node ID and a content digest (which Gatsby uses to track dirty nodes—or nodes that have changed). The content digest should include the whole content of the item (
post, in this case).
- Then you stored some data in an array and looped through it, calling
createNode on each post in the array.
If you run
gatsby develop again, you can now open up
http://localhost:8000/___graphql and query your posts with this query:
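Assuming the node type is named Post, Gatsby exposes an allPost field you can query:

```graphql
query {
  allPost {
    nodes {
      id
      description
    }
  }
}
```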
The problem with this data is that it is not coming from the API, it is hardcoded into an array. The declaration of the
data array needs to be updated to pull data from a different location.
Some operations like fetching data from an endpoint can be performance heavy or time-intensive. In order to improve the experience of developing with your source plugin, you can leverage the Gatsby cache to store data between runs of
gatsby develop or gatsby build.
You access the
cache in Gatsby Node APIs and use the
set and get functions to store and retrieve data as JSON objects.
The above snippet shows a contrived example for the
cache, but it can be used in more sophisticated cases to reduce the time it takes to run your plugin. For example, by caching a timestamp, you can use it to fetch solely the data that has been updated since the last time data was fetched from the source:
This can reduce the time it takes repeated data fetching operations to run if you are pulling in large amounts of data for your plugin. Existing plugins like
gatsby-source-contentful generate a token that is sent with each request to only return new data.
You can read more about the cache API, other types of plugins that leverage the cache, and example open source plugins that use the cache in the build caching guide.
You can query data from any location to source at build time using functions and libraries like Node.js’s built-in https module or
node-fetch. This tutorial uses a GraphQL client so that the source plugin can support GraphQL subscriptions when it fetches data from the demo API, and can proactively update your data in the site when information on the API changes.
You’ll use several modules from npm to make fetching data with GraphQL easier. Install them in the
source-plugin project with:
Note: The libraries used here are specifically chosen so that the source plugin can support GraphQL subscriptions. You can fetch data the same way you would in any other Node.js app or however you are most comfortable.
Check the package.json file after installation and you’ll see the packages have been added to a
dependencies section at the end of the file.
Import the handful of Apollo packages that you installed to help set up an Apollo client in your plugin:
Then you can copy this code that sets up the necessary pieces of the Apollo client and paste it after your imports:
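A hedged sketch of that setup using @apollo/client's split link; the package names (@apollo/client, node-fetch, ws) and the WebSocket endpoint ws://localhost:4000 are assumptions based on the demo API, not the tutorial's canonical code:

```javascript
// source-plugin/gatsby-node.js — Apollo client setup sketch.
const { ApolloClient, InMemoryCache, HttpLink, split } = require("@apollo/client")
const { WebSocketLink } = require("@apollo/client/link/ws")
const { getMainDefinition } = require("@apollo/client/utilities")
const fetch = require("node-fetch")
const WebSocket = require("ws")

const client = new ApolloClient({
  link: split(
    // Route subscription operations over WebSockets, everything else over HTTP.
    ({ query }) => {
      const definition = getMainDefinition(query)
      return (
        definition.kind === "OperationDefinition" &&
        definition.operation === "subscription"
      )
    },
    new WebSocketLink({
      uri: `ws://localhost:4000`,
      options: { reconnect: true },
      webSocketImpl: WebSocket,
    }),
    new HttpLink({ uri: `http://localhost:4000`, fetch })
  ),
  cache: new InMemoryCache(),
})
```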
You can read about each of the packages that are working together in Apollo’s docs. The end result is creating a
client that you can use to call methods like
query to get data from the source it’s configured to work with. In this case, that is
http://localhost:4000 where you should have the API running.
Now you can replace the hardcoded data in the
sourceNodes function with a GraphQL query:
Now you’re creating nodes based on data coming from the API. Neat! However, only the
id and description fields are coming back from the API and being saved to each node, so add the rest of the fields to the query so that the same data is available to Gatsby.
This is also a good time to add data to your query so that it also returns authors.
With the new data, you can also loop through the authors to create Gatsby nodes from them by adding another loop to your sourceNodes function.
At this point you should be able to run
gatsby develop in your
example-site, open up GraphiQL at
http://localhost:8000/___graphql and query both posts and authors.
Each node created by the filesystem source plugin includes the raw content of the file and its media type.
A media type (also MIME type and content type) is an official way to identify the format of files/content that are transmitted via the internet, e.g. over HTTP or through email. You might be familiar with other media types such as text/html or application/json.
Each source plugin is responsible for setting the media type for the nodes it creates. This way, source and transformer plugins can work together easily.
This is not a required field — if it’s not provided, Gatsby will infer the type from data that is sent — but it’s how source plugins indicate to transformers that there is “raw” data the transformer can further process.
It also allows plugins to remain small and focused. Source plugins don’t have to have opinions on how to transform their data: they can set the
mediaType and push that responsibility to transformer plugins instead.
For example, it’s common for services to allow you to add content in Markdown format. If you pull that Markdown into Gatsby and create a new node, what then? How would a user of your source plugin convert that Markdown into HTML they can use in their site? You would create a node for the Markdown content and set its
mediaType to text/markdown, and the various Gatsby Markdown transformer plugins would see your node and transform it into HTML.
This loose coupling between the data source and the transformer plugins allows Gatsby site builders to assemble complex data transformation pipelines with little work on their part (and yours, as the source plugin author).
Each node of post data has an
imgUrl field with the URL of an image on Unsplash. You could use that URL to load images on your site, but they will be large and take a long time to load. You can optimize the images with your source plugin so that a site using your plugin already has data for
gatsby-plugin-image ready to go!
You can read about how to use the Gatsby Image plugin if you are unfamiliar with it.
To create optimized images from URLs,
File nodes for image files need to be added to your site’s data. Then, you can install
gatsby-transformer-sharp, which will automatically find image files and add the data needed for gatsby-plugin-image.
Start by installing
gatsby-source-filesystem in the source-plugin project.
Now in your plugin’s
gatsby-node.js file, you can implement a new API, called
onCreateNode, that gets called every time a node is created. You can check if the node created was one of your
Post nodes, and if it was, create a file from the URL on its imgUrl field. To do that, use the
createRemoteFileNode helper from
gatsby-source-filesystem, which will download a file from a remote location and create a
File node for you.
Then export a new function
onCreateNode, and call
createRemoteFileNode in it whenever a node of type
Post is created:
This code is called every time a node is created, e.g. when
createNode is invoked. Each time it is called in the
sourceNodes step, the condition will check if the node was a
Post node. Since those are the only nodes with an image associated with them, that is the only time images need to be optimized. Then a remote file node is created; if that's successful, the
fileNode is returned. The next few lines are important:
With createNodeField you’re extending the existing node and placing a new field named
localFile under the node’s fields key.
Note: Do not mutate the node directly; use
createNodeField instead. Otherwise the change won’t be persisted and you might see inconsistent data. This behavior changed with Gatsby 4, read the migration guide to learn more.
In the previous step you only defined the
fileNode.id as a
value, but Gatsby can’t yet resolve this to the
fileNode (and subsequently the image) itself. Therefore, you’ll need to create a foreign-key relationship between the
Post node and the respective
File node. Use the
createSchemaCustomization API to define this relationship:
Note: You can use schema customization APIs to create these kinds of connections between nodes as well as sturdier and more strictly typed ones.
You now can query the image like this:
At this point you have created local image files from the remote locations and associated them with your posts, but you still need to transform the files into optimized versions.
The sharp plugins make optimization of images possible at build time. Install gatsby-plugin-sharp and
gatsby-transformer-sharp in the
example-site (not the plugin).
Then include the plugins in your gatsby-config.js file.
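The example-site configuration could then look like this; the plugin names come from the steps above, and the local path to the plugin is an assumption:

```javascript
// example-site/gatsby-config.js
module.exports = {
  plugins: [
    require.resolve(`../source-plugin`),
    `gatsby-plugin-image`,
    `gatsby-plugin-sharp`,
    `gatsby-transformer-sharp`,
  ],
}
```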
By installing the sharp plugins in the site, they’ll run after the source plugin and transform the file nodes and add fields for the optimized versions at
childImageSharp. The transformer plugin looks for
File nodes with extensions like
.png to create optimized images and creates the GraphQL fields for you.
Now when you run your site, you will also be able to query a
childImageSharp field on the File nodes created from the remote images.
With data available, you can now query optimized images to use with the
gatsby-plugin-image component in a site!
To link the posts to the authors, Gatsby needs to be aware that the two are associated, and how. You have already implemented one example of this when Gatsby inferred a connection between a
localFile and the remote file from Unsplash.
The best approach for connecting related data is through customizing the GraphQL schema. By implementing the
createSchemaCustomization API, you can specify the exact shape of a node’s data. While defining that shape, you can optionally link a node to other nodes to create a relationship.
Copy this code and add it to the source-plugin’s gatsby-node.js file. The
author: Author @link(from: "author.name" by: "name") line tells Gatsby to look for the value on the
Post node at
post.author.name and relate it with an
Author node with a matching
name. This demonstrates the ability to link using more than just an ID.
Now running the site will allow you to query authors from the post nodes!
With data being sourced in the example-site, you can now query data from pages.
Add a file at
example-site/src/pages/index.js and copy the following code into it:
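The page component isn't reproduced here, but its page query would have roughly this shape, with field names assumed from the earlier steps:

```graphql
query {
  allPost {
    nodes {
      id
      description
      author {
        name
      }
    }
  }
}
```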
This code uses a page query to fetch all posts and provide them to the component in the
data prop at build time. The JSX code loops through the posts so they can be rendered to the DOM.
You can pass options into a plugin through a
gatsby-config.js file. Update the code where your plugin is installed in the
example-site, changing it from a string to an object with resolve and options keys.
Now the options you designated (like
previewMode: true) will be passed into each of the Gatsby Node APIs like
sourceNodes, making options accessible inside of Gatsby APIs. Add an argument called
pluginOptions to your sourceNodes function to access them.
Options can be a good way of providing conditional paths to logic that you as a plugin author want to provide or limit. Read the Configuring Plugin usage with Plugin Options guide to learn how to add validation to your plugin options.
One challenge when developing locally is that a developer might make modifications in a remote data source, like a CMS, and then want to see how it looks in the local environment. Typically they will have to restart the
gatsby develop server to see changes. In order to improve the development experience of using a plugin, you can reduce the time it takes to sync between Gatsby and the data source by enabling faster synchronization of data changes. The best way to do this is by adding event-based syncing.
Some data sources keep event logs and are able to return a list of objects modified since a given time. If you’re building a source plugin, you can store the last time you fetched data using the cache and then only sync down nodes that have been modified since that time.
gatsby-source-contentful is an example of a source plugin that does this.
If you would like to add Content Sync to your source plugin but aren’t sure what it is, learn more about Content Sync first. To enable this feature in your source plugin you will need to make sure that your data source (or CMS) also works with Content Sync.
The source plugin needs to create node manifests using the unstable_createNodeManifest action.
The first thing you’ll want to do is identify which nodes you’ll want to create a node manifest for. These will typically be nodes that you can preview, such as entry nodes or top-level nodes. An example could be a blog post or an article: any node that can be the “owner” of a page. A good place to call this action is whenever you call createNode.
An easy way to keep track of your manifest logic is to factor it out into a separate util function. Either inside the
createNodeManifest util or before you call it, you’ll need to vet which nodes you want to create manifests for.
At the moment you’ll only want to create node manifests for preview content, and because this is a newer API, we’ll need to check if the Gatsby version supports unstable_createNodeManifest.
Next we will build up the
manifestId and call unstable_createNodeManifest. The
manifestId needs to be created with information that comes from the CMS, NOT Gatsby (the CMS will need to create the exact same manifestId), which is why we use the
entryItem id as opposed to the
entryNode id. This
manifestId must be uniquely tied to a specific revision of specific content. We use the CMS project space (you may not need this), the id of the content, and finally the timestamp that it was updated at.
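A hypothetical version of that util; the id parts (space id, entry id, updatedAt) are illustrative, and the preview check via NODE_ENV is an assumption about when manifests should be created:

```javascript
// Hypothetical util; unstable_createNodeManifest is a Gatsby action.
function createNodeManifest({ entryItem, entryNode, space, unstable_createNodeManifest }) {
  // The action only exists in newer Gatsby versions.
  const supportsContentSync = typeof unstable_createNodeManifest === `function`
  // Only create manifests for preview content (assumption for this sketch).
  const isPreview = process.env.NODE_ENV === `development`

  if (supportsContentSync && isPreview) {
    // Built from CMS data only, so the CMS can rebuild the exact same id.
    const manifestId = `${space.id}-${entryItem.id}-${entryItem.updatedAt}`
    unstable_createNodeManifest({ manifestId, node: entryNode })
  } else if (!supportsContentSync) {
    console.warn(`This version of Gatsby does not support Content Sync.`)
  }
}
```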
Lastly we’ll want to give our users a good experience and show a warning if they’re using a version of Gatsby that does not support Content Sync.
The CMS will need to send a preview webhook to Gatsby Cloud when content is changed and open the Content Sync waiting room. Follow along to learn how to implement it on the CMS side.
You will need to create a button in your CMS that does the following:
- POST to the preview webhook URL in Gatsby Cloud
- Open the Content Sync waiting room
The button might look something like this:
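A browser-side sketch of such a button handler; the URL shape (contentSyncUrl/sourcePluginName/manifestId) follows the text above, while the webhook URL, plugin name, and entry fields are placeholders:

```javascript
// Build the waiting-room URL the same way the source plugin builds manifestIds.
function buildContentSyncUrl(contentSyncUrl, sourcePluginName, manifestId) {
  return `${contentSyncUrl}/${sourcePluginName}/${manifestId}`
}

// Hypothetical "Open Preview" click handler for a CMS extension.
async function openPreview({ previewWebhookUrl, contentSyncUrl, entry, spaceId }) {
  // 1. Tell Gatsby Cloud to start a preview build.
  await fetch(previewWebhookUrl, { method: `POST` })

  // 2. Build the same manifestId the source plugin will build (CMS data only).
  const manifestId = `${spaceId}-${entry.id}-${entry.updatedAt}`

  // 3. Open the Content Sync waiting room for that revision.
  window.open(buildContentSyncUrl(contentSyncUrl, `gatsby-source-example`, manifestId))
}
```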
You will need to store the Content Sync URL from a given Gatsby Cloud site as a CMS extension option. This will look something like
https://gatsbyjs.com/content-sync/<siteId>. This is often done in the CMS plugin extension configuration; the details will differ from CMS to CMS depending on how extension settings are stored.
You will also need to store the preview webhook URL. This might also be stored in the plugin extension settings, but it is often stored in a separate CMS webhooks settings page if your CMS supports webhooks already. You can find that webhook URL in the Gatsby Cloud site settings.
Both of these need to be user configurable in the CMS.
NOTE: The Content Sync URL can be found in the same place as the webhook url in the Gatsby Cloud site settings.
Recall that we need to create a matching manifest id in the CMS AND the Gatsby plugin. Whenever content is saved, we can build up a new manifest id that will look the same as the manifest id we created in the source plugin.
In the CMS extension, we should have access to
- the project id (if the CMS uses one)
- the content id
- the timestamp that the content was updated at (or some other piece of data that is tied to a very specific state of saved content)
Eager Redirects is a Content Sync feature which causes the user to be redirected to their site frontend as soon as possible. When they first preview a piece of content, they will stay in the Content Sync loading screen until their preview is ready. On subsequent previews of that same piece of content, they will be redirected as soon as the page loads. This is done by storing a “content ID” in local storage. The content ID should be a unique identifier for that piece of content which is consistent across all previews.
This content ID should be appended to the end of the Content Sync URL. See the sections below for more information.
If the CMS does not handle this part automatically, we will need to tell Gatsby Cloud to build a preview by
POSTing to the Gatsby Cloud preview build webhook url.
Once we’ve built a manifestId and
POSTed to the preview build webhook URL, we need to open a new tab/window with a modified version of the Content Sync URL. You get that by taking the Content Sync URL you stored in the CMS extension earlier and appending the Gatsby source plugin name and the content’s
manifestId that you just created.
Here are some things to keep in mind and some “gotchas” depending on how the CMS acts.
- Inside the CMS, sometimes you will need to wait to make sure you have the correct
updatedAt timestamp, as some CMSs take a second to update their backend and then wait for the change to propagate to the frontend, while others will immediately update the frontend and then propagate that to the backend. You will need the most up-to-date timestamp when opening the Content Sync UI waiting room.
- Make sure that a preview webhook is being sent to Gatsby Cloud after the content is edited, whether it’s before you press the “Open Preview” button or the “Open Preview” is the trigger that sends the webhook.
- While developing, you can set the Gatsby VERBOSE env variable to "true" to see additional logs that will help you debug what’s happening in the source plugin.
- When you click the “Open Preview” button in the CMS the
manifestId in the URL should match the manifestId that the source plugin creates from that revision.
- The node manifests get written out in the public dir of your Gatsby site, so you can check the manifests on your local disk at /public/__node-manifests/<sourcePluginName>/<manifestId>.json, or you can navigate directly to that piece of content.
Image CDN is a feature on Gatsby Cloud that provides edge network image processing by waiting to perform image processing until the very first user visit to a page. The processed image is then cached for super quick fetching on all subsequent user views. Enabling it will also speed up local development builds and production builds on other deployment platforms because images from your CMS or data source will only be downloaded if they are used in a created Gatsby page.
You can learn more about it in the announcement blog post, Image CDN: Lightning Fast Image Processing for Gatsby Cloud.
Image CDN and its helper functions are available inside
gatsby-plugin-utils. In addition to the
RemoteFile interface, you can also use the addRemoteFilePolyfillInterface and
polyfillImageServiceDevRoutes functions to enable Image CDN support down to Gatsby 2 inside your plugin.
To add support to a source plugin, you will need to create a new GraphQL object type that implements the RemoteFile interface.
It is also recommended that you add a polyfill to provide support back through Gatsby 2. To do so, wrap the
buildObjectType call with the
addRemoteFilePolyfillInterface polyfill like so:
The RemoteFile interface adds the correct fields to your new GraphQL type and adds the necessary resolvers to handle the type.
RemoteFile holds the following properties:
resize(width: Int, height: Int, fit: enum): String
You might notice that
gatsbyImage can be null. This is because the
RemoteFile interface can also handle assets other than images, like PDFs.
The string returned from
gatsbyImage is intended to work seamlessly with the Gatsby Image component, just like image data from gatsby-transformer-sharp does.
Since Gatsby will be fetching files from your CMS instead of your source plugin fetching those files, you may need to set request headers for Gatsby to use in those requests. This is needed if for example your CMS is locked down behind some kind of authentication. For each domain Image CDN will make requests to, set the required headers following this example:
When creating nodes, you must add some fields to the node itself to match what the
RemoteFile interface expects. You will need url, mimeType, and
filename as mandatory fields. When you have an image type, width and
height are required as well. One optional field is placeholderUrl. The
placeholderUrl will be the URL used to generate a blurred or dominant-color placeholder, so it should contain
%width% and %height% URL params if possible.
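The fields above can be sketched as a helper; the asset object and the placeholder URL format are placeholders, not a real CDN's parameters:

```javascript
// Sketch of the fields a created node needs to satisfy RemoteFile.
function remoteFileFields(asset) {
  return {
    url: asset.url,           // mandatory
    mimeType: asset.mimeType, // mandatory
    filename: asset.filename, // mandatory
    // Required when the asset is an image:
    width: asset.width,
    height: asset.height,
    // Optional: used to generate blurred / dominant-color placeholders.
    placeholderUrl: `${asset.url}?w=%width%&h=%height%`,
  }
}
```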
Add the polyfill,
polyfillImageServiceDevRoutes, to ensure that the development server started with
gatsby develop has the routes it needs to work with Image CDN.
Now you’re all set up to use Image CDN! 🙌
Don’t publish this particular plugin to npm or the Gatsby Plugin Library, because it’s just a sample plugin for the tutorial. However, if you’ve built a local plugin for your project, and want to share it with others, npm allows you to publish your plugins. Check out the npm docs on How to Publish & Update a Package for more info.
Please Note: Once you have published your plugin on npm, don’t forget to edit your plugin’s
package.json file to include info about your plugin. If you’d like to publish a plugin to the Gatsby Plugin Library (please do!), please follow these steps.
You’ve written a Gatsby plugin that:
- can be configured with an entry in your gatsby-config.js file
- requests data from an API
- pulls the API data into Gatsby’s node system
- allows the data to be queried with GraphQL
- optimizes images from a remote location automatically
- links data types with a customized GraphQL schema
- updates new data without needing to restart your Gatsby site
- Example repository with all of this code implemented