Static site generators (SSGs), such as Jekyll, Next, Nuxt, Gatsby, and Hugo, provide tools for creating websites that consist of static pages. The common problem with static websites, however, is that they rarely come with site search functionality out of the box.
So how do you include search on a static website and its pages? In short, you have to generate an index from selected content on your site's pages, set up a search engine and an API that give access to the index, and create a user interface so your users can search the index and see the results on the web page.
While enabling site search on a static website involves at least these elements, you can implement the search in various ways.
Static vs. dynamic website: What is the difference?
There are two types of websites: static and dynamic. Static websites are fixed and display the same content to every user. No elements on the page change when you access it. They are basic pages with simple code and design elements and are usually written exclusively in HTML. A static website also has a fixed set of pages, each delivered exactly as it was designed and stored.
Dynamic websites, on the other hand, display content that can change from user to user and offer user interaction. Through advanced programming and databases in addition to HTML, dynamic websites can provide interactive elements. They are more complex to build and design, but they are also more flexible.
3 different ways to implement site search functionality on a static website
- building and implementing a simple site search on a static site, with Simple-Jekyll-Search as an example
- building a fully-featured search from building blocks
- simple installation of a ready-made, fully-featured search SaaS
What are static site generators?
A static site generator creates static websites from source files. The source files include, among other things, configuration files, templates, and content. With the newer JavaScript-based static site generators, the content can be requested from a database, while with the more traditional static site generators the content is saved as static source files.
The static site generator uses the source files to generate static web pages. Pages can be pre-rendered, which means that the pages are compiled to static HTML files (web pages) and uploaded to the server. When the user visits these web pages from a web browser, the delivered pages match the HTML files stored on the server.
The JavaScript-based static site generators, however, can also utilize server-side rendering, which means that the page is generated with JavaScript on the server in response to each user request. In other words, when the user loads a page with the browser, the JavaScript app on the server creates the page 'on the fly' and returns it to the browser.
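To make the difference concrete, here is a minimal sketch in Node.js with Express. It is not how any particular generator works internally; the folder name, route, and port are illustrative assumptions.

```javascript
// Minimal sketch contrasting pre-rendering with server-side rendering.
// Assumes Express is installed (npm install express); names are illustrative.
const express = require('express');
const app = express();

// Pre-rendering: the HTML files were compiled ahead of time (here into ./public)
// and every visitor receives exactly the files stored on disk.
app.use(express.static('public'));

// Server-side rendering: the HTML is generated on the server for each request
// and can therefore differ from one request to the next.
app.get('/ssr-example', (req, res) => {
  res.send(`<html><body><h1>Rendered at ${new Date().toISOString()}</h1></body></html>`);
});

app.listen(3000, () => console.log('Listening on http://localhost:3000'));
```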
Examples of static site generators
Best static site generators to use according to GitHub:
Jekyll
- Simple blog-aware static website generator
- Developed with Ruby
- Templating Liquid (and Markdown)
Hugo
- Fast static website generator distributed as a single executable
- Developed with Go
- Templating Go templates (and the Go-based Amber and Ace)
Gatsby
- Static website generator for React + GraphQL apps
- Can use multiple sources of data (CMS, SaaS services, etc.)
- Developed with JavaScript
- Templating React
- Server-side rendering (SSR)
Next
- Static website generator for React apps
- Developed with JavaScript
- Templating React
- Server-side rendering (SSR)
Nuxt
- Static website generator for Vue.js apps
- Developed with JavaScript
- Templating Vue.js
- Server-side rendering (SSR) and pre-rendering
Building simple search functionality with Simple-Jekyll-Search
While you can choose from a wide range of static site generators, we chose Jekyll because it helps us show the basic principles of implementing site search on a static website with an example. As the search solution, we chose Simple-Jekyll-Search.
The idea of the search is to save content from the web pages to a file that the search can reference. In this case, the file is produced by a template that outputs specified content from the web pages to a JSON file each time the site is compiled.
The search also has a widget that consists of a search box (the input field) where the user enters search terms and an element that displays the search results on the page.
Setting up the search is reasonably easy for those who are familiar with HTML, CSS, and JavaScript and know how the different files are organized in a Jekyll site. For more detailed information on setting up the search, visit the project's GitHub repository and the tutorial blog post.
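To give an idea of what the setup looks like, here is a minimal sketch of the client-side wiring based on the options shown in the project's README. The element IDs, the /search.json path (the JSON index generated by the Liquid template at build time), and the result template are examples you would adapt to your own site.

```javascript
// Minimal sketch of wiring Simple-Jekyll-Search on a page.
// Assumes the page markup contains:
//   <input type="text" id="search-input" placeholder="Search...">
//   <ul id="results-container"></ul>
// and that the simple-jekyll-search script and the generated /search.json
// (produced by the Liquid template when the site is compiled) are available.
window.simpleJekyllSearch = new SimpleJekyllSearch({
  searchInput: document.getElementById('search-input'),
  resultsContainer: document.getElementById('results-container'),
  json: '/search.json',
  searchResultTemplate: '<li><a href="{url}">{title}</a></li>'
});
```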
As this simple static website search example shows, there are three elements to the search:
- The index, built from selected content on the web pages
- The search API that accesses the index when the user enters a search term
- The search UI, with a field where the user enters the search term and an element where the search results are displayed
While the methods for each element may vary, this is what most website search engine implementations consist of.
Creating a fully fledged search with building blocks
What if you want a fully-featured search engine on your website, with capabilities to enhance your search results?
In general, you need the same building blocks as in the simple Jekyll search above.
In addition, you need a crawler and a scraper to collect and extract the relevant content from the web pages, and a way to export that content to a search engine's index. You also need to develop a search UI that accesses the search engine's API when the user searches and displays the results.
You can host the crawler (including the scraper) as well as the search engine on your server. However, sharing the resources of the web server with the crawler and the search engine may slow down the download times of your web pages. You can also purchase cloud-based solutions from Amazon (Amazon Web Services, AWS) and Microsoft (Azure) and host the crawler as well as the search engine in the cloud.
Our premise for choosing a crawler is that it can export the crawled and scraped data to the search engine's index. The premise for selecting a search engine is that the index can be accessed through a REST API and that the data is returned as JSON. This makes it easier to develop the search UI with JavaScript-based libraries and frameworks.
With these premises in mind, we recommend Scrapy for crawling, scraping, and exporting; Elasticsearch as the search engine; and any JavaScript-based library (Vue.js, React) or framework (Angular) for the search UI.
Crawling and scraping with Scrapy
Scrapy is an open-source, general-purpose web crawling framework developed with Python. You can use Scrapy to crawl websites and extract structured data. Scrapy supports sending data (items) to an Elasticsearch server for indexing.
Before you can use Scrapy, you need to install it on your computer. For installation instructions, visit the Scrapy documentation.
Scrapy uses a spider instance to crawl a website. The spider visits the site's web pages and uses selectors to scrape relevant items from them. The items go through an item pipeline and end up in a feed exporter, which stores the data in a file, by default in JSON format. For instructions on creating a spider, see the tutorial in the Scrapy documentation.
Exporting the data to Elasticsearch index
Elasticsearch is an open-source search engine that supports full-text search and analytics, among other things. Elasticsearch provides a REST API for accessing the index, which consists of JSON documents. For more information, visit the Elasticsearch documentation.
Scrapy supports indexing the items to Elasticsearch with the ScrapyElasticSearch module. Instead of exporting the items to a file, ScrapyElasticSearch sends them to an Elasticsearch index. For instructions on setting up ScrapyElasticSearch, visit the ScrapyElasticSearch GitHub pages.
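For a sense of what the export amounts to: each scraped item ends up as a JSON document in the index, which you could also store by hand over Elasticsearch's REST API. Below is a hedged sketch of that single step; the host, the index name "pages", and the document fields are assumptions for illustration, and in the setup described above the ScrapyElasticSearch module performs this for every item automatically.

```javascript
// Sketch: store one scraped page as a JSON document via Elasticsearch's REST API.
// Assumes Elasticsearch is running locally on port 9200 and that an index
// named "pages" is used; the document fields are examples only.
// Runs in Node.js 18+ (built-in fetch) or any environment that provides fetch.
const doc = {
  url: 'https://example.com/blog/hello-world/',
  title: 'Hello world',
  content: 'Body text extracted from the page...'
};

fetch('http://localhost:9200/pages/_doc/1', {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(doc)
})
  .then((response) => response.json())
  .then((result) => console.log('Indexing result:', result.result)) // e.g. "created"
  .catch((error) => console.error('Indexing failed:', error));
```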
Accessing the index with search UI
Elasticsearch's index consists of JSON documents that you can access through REST API endpoints. As both technologies, REST APIs and JSON, are commonly used in JavaScript-based web applications, they offer a convenient way to set up communication between the Elasticsearch server and the search UI on the web page.
The search UI on the web page should have a search box (an input field) and an element where the search results are displayed. The functionality for these elements can be implemented with JavaScript libraries and frameworks. Angular is a very commonly used framework, developed with TypeScript and transpiled into a JavaScript web application. React and Vue.js are among the most popular JavaScript libraries.
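As a rough sketch of that communication, the snippet below sends the user's search term to Elasticsearch's _search endpoint and renders the hits into the results element. The host, the "pages" index, the field names, and the element IDs are assumptions; in a real UI this logic would typically live inside a React, Vue.js, or Angular component, and the query would go through a backend or proxy rather than straight from the browser to Elasticsearch.

```javascript
// Sketch: query Elasticsearch from the search UI and render the hits.
// Assumes an index named "pages" with "title", "url" and "content" fields,
// Elasticsearch reachable at localhost:9200, and page markup containing
//   <input type="text" id="search-input">
//   <ul id="results-container"></ul>
async function search(term) {
  const response = await fetch('http://localhost:9200/pages/_search', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: { match: { content: term } } })
  });
  const data = await response.json();

  // Render each hit as a link built from the indexed document's fields.
  const results = document.getElementById('results-container');
  results.innerHTML = '';
  for (const hit of data.hits.hits) {
    const item = document.createElement('li');
    const link = document.createElement('a');
    link.href = hit._source.url;
    link.textContent = hit._source.title;
    item.appendChild(link);
    results.appendChild(item);
  }
}

// Run a search whenever the user types into the search box.
document.getElementById('search-input')
  .addEventListener('input', (event) => search(event.target.value));
```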
Simple installation of fully-featured AddSearch search-as-a-service
If you are looking for an easier way to include search on your static site, AddSearch is the way to go.
Setting up the search manually requires some technical expertise and time. Sharing computing resources with the search may slow down your website and the actual search. Most likely, you wouldn’t want that.
AddSearch provides a fully-featured, instant, visual site search for static websites. Instead of taking multiple steps to set up a site search, AddSearch needs only one line of code and 5 minutes to make your static website searchable.
What will you find at AddSearch?
- Very capable crawler and a proprietary search engine built on top of Elasticsearch. AddSearch will take care of the crawling and indexing as well as the search user interface.
- AddSearch gives your visitors a great user experience with our fast and visual user interface that works on any platform. You can test the search on top of this page. AddSearch hosts these services so the search doesn’t hog resources from your web server, keeping your website lightweight and fast.
- AddSearch comes fully-featured. You can prioritize relevant search results over others and promote the most important pages at the top of each search.
- We will support you at every step and our customer support is fast, friendly and responsive.
Enabling search on your static website is very easy. Submit your site for a free trial and we will crawl and index your site.