- How to ensure that Google and its algorithm, crawlers, and robots can correctly render a website.
- How to see a website in the same way that Google sees it.
- What it means that Google is getting rid of the old AJAX crawling scheme.
- Whether it is correct to detect Googlebot by its user agent and deliver pre-rendered HTML and CSS content.
- Crawlability: Google should be able to crawl a site with a proper structure in every respect.
- Processing power: Google should have no problem rendering and displaying the website.
- Crawl budget: the time it takes Google, as a search engine, to crawl, render, and process the website.
Client and Server-based Processing
First, in the traditional approach, which is server-based rendering, a browser or Googlebot receives an HTML document that fully describes the page. The content is already there, so the browser or Googlebot only needs to download the CSS and paint the page. Search engines generally have no problem with server-rendered content, because this is the traditional model on which almost the entire web has operated.
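As a minimal sketch of what this means in practice (the page content and port below are illustrative, not taken from any specific site), a server-rendered response is simply complete HTML:

```typescript
// Minimal sketch of server-based rendering using Node's built-in http module.
// The markup and port are illustrative assumptions.
import { createServer } from "http";

const server = createServer((req, res) => {
  // The full content is assembled on the server...
  const html = `<!DOCTYPE html>
<html>
  <head><title>Acme Widget</title></head>
  <body>
    <h1>Acme Widget</h1>
    <p>In stock and ships tomorrow.</p>
  </body>
</html>`;

  // ...so the browser (or Googlebot) receives HTML that already fully
  // describes the page; only the CSS remains to be fetched and applied.
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end(html);
});

server.listen(3000);
```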
The Dynamic Rendering / Prerendering Method
With this method, users receive content rendered on the client side, while search engines get a version rendered on the server side: your site dynamically detects whether the request comes from a search engine. And for those who are wondering, no, this is not considered cloaking, because the content served to bots must be identical to what users receive.
Google no longer recommends the old AJAX crawling scheme based on escaped fragments. Pre-rendering tools such as Prerender.io, BromBone, or PhantomJS produce a static, cached version of your pages.
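To make the mechanism concrete, here is a rough sketch of dynamic rendering as Express middleware. The bot list, the prerender endpoint, and the folder name are assumptions for illustration, not the API of any particular tool:

```typescript
// Sketch of dynamic rendering: bots receive a prerendered snapshot, humans
// receive the normal client-side app. Bot list and endpoint are hypothetical.
import express from "express";

const app = express();
const BOTS = /googlebot|bingbot|yandex|baiduspider|twitterbot|facebookexternalhit/i;
const PRERENDER = "https://prerender.example.com"; // hypothetical service

app.use(async (req, res, next) => {
  const userAgent = req.headers["user-agent"] ?? "";
  if (BOTS.test(userAgent)) {
    // Fetch the static, cached snapshot of the requested URL.
    // (Global fetch requires Node 18+.)
    const url = `${req.protocol}://${req.hostname}${req.originalUrl}`;
    const snapshot = await fetch(`${PRERENDER}/${url}`);
    res.type("html").send(await snapshot.text());
    return;
  }
  next(); // regular users fall through to the client-rendered bundle
});

app.use(express.static("dist")); // the normal JavaScript application

app.listen(3000);
```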
There are at least two great reasons to consider lazy-loading images for your website: it speeds up the initial page load, since offscreen images are only fetched when needed, and it saves bandwidth for visitors who never scroll down to them.
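If you do lazy-load, remember that Googlebot does not scroll, so the usual approach is an IntersectionObserver that swaps in the real image URL once it becomes visible. A brief sketch, assuming the common data-src convention:

```typescript
// Lazy-loading sketch with IntersectionObserver. Images keep their real URL
// in a data-src attribute (a common convention, assumed here) until visible.
const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target as HTMLImageElement;
    img.src = img.dataset.src ?? ""; // swap in the real image URL
    obs.unobserve(img);              // each image only needs loading once
  }
});

document
  .querySelectorAll<HTMLImageElement>("img[data-src]")
  .forEach((img) => observer.observe(img));
```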
Some frameworks natively support server-side HTML rendering for search engines, such as React and Angular 2+; other frameworks have to rely on a third-party pre-rendering service.
Angular and React
Newer versions of Angular (4+, with Angular Universal) and React offer server-side rendering capability, which brings several additional benefits. Upgrading to the latest version is the cleanest way to move off the classic AJAX crawling scheme, and it ensures that all search engines, social media crawlers, etc., can consistently and accurately read your site's content.
React + Next.js
React has supported server rendering since its inception. This used to be called a universal (or isomorphic) application; today it is simply called server-side rendering (SSR). We can therefore always render a React web app on the server side.
The difficulty comes from the external requests used to retrieve data: these calls are asynchronous, so the responses have to be awaited before the application can be returned, as the sketch below shows. You also need to make sure the libraries you use are SSR-compatible.
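A minimal sketch of that constraint with ReactDOMServer and Express; the component, route, and data helper are hypothetical:

```tsx
// Sketch of React server-side rendering: data must be awaited before
// renderToString runs, because rendering itself is synchronous.
import express from "express";
import React from "react";
import { renderToString } from "react-dom/server";

// Hypothetical stand-in for an external API call.
async function fetchProduct(id: string): Promise<{ name: string }> {
  return { name: `Product ${id}` };
}

function ProductPage({ name }: { name: string }) {
  return <h1>{name}</h1>;
}

const app = express();

app.get("/product/:id", async (req, res) => {
  const product = await fetchProduct(req.params.id); // resolve data first
  const html = renderToString(<ProductPage name={product.name} />);
  res.send(`<!DOCTYPE html><html><body><div id="root">${html}</div></body></html>`);
});

app.listen(3000);
```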
Next.js is ideal if you want to build a powerful web application that search engines can index. For managing the title and description meta tags, Next.js uses the next/head module instead of react-helmet; don't panic, the implementation works much the same way. The equivalent for Vue.js is Nuxt.js.
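As an illustration, here is what a server-rendered Next.js page using next/head might look like (Next.js 9.3+; the page, props, and values are hypothetical):

```tsx
// Hypothetical server-rendered Next.js page. next/head manages the title and
// description tags, and getServerSideProps fetches data on the server.
import Head from "next/head";
import type { GetServerSideProps } from "next";

type Props = { title: string; description: string };

export const getServerSideProps: GetServerSideProps<Props> = async () => ({
  // In a real app this would come from an API or database.
  props: {
    title: "Acme Widget",
    description: "A server-rendered product page.",
  },
});

export default function ProductPage({ title, description }: Props) {
  return (
    <>
      <Head>
        <title>{title}</title>
        <meta name="description" content={description} />
      </Head>
      <h1>{title}</h1>
    </>
  );
}
```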
Use the Google Search Console
The recommendation for this platform is simple: if you see a significant drop in rankings for an otherwise healthy website, check the Fetch and Render section. In general, it is good practice to run Fetch and Render on a random sample of URLs from time to time to make sure the website renders correctly.
Focus on the “onclick” Event
Remember that Googlebot is not a real user: it does not click, fill out forms, or carry out flows the way a real user would. In practice this has many implications; two of them are covered below, followed by a short code sketch.
If you have an online store and the content hidden behind a "show more" button does not appear in the DOM before the click, that is a clear sign Google will not see it. Important note: the same applies to menu links on faceted navigation pages.
All links must contain an "href" attribute, without exception. If a link relies only on an onclick event, Google will not collect it and will not use it for crawling and indexing. If you are unsure whether Google will pick up your links, here's what John Mueller said about it.
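To make both points concrete, here is a small sketch (the element IDs and URLs are illustrative):

```typescript
// 1) "Show more" content: keep the full text in the DOM from the start and
//    merely toggle its visibility, instead of injecting it on click.
const description = document.querySelector<HTMLElement>("#full-description");
const showMore = document.querySelector<HTMLButtonElement>("#show-more");
showMore?.addEventListener("click", () => {
  // Googlebot never clicks, but since the content is already in the HTML,
  // it can be indexed regardless.
  description?.removeAttribute("hidden");
});

// 2) Links: a crawlable link needs a real href attribute.
//    Googlebot will NOT follow:  <span onclick="location.href='/category'">
//    Googlebot WILL follow:      <a href="/category">Category</a>
//    A click handler may still take over navigation without hiding the URL:
const link = document.querySelector<HTMLAnchorElement>('a[href="/category"]');
link?.addEventListener("click", (event) => {
  event.preventDefault();                // optional client-side routing
  history.pushState({}, "", "/category");
});
```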
- There are two fundamental concepts: client-side and server-side rendering.
- In general, search engines have no problem with server-rendered content, because this traditional model underpins how almost the entire web operates.
- Google no longer recommends the use of escaped fragments.
- Running Fetch and Render on a random sample of URLs from time to time helps ensure that a website renders correctly.