Top On-page SEO Factors To Use In 2020 And Beyond

Keeping up with the ever-evolving world of SEO can sometimes feel overwhelming, especially when it comes to on-page SEO, where there are so many things to keep track of. You wouldn’t be the first person to feel like kicking back in your chair and letting things take their course.

Lucky for you, this blog brings you a checklist of the most important on-page SEO tactics you need to care about. But before we dive in, let’s briefly go through what on-page SEO is.

What is On-page SEO?

On-page SEO, as the name suggests, is everything you do on your page or website itself to rank higher on Google: anything from optimizing your meta tags to improving your page speed. It is distinct from off-page SEO, which covers actions taken outside your website.

Any experienced SEO will tell you just how much you need to prioritize your on-page SEO over everything else. Without a solid foundation (which is what your on-page SEO provides), nothing else you do will yield the desired results.


The Top On-page SEO Tactics That Work

The tactics discussed here have worked great for us here at Link Building HQ. It all comes down to how well you execute them. Here are the top on-page SEO tactics that you need to know in 2020 and beyond.

Optimizing URLs

Your URLs need to be concise enough to appear in full in the search results and should always contain a keyword. It’s a good idea to keep the keyword close to the root domain and to be economical with the character count.

Optimized URL Example
Short URLs with a keyword close to the root domain generally rank higher in the SERPs.
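As a quick sketch (the domain and paths here are made up for illustration), compare:

```text
Less optimized:  https://example.com/blog/2020/05/14/post?id=8817
Optimized:       https://example.com/on-page-seo
```

The second URL is short, readable, and puts the keyword right after the root domain.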

Meta Titles and Description

Writing meta titles and descriptions in a rush isn’t the smartest thing to do, because they directly affect your click-through rate (CTR). After reading your meta title and description, searchers make up their minds whether to click on the link or not. If they click, that’s a signal to Google that your page is relevant, which can improve your rankings.

But the opposite is true, too. So always put some thought into your meta titles and descriptions. Keep the keyword near the start and use relevant modifiers like “best” or “newly updated” to give searchers an incentive to click.
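In HTML, the title and description live in the page’s head; a minimal sketch (the copy below is placeholder text, not a recommendation for your exact wording):

```html
<head>
  <!-- Keyword near the start, with a modifier ("Newly Updated") as an incentive -->
  <title>On-page SEO Checklist: Newly Updated for 2020</title>
  <meta name="description"
        content="On-page SEO tactics that work: optimized URLs, meta tags, page speed, internal links, and more.">
</head>
```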


Responsive Design

Use a responsive design checker tool to see how well your website is optimized for different devices.


With a responsive web design, your site adapts to whichever screen size it’s displayed on. This not only gets you in the good books of your users, it also helps a ton with your rankings. In fact, since 2018 Google has been rolling out mobile-first indexing, which means the crawling and indexing of your website is based on your mobile version.
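At a minimum, a responsive page declares a viewport in its head so mobile browsers render it at device width rather than a zoomed-out desktop layout:

```html
<meta name="viewport" content="width=device-width, initial-scale=1">
```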

Page Speed

Google has categorically stated that it uses page speed as a ranking factor. Usually, the main culprits behind a high load time are your images, so spend some time optimizing them. If you’re on WordPress, plugins can make that super simple. While you’re at it, add alt text to images using a keyword for better SEO. Also, see what improvements you can make using Google’s free PageSpeed Insights tool.
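A typical optimized image tag might look like the sketch below (the file name and alt text are hypothetical). Explicit dimensions prevent layout shifts while the page loads, and `loading="lazy"` defers off-screen images:

```html
<img src="/images/on-page-seo-checklist.jpg"
     alt="on-page SEO checklist infographic"
     width="800" height="450" loading="lazy">
```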

External Links

Just as other sites linking to yours helps your rankings, you linking out to high-authority websites sends Google signals about what your content is about. It’s good practice to always link up to authoritative sites rather than to relatively unknown ones. Use descriptive keywords in the anchor text (the text through which you’re linking) to reflect that the topic of your content is similar to that of the target page.
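The difference descriptive anchor text makes is easy to see side by side (the URL below is a made-up example):

```html
<!-- Descriptive anchor text tells Google what the target page is about -->
<a href="https://example.com/keyword-research-guide">keyword research guide</a>

<!-- Vague anchor text gives Google almost no context -->
<a href="https://example.com/keyword-research-guide">click here</a>
```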

Internal Links

If you’re smart with your internal linking strategy, you can give a helping hand to some of your pages that are struggling in the rankings.

Suppose you have written a blog post and it’s ranking on page 1 of Google for its related keywords. That page has gathered a lot of authority for you. Now it’s time to leverage that authority further by adding links to other related pages that are relatively unknown and need a bit of a boost.

Internal Linking
Effective internal linking allows users to have an intuitive experience and lets Google understand the structure of your website.


An effective internal linking strategy allows link juice to flow from authoritative pages to other relevant pages that are struggling to rank. Tools like Ahrefs can tell you which pages on your website have high authority, so you know where to start.

Apart from that, make sure the URL structure of your website is flat rather than deeply hierarchical. A flat structure gives all of your pages a fair chance at some of that link juice. With a deep hierarchy, some pages may be buried too far down the site structure for link juice, or even the crawler, to reach.
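As an illustration (the paths are hypothetical), compare the two structures:

```text
Flat:          example.com/red-sneakers
Hierarchical:  example.com/shop/mens/shoes/casual/red-sneakers
```

In the flat version, the page sits one click from the homepage; in the deep version, it may take several hops for authority and crawlers to reach it.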


Readability

Content that scores high on readability retains readers for longer and keeps them coming back for more. Content with good readability has the following characteristics:

  1. Legible text size and color
  2. Headers and paragraph breaks
  3. Optimized for skim readers – bullet points wherever possible
  4. Images, videos, GIFs, and snapshots

Check out our blog on the top 5 copywriting tips if you’re looking to know more about how to write optimized and engaging content.

Optimize for Featured Snippet

One on-page factor which is somewhat linked with readability is how well your content is optimized for featured snippets. Featured snippets can make a huge difference in improving the CTR of your content.

They appear at the top of the search results, making it easier for the searchers to find their answers.

Featured Snippet Example
Featured Snippets, also called Answer Boxes, aim at delivering the answer to the searcher’s query right on the top of the SERPs.


Schema Markups

Another way to be uniquely positioned in the SERPs is by using schema markup. It lets you tell search engines how you want to be represented in the SERPs. For example, you can display reviews from happy customers to earn a higher CTR from other potential customers. Another example is the FAQ schema, with which you provide an answer to a specific customer question right in the results. There’s a whole library of schema types at schema.org that you can look at.
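FAQ schema is typically added as a JSON-LD block in the page; a minimal sketch (the question and answer text are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is on-page SEO?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Everything you do on your own pages to rank higher, from meta tags to page speed."
    }
  }]
}
</script>
```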


HTTPS

In 2014, Google came out with a statement saying it uses HTTPS as a ranking signal. So it’s high time you got an SSL certificate for your website and upgraded to HTTPS.

A Final Word

On-page SEO is a core part of any digital marketer’s to-do list. If you’re not putting in the time and effort to improve your website’s on-page elements, then you’re doing yourself a great disservice. Even though this checklist isn’t exhaustive, the on-page SEO factors listed here have proved to bring in great results. Try them out and you’re very likely to see a significant bump in your organic rankings.

Robots.txt: What You Need to Know

Robots.txt is a simple, but powerful, file that webmasters use in presenting a website to Google. However, even a small error in your Robots.txt file can wreak havoc with how your website is crawled and indexed. In this article, we’ll discuss what a Robots.txt file is, why it’s so important, and how you can create and optimize one for your website.

What is a Robots.txt File?

A Robots.txt file, which implements the robots exclusion protocol (REP), is a text file that webmasters use to tell robots which pages on their site can be crawled and which can’t be.

The first thing a crawler does when it visits a site is to check its Robots.txt for instructions on how to crawl it.

Robots.txt files can contain different kinds of directives. The most common one looks like this:

Robots.txt file with a disallow directive
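In plain text, that directive reads:

```text
User-agent: *
Disallow: /
```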

The asterisk after User-agent signifies that the instruction applies to all web robots that visit the site. The forward slash after Disallow means that none of the pages on the website may be visited.

Why is a Robots.txt File Important?

Now you may look at the directives above and wonder, “Why aren’t we allowing a crawler to visit our website? Doesn’t everybody want their website to be crawled and indexed?”

Well, to use the favorite words of SEOs: it depends.

In most cases, there are certain pages of your website that you don’t want Googlebot to crawl and index. That’s why a Robots.txt file and its Disallow directive are so important.

One reason could be that your website has lots of pages, and a crawler’s job is to crawl every single one of them. The more pages you have, the longer it takes the crawler to go through your entire site. This can trickle down negatively to your ranking, because crawlers adhere to a crawl budget, which has two key components: crawl limit and crawl demand.

According to Google, the crawl limit is “the number of simultaneous parallel connections GoogleBot may use to crawl the site, as well as the time it has to wait between the fetches”. The crawl limit can change based on crawl health (how quickly the site responds to the crawler) and the limit set in Search Console.

Crawl demand, on the other hand, refers to demand from indexing: popular URLs are crawled more often to keep them fresh in the index.

Taken together, we can define crawl budget as the number of URLs a crawler can and wants to crawl. With an optimized Robots.txt file, you tell the crawler which pages matter most for your website, so the crawl budget is spent on them. That’s why Robots.txt files are super important.

Where to Find the Robots.txt File?

Here’s how you can check whether you have a Robots.txt file for your website (and you can do this for any website, for that matter). Open up your browser, type your website’s domain in the address bar, and add “/robots.txt” at the end. For a site at “example.com” (a stand-in for your own domain), that would be “example.com/robots.txt”.

If you see something like this, that means you have a Robots.txt file:

Check if you have Robots.txt

If it returns nothing, your file is empty; if it shows a 404 error, you’ll want to have that looked into and fixed right away.

This exercise tells you whether your website has a Robots.txt file. If it doesn’t exist, you will have to create one from scratch. For that, make sure you use a plain text editor like Notepad on Windows or TextEdit on Mac.

To find your Robots.txt file, go to your website’s root directory. You can reach the root directory from your hosting account, under the file management or FTP section. You will see something like this:

File Management Section

Look for the Robots.txt file and open it to start editing. If there is any text in it, delete it but keep the file.

For WordPress users, the Robots.txt file may load in the browser at /robots.txt yet not show up in your files. This is because WordPress generates a virtual Robots.txt file when there is none in your root directory. If this happens to you, you’ll need to create one from scratch, which you will learn in the next section.

How to Create a Robots.txt File?

Before you get going with creating the Robots.txt file, familiarize yourself with the syntax used in it. Google’s documentation covers all the basic terminology.

Then open up your plain text editor. The first thing you’ll want to do is set the user-agent. Since we want the rules to apply to all web robots, we’ll use an asterisk, like this:

Defining user agents in Robots.txt
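In plain text, the file so far is a single line:

```text
User-agent: *
```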

Then type Disallow, but don’t write anything after it, since we want the crawler to visit every page on the website. Here’s how it looks:

Command to crawl all pages
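In plain text (an empty Disallow value means nothing is blocked):

```text
User-agent: *
Disallow:
```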

To link your XML sitemap, this is what you type:

Link your XML Sitemap with a robots.txt file
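Putting the three lines together (with example.com standing in for your own domain), the basic file reads:

```text
User-agent: *
Disallow:
Sitemap: https://example.com/sitemap.xml
```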

That’s what a basic Robots.txt file looks like. Now it’s time to give it a spin in terms of optimizing it.

As mentioned earlier, the more smartly your crawl budget is used, the better it is for your SEO. For example, there is no point spending crawl budget on the page used to log in to the backend of a website, so you can exclude it from crawling like this:

Remove backend login page from crawling using a robots.txt file
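For instance, on a WordPress site the backend login lives under /wp-admin/ (adjust the path for your own setup), so the rule would read:

```text
User-agent: *
Disallow: /wp-admin/
```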

A similar directive can be used to tell the crawler not to crawl specific pages. For example, if you don’t want a particular page to be included in the crawl budget, you can take its path, say “/abc”, add a slash at the end, and write it after Disallow like this:

Remove a specific page from crawling using a robots.txt file
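The resulting rule would read:

```text
User-agent: *
Disallow: /abc/
```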

You can also use this directive when you have a print-friendly version of a page, or when you’re running an A/B test and want only one version crawled. And since your thank-you pages should only be shown to qualified leads, you can use this directive for them as well.

While the Disallow directive blocks the crawler from visiting a page, it doesn’t guarantee that the page won’t be indexed, for example if other sites link to it. To reliably prevent indexing, use a noindex robots meta tag on the page itself rather than relying on robots.txt alone.
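Worth noting: Google announced in 2019 that it no longer supports a Noindex directive inside robots.txt. The reliable way to keep a page out of Google’s index is a robots meta tag in the page’s head:

```html
<meta name="robots" content="noindex">
```

Keep in mind the crawler must be able to reach the page to see this tag, so don’t Disallow a page you want noindexed.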

Testing Everything Out!

Once you’ve created or customized your Robots.txt file, you need to make sure everything is working smoothly. Using Google Search Console, you can validate your crawl directives: sign in to your webmaster account and use Google’s free Robots.txt tester tool.

Once in the dashboard, select your website and click “Crawl” in the sidebar on the left. Then click the Robots.txt tester in the drop-down and replace any old code with your new one. Hit the “Test” button, and if it turns to “Allowed”, congratulations: your Robots.txt is good to go.

Robots.txt tester


Online Robots.txt Tester Tools

There’s an array of Robots.txt tester tools out there that help ensure your Robots.txt file is working properly. If you don’t want to go through the robots.txt rules manually, consider using one of these checkers. They let you analyze large-scale websites and identify any blocked URLs, which could otherwise significantly impact your SERP rankings. Let’s have a look at some of these tester tools:

  • Google’s Robots.txt Tester: This is Google’s own tool, which shows whether your Robots.txt file blocks specific URLs from being crawled. It’s a browser-based tool that lets you quickly check your robots.txt, but it has its limitations. For instance, it only checks domains with URL-prefix properties. Also, you’ll need to copy the content from the editor into the robots.txt file stored on your server yourself.
  • Merkle’s Technical SEO Robots.txt Validator: This is another browser-based Robots.txt checker that shows whether a URL is blocked and which statement blocks it. Furthermore, you can check whether resources on the page, such as CSS, JavaScript, and images, are disallowed or not.
  • Screaming Frog: Their Robots.txt tester is a downloadable program that can crawl single or multiple URLs. You can filter for blocked URLs, making them easy to analyze quickly, and export bulk URL reports to share with other users.
  • Ryte Test Tool: Ryte is a website solution that helps website developers focus on a high-quality user experience. The solution also provides a browser-based tool that helps analyze robots.txt files for URLs that might be blocked by search engines. It has an online editor that lets you copy-paste the contents of the robots.txt file for real-time analysis.
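If you’d rather script such checks yourself, Python’s standard library ships a robots.txt parser. A minimal sketch (the domain and rules below are hypothetical):

```python
from urllib import robotparser

# Build a parser from an in-memory rule set; in practice you could
# point set_url() at a live robots.txt and call read() instead.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /wp-admin/",
])

# Public pages are allowed; paths under the Disallow rule are blocked.
print(rp.can_fetch("*", "https://example.com/blog/post"))       # True
print(rp.can_fetch("*", "https://example.com/wp-admin/login"))  # False
```

Looping `can_fetch` over a list of important URLs is a quick way to catch rules that accidentally block pages you want crawled.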


Final Takeaway

While optimizing your Robots.txt file is often overlooked in the SEO process, a little effort on that front goes a long way. So go ahead and implement what you’ve learned from this blog on your website. We hope you’ll be pleasantly surprised with the results.