February 20, 2024

26 Damaging SEO Issues and How to Fix Them

Optimizing a website is like owning a race car. To shave off precious milliseconds and ensure 100% reliability, you have to routinely inspect the car for problems and use only the best parts. The same rings true for a website. To ensure that it outperforms competitors and gets maximum search visibility, you have to periodically perform a site audit and fix any SEO issues.

If left alone, technical SEO issues will make it hard for users and search engines to use the website, resulting in poor user experience and organic performance. This is like losing a race because you haven't replaced worn-out tires.

In this article, we'll show you common yet harmful technical SEO issues and take you through a step-by-step process to fix each of these SEO errors. We have lots of items to cover, so grab a cup of coffee and let's get started.

1. Website has no robots.txt file

The robots.txt file guides search engine robots as they navigate web pages, process and catalog information, and present it to users. Essentially, it's a way for a website to tell these bots which pages are off-limits.

If a website doesn't have a robots.txt file, search engine bots will crawl every page they can find. This is bad because pages that have no SEO value, such as your thank-you page or duplicate content, will be crawled too. The time bots spend on these unnecessary pages could have been used to crawl and index the most important parts of the website.

How to fix:

Create and upload a robots.txt file to the root directory of your website. Certain CMSs can do this automatically for you, but the file should contain information along these lines:

User-agent: *

Allow: /

Disallow: /[slug-of-unnecessary-page] (e.g. /wp-login)

Sitemap: https://[domain]/sitemap.xml (e.g. https://www.serphead.com/sitemap.xml)

2. Missing XML sitemap or the sitemap is too large

An XML sitemap is like a directory that contains a website's important pages, ensuring that search engines can easily find and index them. It also assists in making the website's structure clear to search engines. Think of a sitemap as the table of contents of a book.

Without an XML sitemap, search bots will have difficulty discovering all the pages of your website. Search bots will also struggle if a single sitemap contains more than 50,000 URLs or is larger than 50 MB uncompressed.

How to fix:

Just like the robots.txt file, XML sitemaps can be automatically generated by your CMS or a plugin. If you need to create and upload XML sitemaps manually, keep these guidelines in mind:

- Include only canonical, indexable URLs that return a 200 status code.
- Keep each sitemap under 50,000 URLs and 50 MB; if your site is larger, split it into multiple sitemaps and list them in a sitemap index file (see the sample below).
- Reference the sitemap in robots.txt and submit it in Google Search Console.
- Update the sitemap whenever pages are added, removed, or permanently redirected.

Note: If the sitemap contains URLs that are redirected, non-canonical, broken, non-indexable, or duplicated, they will impede search engines from crawling and indexing the whole website. These URLs will also confuse search engines as to which web page to index and display in search results.
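For larger sites, the usual fix is to split the sitemap and list the parts in a sitemap index. Here's a minimal sketch of what that index could look like (the file names are placeholders, not a required convention):

<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Each referenced sitemap stays under 50,000 URLs / 50 MB -->
  <sitemap>
    <loc>https://www.serphead.com/sitemap-pages.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://www.serphead.com/sitemap-blog.xml</loc>
  </sitemap>
</sitemapindex>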

3. HTTPS page has internal links to HTTP

If your website has an internal link that leads to an HTTP URL, a user's browser will display a non-secure page warning when they click on it. This can dissuade users from further interacting with your website.

[Image: HTTP internal links]

How to fix:

The solution is simple. Just replace the HTTP URL of the link with HTTPS. If there are hundreds of HTTP links, you can use CMS plugins with the find+replace function to edit the links in bulk.
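If the site runs on WordPress and you have shell access, WP-CLI's search-replace command is another way to update the links in bulk. This is just a sketch, assuming WP-CLI is installed and you've backed up the database first:

# Preview the changes without writing anything to the database
wp search-replace 'http://www.serphead.com' 'https://www.serphead.com' --dry-run

# Run the replacement for real once the preview looks right
wp search-replace 'http://www.serphead.com' 'https://www.serphead.com'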

4. Unsafe cross-origin links

This SEO issue occurs when links that open in a new tab (links using target="_blank") don't have a rel="noopener" attribute. This is bad for security because the external page you link to gets a reference to your page through window.opener and can redirect it to a malicious URL (an attack known as reverse tabnabbing). Unsafe cross-origin links can also hurt performance because, without noopener, the new page may run in the same process as yours, so its JavaScript can slow your page down.

How to fix:

Add a rel="noopener" attribute to all links that open in a new tab.
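A link with the attribute in place might look like this (the URL and anchor text are just placeholders):

<a href="https://example.com/some-article" target="_blank" rel="noopener">Read the external article</a>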

5. Internal links have relative URLs

There are two types of relative URLs: protocol-relative and root-relative URLs. A protocol-relative URL inherits the protocol of the page it appears on, so the same link can be loaded over either HTTP or HTTPS. This is bad for SEO and website security because resources requested over HTTP expose the website to "man in the middle" attacks.

Example: <link href="//www.serphead.com/about-us" />

A root-relative URL contains only the path that follows your domain. It does not give the complete location of the page; instead, it conveys an address relative to the current domain.

Example: <link href="/about-us" />

Root-relative links pose no danger, but they're not up to Google's standards. There's also the risk of accidentally creating multiple versions of pages with repetitive page paths if a relative URL is resolved against the wrong base URL.

How to fix:

Use absolute URLs in all links (e.g., <link href="https://www.serphead.com/about-us" />).

6. Missing website security headers

Without website security headers, your website is vulnerable to clickjacking, cross-site scripting, and other malicious attacks.

[Image: incomplete website security headers]

How to fix:

Implement the following security headers across the whole website:

- Strict-Transport-Security (HSTS): forces browsers to connect over HTTPS
- Content-Security-Policy: restricts which sources can load scripts and other resources
- X-Frame-Options: prevents clickjacking by controlling which sites can frame your pages
- X-Content-Type-Options: stops MIME-type sniffing
- Referrer-Policy: controls how much referrer information is sent
- Permissions-Policy: limits access to browser features like camera and geolocation

There are multiple ways to implement website security headers. If your site is behind Cloudflare, you can configure security headers directly in its settings.

You can also implement these headers manually via the .htaccess file. With this method, you don't have to install a plugin, which can pose a security risk.
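Here's a minimal sketch of what that .htaccess block could look like, assuming an Apache server with mod_headers enabled; treat the Content-Security-Policy value as a placeholder and tighten it to your site's actual needs:

<IfModule mod_headers.c>
  # Force HTTPS for one year, including subdomains
  Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
  # Prevent the site from being framed by other origins (clickjacking)
  Header always set X-Frame-Options "SAMEORIGIN"
  # Stop browsers from MIME-sniffing responses
  Header always set X-Content-Type-Options "nosniff"
  # Send only the origin when navigating to other sites
  Header always set Referrer-Policy "strict-origin-when-cross-origin"
  # Placeholder policy; restrict allowed sources before using it for real
  Header always set Content-Security-Policy "default-src 'self' https:"
</IfModule>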

7. Search engines index redundant pages

Unimportant pages, such as those on a staging subdomain or duplicate content, will impede search engine bots from crawling the most critical pages of your website. This can also lead to cannibalization, where the redundant pages compete against the actual pages you want to appear in search results.

To test this, simply go to Google Search Console, head over to Pages, and click the View data about indexed pages button. From there you can see if low-quality content and duplicate pages are indexed.

[Image: indexed pages report in Google Search Console]

How to fix:

Add a noindex tag to the <head> of every redundant page:

<meta name="robots" content="noindex,nofollow">

Another way to address this SEO issue is to implement rel=canonical tags on redundant pages. This method isn’t the optimal solution though because it only solves the cannibalization issue. It won’t prevent search engines from crawling the redundant pages and duplicate content.

8. Internal and external links are either broken or redirected

Links pointing to URLs that redirect from one location to another add extra hops that slow down page loads and delay search engine crawlers from discovering all your web pages.

A search bot that follows a broken link and lands on a non-existent page is forced to abandon the request, while people who come across it will most likely leave. The SEO value of a link also isn't passed on if the link is broken.

How to fix:

Use Screaming Frog Website Crawler to detect all internal and external links with issues. Once the crawl is finished, click the Inlinks tab at the lower right section of the tool to find the location of the internal link.

[Image: Screaming Frog site audit showing how to address broken links]

Once you know the source of the redirected or broken links, it's just a matter of replacing the URL with a new one. If your CMS has a plugin with a find+replace feature, fixing the broken links is even easier.

9. Non-alpha-numeric characters in the URL

Aside from letters of the alphabet and numbers, only a few characters are safe to use in a URL, namely: - _ . ~

Non-alpha-numeric characters in the URL will be percent-encoded. This is bad news if you want to type the page address on your browser's address bar. You'll have to search for or memorize the code of a special character to access a web page.

For example, website.com/ÇÁkÈ becomes website.com/%C3%87%C3%81k%C3%88

Using spaces to break up words is also not recommended because spaces are percent-encoded, turning a URL like serphead.com/sample page into serphead.com/sample%20page.

How to fix:

Use readable words and numbers rather than special characters in your URLs.

10. URLs have underscores

Underscores are not recommended in Google's guidelines, as they make it harder for search engines to identify the individual words and concepts in the URL.

How to fix:

Use dashes to separate words rather than underscores in your URLs.

11. Mixed case URLs

Google has stated that URLs are case-sensitive, meaning that URLs whose paths differ only in letter case are treated as separate pages. (Domain names themselves are not case-sensitive.)

For instance, serphead.com/SEO and serphead.com/seo are considered different pages. Since both serve the same content, mixed-case URLs create duplicate content.

Having mixed-case URLs is bad for two reasons:

- Search engines may index several case variants of the same page and struggle to decide which one to rank, causing duplicate content and cannibalization.
- Links and ranking signals get split between the variants, and visitors who type the wrong case can land on a non-canonical version or a 404.

How to fix:

Use only lowercase letters in your URLs.

12. URLs with multiple slashes

Pages with URLs that end with / and those that aren’t are treated as separate pages. For example, serphead.com/ is considered different from serphead.com.

The same logic applies to URLs with multiple /. For example, serphead.com//seo/ is different from serphead.com/seo.

Pages with URLs that have multiple slashes or an inconsistent trailing slash are considered duplicate content, as they serve the same content as the canonical URL.

How to fix:

All page URLs must consistently either end with a single slash (/) or have no trailing slash. The format that isn't chosen should be permanently (301) redirected to the chosen one.

For example, website.com/seo/ must be permanently redirected to website.com/seo if the chosen URL format doesn’t end with a /.

All internal links should also be updated using the chosen URL format.
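If your site runs on Apache, a rewrite rule along these lines is one way to 301-redirect trailing-slash URLs to the non-slash format (a sketch only; flip it around if you chose the trailing-slash format instead):

RewriteEngine On
# Leave real directories alone
RewriteCond %{REQUEST_FILENAME} !-d
# 301-redirect /example/ to /example
RewriteRule ^(.*)/$ /$1 [R=301,L]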

13. URLs with parameters are indexed

URLs with parameters are harder for users to share and remember. It’s also harder for search engine bots to navigate and understand a website’s structure if the URL has parameters. 

URLs with different parameters can also cause multiple URLs to point to the same content and confuse search engines as to which page should appear in search results.

How to fix:

Add a self-referencing rel=canonical tag to each page so that Google knows which version to index and display in search results. The canonical URL should be absolute and should not contain any parameters.

For example, our homepage has <link rel="canonical" href="https://www.serphead.com" /> so that URLs like https://www.serphead.com/?utm=parameter aren't indexed by Google.

14. URLs exceed 115 characters

URLs that exceed the optimal character limit will be truncated when shown in search results. Excessively long URLs are also harder for users to share and remember.

How to fix:

Keep URLs concise and under 115 characters.

15. URLs with www and non-www

Pages with URLs that have www and those that don’t have one are considered separate pages. For example, www.cats.com is different from cats.com.

If both the www and non-www versions are indexed, Google might struggle to determine which version to index and display in search results.

How to fix:

All page URLs must consistently use either the www or the non-www version of the domain. The version that isn't chosen should be permanently (301) redirected to the chosen one. All internal links should also use the chosen URL format.
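On Apache, one common way to enforce this is a rewrite rule in .htaccess. Here's a sketch that 301-redirects the non-www version to www, using our own domain as the example (swap the condition and target around if you prefer non-www):

RewriteEngine On
# Redirect any request for the bare domain to the www version
RewriteCond %{HTTP_HOST} ^serphead\.com$ [NC]
RewriteRule ^(.*)$ https://www.serphead.com/$1 [R=301,L]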

16. Unoptimized page titles

Page titles (aka title tags) are the clickable headlines that you see in search engine results. They are arguably one of the most important page elements, as they have a direct effect on rankings and click-through rates.

[Image: site audit reveals an unoptimized page title]

A page that lacks a title or has a duplicate title makes it hard for users and Googlebot to get an accurate gist of the page's content. This has negative effects on both rankings and organic traffic. The same goes for title tags that don't use relevant keywords.

How to fix:

Keep these guidelines in mind when writing new page titles (see the sample below):

- Write a unique title for every page.
- Keep it under roughly 60 characters so it isn't truncated in search results.
- Place the most important keyword near the beginning.
- Describe the page's content accurately instead of stuffing keywords.
- Append your brand name at the end where it fits.
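As an illustration, a title for our technical SEO services page might look like this (the exact wording is hypothetical):

<title>Technical SEO Services | Serphead</title>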

17. Unoptimized meta descriptions

The meta description is the summary or teaser that you see under the page title in search results. While it does not directly impact search engine rankings, the meta description can affect click-through rates and traffic. An enticing meta description is more likely to earn a click than a bland one.

[Image: site audit reveals an unoptimized meta description]

Without an optimized meta description, you're missing the opportunity to present the summary of your page content to users and search engines. This will hurt both CTR and traffic.

How to fix:

Keep these guidelines in mind when writing new meta descriptions (see the sample below):

- Write a unique description for every page.
- Keep it around 150–160 characters so it isn't cut off in search results.
- Include the page's main keyword naturally.
- Summarize the page accurately and give users a reason to click.
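Continuing the hypothetical example above, the matching meta description could look like this:

<!-- Hypothetical copy; keep it within roughly 160 characters -->
<meta name="description" content="Serphead's technical SEO services find and fix the crawling, indexing, and on-page issues holding back your search visibility.">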

18. Unoptimized H1 tags

The <h1> tag is a critical part of search engine optimization because it serves as the top-level heading of the page and helps search engines better understand the page's main topic or focus.

Unoptimized H1 tags are one of the most common technical SEO issues: websites either don't have an H1 tag or use it multiple times per page. Another common problem is the lack of keywords in the heading.

How to fix:

Follow these best practices when writing H1 tags:

- Use exactly one H1 per page.
- Include the page's main keyword and make the H1 describe the page's topic.
- Keep it consistent with (though not necessarily identical to) the title tag.
- Keep it concise rather than a string of keywords.

19. Unsequential headings

Headings are used to define the semantic structure of a page and help both users and search engines understand the content's layout. Headings range from H1 to H6. As we discussed earlier, H1 is the most important and is reserved for the main title or most significant topic of the page. H2 to H6 are often utilized as subheadings to divide content into subsections.

If the order of the headings is jumbled, this can make it harder for search bots to understand the content's semantic structure.

Unsequential headings are one of the most common technical SEO problems we have encountered recently. Lazy website developers use heading tags to format text, changing its size and weight, instead of CSS classes.

How to fix:

Use a single H1 for the page's main title, then nest subheadings in descending order (H2 for sections, H3 for subsections, and so on) without skipping levels. If you only want to change how text looks, style it with CSS instead of a heading tag. A simplified structure is sketched below.
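For instance, a properly sequenced heading outline for an article like this one could look roughly like the following (the headings are placeholders taken from this post):

<h1>26 Damaging SEO Issues and How to Fix Them</h1>
  <h2>1. Website has no robots.txt file</h2>
    <h3>How to fix</h3>
  <h2>2. Missing XML sitemap or the sitemap is too large</h2>
    <h3>How to fix</h3>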

20. Pages have low word count

High-quality content answers questions, solves problems, or provides unique insights, and it tends to do better in search rankings than generic AI-generated page copy. While high-quality content doesn't need to be extensively long, pages with scant or no text can make it hard for search engines to understand what the page is all about.

How to fix:

There is no standard content length, but make sure your word count is enough to cover a specific topic or to describe the other content types on your page. For instance, a landing page about your service or the product you're selling must have at least 400 words that accurately describe its benefits to the user. Blog posts and other informational content require at least 800 words to cover a subject and its subtopics.

21. Missing alt tags

Since search engine crawlers and assistive technologies like screen readers can't see images, they rely on alt tags (aka alt text) to understand what an image shows. Alt tags also contribute to a website's relevancy for image search results.

How to fix:

To address missing alt tags, make sure each image has concise, descriptive alt text. Keep it under 100 characters.
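A filled-in alt attribute might look like this (the file name and description are placeholders):

<img src="/images/red-race-car-pit-stop.jpg" alt="Red race car entering a pit stop for a tire change">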

22. Missing canonical tags

Canonical tags tell search engines where to find the original version of a page. Without them, search engines may show a duplicate page in search results instead of the original.

If the actual homepage URL is https://www.serphead.com and it doesn’t have a self-referencing canonical tag, search engines may select http://serphead.com/ as the page shown on search results.

How to fix:

Add <link rel="canonical" href="PAGE-URL" /> to the <head> of each page.

For example, in https://www.serphead.com/blog, the tag should be <link rel="canonical" href="https://www.serphead.com/blog" />.

To put it simply, the canonical URL should be the same as the page's own URL. Please also note that the canonical URL should be absolute; never use relative URLs.

23. Missing/incorrect hreflang tags

The hreflang attribute controls which version of a page will appear in search results for a specific language and region.

If an hreflang URL is missing or not properly set, search engines may not find the localized versions of your pages and will therefore be unable to match those pages with the appropriate users.

How to fix:

A few pointers when implementing hreflang tags. First, hreflang tags are bidirectional, meaning that if you add an hreflang tag to an English page pointing to its localized version, the localized page must also have an hreflang tag pointing back to the English version.

It's also good practice for each language version of a page to list itself in addition to all other language versions. For example, the French version of your page must still have a rel="alternate" hreflang="fr" annotation linking to itself, alongside the other hreflang tags.

Lastly, the URLs used in hreflang tags should be absolute. Using relative URLs will cause all sorts of problems such as duplicate content.

With these tips in mind, you can implement hreflang tags either in the HTML or via the XML sitemap.

Here's a sample so you have an idea of how to properly implement hreflang tags.

HTML implementation:

To implement hreflang tags correctly for this setup, where the English page is the default and the German and French translations live under /de/ and /fr/, we'd add this HTML code to the <head> section of each of these pages:

<link rel="alternate" hreflang="en" href="https://www.serphead.com/technical-seo-services" />

<link rel="alternate" hreflang="de" href="https://www.serphead.com/de/technical-seo-services" />

<link rel="alternate" hreflang="fr" href="https://www.serphead.com/fr/technical-seo-services" />

<link rel="alternate" hreflang="x-default" href="https://www.serphead.com/technical-seo-services" />

XML sitemap implementation:

<url>

<loc>https://www.serphead.com/technical-seo-services</loc>

<xhtml:link rel="alternate" hreflang="x-default" href="https://www.serphead.com/technical-seo-services" />

<xhtml:link rel="alternate" hreflang="en" href="https://www.serphead.com/technical-seo-services" /> 

<xhtml:link rel="alternate" hreflang="de" href="https://www.serphead.com/de/technical-seo-services" /> 

<xhtml:link rel="alternate" hreflang="fr" href="https://www.serphead.com/fr/technical-seo-services" /> 

</url>

24. Missing structured data/Schema markup

Search engines use structured data (aka Schema markup) to better understand a page's content and find its contextual meaning. Adding structured data can also make a page eligible to be displayed in rich results.

How to fix:

There are two ways to create a structured data markup. First, you can create it manually and take cues from Schema.org as to which properties to set.
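For illustration, here's a minimal hand-written JSON-LD block for an article, using this post's own details plus placeholder values you'd swap for your own; the exact properties to include depend on the Schema.org type you choose:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "26 Damaging SEO Issues and How to Fix Them",
  "author": {
    "@type": "Person",
    "name": "Sergei Ivanov"
  },
  "datePublished": "2024-02-20",
  "image": "https://www.serphead.com/images/seo-issues-cover.jpg"
}
</script>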

The second method is to use a Schema markup generator that will write the HTML code for you. All you have to do is select the markup you want to create and fill out the fields required by the generator.


[Image: structured data markup generator]

Once you're done, don't forget to validate the markup (Google's Rich Results Test works for this) to make sure the code is error-free before implementing it across the entire site.

25. Missing/incorrect Open Graph tags

Open Graph tags instruct Facebook, Pinterest, LinkedIn, and other social networks what information to display whenever your page is shared on social media.

Site audit SEO tools like Screaming Frog and Ahrefs will alert you about this issue if the OG tags are incomplete.

Another common problem is when the Open Graph URL does not match the canonical one. This will result in the non-canonical version of a page being shared on social networks. The non-canonical version may also be indexed by search engines and cause all sorts of issues.

How to fix:

Add these Open Graph tags to the <head> of every page:

<meta property="og:title" content="SAME AS TITLE TAG" />

<meta property="og:url" content="SAME AS CANONICAL URL" />

<meta property="og:image" content="URL OF THE IMAGE" />

<meta property="og:type" content="article" />

<meta property="og:description" content="SAME AS META DESCRIPTION" />

<meta property="og:locale" content="en_GB" />

Note: For og:type, use article for articles and website for everything else. Use og:locale only if your website supports multiple languages.

26. Missing Twitter/X cards

An X (Twitter) card controls what information (title, description, image, etc.) is displayed whenever a URL is shared on the platform. If Twitter card tags are missing, X will fall back to the relevant Open Graph tags.

How to fix:

Add these lines to every page of the website:

<meta name="twitter:card" content="summary_large_image">

<meta name="twitter:site" content="@TWITTER-NAME" />

<meta name="twitter:url" content="SAME AS CANONICAL URL">

<meta name="twitter:title" content="SAME AS PAGE TITLE">

<meta name="twitter:description" content="SAME AS META DESCRIPTION">

<meta name="twitter:image" content="SAME AS OG:IMAGE">

<meta name="twitter:image:alt" content="SAME AS OG:IMAGE:ALT">

For a WordPress site, you can easily implement Twitter cards by installing the Yoast SEO plugin. Once installed, go to Yoast SEO → Settings → General → Site features and fill out the required details. The plugin will automatically generate the appropriate Twitter card data.

If you're still reading at this point, good job! We hope that the knowledge we just imparted helps you with site audit and assists in fixing the most damaging SEO issues on your website.

But if you still need help diagnosing SEO problems or applying solutions, feel free to contact us. Our team of SEO professionals has mastered this field and will help you at a moment's notice. Check out our technical SEO services section to see our on-page SEO scope of work and how we can help improve your search visibility.

Sergei Ivanov

Co-Founder of Serphead

Sergei has more than a decade of experience blending data-driven insights with SEO strategies to enhance online visibility and user engagement. He has been providing practical guidance for businesses and individuals navigating the digital landscape since 2012.