
Michael Ramirez

Founder of evisio.co

It’s messy, complicated, super technical, and sometimes feels like black magic, where you’re not sure what you’re really getting out of it.

Posts by Michael:

How to Fix Issues with Sitemap.xml

22 Mar 2023

XML (Extensible Markup Language) sitemaps are text files that list all the URLs on a website. They can also include additional information (a.k.a. metadata) about each URL, for example, when it was last updated, how important it is, and whether copies of the URL exist in other languages. All of this is done to help search […]
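As a rough illustration of that metadata, here is a minimal Python sketch that parses a tiny sitemap.xml and pulls out each entry's location, last-modified date, and priority. The URL and values are made up for the example:

```python
import xml.etree.ElementTree as ET

# A minimal sitemap.xml with optional metadata for one URL.
# The URL, date, and priority here are invented for illustration.
SITEMAP = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/page</loc>
    <lastmod>2023-03-22</lastmod>
    <priority>0.8</priority>
  </url>
</urlset>"""

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def parse_sitemap(xml_text):
    """Return (loc, lastmod, priority) for every <url> entry."""
    root = ET.fromstring(xml_text)
    entries = []
    for url in root.findall("sm:url", NS):
        loc = url.findtext("sm:loc", namespaces=NS)
        lastmod = url.findtext("sm:lastmod", namespaces=NS)
        priority = url.findtext("sm:priority", namespaces=NS)
        entries.append((loc, lastmod, priority))
    return entries
```

Note that `<lastmod>` and `<priority>` are optional in the sitemap protocol, which is why the parser tolerates their absence.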


In Knowledge & Technical SEO

How to Fix JSON-LD in Code

22 Mar 2023

JSON-LD (JavaScript Object Notation for Linked Data) is a compact Linked Data format that enables users to easily read and write structured data on the web using open vocabularies like schema.org. Built on the JSON format, JSON-LD is recommended for use by the World Wide Web Consortium. It makes it possible for linked […]
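To make that concrete, here is a small Python sketch that builds a hypothetical schema.org Article object and wraps it in the script tag search engines read. The field values are invented for illustration:

```python
import json

# Hypothetical example values; any schema.org type is built the same way.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How to Fix JSON-LD in Code",
    "datePublished": "2023-03-22",
    "author": {"@type": "Person", "name": "Michael Ramirez"},
}

def to_jsonld_script(data):
    """Serialize structured data into the <script> tag crawlers parse."""
    return ('<script type="application/ld+json">'
            + json.dumps(data) + "</script>")
```

The resulting tag goes anywhere in the page's HTML, typically in the `<head>`.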


In Knowledge & Technical SEO

Fixing Issues with Hreflang Links

22 Mar 2023

If your website is available in multiple languages, hreflang tags are a simple way to make sure visitors end up on the page in their preferred language. This matters for SEO because search engines may otherwise rank the wrong URL for the target language or location. If the search engine can’t determine which page […]
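A quick Python sketch of the idea, using made-up URLs: each language version of a page should list every alternate, including itself, via link tags like these:

```python
# Map of language/region codes to alternate URLs (URLs are made up).
alternates = {
    "en": "https://example.com/en/page",
    "de": "https://example.com/de/page",
    "x-default": "https://example.com/page",
}

def hreflang_tags(urls):
    """Build the <link rel="alternate"> tags for each language version.

    Every version of the page should carry the full set of tags,
    including one that points back to itself."""
    return [f'<link rel="alternate" hreflang="{lang}" href="{url}" />'
            for lang, url in urls.items()]
```

The `x-default` entry tells search engines which URL to serve when no language matches.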


In Knowledge & Technical SEO

How to Fix Invalid Robots.txt File Formats

22 Mar 2023

A robots.txt file is central to a well-run website: it tells search engine crawlers which parts of a given web resource should be crawled and which can be ignored. Two types of issues may arise from an invalid robots.txt configuration. The first problem is that it can prevent search engines from indexing […]
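As a sketch of what an "invalid format" means here, this small Python checker flags lines that don't follow the `Field: value` pattern. The accepted field list is a simplified assumption for illustration, not the full standard:

```python
# A simplified set of recognized robots.txt fields (an assumption
# for this sketch; real parsers accept a few vendor extensions too).
VALID_FIELDS = {"user-agent", "allow", "disallow", "sitemap", "crawl-delay"}

def robots_txt_errors(text):
    """Return (line_number, line) pairs that don't match 'Field: value'."""
    errors = []
    for i, line in enumerate(text.splitlines(), start=1):
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue  # blank lines and comments are always valid
        field, sep, _ = stripped.partition(":")
        if not sep or field.strip().lower() not in VALID_FIELDS:
            errors.append((i, line))
    return errors
```

Running it over a file with a missing colon or a misspelled directive returns exactly those lines, which is usually all you need to fix the format.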


In Knowledge & Technical SEO

How Do You Fix Issues with Robots.txt Files?

22 Mar 2023

A robots.txt file tells search engines which parts of your site they should and shouldn’t crawl, which helps make sure bots don’t waste their crawl time on unimportant pages. This article will explain the issues related to the robots.txt file, why you need it and how to fix issues with it. Why Do You Need […]
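To see those crawl rules in action, here is a short Python sketch using the standard library's urllib.robotparser against a made-up rule set:

```python
from urllib import robotparser

# A made-up rule set. The more specific Allow rule comes first
# because Python's parser applies the first rule that matches.
RULES = """\
User-agent: *
Allow: /admin/help
Disallow: /admin/
"""

rp = robotparser.RobotFileParser()
rp.parse(RULES.splitlines())

def may_crawl(url, agent="*"):
    """True if the given agent is allowed to fetch the URL."""
    return rp.can_fetch(agent, url)
```

Paths with no matching rule default to allowed, which is why only the `/admin/` subtree (minus its help page) is off-limits here.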


In Knowledge & Technical SEO

How to fix incorrect pages found in sitemap.xml?

14 Mar 2023

Submitting a sitemap.xml file helps search bots find and crawl your website’s pages, but you only want the sitemap.xml file to contain high-quality pages that are useful to your audience. Adding the wrong pages to the sitemap.xml can actually cause an error. One of the most common sitemap.xml errors is the ‘incorrect pages found’ error. Continue […]
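A minimal Python sketch of the cleanup step, assuming you already have crawl results mapping each URL to its HTTP status code (the URLs below are invented): only pages that return 200 belong in the sitemap.

```python
# Hypothetical crawl results: URL -> HTTP status code.
crawl_status = {
    "https://example.com/": 200,
    "https://example.com/old-page": 301,
    "https://example.com/missing": 404,
    "https://example.com/blog/post": 200,
}

def sitemap_candidates(status_map):
    """Keep only URLs that return 200. Redirects and errors are the
    usual triggers for 'incorrect pages found' warnings in audits."""
    return sorted(u for u, code in status_map.items() if code == 200)
```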


In Knowledge & Technical SEO

How to fix invalid structured data errors?

14 Mar 2023

If search engine optimization is a priority, nothing is more important than using structured data on your website. Simple yet essential, structured data improves how search engines interpret your content. If you’ve covered all the SEO basics and want to improve your standing on search engine results pages (SERPs), structured data is the key. But […]
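As a rough sketch of how such validation works, here is a small Python checker. The required-property sets are illustrative assumptions for this example, not the full requirements of any search engine:

```python
# Minimal required-property lists for a couple of schema.org types.
# These sets are assumptions for illustration only.
REQUIRED = {
    "Article": {"headline", "datePublished"},
    "Product": {"name"},
}

def missing_properties(data):
    """Return the required schema.org properties absent from the object."""
    required = REQUIRED.get(data.get("@type"), set())
    return sorted(required - data.keys())
```

An empty result means the object passes this (deliberately minimal) check; anything returned is a property to add before the markup will validate.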


In Knowledge & Technical SEO

How to find and fix orphaned sitemap pages?

14 Mar 2023

Orphan pages are web pages with no internal links leading to them from other parts of your website. In other words, the page is inaccessible unless the user has the exact URL. These pages are called orphaned because nothing on the site links to them, and search engine crawlers seldom index these pages since they can’t follow a link to […]
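Finding orphans boils down to a set difference, sketched here in Python with made-up page lists: the orphans are whatever appears in the sitemap but receives no internal link.

```python
# Hypothetical data: pages listed in sitemap.xml vs. pages that at
# least one internal link points to (e.g., from a site crawl).
sitemap_urls = {"/", "/about", "/blog/post-1", "/landing/old-promo"}
internally_linked = {"/", "/about", "/blog/post-1"}

def orphan_pages(in_sitemap, linked):
    """Pages in the sitemap that no internal link points to."""
    return sorted(in_sitemap - linked)
```

Each page this returns either needs an internal link added or should be dropped from the sitemap.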


In Knowledge & Technical SEO

How to fix HTTP URLs in sitemap.xml for HTTPS site

14 Mar 2023

Sitemaps are lists of all the pages on a website that can be crawled and indexed. A sitemap should be designed to help search engine spiders discover all of your domain’s URLs as easily as possible. One type is the HTML sitemap. Sitemaps written in HTML can exist as standalone web pages and are […]
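A minimal Python sketch of the fix itself: rewrite any `http://` entries inside a sitemap's `<loc>` elements to `https://` (the example URL is made up).

```python
import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
ET.register_namespace("", NS)  # keep the default namespace on output

def upgrade_sitemap_urls(xml_text):
    """Rewrite http:// <loc> entries to https:// and return new XML."""
    root = ET.fromstring(xml_text)
    for loc in root.iter(f"{{{NS}}}loc"):
        if loc.text and loc.text.startswith("http://"):
            loc.text = "https://" + loc.text[len("http://"):]
    return ET.tostring(root, encoding="unicode")
```

After regenerating the file, resubmit it in Search Console so the crawler picks up the HTTPS URLs.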


In Knowledge & Technical SEO