SEO: The Technical Side of SEO Weighs Muuuuuuuuch More Than You Think

Many SEOs and business owners are not really aware of how much “weight” the technical side of SEO has on rankings. They think writing high-quality content, sprinkling in some keywords, building a few links and generating social signals is all it takes. Well, they’re definitely mistaken. This article was made especially to open your eyes to the necessity of managing and upgrading the technical side of SEO as well.

Because SEO is a more complex subject than many people think, especially with Google changing its algorithms periodically and even going to the next level by using Artificial Intelligence to boost its engine. Yes, the technical stuff matters.

More than you think.
Now let’s see why you have to pay attention to AT LEAST these 12 points below.
Here we go!

Since we all know by now how much content matters for SEO, let’s open with content rendering and indexability. Look, what you must understand is that although Google seems like an incredibly powerful robot that always knows everything, it still has some limitations.

And content rendering is one part of it.
But I’m writing this section mostly because of AJAX. AJAX is a JavaScript-based technique for making asynchronous requests to a web server, taking dynamic content rendering to another level. It’s very useful for creating a greater user experience with things like “hide/show content”, interaction-based content and a lot of other pretty things.

But implementing AJAX on your website without considering Googlebot is not a good idea.
Because if you want to make your website structure more VISIBLE to the eyes of Google and other search engines, you must pay attention to how indexable your content is, and Google itself gives some guidelines on this subject:

If you’re starting from scratch, one good approach is to build your site’s structure and navigation using only HTML. Then, once you have the site’s pages, links, and content in place, you can spice up the appearance and interface with AJAX. Googlebot will be happy looking at the HTML while users with modern browsers can enjoy your AJAX bonuses.

Of course, you will likely have links requiring JavaScript for AJAX functionality, so here’s a way to help AJAX and static links coexist: When creating your links, format them so they’ll offer a static link as well as calling a JavaScript function. That way you’ll have the AJAX functionality for JavaScript users while non-JavaScript users can ignore the script and follow the link. For example:

<a href="ajax.htm?foo=32" onClick="navigate('ajax.html#foo=32'); return false">foo 32</a>

Note that the static link’s URL has a parameter (?foo=32) instead of a fragment (#foo=32), which is used by the AJAX code. This is important, as search engines understand URL parameters but often ignore fragments. Web developer Jeremy Keith labeled this technique Hijax. Since you now offer static links, users and search engines can link to the exact content they want to share or reference.

Source: Google Webmasters Central

BUT there’s already evidence of Googlebot crawling DOM elements and executing part of the JavaScript code on a web page. The point here, though, is to remember that Google often runs experiments, so to be sure, it’s still a best practice to keep things as safe as possible regarding content indexability.

Your code should be as simple as possible so Google can understand it all, because the more complex you make it, the harder it will be for Google to understand your website.

Have you been working for days to update a HUGE piece of important content on your website that is KEY for your readers? Or have you been working on a range of important, inter-connected content? Then you must pay attention to a very “subjective” point here: publishing content before it’s complete.

Why? Well, because it may INCREASE the BOUNCE RATE of your website A LOT. People might visit your website through Google and leave very fast simply because that piece of content is incomplete, which sends a BAD signal to Google.

This tells Google your content is not good enough for that type of search, making your website drop in rankings. That’s why “hiding” incomplete content and keeping it AWAY from the eyes of Googlebot is sometimes much better than letting it be crawlable.

Makes sense? If not, think of human behavior.
Would you sell half a book to a prospect? Of course not. That’s incomplete content.
And it ruins rankings AND authority, obviously making things worse for your business.

Of course, this is not the only signal that tells Google your website is a bad place to be, but it certainly helps drain your ranking power. After all, you’ve heard before that Google has more than 200 ranking factors in its algorithm, and MANY of them have more than 50 variations within a SINGLE FACTOR.

Which leads to more than 10,000 ranking factors.

But my point here is simple:


And leaving incomplete content live, where it can increase your bounce rate, is something you should worry about. After all, why give the competition a chance to surpass you?

This part goes a little deeper than the AJAX section. Look, when building your content structure, you must consider EACH PIECE of content you have: think about how indexable it is given your JS/AJAX code, and about how solid your “internal link ecosystem” really is. For example, many SEOs and bloggers nowadays write great articles but isolate them on their own website as “stand-alone articles”.

What does that mean?
It means they don’t build any internal links. They don’t link to other relevant content on their own website, killing opportunities to make visitors spend more time there.


And the more relevant links you have for users, the higher the chances are for a user to keep clicking to read your content. This will raise the amount of time people spend on your website, which will give good signals to Google in the end.

PS: I said RELEVANT links, so don’t overload your users with unnecessary links, since that kills the user experience.

But hold on because there’s the other side of this section.
You must evaluate how much of your content is DYNAMIC CONTENT too.

Look, I’ll make you think for a moment so this can be as clear as possible in your head.
Suppose you wrote a great article, but part of its content is ENTIRELY provided only AFTER the user makes an AJAX request, like clicking a button to unlock the rest of the content
(which is something many bloggers are doing these days).

Since the content will only appear IF the user takes some sort of action, this is an SEO fault. Does it make the user experience better? Perhaps. But besides the content not being indexable “right away”, there’s also the danger of cloaking without realizing it: by serving content based on dynamic events, you may lead Google to conclude that your website is deceiving its robot, showing users different content from what you show the crawler.
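To make that difference concrete, here’s a minimal sketch (the element id, file path and script are made up). Content that already sits in the initial HTML stays indexable even if it starts out hidden, while content that only arrives after an AJAX call does not:

```html
<!-- Indexable: the full text is in the initial HTML; the script only toggles visibility -->
<article>
  <p>Intro paragraph everyone sees right away.</p>
  <div id="rest" hidden>
    <p>The rest of the article. It is already in the DOM, so crawlers can read it.</p>
  </div>
  <button onclick="document.getElementById('rest').hidden = false">Read more</button>
</article>

<!-- Risky: the rest of the article only exists after a user-triggered request -->
<button onclick="fetch('/rest-of-article').then(r => r.text()).then(t => { document.querySelector('article').innerHTML += t; })">
  Read more
</button>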

That’s one point.

Another point is making your JS code so bad that it hides links and other pieces of important JavaScript code that would help Google understand your page. This is why it’s recommended to do the things below:

  • PUT ALL JAVASCRIPT INTO AN EXTERNAL FILE — If you give Googlebot extra WORK and make it hunt for all the JS code it needs just to understand your website’s content, you’re doing something wrong and only hurting yourself from the SEO point of view. The ideal scenario is to put every piece of JS code into an external file, making it easier for Googlebot to read it all and understand all of your dynamic content. Don’t make it hunt for code.
  • REMOVE ALL UNUSED JAVASCRIPT — Remember: the less work you give Google, the better. So if you have any unused JavaScript, removing it is a good way to help search engines understand your whole content a lot faster. And if bots understand your content faster, you, Google, Bing and the others will become good friends. Confuse them, though, and they won’t say good things about your website. So keep things simple.
  • MAKE JAVASCRIPT BOOST CONTENT, NOT REPLACE IT — The user experience can be enhanced using JS, no doubt about that, but JS shouldn’t REPLACE any content at all. Doing so will probably be seen as cloaking by Google: a black hat technique of delivering different content to users than to search engines, which makes Google conclude you’re manipulating things.
  • MAKE AJAX CRAWLABLE — A few years ago Google updated its systems to make it possible to crawl AJAX content. But one limitation Google still has is getting through COMPLEX code, which confuses bots about your website’s content structure. This is why your AJAX requests should be as simple as possible, so Google can read everything correctly, and fast. Again, keep things simple.
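As a rough sketch of the first three bullets (file names are illustrative): the page’s real content lives in plain HTML, all scripts load from one external file, and JavaScript only enhances what’s already there:

```html
<head>
  <!-- All JavaScript in one external file: nothing for Googlebot to hunt for -->
  <script src="/js/site.js" defer></script>
</head>
<body>
  <article>
    <h1>My article title</h1>
    <!-- The content itself is plain HTML; site.js may animate or enhance it,
         but it never replaces this text with something different -->
    <p>Full article text, readable even with JavaScript disabled.</p>
  </article>
</body>
```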

These are some more points you should worry about when dealing with the technical side of your SEO. As you can see, they matter both for affecting your rankings positively and for preventing your site from being labeled a bad website.

Remember to also care about how your website redirects users, whether to other domains or not. The wrong redirect type can really harm your rankings by giving link authority to a page you weren’t aiming for. The same happens when you do domain migrations. (I’ll cover redirects in a little more detail ahead.)

But regarding domain migrations, here’s a nice infographic below so you can understand things a little bit better in a simple way:


Source: The Moz guys.

Makes more sense now right?
Great. Now let’s move on to the next technical SEO point.

For SEOs, there are some specific HTTP status codes that matter for making Googlebot understand what’s really going on behind the scenes, like telling Google your website is going through a temporary maintenance routine and will be back to normal soon.

Status codes can tell those kinds of stories.
But only part of the complete HTTP status code list is necessary for an SEO.

So let’s talk about the HTTP status codes that matter:

  • 200 OK — The request has succeeded. This is considered correct for most scenarios.
  • 301 Moved Permanently — The requested page has been assigned a new permanent URI and any future references to this resource should use one of the returned URIs. The 301 redirect should be used any time one URL needs to be redirected to another to give link juice to the proper page.
  • 302 Found — The server responds to the request with a page from a DIFFERENT location, meaning the user accesses one link but is served the resource from ANOTHER, while the website continues to use the original link for requests. Don’t do this, my friend, it’s not recommended. Using a 302 forces crawlers to treat your redirect as TEMPORARY, which means the destination page won’t receive the link authority it should. Use 301 redirects instead.
  • 404 File Not Found — Nothing was found for the request made to the server. On top of that, no indication is given to crawlers as to whether this is a temporary or a permanent issue, causing confusion for robots and drops in rankings. A hint: a good practice is to put some useful links on your 404 page to keep people from bouncing whenever they hit a 404 error.
  • 410 Gone — The requested page is no longer available at the server and no forwarding address is known. Usually, this error is considered permanent.
  • 503 Service Unavailable — The server is currently unable to respond to the user’s request due to some temporary server problem or maintenance issue. The 503 must be used whenever your server is going through maintenance routines. By doing this, you’ll tell bots they should come back soon for another visit because the page is only down for a limited amount of time.
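As a sketch of how the two most actionable codes above could be configured on an Apache server (assuming mod_alias, mod_rewrite and mod_headers are enabled; all paths and the retry time are illustrative):

```apache
# 301: permanently move an old URL so its link juice flows to the new page
Redirect 301 /old-article /new-article

# 503 during maintenance: answer every request with a 503 status and a
# Retry-After header, so bots know the downtime is temporary
RewriteEngine On
RewriteCond %{REQUEST_URI} !=/maintenance.html
RewriteRule ^ - [R=503,L]
ErrorDocument 503 /maintenance.html
Header always set Retry-After "3600"
```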

Use these guys right and you’ll improve your rankings more than you think.
Now let’s continue.

This is something most business owners wrongly think is just a minor case of changing languages. Well, it’s not. As I’ll show you below, this subject needs way more care than you think.

Here’s why:

  • you’ll be targeting other regions, possibly with strong, different competitors
  • keywords will carry different weights in each country
  • you’ll have different options for URL structure
  • depending on your choice of URL structure, you’ll double or triple your work

For example, let’s assume you are working on a fitness website that is targeting India and Sweden. Alright. In this scenario, you’ll probably have different competitors with different market shares and different online approaches to worry about, and also, totally different weights on keywords as well due to the country/region demand AND culture.

Come on, follow me for a moment here.
Take the culture part, for example.

The CULTURE, in this case, will also CHANGE part of the keywords, or even the content itself, that you’ll be writing for the website. After all, you can’t preach a diet plan built around MEAT in India. Why? Because a large part of the population avoids meat for cultural and religious reasons. In Sweden, there’s no problem with that at all.

So if your content needs to be different for each country, then your content strategy for both language versions of your website changes completely. Which leads you to take 2 different approaches just to make sure your brand ranks well in both countries.

But hear me out, this is just one part of it.

Another point is demand. You must do some research to check whether it’s worth investing time, money and effort in a region that doesn’t care at all about your business. Do you see? Language versions will interfere with your SEO strategy.

But there’s one more point: the URL structure.
This part is what will decide how much work you’ll actually have.
Why? Because changing the URL structure means changing a lot of things on your servers, which sit at the core, the HEART, of several important things that affect your SEO.

Look, I’ll explain each URL structure below in an easy way, with its PROS and CONS:

URL STRUCTURE #1: Country-code TLDs (ccTLDs)
  • PROS — Server location is irrelevant in this case. The geotargeting is very clear, making people understand you run part of your business in another country. And the websites will be better organized too. Those are good points.
  • CONS — This is an expensive approach because you’ll need more infrastructure for two or more versions of the same site. There’s also the risk of not having the domain available in a certain country and the TLD requirements change for each country, which may be a bit of a problem sometimes.

URL STRUCTURE #2: Subdomains with gTLDs

  • PROS — Very easy to set up and to separate websites internally. Allows different server locations and you can use Webmaster Tools geotargeting.
  • CONS — It’s confusing for users to recognize geotargeting just from the URL, because people usually won’t know whether the “de” in a subdomain is the language or the country, for example.

URL STRUCTURE #3: Subdirectories with gTLDs

  • PROS — Very easy to set up and it has a very low maintenance factor since you’ll be putting 2 versions or more inside the same host. You can also use Webmaster Tools geotargeting with this option.
  • CONS — This is ALSO confusing for users trying to recognize geotargeting just from the URL. And the organization/separation of websites may be difficult in this case too.

URL STRUCTURE #4: URL parameters

  • PROS — No advantage exists from an SEO point of view in this option.
  • CONS — Users probably won’t recognize geotargeting just from the URL. Geotargeting in Webmaster Tools IS NOT possible, and segmentation by URL is very tricky.
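Whichever structure you pick, the language/country versions can be tied together with hreflang annotations; sticking with the India/Sweden example (the domain and paths here are purely illustrative):

```html
<!-- Placed on every version of the page, listing all alternates plus a default -->
<link rel="alternate" hreflang="en-in" href="https://example.com/in/" />
<link rel="alternate" hreflang="sv-se" href="https://example.com/se/" />
<link rel="alternate" hreflang="x-default" href="https://example.com/" />
```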

Do you see? That’s why striking different regions/countries can change things completely from the SEO perspective. Completely. So do some hard thinking before implementing language versions.

This is probably the most famous part of technical SEO today: analyzing traffic and speed in Google Analytics. Of course it is, because everybody wants to check their SEO results and be happy. But although lots of business owners pay attention to this area, they usually check only the basics, like the number of visits per day, without looking at the whole picture.

There’s much to know when we talk about page traffic and speed, my friend.
Here are a few important things to know about your website:

  • Which pages have higher traffic? Why?
  • Which pages have lower traffic? Why?
  • Did you have a drop or a boost in traffic? Why?
  • Which pages are taking more time to load? Why?
  • Which pages are taking less time to load? Why?

These questions are just the beginning of what you should be asking yourself in order to define which strategic approach you should take next. We MUST analyze traffic correctly without ignoring important, more hidden points to check if our latest SEO efforts were effective.

And along with page traffic, we also need to analyze page speed, because by improving that you upgrade the user experience and, therefore, boost your rankings. But you need to know which pages are loading slowly, which pieces of code are jamming things up, how long the core of your DOM elements takes to load, and so on.

A true SEO knows that there are some “secrets” on how to use Google (and other search engines) more effectively, by using “hacker-like” techniques to search for more in-depth information. And one of these secrets is to use advanced search parameters on your queries which allow you to build smarter questions to find important things, such as better link building opportunities, contact information or some specific info on the competition that you wouldn’t normally find if you had done a normal search.

Advanced search parameters can tell A LOT to a smart SEO who knows how to work with what I’m about to show you. This subject is also why the Google Hacking book sold a lot when it came out.

So here’s what is really useful to know about advanced parameters:

  • SITE – site:domain.com "query" — This forces Google to search for your query ONLY on the specified domain. It’s very useful for seeing which content is normally found for a certain keyword, especially if you’re doing research on the competition.
  • INTITLE or ALLINTITLE – intitle:"query" or allintitle:"query1" "query2" — This tells Google that you want those terms in the title of any result page. Together with the “site” parameter, it’s a great help for finding duplicated content: by searching for part of a title after defining which site to look into, you can find duplicate content issues with a single Google search.
  • INTEXT or ALLINTEXT – intext:”query” or allintext:”query” “query2” — This tells Google you want the terms specifically on the core content of the page like the body of an article, for example. It is very useful to get that exact match as content, instead of a clickbait article.
  • FILETYPE – "query" filetype:pdf (or filetype:doc, filetype:xls, etc.) — This is extremely useful together with the “intext” parameter. You can discover VERY RARE documents that reveal a lot of important information: business contracts, case studies, CEO/supervisor/manager emails and so on. Most of the time this exposes companies’ mistakes in guarding their own private information, which is why many IT security analysts use it to find security breaches.
  • WILDCARD (*) – [ query * query2 ] — This is used to “fill in the blanks” by placing a joker in your search. If you search for "internet * marketing", for example, you’ll get results that start with the word internet, end with marketing and have something in between. Useful for finding specific content that is hard to associate otherwise.
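Here’s how those operators might combine in practice (the domain and keywords are placeholders):

```text
site:example.com intitle:"content marketing"    # pages on one domain with that title
site:example.com intext:"exact paragraph"       # hunt for duplicate content on a site
"annual report" filetype:pdf                    # uncover publicly exposed PDF documents
internet * marketing                            # wildcard fills the blank between terms
```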

A skillful SEO can use those commands to uncover very important information about ranking opportunities, competitors, customers and a lot of other secret things. Those who master this art are definitely ahead of their competition.

PS: The Google Hacking book that explores these secret commands is on Amazon in case you want to know more about this subject.

Duplicate content and content syndication are 2 things SEOs need to pay attention to as well. And although they may seem very similar, only the latter can be beneficial. First, you need to understand that duplicate content is an EXACT copy of a piece of content, and duplicate content can damage your rankings. That’s all. Content syndication, on the other hand, is usually duplicate content played smart.

Not clear?
Then read this:


Here’s a practical example:


Source: Conversion XL – Growth Hacking

If you click on the source link, you’ll see that Conversion XL posts the content in the image above, but its ORIGINAL source is in fact TechCrunch. Also, in this case just a PIECE of the whole content is being syndicated, but sometimes it’s possible to syndicate a complete copy as well, instead of just part of it.

But why am I saying all this?
How does content syndication help?

Because sometimes it’s VERY REWARDING to expose your brand to another audience by using an exact copy of one of your awesome articles. By publishing your best work on a different website, you can benefit both yourself and that site’s owner. And I know what you’re thinking…

“This seems very similar to guest posting…”
It’s not. With guest posting you usually publish ONE NEW article, whilst content syndication takes pieces of existing content or reposts a complete copy of it. The benefit is that you’re publishing something you KNOW is great, because you chose content that already did well with your own audience. Social proof is involved. Guest posting is usually a shot in the dark.

But to prevent you from harming yourself with this technique, I’ll be fast and give you some solutions so Google doesn’t label your strategy as duplicate content.

So, when syndicating do this:

  • USE rel=canonical — The best solution for sure when syndicating. Here you should ask the site owner to place a rel=canonical tag on the page with your article, and have that nice tag point back to your original article on your own website. By doing this, you’ll tell Google that the syndicated copy is nothing more than a copy and that you are the real hero behind that content. The other benefit of doing this is gaining authority on your website whenever people link to the syndicated content! Just awesome.
  • USE NoIndex — The 2nd best choice. This is mostly used when the other website isn’t that sophisticated and doesn’t care much about managing duplicate content. So you ask them nicely to place a NoIndex on their copy of your article, which tells search engines: “Hey, the syndicated copy should not be indexed, it’s just to show that website’s audience something great!”. This solves the duplicate content issue for you. And don’t feel bad, because links in the syndicated content that lead back to your site will still pass PageRank to you.
  • USE Direct Attribution Link — The last and worst choice is to ask for a link on the syndicated article back to your website, to your original article. Google will have to “think” more but you’ll gain some benefits because after all, you’re creating a backlink. But prefer the first 2 options above whenever possible.
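In markup terms, the first two options might look like this on the page holding the syndicated copy (the URL is a placeholder):

```html
<!-- Option 1: a canonical tag pointing back to the original article -->
<link rel="canonical" href="https://yoursite.com/original-article" />

<!-- Option 2: keep the copy out of the index while still letting links pass value -->
<meta name="robots" content="noindex, follow" />
```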

And of course, you can press the “WHAT THE HELL!” button and do it anyway, but you MUST (must man!) consider the whole scenario and carefully measure the benefits to prevent your business from suffering an unnecessary drop in rankings.

Also, websites that syndicate a lot MUST take extreme care not to be seen by Google as someone who loves to publish duplicate content. So watch out.

Here’s another point to take care of, and one that can change your game. For example, if you run certain tools that crawl your website or a competitor’s website, you will certainly find some hidden ranking opportunities you can exploit to upgrade your own rankings.

I will explain.
Here are a few ways to do that in practice:

  • RUN SpyFu — This tool will reveal the keywords a domain uses in its AdWords campaigns, along with its most used organic keywords. So if you run it on a competitor’s website, you can check which keywords/subjects you might add to your own SEO campaign.
  • RUN Majestic — By running this on a competitor’s domain (or on your own), it’s possible to learn a lot from the links it reveals. You can check each link, study its content, find the gaps they have and create your own content version with the right keywords to rank.
  • RUN BrokenLink — This will point out all the broken links on a domain. It’s very useful for outreach: you warn other websites about the broken links they have, and then proactively offer a piece of your content as a replacement. So if you find a broken link to key content on someone else’s website, you can write your version of it and offer yours instead. This boosts your rankings as well, since you’re building backlinks.

These are just 3 examples of how crawling tools can reveal to you ranking opportunities.
Explore them all.

As I said in the syndication section, sometimes you must tell Google that a certain “bad” action was taken, but that you knew what you were doing and are in control. This assures search engines that your website is a good one, with someone smart and professional behind the scenes taking care of it. But to do that, you must learn HOW to “talk” to Googlebot and the others.

And one way to talk to Googlebot directly is through protocols.
Because they are part of the language Googlebot speaks.

This is key for telling bots important things, like whether a piece of content is not the original source, blocking search engines from accessing a directory, or saying that a page shouldn’t be indexed at all. Sometimes you must take proactive action to prevent a drop in rankings, or to fix things and boost them.

So let’s see a few of these protocols that are really useful to you:

  • rel=canonical – <link rel="canonical" href="" /> — Use this to point out to Google which version of a piece of content it should index and consider the original (or best) source. This is useful when URL parameters cause duplicate content on a website. For example, if you file one article under 2 different blog categories, there will be 2 URLs for that article, and for Google that is duplicate content. rel=canonical tells Google which one is the original source.
  • rel=nofollow – <a href="" rel="nofollow">anchor text</a> — Use this to tell search engines that a link should not be followed or given any link juice. This attribute was born to counter spammers who were using automated comments and link injection as a strategy to build fake authoritative websites.
  • meta-description – <meta name="description" content="description text"> — Use this to provide a short description (about 155 characters) of what your page is really about. This will usually be the first 2 lines Google shows for a result. When you don’t specify it, Google takes part of the page’s content to show as the description.
  • hreflang – <link rel="alternate" href="" hreflang="en-us" /> — Use this to tell Google which language (and optionally region) a version of the page targets.
  • robots.txt — The robots exclusion protocol (REP), or robots.txt, is a text file that site owners and webmasters use to tell search engines which pages and directories on their website they may crawl. You should use it to determine which parts of your website should be crawlable. But remember one thing: malicious crawlers do exist, and if you list a folder that you want to stay hidden, you’ll probably be handing out hints on where to do damage instead.
  • XML sitemaps — These are mostly used by webmasters to easily give search engines a list of URLs (along with metadata about each URL), so that a crawler can understand the whole picture of your website a little better from a single file. In case you want to know more, here’s some great information about XML Sitemaps for you.
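A minimal sketch of the last two protocols working together (the paths, domain and date are illustrative):

```text
# robots.txt at the site root
User-agent: *
# Careful: a Disallow line also advertises that this folder exists
Disallow: /admin/

Sitemap: https://example.com/sitemap.xml
```

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- sitemap.xml: one <url> entry per page, with optional metadata -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/great-article</loc>
    <lastmod>2017-01-15</lastmod>
  </url>
</urlset>
```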

These are just the basics, there’s much more to develop on each point and other important protocols to talk about too but we will cover them in a future article.
Let’s continue.

Since 2015, all SEOs have added a new worry to their list: the Mobilegeddon update. This Google algorithm update was implemented to boost the rankings of websites that provide a good mobile version to their users, and to drop the rankings of websites that don’t care about mobile users at all.

But why did Google do that?
Well, probably because mobile users surpassed desktop users online a while ago.

If most of the internet traffic is through smartphones, then they need to be taken care of.
Smart-ass Google….

That’s why TODAY it’s necessary to give your website some attention and either make a mobile-friendly version of it OR build a complete mobile version from scratch. Nowadays, those who ignore smartphone users will suffer.

Anyway, just so you have an idea of what mobile users are expecting (according to Google), here are the basic points to focus on today:

  • PAGE SPEED — Mobile users usually have connectivity issues so improving your website code structure to increase page speed is a must here, along with image optimization, browser caching and fewer redirects.
  • WEBSITE DESIGN — Mobile users love browsing beautiful websites on their smartphones. This makes them feel special, and Google knows it. So make users feel at home and Google will bring you more visitors.
  • KILL FLASH — No, not the superhero. The Adobe technology is what I’m referring to here. Websites that use Flash risk not delivering the same user experience, due to mobile limitations with this old technology. Use HTML5 instead if you want to make pretty things for your website.
  • NEVER BLOCK IMAGES/JS/CSS — Back in the day, when smartphones were rare and very slow at browsing, webmasters would hide every CSS/JS/image file they could from search engines, just to deliver pages really fast. Today that’s unnecessary. Go all-in. Show users everything, optimized of course.
  • NEVER USE POP-UPS — Pop-ups in a mobile environment are usually a bad thing, because when people try to close them they end up clicking where they didn’t want to, which gives them a really bad user experience and makes your bounce rate hit the roof. Kill any pop-ups whenever possible. They’re annoying.
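Much of that checklist starts with a responsive setup in the page itself; here’s a minimal sketch (file names are made up):

```html
<head>
  <!-- Tell mobile browsers to use the device width instead of a zoomed-out desktop layout -->
  <meta name="viewport" content="width=device-width, initial-scale=1" />
</head>
<body>
  <!-- Let the browser pick an appropriately sized image instead of one huge file -->
  <img src="photo-small.jpg"
       srcset="photo-small.jpg 480w, photo-large.jpg 1024w"
       sizes="(max-width: 600px) 480px, 1024px"
       alt="Example photo" />
</body>
```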

Those are the basic things you should know about mobile, my friend.
Improve these points and you should be OK.

Knowing all of this is just the beginning, because it’s the basic structure you should think about every time you take on a project to optimize a website. As SEOs, we must take care of every point possible, since we don’t know everything behind Google’s algorithm. Even though every SEO would love to know how the bot truly behaves.

But as Eric Schmidt said a while back in the video below, Google has a good reason to hide it from us:

So although part of it is still a secret, we should work on what we already know at least.
And a huge part of what we know is the technical side of SEO. So don’t ignore it because the technical side is just as important as the social signals and the “content is king” part.

Cover all parts and you’ll see the difference.

Take care.


This is not for everyone. If you truly want to improve your business and you REALLY care about delivering a high-quality experience to your customers, instead of just making money, then click the button below. Otherwise, I'll not be able to help your business. You must have a true Entrepreneurial Mindset. Make your choice.


