Test your site on




General Tips

  • Be sure that each page has a unique title, e.g. <title>Nice page</title>, 10-70 chars, with keywords
  • also <meta name="description" content="It's a really nice page"> should be 70-160 chars, unique per page
  • use one <h1> per page with keywords, but do not duplicate the title; use h2-h6 for other keywords
  • use the alt attribute to describe images; the image source filename should also be unique and self-explanatory
  • Links within your content tend to carry more weight than links within a sidebar or footer.
  • try to get more and more inbound links from other sites. Increase the link count using social media and directories; some sites link to your competition, so try to write to them explaining why your site is better. To find inbound links use google webmaster tools
  • if your content expires (like job posts or weather data), then link old pages to category pages (groups of similar results)
  • outbound links can be no-follow, especially outgoing links that are not relevant (do not have quality content). For example, links to a Feedburner page.
  • anchor text plays the most important role in link building. If you want to rank for ‘blue widget’ then you want the anchor text of the link to be <a>blue widget</a>
  • 25-70% of the page should be plain text (not html markup tags)
  • add an XML sitemap (with the correct protocol http/s, subdomain and trailing slash); if you have more than 100 pages, include only the most popular
  • use a 301 redirect from the root (apex) domain to the www domain (or vice versa); do not split value across two different domains (also redirect from the IP address). You can not always redirect in nginx or rails, since the apex domain may not even point to your server; then you need to change it at the dns level, for example with route53
  • replace underscore _ with dash (hyphen) - in urls
  • flash and frames are not indexed
  • nice mobile rendering (buttons at least 48px width/height, 32px padding around tap targets)
  • add the viewport meta tag <meta content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no" name="viewport"> and use CSS media queries to apply different styles depending on screen size
  • enable gzip compression, optimize image size using Lighthouse
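The title and description length rules above can be sketched as small Rails-style helpers. This is only a sketch; the helper names and the truncation strategy are my assumptions, not from any gem:

```ruby
# Hypothetical helpers that enforce the recommended lengths:
# titles 10-70 chars, descriptions 70-160 chars.
def page_title(title, site = 'MySite')
  # Append the site name for per-page uniqueness, then cap at 70 chars.
  "#{title} | #{site}"[0, 70]
end

def page_description(text)
  # Cap at 160 chars; real pages should also aim for at least ~70.
  text.to_s[0, 160]
end

puts page_title('Nice page')   # → Nice page | MySite
```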

  • use srcset for responsive images, and specify image dimensions
    <!-- filenames below are placeholders -->
    <img src="small.jpg" srcset="large.jpg 1024w, small.jpg 512w" sizes="100vw" width="512" height="384">
  • eliminate render-blocking javascript and css, leverage browser caching, and minify css, to keep the page size small (less than 320Kb) and load under one second. Google web fonts are cached for only one day (each day they are downloaded again), and different browsers use different files: woff2, svg … Download the fonts using this heroku app. Change the folder prefix to poppins/, and extract the downloaded fonts (without the folder) to app/assets/fonts/poppins so you can see the file app/assets/fonts/poppins/poppins-v1-latin-regular.svg. Copy/paste the generated css into app/assets/stylesheets/poppins.scss and include it in application.scss with @import 'poppins';

  • add favicon. It is fine to just add it to the root of the web site (for rails just copy your png to public/favicon.ico). Size should be 32x32.
  • custom 404 error page
  • add language <html lang="en">
  • SSL secure (http should redirect to https); add the STS header, and make the xml sitemap and links to css files use https
  • social media
  • optimize css
  • use server side rendering, which is important because of: rendering performance, the ability to cache or prerender a site, easier tests, accessibility and no-script concerns, and loading a page in its rendered state for SEO

Rails gzip

It should show Content-Encoding: gzip in the Response Headers. To enable:


# Gemfile
gem 'heroku-deflater', :group => :production

# config/environments/production.rb
  config.assets.compress = true
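Outside Heroku, a similar effect can come from Rack's own middleware; a sketch using plain Rails config, no extra gem:

```ruby
# config/application.rb
# Rack::Deflater gzips responses when the client sends Accept-Encoding: gzip
config.middleware.use Rack::Deflater
```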


/robots.txt is used to tell well-behaved crawlers whether they should read the site or not. Bad crawlers will read the site anyway.

# public/robots.txt
# See for documentation on how to use the robots.txt file
# To ban all spiders from the entire site uncomment the next two lines:
User-agent: *

Disallow: /users/password/new
Disallow: /users/sign_in
Disallow: /users/confirmation/new


You can use meta tags to disable some well-known robots. Add <meta name="robots" content="noodp,noydir"> to prevent the Open Directory Project and Yahoo Directory from providing the description for your site


The sitemap_generator gem is an excellent tool. This gem follows the sitemaps protocol

echo "gem 'sitemap_generator', '~>6.1.2'" >> Gemfile
rake sitemap:install # this will generate config/sitemap.rb
vi config/sitemap.rb # put some `add path, options`

rake sitemap:refresh:no_ping # to generate sitemap to public folder without ping
less public/sitemap.xml.gz
gunzip public/sitemap.xml.gz && chromium-browser public/sitemap.xml

Use Post.find_each instead of Post.all.each since we do not want to load all posts into memory.
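A minimal config/sitemap.rb sketch using find_each; the Post model, routes and host are assumptions for illustration:

```ruby
# config/sitemap.rb
SitemapGenerator::Sitemap.default_host = 'https://www.example.com'

SitemapGenerator::Sitemap.create do
  add '/about', changefreq: 'monthly'
  # find_each loads records in batches (default 1000) instead of all at once
  Post.find_each do |post|
    add post_path(post), lastmod: post.updated_at, priority: 0.7
  end
end
```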

To run every day, you can use whenever to call rake sitemap:refresh -s (-s is silent)

SitemapGenerator::Sitemap options

  • Sitemap.default_host = '' is your default url
  • Sitemap.create_index = true always creates an index file <sitemapindex><sitemap><loc></loc></sitemap></sitemapindex> that links to the other sitemap files
  • Sitemap.sitemaps_host = '' is where we will deploy the sitemaps. Use with Sitemap.adapter =
    • Here include_index (include <url><loc></loc></url>) should be off, since sitemaps_host is different from default_host and we can not have different domains in the same sitemap
  • Sitemap.public_path is the directory used to write locally before the eventual upload (default is ‘public/’).
  • Sitemap.sitemaps_path = 'my-folder/' is the relative folder where sitemaps are stored; it is used to determine the url in the index file.

Options for add, based on the sitemaps protocol, are:

  • changefreq default is weekly
  • lastmod default is
  • priority default is 0.5
  • expires requests removal of the page: expires: +

You can add some existing sitemaps with add_to_index '/my-old-sitemap.xml.gz'

Sitemap on Heroku

For heroku you need to move it to S3. It is a straightforward configuration; the AWS S3 bucket does not need to be public.

SitemapGenerator::Sitemap.sitemaps_host = "http://"\
SitemapGenerator::Sitemap.sitemaps_path = 'my-sitemaps/'
SitemapGenerator::Sitemap.adapter = SitemapGenerator::S3Adapter.new(
  fog_provider: 'AWS',
  aws_access_key_id: Rails.application.secrets.aws_access_key_id,
  aws_secret_access_key: Rails.application.secrets.aws_secret_access_key,
  fog_directory: Rails.application.secrets.aws_bucket_name
)

If your bucket is not on us-east-1 region you need to add fog_region option to adapter

  fog_region: Rails.application.secrets.aws_region

Then you need robots.txt to point to that sitemap url (the host below is a placeholder):

# public/robots.txt
Sitemap: https://<your-sitemaps-host>/my-sitemaps/sitemap.xml.gz

Test with heroku run rake sitemap:refresh.

There are also online tools to check the validity of a sitemap.

Webmaster tools

Check your sitemap on google webmaster tools. Just click Add property and verify your domain. (For AWS S3 you need to enable website hosting.) Refresh robots.txt to point to the new sitemap location.

From the sitemaps protocol you can see that one sitemap should not have more than 50,000 links or an uncompressed size over 10MB. A url should be less than 2048 chars.

You can use google search operators to see the current index status and to find links:

  • site: search only this domain or even a subdirectory. You can use this to see if some page is indexed; just add a unique part of the page url
  • link: find links for specific domain or subdirectory or pages (this is no longer supported as of Jan 2017)
  • cache: see the current archived copy of a domain or page. The google crawler can execute javascript, and you can submit a page to the index, but even if google renders it properly (with ajax/angular data) it may not add it to the index for some other reasons.

Some of those reasons could be:

  • the server can not respond fast enough to all crawler requests; maybe it will help to reorder the sitemap
  • all links in the sitemap should return 200 (not a redirection 3xx). Also check for duplicates (common for angular, since data is fetched after page load) and for canonical urls (common when you have search queries that return the same results)

If you have more domains on the same server, then you need to iterate over each default_host and call SitemapGenerator::Sitemap.ping_search_engines (since the default is to ping when the whole sitemap process finishes, so it would ping only the last one).
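Iterating hosts could look like this sketch; the domain list and paths are hypothetical:

```ruby
# config/sitemap.rb
%w[https://www.example.com https://www.example.org].each do |host|
  SitemapGenerator::Sitemap.default_host = host
  # keep each domain's files in its own folder
  SitemapGenerator::Sitemap.sitemaps_path = "sitemaps/#{URI(host).host}/"
  SitemapGenerator::Sitemap.create do
    add '/about'
  end
  # ping per domain, since the automatic ping would fire only for the last one
  SitemapGenerator::Sitemap.ping_search_engines
end
```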

Other tools

Chrome plugins

RDFa is an extension of HTML to mark up things on a page. There is the Facebook open graph protocol, and Google's Schema.org to add microdata in your markup like: <div itemscope><h1 itemprop="name">Duke</h1></div>, or you can add ld+json (recommended)

Schema.org has various models: Person, Place, Event. See also the Google search gallery, and test your markup with the structured data testing tool.

Main json-ld syntax

  • @context shows how to interpret keys
  • @id a universal identifier on the web
  • @type the thing/object type: Person, Event … or a data type: String, Date

    { "@context": {},
      // Value of @type (`Person`) will expand to a full URL in that particular context.
      "@type": "Person",
      "name": "Duke",
      // For a data type we use the expanded form with the @value keyword
      "birthday": {
        "@value": "2000-01-01",
        "@type": "xsd:date"
      }
    }
    // Instead of defining the type inline in expanded form, we can define it in
    // the context (type coercion) and use simply `"birthday": "2000-01-01"`
    { "@context": {
        "birthday": {
          "@id": "",
          "@type": "xsd:date"
        }
      }
    }
  • To create links we can embed an object, or reference it using a url
  • FAQ example
    <script type="application/ld+json">
    {
      "@context": "",
      "@type": "FAQPage",
      "mainEntity": [
        {
          "@type": "Question",
          "name": "Is it Free to Register on MyApp?",
          "acceptedAnswer": {
            "@type": "Answer",
            "text": "Registration on MyApp is free"
          }
        }
      ]
    }
    </script>
  • Keyword aliasing is when you want to use, for example, obj['id'] instead of obj['@id']
<script type="application/ld+json">
[ { "@context": "",
    "description": "mycomp - my way",
    "alternateName": "mycomp UK",
    "geo": { "@type": "GeoCoordinates",
             "latitude": "11.4197157" },
    "priceRange": "From &pound;29.95",
    "addressCountry": "United Kingdom",
    "name": "mycomp UK" } ]
</script>

Open graph

You can add head tags for facebook open graph protocol. In Rails it will be like:

/ haml
- content_for :head do
  %meta{property: "og:type",        content: "article"}
  %meta{property: "og:title",       content: "MySite | #{@post.title}"}
  %meta{property: "og:image",       content: share_image_url(@post)}
  %meta{property: "og:description", content: @post.description}
  %meta{property: "og:url",         content: post_url(}
  %meta{property: "og:site_name",   content: "MySite"}
  %meta{name: "twitter:card",       content: "photo"}
  %meta{name: "twitter:site",       content:"@mysite"}
  %meta{name: "twitter:title",      content: "MySite | #{@post.title}"}
  %meta{name: "twitter:image",      content: share_image_url(@post)}
  %meta{name: "twitter:url",        content: post_url(}

# you need at least one meta tag, og:image, for sharing on viber
    <meta property="og:image" content="<%= asset_url("logo.jpg") %>" />

og:title usually has the same content as <title>, and og:description the same as <meta name="description">. In rails you can use the meta-tags gem so you can set the title in html and the meta tag in one place. Also set language alternate urls


You can enable Destination URL Auto-tagging (Automatically tag my ad destination URLs) in Account Settings -> Preferences

User mailing list

If someone links to your site or your competitor's site, you can email the site owner (you need a script that searches for an email address on that site) saying that you have some new content, video or other material that will be interesting for their audience.

Customer Analytics

You can use some of these services: Google Analytics, Firebase Analytics, Amazon Analytics, Fabric Answers, Mixpanel, Keen, Segment, Amplitude, Localytics, Ionic Analytics

Go to google analytics, and in the left sidebar click Admin to add a property and get a UA code.

SEO search engine optimization

hangouts tutorials

  • a subdomain is not bad for rank, but google can treat it as two different sites

  • url is most important and should point to a single state (not multiple states). After the hash # usually everything is ignored, so that should not be the router for your app. Do not put secret stuff in the url like /session=1234/...?user=john
  • canonical url is the main key for this content; to define what the canonical url for a page is you can use link rel=canonical, redirects, or sitemaps


  • important content that needs to be indexed should be in the visible part of the page, not behind an ajax call, since the crawler may not index that
  • always use links with href. Onclick is ok if it uses the history js api. Note that crawlers do not click, they just look for links. So don't use buttons, span onclick, or href to a hash link (part of a page)
  • If content changes often, the crawler will check more often. Popular and fresh.
  • Do not load too many files; mind the crawl budget.
  • Do not duplicate content (same content but different url); use link rel=canonical, i.e. canonicalize.
  • Prevent crawling with robots.txt, but a link from another site is still followed (robots.txt is not read for it), so for this case use the X-Robots-Tag: noindex header or meta tag.
  • Non-existing urls should use a redirection (or add a meta tag) to a page-not-found page with status 404
  • The bot does not save cookies, so do not rely on persisted data. It also does not accept geolocation
  • Do not use different parameters for the same page
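The canonicalization tip above can be sketched as a tiny helper that strips query and fragment, so url variants collapse to one canonical link (the helper name is hypothetical):

```ruby
require 'uri'

# Returns a <link rel="canonical"> tag for the given url,
# dropping the query string and fragment.
def canonical_tag(url)
  uri = URI(url)
  uri.query = nil
  uri.fragment = nil
  %(<link rel="canonical" href="#{uri}">)
end

puts canonical_tag('https://www.example.com/posts/1?ref=sidebar#comments')
# → <link rel="canonical" href="https://www.example.com/posts/1">
```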

Off page SEO

  • backlinks, find links to your site:
  • guest blogging on some of those blogs, or find where similar images are used (upload and reverse-search the image)
  • social media engagement
  • influencer marketing, ask them to share a link
  • video, podcasts, webinars
  • Cloaking is when the seo heading keywords say nice words but the content sells drugs.

  • Screamingfrog shows how the google bot sees your site
  • It is important to have 301 redirections from www to root and from http to https; chained redirections do not transmit seo juice (after 3 redirections bots stop following)
  • All 404 pages should be redirected to live pages
  • Cannibalization is when two pages compete for the same keywords, so you should not use the same tags on multiple pages. It can be detected in gsc, where all pages for specific keywords are visible. If both rank, don't touch anything, but when one falls, optimize it so that you don't drag the first page down as well. For example, a blog post should not target the same keywords as a main page. If it stays buried, put a 301 to the first page
  • Link building is done by contacting a thousand blogs; around 1% of them will accept to put a link for 100 euros
  • Anchor text ratio: about 70% should be brand text
  • Ahrefs can give you a list of link texts, so you can analyze the competition and use the same text ratio. You can also buy a popular site and put your link on it
  • External links can have rel=nofollow and will not take juice from the site. You can link to large authoritative sites (wikis) or to small sites that are not in the same area of business. You should not link to a competing site in any way.