Create IAM user and attach inline policy

I usually create one user for testing (keys that I export in my .bashrc), one user for all my apps, and one for projects that other devs can access. For the test IAM user I attach a policy with programmatic read and write permissions, named for example full_access_to_video_uploading_demo . Keep in mind that underscore _ is not equal to hyphen - . CarrierWave and Paperclip need something more than put, get and delete, so I added s3:* below "s3:DeleteObject".

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::duleorlovic-test",
                "arn:aws:s3:::duleorlovic-test-eu-central-1",
                "arn:aws:s3:::duleorlovic-test-us-east-1"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::duleorlovic-test/*",
                "arn:aws:s3:::duleorlovic-test-eu-central-1/*",
                "arn:aws:s3:::duleorlovic-test-us-east-1/*"
            ]
        }
    ]
}

Add admin user to the account

You can create an IAM user with PowerUserAccess. On https://console.aws.amazon.com/iam/home?#/users click on Add user:

  • Put username: duleorlovic@gmx.com
  • Select: AWS Management Console access
  • Leave the autogenerated password
  • Next (Permissions) tab -> Attach existing policies directly
  • Search for PowerUserAccess and select it
  • Create user
  • Click on Send email. Also copy the password and put it in the email (it will be changed when I log in). Send me the email…

Website hosting

You can point your some.domain.com to your bucket (guide). This is as simple as adding one CNAME record.

Your bucket needs to have the SAME NAME AS YOUR DOMAIN some.domain.com, or you need to use Amazon's DNS service Route 53. Also, if you want to serve from the root domain domain.com, then you must use Route 53. For the default region US Standard (us-east-1) you can add a CNAME record with value some.domain.com.s3.amazonaws.com on your domain provider's page.

But if the bucket is in a different region, then for the CNAME you need to use the full Endpoint, which you can find in Properties of your bucket -> Endpoint:

bucket-name.s3-website[-.]region.amazonaws.com

You can find the list of regions on website endpoints or http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region For Frankfurt it would be some.domain.com.s3-website.eu-central-1.amazonaws.com For the Southeast region it is ap-southeast-1. Your bucket name can not be different from your domain name! virtual hosting, regions and endpoints
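The endpoint pattern above can be sketched as a small helper. Note that older regions use a dash before the region part while newer ones use a dot; the region list below is just an illustrative subset (check the website endpoints page for your region):

```ruby
# Sketch of building the S3 website endpoint for a bucket.
# Older regions use "s3-website-REGION", newer ones "s3-website.REGION".
# DOT_REGIONS is an illustrative subset, not a complete list.
DOT_REGIONS = %w[eu-central-1 ap-south-1 eu-west-2].freeze

def website_endpoint(bucket, region)
  separator = DOT_REGIONS.include?(region) ? '.' : '-'
  "#{bucket}.s3-website#{separator}#{region}.amazonaws.com"
end

puts website_endpoint('some.domain.com', 'eu-central-1')
# => some.domain.com.s3-website.eu-central-1.amazonaws.com
puts website_endpoint('some.domain.com', 'us-east-1')
# => some.domain.com.s3-website-us-east-1.amazonaws.com
```

Remember that the bucket name (and therefore the first part of the endpoint) must match your domain name.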

Mark bucket public

You can deploy frontend code to S3, but you need to make the bucket public. Edit the bucket policy to:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::projektor.trk.in.rs/*"]
    }
  ]
}
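If you host several frontend buckets, generating this policy is handy. A minimal sketch in plain Ruby (json stdlib only; the bucket name is just an example) that produces the same public-read policy for any bucket:

```ruby
require 'json'

# Build a public-read bucket policy like the one above for a given bucket.
def public_read_policy(bucket_name)
  {
    'Version' => '2012-10-17',
    'Statement' => [{
      'Sid' => 'AddPerm',
      'Effect' => 'Allow',
      'Principal' => '*',
      'Action' => ['s3:GetObject'],
      'Resource' => ["arn:aws:s3:::#{bucket_name}/*"]
    }]
  }
end

puts JSON.pretty_generate(public_read_policy('projektor.trk.in.rs'))
```

You can paste the output into the bucket policy editor in the console, or pass the JSON string to Aws::S3::Client#put_bucket_policy.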

SSL on Amazon CloudFront

You can request a certificate in AWS Certificate Manager. It will send an email to administrator@trk.in.rs with a link to approve. The administrator of the domain can just click on the link to approve it.

Once the certificate is approved you can use it on a CloudFront distribution. Some notes:

  • it could take 15 minutes for the distribution to become deployed
  • when selecting Origin Domain Name, do not select an option from the dropdown since it does not contain the region (it will work only for us-east-1 buckets). You need to use the bucket Endpoint here
  • update your DNS record so the CNAME points to something like d1ub4fmsp3scvp.cloudfront.net
  • the default cache expiration is 24 hours… you can try to invalidate some files manually

Access files

Files can be accessed in three ways.

  1. bucket_name as path: s3[-.]region.amazonaws.com/bucket-name/file-name. Go to the properties of the file and find the link, for example: https://s3.eu-central-1.amazonaws.com/duleorlovic-test-eu-central-1/icon-menu.png. You will notice that for the us-east-1 region the link does not contain the region part, only https://s3.amazonaws.com/duleorlovic-test-us-east-1/favicon.ico
  2. bucket_name as subdomain: bucket-name.s3.amazonaws.com/file-name. This is shorter since the url does not contain the region: https://duleorlovic-test-eu-central-1.s3.amazonaws.com/icon-menu.png
  3. enable website hosting so you can use the Endpoint and append the file name: http://duleorlovic-test-eu-central-1.s3-website.eu-central-1.amazonaws.com/icon-menu.png
    • note that the url contains website, so website hosting needs to be enabled
    • note that this way you can not access files using the SSL https protocol
    • for HTTPS you need CloudFront and Route 53 (link) or https://www.cloudflare.com/ . You can set up a naked domain: just add a CNAME with name @ (or your domain.com) and value Endpoint.
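The three styles can be summarized in a small helper. This is only a sketch of the patterns shown above, including the us-east-1 special case for the path style; the website separator (dot vs dash) again depends on the region:

```ruby
# Build the three S3 url styles for a bucket/region/key.
def s3_urls(bucket, region, key)
  path_host = region == 'us-east-1' ? 's3.amazonaws.com' : "s3.#{region}.amazonaws.com"
  {
    path_style: "https://#{path_host}/#{bucket}/#{key}",
    subdomain_style: "https://#{bucket}.s3.amazonaws.com/#{key}",
    # dot separator is for newer regions; older ones like us-east-1 use a dash
    website: "http://#{bucket}.s3-website.#{region}.amazonaws.com/#{key}"
  }
end

s3_urls('duleorlovic-test-eu-central-1', 'eu-central-1', 'icon-menu.png')
```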

Region problems

CORS

Sometimes you need to upload files directly to S3, which means our javascript code needs to send the data, but the browser prevents it with this error:

XMLHttpRequest cannot load
https://duleorlovic-test-southeast-1.s3.amazonaws.com/. Response to preflight
request doesn't pass access control check: No 'Access-Control-Allow-Origin'
header is present on the requested resource. Origin 'http://localhost:9000' is
therefore not allowed access. The response had HTTP status code 403.

We need to go to Bucket -> Properties -> Permissions -> Add CORS configuration and add <AllowedMethod>POST</AllowedMethod> and <AllowedHeader>*</AllowedHeader> to the provided example:

<CORSConfiguration>
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

This error also occurs when we mismatch bucket_name or region. To test if the bucket is working, follow the link from the error.

Note that each AllowedOrigin must be an exact origin or the * wildcard. To allow several specific hosts, add multiple AllowedOrigin elements (or multiple CORSRule blocks), one per host.
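The browser's check can be sketched as a simple matcher: the request Origin must match an AllowedOrigin, or the configuration must contain the * wildcard. This is a simplified model of what S3 does (real AllowedOrigin values may also contain one * inside the string):

```ruby
# Simplified model of S3 CORS origin matching: exact match or "*".
def origin_allowed?(allowed_origins, request_origin)
  allowed_origins.any? { |o| o == '*' || o == request_origin }
end

origin_allowed?(['*'], 'http://localhost:9000')                  # => true
origin_allowed?(['https://trk.in.rs'], 'http://localhost:9000')  # => false
```

If the matcher returns false, S3 omits the Access-Control-Allow-Origin header and you get the 403 preflight error above.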

Tools

The Firefox plugin S3Fox Organizer is great since you can access specific buckets.

To check your DNS you can use linux command host:

  • host frankfurt.trk.in.rs

aws-sdk

You can use aws-sdk for simple uploading: https://github.com/duleorlovic/securiPi/blob/master/s3.rb

# gem 'aws-sdk-s3' (or the old umbrella 'aws-sdk' gem)
require 'aws-sdk-s3'

# credentials here is a hash loaded from your secrets/config
# you can override global config
Aws.config.update(
  credentials: Aws::Credentials.new(credentials[:aws_access_key_id], credentials[:aws_secret_access_key]),
)

# if the bucket is in a different region you will get an error like:
# ActionView::Template::Error (The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.)

# or use s3 directly with the correct region
s3 = Aws::S3::Resource.new region: credentials[:aws_region]
bucket = s3.bucket credentials[:aws_bucket_name]
bucket.objects(prefix: 'assets').map { |o| o.public_url }

AWS CLI

https://aws.amazon.com/cli/

If you export the keys, the cli will use them: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html

Elastic Beanstalk

Using the EB client, you need to add the AWSElasticBeanstalkFullAccess permission to the AWS keys in your env:

sudo pip install awsebcli
eb init
eb create myapp-env
eb setenv SECRET_KEY_BASE=`rake secret`
eb printenv
eb logs
eb deploy
eb status
eb open

A Rails sqlite database can’t be shared between instances (even when we just deploy the code). Adding a database is one click. Just use the proper env var names:

# config/database.yml
default: &default
  adapter: postgresql
  encoding: unicode
  database: <%= ENV['RDS_DB_NAME'] %>
  username: <%= ENV['RDS_USERNAME'] %>
  password: <%= ENV['RDS_PASSWORD'] %>
  host: <%= ENV['RDS_HOSTNAME'] %>
  port: <%= ENV['RDS_PORT'] %>

Admin access

You can create users, attach AdministratorAccess (Resource *) and create a password so they can log in at an app url like: https://123123.signin.aws.amazon.com/console.

Errors

  • a NoSuchKey 404 Not Found error appears when static website hosting is enabled with index document index.html but you have removed index.html

Tips

  • blog https://alestic.com/