Use tags

Instead of adding an inline policy, you can use groups (or roles, which work similarly), and instead of specifying particular resources you can add a condition on a tag https://aws.amazon.com/blogs/security/simplify-granting-access-to-your-aws-resources-by-using-tags-on-aws-iam-users-and-roles/

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ec2:Describe*"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "ec2:RebootInstances",
                "ec2:StartInstances",
                "ec2:StopInstances"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/Team": "Developers"
                }
            }
        }
    ]
}
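
For the condition to match, the instances have to carry the Team=Developers tag. A minimal sketch using the Ruby aws-sdk-ec2 gem (the instance id and region are made up):

require 'aws-sdk-ec2'

# tag an instance so the policy above allows reboot/start/stop on it
# (hypothetical instance id and region)
ec2 = Aws::EC2::Client.new(region: 'eu-central-1')
ec2.create_tags(
  resources: ['i-0123456789abcdef0'],
  tags: [{ key: 'Team', value: 'Developers' }]
)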

Create IAM user and attach inline policy

I usually have one user for testing (keys that I export in my bashrc), one user for all my apps, and one for some projects to which other devs could have access. For the test IAM user I attach a policy for Programmatic read and write permissions with a name like full_access_to_video_uploading_demo. Keep in mind that underscore _ is not equal to hyphen - . Carrierwave and Paperclip need something more than put, get and delete, so I added s3:*. The version string "Version": "2012-10-17" should not be changed.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::duleorlovic-test",
                "arn:aws:s3:::duleorlovic-test-eu-central-1",
                "arn:aws:s3:::duleorlovic-test-us-east-1"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::duleorlovic-test/*",
                "arn:aws:s3:::duleorlovic-test-eu-central-1/*",
                "arn:aws:s3:::duleorlovic-test-us-east-1/*"
            ]
        }
    ]
}
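
A quick sanity check that the new user's keys work, assuming they are exported in your bashrc as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (just a sketch, bucket and region are taken from the example above):

require 'aws-sdk-s3'

# list the bucket with the freshly created keys
s3 = Aws::S3::Client.new(
  region: 'eu-central-1',
  credentials: Aws::Credentials.new(ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY'])
)
s3.list_objects_v2(bucket: 'duleorlovic-test-eu-central-1').contents.each do |object|
  puts object.key
end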

On S3 you can leave Block public access checked, but you have to enable CORS if you upload from JavaScript.

If you upload images using the default Rails amazon storage, then each image first goes to the server and the server returns an Amazon URL with temporary public access. If you want to use a direct public URL, then create another storage service with public: true and also enable public access (uncheck “Block all public access”) and ACLs (otherwise you get a “The bucket does not allow ACLs” error) so the uploaded image can be accessed using a direct link (no redirection from the server is needed anymore).
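
A sketch of the difference, assuming Rails 6.1+ and a second service declared with public: true in config/storage.yml:

# private service: the app generates a URL that redirects to a short-lived signed S3 URL
Rails.application.routes.url_helpers.rails_blob_url(@user.image, host: 'http://localhost:3000')

# service with public: true: the blob URL is a permanent, directly accessible S3 URL
@user.image.url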

CORS

When you upload files using Active Storage direct upload, i.e. directly to S3, our JavaScript code needs to send the data, but the browser prevents it with, for example, this error:

XMLHttpRequest cannot load
https://duleorlovic-test-southeast-1.s3.amazonaws.com/. Response to preflight
request doesn't pass access control check: No 'Access-Control-Allow-Origin'
header is present on the requested resource. Origin 'http://localhost:9000' is
therefore not allowed access. The response had HTTP status code 403.

We need to go to Bucket -> Properties -> Permissions -> (at the bottom) Cross-origin resource sharing (CORS) and add <AllowedMethod>PUT</AllowedMethod> and <AllowedHeader>*</AllowedHeader> to the provided example. In the old XML format that looked like:

<CORSConfiguration>
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

but the XML format is not used any more, so use JSON https://docs.aws.amazon.com/AmazonS3/latest/userguide/ManageCorsUsing.html

[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": [
            "GET",
            "PUT",
            "POST"
        ],
        "AllowedOrigins": [
            "http://localhost:3000",
            "http://localhost:3001",
            "http://localhost:3002",
            "http://localhost:3003",
            "http://localhost:3004"
        ],
        "ExposeHeaders": []
    }
]

You can test as soon as you save the CORS configuration. This error also occurs when the bucket_name or region is mismatched. To test whether the bucket is working, follow the link from the error.

Note that in a single AllowedOrigin entry you can define only one host or all hosts with *; you can not match two different hosts with one entry (list each origin separately, as in the JSON example above).
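
Instead of clicking in the console you can also apply the same CORS rules with the Ruby aws-sdk-s3 gem; a sketch (bucket name and region are from the examples above):

require 'aws-sdk-s3'

client = Aws::S3::Client.new(region: 'eu-central-1')
client.put_bucket_cors(
  bucket: 'duleorlovic-test-eu-central-1',
  cors_configuration: {
    cors_rules: [{
      allowed_headers: ['*'],
      allowed_methods: %w[GET PUT POST],
      allowed_origins: ['http://localhost:3000'],
      expose_headers: [],
      max_age_seconds: 3000
    }]
  }
)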

Add admin user to the account

You can create an IAM user with PowerUserAccess. On https://console.aws.amazon.com/iam/home?#/users click on Add user:

  • Put username: [email protected]
  • Select: AWS Management Console access
  • Leave autogenerated password
  • Next (Permissions) Tab: Attach existing policies directly
  • Search for PowerUserAccess and select PowerUserAccess
  • Create user
  • Click on Send email. Also copy the password and put it in the email (it will be changed when I log in). Send me the email.

IAM user can sign in on https://duleorlovic.signin.aws.amazon.com

PowerUserAccess can not create new API keys. PowerUserAccess can change S3 permissions such as Block public access and the CORS configuration.

Website hosting

You can point your some.domain.com to your bucket (guide). This is as simple as adding one CNAME record.

Your bucket needs to have the SAME NAME AS YOUR DOMAIN some.domain.com or you need to use Amazon DNS Route 53. Also, if you want to serve from the root domain domain.com, then you must use Route 53. For the default region US Standard (us-east-1) you can add a CNAME record with value some.domain.com.s3.amazonaws.com on your domain provider's page.

But if the bucket is in a different region, then for the CNAME you need to use the full Endpoint, which you can find in Properties of your bucket -> Endpoint:

bucket-name.s3-website[-.]region.amazonaws.com

A list of regions can be found on website endpoints or http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region . For Frankfurt it would be some.domain.com.s3-website.eu-central-1.amazonaws.com, for the southeast region it is ap-southeast-1. Your bucket-name can not be different from your domain name! See also: virtual hosting, regions and endpoints
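
Website hosting itself can also be enabled from code; a minimal sketch with aws-sdk-s3, assuming the bucket is named after the domain and lives in eu-central-1:

require 'aws-sdk-s3'

client = Aws::S3::Client.new(region: 'eu-central-1')
client.put_bucket_website(
  bucket: 'some.domain.com',
  website_configuration: {
    index_document: { suffix: 'index.html' },
    error_document: { key: 'error.html' }
  }
)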

Mark bucket public

You can deploy frontend code to S3 but you need to make the bucket public. Edit the bucket policy to:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::projektor.trk.in.rs/*"]
    }
  ]
}
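
The same policy can be applied from Ruby; a sketch with aws-sdk-s3 (the region is an assumption):

require 'aws-sdk-s3'
require 'json'

policy = {
  Version: '2012-10-17',
  Statement: [{
    Sid: 'AddPerm',
    Effect: 'Allow',
    Principal: '*',
    Action: ['s3:GetObject'],
    Resource: ['arn:aws:s3:::projektor.trk.in.rs/*']
  }]
}
Aws::S3::Client.new(region: 'eu-central-1')
  .put_bucket_policy(bucket: 'projektor.trk.in.rs', policy: policy.to_json)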

SSL on Amazon CloudFront

You can request a certificate in AWS Certificate Manager. It will send an email to [email protected] with a link to approve. The administrator of the domain can just click on the link to approve it.

Once you have an approved certificate you can use it on a CloudFront distribution. Some notes:

  • it could take 15 mins for the distribution to become deployed
  • when selecting Origin Domain Name do not select an option from the dropdown since it does not contain the region (it would work only for us-east-1 buckets). You need to use the bucket Endpoint here
  • update your DNS record so the CNAME points to something like d1ub4fmsp3scvp.cloudfront.net
  • the default is 24 hours for the cache to expire… you can try to invalidate some files manually (a sketch using the aws-sdk follows)
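
A sketch of a manual invalidation with the aws-sdk-cloudfront gem (the distribution id and paths are made up):

require 'aws-sdk-cloudfront'

cloudfront = Aws::CloudFront::Client.new(region: 'us-east-1')
cloudfront.create_invalidation(
  distribution_id: 'E1234567890ABC',
  invalidation_batch: {
    paths: { quantity: 2, items: ['/index.html', '/assets/*'] },
    caller_reference: Time.now.to_i.to_s
  }
)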

Access files

Files can be accessed in three ways.

  1. bucket_name as path: s3[-.]region.amazonaws.com/bucket-name/file-name. Go to the properties of the file and find the link, for example: https://s3.eu-central-1.amazonaws.com/duleorlovic-test-eu-central-1/icon-menu.png. You will notice that for the us-east-1 region the link does not contain the region part, only https://s3.amazonaws.com/duleorlovic-test-us-east-1/favicon.ico
  2. bucket_name as subdomain: bucket-name.s3.amazonaws.com/file-name. This is shorter since the url does not contain the region https://duleorlovic-test-eu-central-1.s3.amazonaws.com/icon-menu.png
  3. enable website hosting so you can use the Endpoint and append the file name http://duleorlovic-test-eu-central-1.s3-website.eu-central-1.amazonaws.com/icon-menu.png
    • note that the url contains s3-website, so website hosting needs to be enabled
    • note that this way you can not access it over the SSL https protocol
    • for HTTPS you need CloudFront and Route 53 service link or https://www.cloudflare.com/ . You can set up a naked domain, just add a CNAME with name @ (or your domain.com) and value Endpoint.

Region problems

Tools

The Firefox plugin S3Fox Organizer is great since you can access specific buckets.

To check your DNS you can use linux command host:

  • host frankfurt.trk.in.rs
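
Roughly the same check from Ruby, using the standard library Resolv (a sketch):

require 'resolv'

# print the CNAME target, similar to what the host command shows
Resolv::DNS.open do |dns|
  dns.getresources('frankfurt.trk.in.rs', Resolv::DNS::Resource::IN::CNAME).each do |record|
    puts record.name
  end
end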

aws-sdk

You can use aws-sdk for simple uploading https://github.com/duleorlovic/securiPi/blob/master/s3.rb

# you can override global config
Aws.config.update(
  credentials: Aws::Credentials.new(credentials[:aws_access_key_id], credentials[:aws_secret_access_key]),
)

# if bucket is from different region than you will get error like
# ActionView::Template::Error (The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.):

# or use s3 directly
s3 = Aws::S3::Resource.new region: credentials[:aws_region]
bucket = s3.bucket credentials[:aws_bucket_name]
bucket.objects(prefix: 'assets').map {|o| o.public_url}

# to download object to a file use response_target
# https://docs.aws.amazon.com/sdk-for-ruby/v3/developer-guide/s3-example-get-bucket-item.html
s3 = Aws::S3::Resource.new(region: 'us-west-2')
obj = s3.bucket('my-bucket').object('my-item')
obj.get(response_target: './my-code/my-item.txt')

Put (upload) the file

s3 = Aws::S3::Resource.new
path = 'photos/deepak_file/aws_test.txt'
object = s3.bucket('storagy-teen-dev-us').object(path)
object.upload_file('./tmp/aws_test.txt')
# for public objects
object.public_url

# signed url for private objects
# this is old v1 api
# object.url_for(:read, expires: 10.min)

# https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Object.html#presigned_url-instance_method
object.presigned_url(:get)
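# default expiration is 15 minutes, pass expires_in (in seconds) to change it
object.presigned_url(:get, expires_in: 3600)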

Elastic Beanstalk

https://aws.amazon.com/cli/ To use the EB client you need to add the AWSElasticBeanstalkFullAccess permission to the AWS keys in your env

sudo pip install awsebcli
eb init
eb create myapp-env
eb setenv SECRET_KEY_BASE=`rake secret`
eb printenv
eb logs
eb deploy
eb status
eb open

A Rails sqlite database can’t be shared between instances (even when we just deploy the code). Adding a database is one click. Just use the proper env var names:

# config/database.yml
default: &default
  adapter: postgresql
  encoding: unicode
  database: <%= ENV['RDS_DB_NAME'] %>
  username: <%= ENV['RDS_USERNAME'] %>
  password: <%= ENV['RDS_PASSWORD'] %>
  host: <%= ENV['RDS_HOSTNAME'] %>
  port: <%= ENV['RDS_PORT'] %>

Admin access

You can create users, attach AdministratorAccess (Resource *) and create a password so they can log in on an app url like: https://123123.signin.aws.amazon.com/console.

Errors

  • a NoSuchKey 404 Not Found error occurs when you remove index.html but static website hosting is enabled with index document index.html

Paperclip

You can use a presigned url if you want S3 links to expire; in Paperclip this was done with attachment.expiring_url https://www.rubydoc.info/github/thoughtbot/paperclip/Paperclip%2FStorage%2FS3:expiring_url. When uploading a PDF with Paperclip there is a Rails validation error because of ImageMagick (Paperclip::Errors::NotIdentifiedByImageMagickError); the fix is to edit the local policy with sudo vi /etc/ImageMagick-6/policy.xml and set rights to read|write https://github.com/thoughtbot/paperclip/issues/2223#issuecomment-428862815
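
For example (a sketch, assuming a Paperclip attachment named avatar):

# signed S3 url that expires in 10 minutes (default is 3600 seconds)
user.avatar.expiring_url(600)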

Active storage

https://edgeguides.rubyonrails.org/active_storage_overview.html

rails active_storage:install
# Copied migration 20210623091736_create_active_storage_tables.active_storage.rb from active_storage
#     create_table :active_storage_blobs do |t|
#     create_table :active_storage_attachments do |t|
#     create_table :active_storage_variant_records do |t|

You need to add gems

# Gemfile
gem 'aws-sdk-s3', require: false
gem 'image_processing', '~> 1.2'

In credentials you need to add

# rails credentials:edit
amazon:
  access_key_id: 123
  secret_access_key: 123

and config for storage:

# config/storage.yml
amazon:
  service: S3
  access_key_id: <%= Rails.application.credentials.dig(:amazon, :access_key_id) %>
  secret_access_key: <%= Rails.application.credentials.dig(:amazon, :secret_access_key) %>
  region: us-east-1
  bucket: your_own_bucket
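
and tell Rails which service to use, for example in production:

# config/environments/production.rb
config.active_storage.service = :amazon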

For Google Cloud Storage you can create and download keyfile.json under IAM -> Service accounts https://console.cloud.google.com/iam-admin/serviceaccounts?folder=true&organizationId=true&project=cybernetic-tide-90121

# app/models/user.rb
class User < ApplicationRecord
  has_many_attached :images
  has_one_attached :image
end

In form view

# app/views/users/_form.html.erb
<%= f.file_field :image %>

In view

<%= url_for @user.image %>
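
Since the image_processing gem is installed, you can also render a resized variant, for example:

<%= image_tag @user.image.variant(resize_to_limit: [100, 100]) %>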

To check if the attached file is an image (via its content type)

user.image.blob.content_type # => 'image/png'
user.image.blob.image?
user.image.image?
# file name
user.image.filename

In the controller permit the attribute

# app/controllers/users_controller.rb
def user_params
  # in case you have has_many_attached
  # params.require(:user).permit(:name, images: [])
  # in case you have has_one_attached
  params.require(:user).permit(:name, :image)
end

Attaching existing file

# to store something in a temp file
tempfile = Tempfile.new
# you can set an extension for the tempfile
tempfile = Tempfile.new(["any_prefix", ".pdf"])
# but instead of manually calling tempfile.unlink you should use the block syntax
Tempfile.create(["any_prefix", ".pdf"]) do |tempfile|
  tempfile.binmode
  encoded_image = params[:data_url].split(',')[1]
  decoded_image = Base64.decode64(encoded_image)
  tempfile.write decoded_image
  tempfile.rewind
  # attach expects an IO object, so pass the tempfile itself (not tempfile.read)
  @receipt.signature.attach(io: tempfile, filename: 'signature.pdf')
end

# or if you already have file
@user.image.attach(io: File.open('/path/to/file'), filename: File.basename('/path/to/file'))

Downloading

binary = user.image.download
# to save it you need to open a file
file_path = "#{Dir.tmpdir}/#{user.image.filename}"
File.open(file_path, 'wb') do |file|
  file.write(user.image.download)
end

# or you can use .open
user.image.open do |file|
  system '/path/to/virus/scanner', file.path
  # ...
end

Deleting

@user.image.purge
# to destroy attachment later
@user.image.purge_later

To use with Digital Ocean Spaces with direct public urls to storage files see https://vitobotta.com/2019/11/17/rails-active-storage-permanent-urls-with-no-redirects-digital-ocean-spaces-cloudflare/ . The ActiveStorage::Blob#key column stores the filename on the service.