AWS Image Storage with S3 and EFS: A Rails How To

A typical Rails setup in AWS has multiple EC2 instances running your application, with the database in RDS or installed on another EC2 instance. When users upload images, you can’t save the files to a single instance’s local disk. The images have to be stored in a location that all the EC2 instances can access.

Two AWS services that we can use are S3 and EFS.

Simple Storage Service (S3)

S3 is an object storage service from AWS where you can easily store massive amounts of data. You pay per GB of storage with no minimum. Netflix uses it to store billions of hours of content, and Airbnb uses it to store 10 petabytes of user pictures. You never have to worry about capacity planning; AWS will always have room for your images.

Most upload gems support S3 as a storage backend, so you don’t have to do a lot of work. Carrierwave stores images on the local filesystem by default. We can’t do that with multiple EC2 instances: some images end up on one server and the rest on others, and since each instance only holds part of the uploads, there is no reliable way to serve them.

If your Rails app stores the images on S3, it doesn’t matter which EC2 instance processes the upload. They’ll all store the images in the same S3 bucket.

To use S3 as the storage for Carrierwave, change the storage to :fog. Fog is a gem that provides a common interface to cloud storage services, including S3.

class AvatarUploader < CarrierWave::Uploader::Base
  storage :fog
end
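
Carrierwave doesn’t ship the fog backend itself, so the fog-aws gem needs to be in your Gemfile alongside Carrierwave. A minimal sketch:

# Gemfile
gem 'carrierwave'
gem 'fog-aws'   # provides the 'fog/aws' provider used by storage :fog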

In your configuration file, specify the S3 bucket, AWS access key ID, and secret access key. We recommend that you don’t put the keys directly in the file so they don’t get committed to your repository. You can use encrypted secrets in Rails or read the keys from environment variables.

CarrierWave.configure do |config|
  config.fog_provider = 'fog/aws'                        # required
  config.fog_credentials = {
    provider:              'AWS',                        # required
    aws_access_key_id:     'xxx',                        # required
    aws_secret_access_key: 'yyy',                        # required
    region:                'eu-west-1',                  # optional, defaults to 'us-east-1'
  }
  config.fog_directory  = 'name_of_s3_bucket'            # required
end
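
If you read the keys from environment variables, as suggested above, the same block could look like the sketch below. The variable names are just examples, not something Carrierwave expects:

CarrierWave.configure do |config|
  config.fog_provider = 'fog/aws'
  config.fog_credentials = {
    provider:              'AWS',
    aws_access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
    aws_secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
    region:                ENV.fetch('AWS_REGION', 'us-east-1')
  }
  config.fog_directory = ENV['S3_BUCKET']
end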

After making these changes, you’ll use Carrierwave the same way as when storing the images on the local filesystem.
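
For example, mounting the uploader on a model doesn’t change at all. The User model and avatar column below are placeholders, not something the S3 setup requires:

# app/models/user.rb -- assumes an avatar string column exists
class User < ApplicationRecord
  mount_uploader :avatar, AvatarUploader
end

# user.update(avatar: params[:avatar])   # the file is written to the S3 bucket
# user.avatar.url                        # returns a URL pointing at S3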

S3 Bucket and Credentials

If you don’t have an S3 bucket and IAM keys, here are the steps:

  1. Go to the S3 Console and create a bucket. Enter a name and leave the default permissions, including ‘Do not grant public read access to this bucket (Recommended)’.

  2. Create a Policy on the IAM Console. Click Policies, Create Policy, and Create Your Own Policy. Enter a policy name and put the JSON below in the policy document. Replace BUCKETNAME with the name of the bucket you created on Step 1.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "s3:*",
                "Resource": [
                    "arn:aws:s3:::BUCKETNAME/*"
                ]
            }
        ]
    }
    
  3. Create an IAM user on the IAM Console. Click Users, then Add User. Enter a user name and check the box ‘Programmatic access. Enables an access key ID and secret access key for the AWS API, CLI, SDK, and other development tools.’

    On the Permissions step, click ‘Attach existing policies directly’ and choose the policy you created on Step 2. Take note of the access key ID and secret access key shown at the end of the wizard; you can verify them with the quick check after these steps.
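
If you want to confirm the new keys and bucket work before wiring them into the app, a quick sanity check with fog-aws (the same library Carrierwave uses for storage :fog) could look like the sketch below. The environment variable names, region, and test key are just examples:

require 'fog/aws'

storage = Fog::Storage.new(
  provider:              'AWS',
  aws_access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
  aws_secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
  region:                'eu-west-1'
)

# directories.new builds a reference to the bucket without listing it,
# so the object-level policy above is enough.
bucket = storage.directories.new(key: 'name_of_s3_bucket')
bucket.files.create(key: 'carrierwave-test.txt', body: 'hello', public: false)
puts bucket.files.get('carrierwave-test.txt').body   # prints "hello"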

Elastic File System (EFS)

Elastic File System provides a shared filesystem that multiple EC2 instances can mount at the same time. Remember when we said we can’t use the local filesystem to store our images? That’s still true, but a shared filesystem gets around the problem. For example, we can mount the EFS filesystem at /images and then symlink public/uploads to /images/uploads on every EC2 instance.

Since /images is on EFS, all EC2 instances will see the same public/uploads.
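
As a rough sketch, a deploy-time script run from the Rails root on each instance could create that symlink. The /images mount point is just the example path from above, and this hook is hypothetical rather than something Rails or Carrierwave provides:

require 'fileutils'

efs_uploads = '/images/uploads'                        # directory on the EFS mount
app_uploads = File.join(Dir.pwd, 'public', 'uploads')  # assumes we run from the Rails root

FileUtils.mkdir_p(efs_uploads)                         # make sure the shared directory exists
# Remove any existing local directory or stale link before re-linking.
FileUtils.rm_rf(app_uploads) if File.exist?(app_uploads) || File.symlink?(app_uploads)
FileUtils.ln_s(efs_uploads, app_uploads)               # public/uploads -> /images/uploads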

We don’t need to make any changes to the Carrierwave configuration. The defaults for local file storage will work.

class AvatarUploader < CarrierWave::Uploader::Base
  storage :file

  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end
end

Follow the EFS Documentation to create a filesystem. On your EC2 instances, mount the filesystem using the standard Linux mount command (EFS exposes an NFSv4 endpoint).

Summary

We recommend using S3 as it’s the more proven option at this point. EFS is still a good choice and may simplify your setup, so give it a try too.

You can use S3 even if your Rails servers are not hosted on AWS. All you need is an S3 bucket and IAM keys. It looks like you may also be able to use EFS outside of AWS with some workarounds, but the S3 setup is more common.
