I recently had to create a vsftpd (FTP) server on an Amazon AWS virtual server running Ubuntu 12.10.
There are some issues with the default vsftpd that installs with apt-get on Ubuntu 12.10 (version 2.x… not sure which one off the top of my head): it will not allow virtual users access to their root directories. I wrote up a guide in May 2013 that showed how to get around this by writing to a subdirectory, but that just does not feel right.
So this guide will go over how to do it properly, on a virtual AWS machine.
Start an AWS micro instance with Ubuntu 12.04
First, set up a security group for your FTP server.
From your EC2 console click on Security Groups
Click on "Create Security Group"
Name it ftp and click Yes, Create
Click on the inbound tab.
Enter 20 and click Add Rule.
Do the same thing for
· 21
· 22
· 4020-4220
You should have something like this.
Click "Apply Rule Changes"
Now take a moment and locate the AMI you want to use to
create your AWS instance.
Ubuntu has a nice search page at http://cloud-images.ubuntu.com/locator/ec2/ [1] that lists all the AMIs that Ubuntu has on AWS.
I am looking for a 64-bit Ubuntu 12.04 AMI in the us-east-1 region, backed by EBS.
This returns an AMI of ami-e7582d8e
Use this ami and the security group you just created to
start your aws instance.
I prefer using the command line tools to start an instance (this requires that you have them installed; I have an install guide posted at http://www.whiteboardcoder.com/2012/10/aws-setting-up-command-line-tools-in_23.html [2]).
Run the following command to create a micro instance (Of
course use your own key)
> ec2-run-instances ami-e7582d8e -k my-keypair -b /dev/sda1=:8:true -g ftp -t t1.micro
This will create a micro instance with an 8GiB EBS drive.
My instance happened to start at ec2-174-129-58-120.compute-1.amazonaws.com
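If you lose track of the public DNS name, you should be able to look it up with the command line tools:
> ec2-describe-instances
The output lists each running instance along with its public DNS name.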
Log into your instance
> ssh -i .ec2/my-keypair.pem ubuntu@ec2-174-129-58-120.compute-1.amazonaws.com
Create an EBS drive to save FTP data to
I prefer to create an additional EBS drive to mount to this machine. That way I can store all the information on this FTP drive, and I can unmount it from this virtual machine and mount it to another one if I need to.
From the command line run this command to make a 40GiB drive
> ec2-create-volume --size 40 --availability-zone us-east-1a
The volume id returned for my test is vol-f2d67fa8
Name the EBS volume
> ec2-create-tags vol-f2d67fa8 --tag Name=FTP-EBS
Attach the EBS volume to your instance (make sure to use your own IDs)
> ec2-attach-volume vol-f2d67fa8 --instance i-f9503896 --device /dev/sdf1
Log into the micro instance and run the following commands to format the new EBS drive and mount it (for some reason the /dev/sdf1 drive is attached as /dev/xvdf1).
> sudo mkfs -F /dev/xvdf1
> sudo mkdir /ftp
> sudo mount /dev/xvdf1 /ftp
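If the new volume does not show up as /dev/xvdf1 on your instance, listing the xvd devices should show what name it actually got:
> ls /dev/xvd*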
Run the following command to see the mounted drives.
> df -h
Set fstab to automount drives
Edit /etc/fstab
> sudo vi /etc/fstab
Add the following
/dev/xvdf1 /ftp ext2 rw,suid,dev,exec,noauto,nouser,async 0 0
With these settings the drive will not be automounted at startup; we need a script to do that. You could automount it by changing noauto to auto, but that has an issue on an Amazon EC2 server: if the drive is not present, the server spins at boot and cannot be logged into until the drive is present.
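As an untested alternative (I am sticking with noauto and a startup script for this guide): Ubuntu's mountall reportedly supports a nobootwait option that lets the boot continue even if the volume is missing, so an fstab line like this should automount without the hang:
/dev/xvdf1 /ftp ext2 rw,suid,dev,exec,auto,nouser,async,nobootwait 0 0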
Set up a startup script (to mount the hard drive)
> sudo vi /etc/init.d/mountHD
Then place the following in it.
#!/bin/sh
mount /ftp
Make it executable
> sudo chmod 755 /etc/init.d/mountHD
Add it to autostart
> sudo update-rc.d mountHD defaults
Reboot to test auto mount of hard drives
> sudo reboot now
After the reboot, log back in; if you run this command…
> df -h
You should see the /ftp volume listed in the output.
Special note: this extra EBS volume is not part of the AMI machine, so when you want to create an AMI from this machine you need to unmount and detach this EBS volume from its instance.
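A quick sketch of that, using the volume id from earlier: unmount the drive on the instance, then detach it with the command line tools.
> sudo umount /ftp
> ec2-detach-volume vol-f2d67fa8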
Install vsftpd (FTP server)
As a note, I am setting up my FTP server with virtual users using PAM. So if you want your normal Linux users to FTP to their own directories, this is not the guide for you.
The default vsftpd that can be obtained from apt-get has an
issue where you cannot write to the root directory of a virtual user
https://www.benscobie.com/fixing-500-oops-vsftpd-refusing-to-run-with-writable-root-inside-chroot/
[3]
To get around this, download and install the latest build of vsftpd (which allows the conf flag allow_writeable_chroot=YES).
Run this from the command line
> sudo apt-get install libcap2
> wget http://security.ubuntu.com/ubuntu/pool/main/v/vsftpd/vsftpd_3.0.2-1ubuntu1.1_amd64.deb
> sudo dpkg -i vsftpd_3.0.2-1ubuntu1.1_amd64.deb
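If you want to confirm which version actually got installed, and optionally keep a later apt-get upgrade from replacing this hand-installed package, something like this should do it (the hold step is optional):
> dpkg -s vsftpd | grep Version
> echo "vsftpd hold" | sudo dpkg --set-selections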
Configure the vsftpd setup
> sudo vi /etc/vsftpd.conf
I am using some information from the following sites [4][5]
Update the file to the following
listen=YES
anonymous_enable=NO
local_enable=YES
write_enable=YES
chroot_local_user=YES
allow_writeable_chroot=YES
local_umask=022
guest_enable=YES
user_sub_token=$USER
local_root=/ftp/$USER
hide_ids=YES
pam_service_name=vsftpd.virtual
virtual_use_local_privs=YES
dirmessage_enable=YES
use_localtime=YES
xferlog_enable=YES
connect_from_port_20=YES
secure_chroot_dir=/var/run/vsftpd/empty
#Set passive mode
pasv_enable=YES
pasv_addr_resolve=YES
pasv_address=ftp.whiteboardcoder.com
pasv_min_port=4020
pasv_max_port=4220
seccomp_sandbox=NO
The allow_writeable_chroot=YES setting allows a user to write to their own root directory.
And oddly enough, I had what I think is an issue with the kernel… On a local VM build I did not need seccomp_sandbox=NO, but on the AWS server I did.
For pasv_address=ftp.whiteboardcoder.com, make sure to put the address of your own FTP server here.
Setting up virtual users
Information on how to do this comes from http://howto.gumph.org/content/setup-virtual-users-and-directories-in-vsftpd/
[6]
Instead of Linux users, for this machine we are going to set up virtual users using PAM.
> sudo apt-get install libpam-pwdfile
Set up PAM file
> sudo vi /etc/pam.d/vsftpd.virtual
Put the following into the file and save it.
auth required pam_pwdfile.so pwdfile /ftp/vsftpd/ftp.passwd
account required pam_permit.so
To set up passwords you first need to install htpasswd, which is in the apache2-utils package.
> sudo apt-get install apache2-utils
Add the passwd file directory
> sudo mkdir /ftp/vsftpd
> sudo touch /ftp/vsftpd/ftp.passwd
Set up a test user
Run the following command to create a virtual user called pattest
> sudo htpasswd -d /ftp/vsftpd/ftp.passwd pattest
Now create the directory and update permissions
> sudo mkdir -p /ftp/pattest/
> sudo chmod 755 /ftp/pattest
> sudo chown ftp:ftp /ftp/pattest
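If you expect to add more than a couple of users, the steps above are easy to script. Here is a minimal, untested sketch of a helper (the name add_ftp_user.sh is my own, not part of vsftpd):
#!/bin/sh
# add_ftp_user.sh (hypothetical helper): wraps the htpasswd/mkdir/chown steps above
# Usage: sudo ./add_ftp_user.sh username
USER_NAME="$1"
if [ -z "$USER_NAME" ]; then
  echo "Usage: $0 username" >&2
  exit 1
fi
# Prompt for a password and store it in crypt format (same as the manual htpasswd -d step)
htpasswd -d /ftp/vsftpd/ftp.passwd "$USER_NAME"
# Create the virtual user's root directory and hand it to the ftp user
mkdir -p "/ftp/$USER_NAME"
chmod 755 "/ftp/$USER_NAME"
chown ftp:ftp "/ftp/$USER_NAME"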
Restart the vsftpd server
> sudo service vsftpd restart
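Before testing from a client, you can quickly check that vsftpd came back up and is listening on port 21:
> sudo netstat -tlnp | grep :21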
Test the FTP server
(first make sure your DNS entry is properly set up for your
web site)
I had a little issue when going into passive mode (which is automatically done by the FireFTP client). It turns out the FTP server itself did not know its own address; that is to say, if I ran
> dig +short ftp.whiteboardcoder.com
I was not getting the correct address back. To fix this you can do one of two things.
In the /etc/vsftpd.conf file, update pasv_address to the IP address.
Ex.
pasv_addr_resolve=YES
pasv_address=23.23.122.150
pasv_min_port=4020
pasv_max_port=4220
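If you are not sure what your instance's public IP address is, you can ask the EC2 metadata service from the instance itself (a standard EC2 endpoint, not specific to this setup):
> wget -qO- http://169.254.169.254/latest/meta-data/public-ipv4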
Or, you can wait for the DNS record to update on the machine; eventually running dig will return the correct IP address. When that happens you are good to go.
Test the connection via FireFTP
Click Create Account
Enter your data and click OK
(this is my example data)
Click Connect
Select a file and attempt to upload it
Success! That worked.
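If you would rather test from the command line than from FireFTP, a quick upload from another machine with curl should also work (assuming curl is installed there; substitute your own file, user, password, and hostname):
> curl -T test.txt --user pattest:yourpassword ftp://ftp.whiteboardcoder.com/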
References
[1] Ubuntu ec2 locator. Accessed 06/2013
[2] Setting up AWS command line. Accessed 06/2013
[3] Fixing 500 OOPS: vsftpd: refusing to run with writable root inside chroot. Accessed 06/2013
[4] AWS post: VSFTPD in Ubuntu instance. Accessed 06/2013
[5] Setup VSFTPD with custom multiple directories and (virtual) users accounts on Ubuntu (no database required). By: Julien Bourdeau. Accessed 06/2013
[6] Setup Virtual Users and Directories in VSFTPD. Accessed 06/2013